text | label
---|---|
Fifth-generation wireless (5G) has already demonstrated its capability to achieve extremely fast data transfer, which makes it a promising mobile technology. However, concerns have been raised about adverse health impacts that human users may experience in a 5G system through exposure to electromagnetic fields (EMFs). This article investigates human EMF exposure in a 5G system and compares it to levels measured in previous-generation cellular systems. It suggests a minimum separation distance between a transmitter and a human user for keeping the EMF exposure below the safety regulation level, which provides consumers with a general understanding of the safe use of 5G communications. | electrical engineering and systems science |
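As a rough, hedged illustration of how such a minimum separation distance can be estimated (a textbook far-field calculation, not necessarily the article's actual methodology), the power density at distance d from a transmitter with power P and antenna gain G is S = PG/(4πd²), so an exposure limit S_lim implies d_min = sqrt(PG/(4πS_lim)). All numerical values below are assumptions.

```python
# Minimal sketch: far-field power density S = P*G / (4*pi*d^2) inverted for
# the minimum separation distance at a given exposure limit. Illustrative only.
import math

P_tx = 1.0    # transmit power [W] (assumed)
G = 100.0     # antenna gain, 20 dBi (assumed, e.g. a small beamforming array)
S_lim = 10.0  # power-density limit [W/m^2] (an ICNIRP-like general-public level)

d_min = math.sqrt(P_tx * G / (4 * math.pi * S_lim))
print(f"minimum separation distance: {d_min:.2f} m")  # about 0.89 m here
```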
Using the complex scaling and stabilization methods combined with the stochastic variational approach, we have shown that there are narrow resonance states in two-dimensional three-particle systems of electrons and holes interacting via a screened Coulomb interaction. These resonances are loosely bound systems of excited-state excitons with a third particle circling around them. These resonant states might explain and identify the excited-state trions observed in recent experimental studies. | condensed matter |
The Out-of-Distribution (OOD) generalization problem is the problem of seeking a predictor function whose performance in the worst environment is optimal. This paper makes two contributions to the OOD problem. We first use basic results of probability theory to prove the Maximal Invariant Predictor (MIP) condition, a theoretical result that can be used to identify the OOD optimal solution. We then use the MIP to derive the inner-environmental Gradient Alignment (IGA) algorithm, which can be used to help seek the OOD optimal predictor. Previous studies that have investigated the theoretical aspects of the OOD problem use strong structural assumptions such as a causal DAG. However, in cases involving image datasets, for example, the identification of hidden structural relations is itself a difficult problem. Our theoretical results differ from those of many previous studies in that they can be applied to cases in which the underlying structure of a dataset is difficult to analyze. We present an extensive comparison of previous theoretical approaches to the OOD problem based on the assumptions they make. We also present an extension of colored-MNIST that can more accurately represent the pathological OOD situation than the original version, and demonstrate the superiority of IGA over previous methods on both the original and the extended version of colored-MNIST. | statistics |
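A minimal sketch of a gradient-alignment penalty of the kind the abstract describes, assuming the IGA objective penalizes the spread of per-environment risk gradients around their mean; the names (`model`, `envs`, `lam`) and the exact penalty form are illustrative, not the paper's code.

```python
# Sketch of an environment-gradient-alignment loss: mean risk across training
# environments plus the variance of the per-environment risk gradients.
import torch
import torch.nn.functional as F

def gradient_alignment_loss(model, envs, lam=1.0):
    """envs: list of (x, y) batches, one per training environment."""
    risks = [F.cross_entropy(model(x), y) for x, y in envs]
    mean_risk = torch.stack(risks).mean()
    grads = torch.stack([
        torch.cat([g.reshape(-1) for g in torch.autograd.grad(
            r, list(model.parameters()), create_graph=True)])
        for r in risks
    ])                                                  # (num_envs, num_params)
    penalty = ((grads - grads.mean(0)) ** 2).sum(1).mean()  # gradient variance
    return mean_risk + lam * penalty
```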
The entire magnetization process of TlCuCl$_3$ has been experimentally investigated up to 100 T employing the single-turn technique. The upper critical field $H_{c2}$ is observed to be 86.1 T at 2 K. A convex slope of the $M$-$H$ curve between the lower and upper critical fields ($H_{c1}$ and $H_{c2}$) is clearly observed, which indicates that particle-hole symmetry is broken in TlCuCl$_3$. By quantum Monte Carlo simulations and the bond-operator method, we find that the particle-hole symmetry breaking results from strong inter-dimer interactions. | condensed matter |
Combining Bayesian nonparametrics and a forward model selection strategy, we construct parsimonious Bayesian deep networks (PBDNs) that infer capacity-regularized network architectures from the data and require neither cross-validation nor fine-tuning when training the model. One of the two essential components of a PBDN is the development of a special infinitely wide single-hidden-layer neural network, whose number of active hidden units can be inferred from the data. The other is the construction of a greedy layer-wise learning algorithm that uses a forward model selection criterion to determine when to stop adding another hidden layer. We develop both Gibbs sampling and stochastic gradient descent based maximum a posteriori inference for PBDNs, providing state-of-the-art classification accuracy and interpretable data subtypes near the decision boundaries, while maintaining low computational complexity for out-of-sample prediction. | statistics |
We study the occurrence of symmetry-enforced topological band crossings in tetragonal crystals with strong spin-orbit coupling. By computing the momentum dependence of the symmetry eigenvalues and the global band topology in the entire Brillouin zone, we determine all symmetry-enforced band crossings in tetragonal space groups. In particular, we classify all Dirac and Weyl degeneracies on points, lines, and planes, and find a rich variety of topological degeneracies. This includes, among others, double Weyl points, fourfold-double Weyl points, fourfold-quadruple Weyl points, Weyl and Dirac nodal lines, as well as topological nodal planes. For the space groups with symmetry-enforced Weyl points, we determine the minimal number of Weyl points for a given band pair and, remarkably, find that materials in space groups 119 and 120 can have band pairs with only two Weyl points in the entire Brillouin zone. This simplifies the topological responses, which would be useful for device applications. Using the classification of symmetry-enforced band crossings, we perform an extensive database search for candidate materials with tetragonal space groups. Notably, we find that Ba$_5$In$_4$Bi$_5$ and NaSn$_5$ exhibit twofold and fourfold Weyl nodal lines, respectively, which cross the Fermi energy. Hf$_3$Sb and Cs$_2$Tl$_3$ have band pairs with a small number of Weyl points near the Fermi energy. Furthermore, we show that Ba$_3$Sn$_2$ has Weyl points with an accordion dispersion and topological nodal planes, while AuBr and Tl$_4$PbSe$_3$ possess Dirac points with hourglass dispersions. For each of these candidate materials we present the ab-initio band structures and discuss possible experimental signatures of the nontrivial band topology. | condensed matter |
We argue that symmetry and unification can emerge as byproducts of certain physical constraints on dynamical scattering. To accomplish this we parameterize a general Lorentz invariant, four-dimensional theory of massless and massive scalar fields coupled via arbitrary local interactions. Assuming perturbative unitarity and an Adler zero condition, we prove that any finite spectrum of massless and massive modes will necessarily unify at high energies into multiplets of a linearized symmetry. Certain generators of the symmetry algebra can be derived explicitly in terms of the spectrum and three-particle interactions. Furthermore, our assumptions imply that the coset space is symmetric. | high energy physics theory |
The melting kinetics of polycrystalline materials is analyzed on the basis of a new model which explicitly couples homogeneous and heterogeneous melting mechanisms. The distinct feature of this approach lies in its ability to evaluate not only grain-size-distribution effects on the overall melting kinetics but also the competition between the two melting mechanisms. For the first time, we reveal the three-part structure of temperature-time-transformation diagrams for melting of polycrystalline materials, through which it is possible to determine a critical temperature across which the dominant melting mechanism switches. The critical temperature increases as the mean grain diameter decreases, following a negative power law. The results are qualitatively consistent with experimental observations. | physics |
The origin of the interstellar object 1I/'Oumuamua has defied explanation. We perform calculations of the non-gravitational acceleration that would be experienced by bodies composed of a range of different ices and demonstrate that a body composed of N2 ice would satisfy the available constraints on the non-gravitational acceleration, size and albedo, and lack of detectable emission of CO or CO2 or dust. We find that 'Oumuamua was small, with dimensions 45 m x 44 m x 7.5 m at the time of observation at 1.42 au from the Sun, with a high albedo of 0.64. This albedo is consistent with the N2 surfaces of bodies like Pluto and Triton. We estimate 'Oumuamua was ejected about 0.4-0.5 Gyr ago from a young stellar system, possibly in the Perseus arm. Objects like 'Oumuamua may directly probe the surface compositions of a hitherto-unobserved type of exoplanet: "exo-plutos". In a companion paper (Desch & Jackson, 2021) we demonstrate that dynamical instabilities like the one experienced by the Kuiper belt, in other stellar systems, plausibly could generate and eject large numbers of N2 ice fragments. 'Oumuamua may be the first sample of an exoplanet brought to us. | astrophysics |
We present a general systematic formalism for describing dynamics of fluctuations in an arbitrary relativistic hydrodynamic flow, including their feedback (known as long-time hydrodynamic tails). The fluctuations are described by two-point equal-time correlation functions. We introduce a definition of equal time in a situation where the local rest frame is determined by the local flow velocity, and a method of taking derivatives and Wigner transforms of such equal-time correlation functions, which we call confluent. We find that the equations for confluent Wigner functions not only resemble kinetic equations, but that the kinetic equation for phonons propagating on an arbitrary background nontrivially matches the equations for Wigner functions, including relativistic inertial and Coriolis forces due to acceleration and vorticity of the flow. We also describe the procedure of renormalization of short-distance singularities which eliminates cutoff dependence, allowing efficient numerical implementation of these equations. | high energy physics theory |
Molecular simulations have been used to study the heat capacity of metastable liquid water adsorbed on a smooth surface at low temperature, between 100 K and 300 K. These calculations aim at modelling water properties measured by experiments performed on water films adsorbed on Vycor nanoporous silica at low temperature. In particular, the study focuses on the non-monotonous variation of the heat capacity between 100 and 300 K. | condensed matter |
Many proposals have emerged as alternatives to the Heckman selection model, mainly to address the non-robustness of its normal assumption. The 2001 Medical Expenditure Panel Survey data is often used to illustrate this non-robustness of the Heckman model. In this paper, we propose a generalization of the Heckman sample selection model by allowing the sample selection bias and dispersion parameters to depend on covariates. We show that the non-robustness of the Heckman model may be due to the assumption of the constant sample selection bias parameter rather than the normality assumption. Our proposed methodology allows us to understand which covariates are important to explain the sample selection bias phenomenon rather than to only form conclusions about its presence. We explore the inferential aspects of the maximum likelihood estimators (MLEs) for our proposed generalized Heckman model. More specifically, we show that this model satisfies some regularity conditions such that it ensures consistency and asymptotic normality of the MLEs. Proper score residuals for sample selection models are provided, and model adequacy is addressed. Simulated results are presented to check the finite-sample behavior of the estimators and to verify the consequences of not considering varying sample selection bias and dispersion parameters. We show that the normal assumption for analyzing medical expenditure data is suitable and that the conclusions drawn using our approach are coherent with findings from prior literature. Moreover, we identify which covariates are relevant to explain the presence of sample selection bias in this important dataset. | statistics |
We study the optimal design of heterogeneous Coded Elastic Computing (CEC), where machines have varying computation speeds and storage. CEC, introduced by Yang et al. in 2018, is a framework that mitigates the impact of elastic events, where machines can join and leave at arbitrary times. In CEC, data is distributed among machines using a Maximum Distance Separable (MDS) code such that subsets of machines can perform the desired computations. However, state-of-the-art CEC designs only operate on homogeneous networks where machines have the same speeds and storage. This may not be practical. In this work, based on an MDS storage assignment, we develop a novel computation assignment approach for heterogeneous CEC networks to minimize the overall computation time. We first consider the scenario where machines have heterogeneous computing speeds but the same storage, and then the scenario where both heterogeneities are present. We propose a novel combinatorial optimization formulation and solve it exactly by decomposing it into a convex optimization problem for finding the optimal computation load and a "filling problem" for finding the exact computation assignment. A low-complexity "filling algorithm" is adapted and can be completed within a number of iterations that is at most the number of available machines. | computer science |
Atom probe tomography (APT), based on the work of Erwin Mueller, is able to generate three-dimensional chemical maps at atomic resolution. APT instrumentation has evolved over the last 20 years, turning the technique from an experimental approach into an established method of materials analysis. Here, we describe the realization of a new instrument concept that allows the direct attachment of APT to a dual-beam SEM microscope, with the main achievement of fast and direct sample transfer. New operational modes are enabled regarding sample geometry and the alignment of tips and microelectrode. The instrument is optimized to handle cryo-samples at all stages of preparation and storage. The instrument comes with its own software for evaluation and reconstruction. The performance in terms of mass resolution, aperture angle, and detection efficiency is demonstrated with a few application examples. | physics |
We study the RKKY interaction between two magnetic impurities located on the surface of a three-dimensional Dirac semimetal with two Dirac nodes in the band structure. By taking into account both bulk and surface contributions to the exchange interaction between the localized spins, we demonstrate that the surface contribution in general dominates the bulk one at distances larger than the inverse node separation due to a weaker power-law decay. We find a strong anisotropy of the surface term with respect to the spins being aligned along the node separation axis or perpendicular to it. In the many impurity dilute regime, this implies formation of quasi-one-dimensional magnetic stripes orthogonal to the node axis. We also discuss the effects of a surface spin-mixing term coupling electrons from spin-degenerate Fermi arcs. | condensed matter |
Contour integral methods for nonlinear eigenvalue problems seek to compute a subset of the spectrum in a bounded region of the complex plane. We briefly survey this class of algorithms, establishing a relationship to system realization techniques in control theory. This connection motivates a new general framework for contour integral methods (for linear and nonlinear eigenvalue problems), building on recent developments in multi-point rational interpolation of dynamical systems. These new techniques, which replace the usual Hankel matrices with Loewner matrix pencils, incorporate general interpolation schemes and permit ready recovery of eigenvectors. Because the main computations (the solution of linear systems associated with contour integration) are identical for these Loewner methods and the traditional Hankel approach, a variety of new eigenvalue approximations can be explored with modest additional work. Numerical examples illustrate the potential of this approach. We also discuss how the concept of filter functions can be employed in this new framework, and show how contour methods enable a data-driven modal truncation method for model reduction. | mathematics |
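To make the Loewner construction concrete, here is a hedged scalar sketch: given samples of a rational function at "left" and "right" interpolation points, the Loewner and shifted-Loewner matrices form a pencil whose generalized eigenvalues recover the poles (the standard Mayo-Antoulas result underlying this framework). The test function and sample points below are invented.

```python
# Build a Loewner matrix pencil from scalar transfer-function samples and
# recover the poles as generalized eigenvalues of the pencil (Ls, L).
import numpy as np

def loewner_pencil(mu, lam, v, w):
    """mu, v: left points and values; lam, w: right points and values."""
    denom = mu[:, None] - lam[None, :]
    L = (v[:, None] - w[None, :]) / denom                        # Loewner matrix
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / denom
    return L, Ls

f = lambda z: 1 / (z - 2) + 1 / (z + 3)   # rational test function, poles 2, -3
mu = np.array([1j, 2j])                   # left sample points (assumed)
lam = np.array([3j, 4j])                  # right sample points (assumed)
L, Ls = loewner_pencil(mu, lam, f(mu), f(lam))
print(np.linalg.eigvals(np.linalg.solve(L, Ls)))   # approximately [2, -3]
```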
This paper is a continuation of our previous works, in which we study maps from $X_0(N)$, $N \ge 1$, into $\mathbb P^2$ constructed via modular forms of the same weight, together with criteria for such a map to be birational (see [12]). In the present paper our approach is based on the theory of primitive elements in finite separable field extensions. We prove that in most cases the constructed maps are birational, and we consider those for which the resulting equation of the image in $\mathbb P^2$ is as simple as possible. | mathematics |
Due to the large amount of data that point clouds represent and the differences in geometry between successive frames, the generation of motion vectors for an entire point cloud dataset may require a significant amount of time and computational resources. With that in mind, we provide a 3D motion vector database for all frames of two popular dynamic point cloud datasets. The motion vectors were obtained through a translational motion estimation procedure that partitions the point clouds into blocks of dimensions M x M x M and, for each block, estimates a motion vector. Our database contains motion vectors for M = 8 and M = 16. The goal of this work is to describe this publicly available 3D motion vector database, which can be used for different purposes, such as compression of dynamic point clouds. | electrical engineering and systems science |
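The block-based translational search outlined above can be sketched as follows; this is a simplified stand-in for the authors' procedure, and the distance criterion, search window and toy data are assumptions.

```python
# For each M x M x M block of the current frame, pick the integer translation
# that minimizes the mean nearest-neighbour distance to the reference frame.
import numpy as np
from scipy.spatial import cKDTree

def estimate_motion(current, reference, M=8, search=range(-2, 3)):
    tree = cKDTree(reference)
    idx = np.floor(current / M).astype(int)        # block index of every point
    vectors = {}
    for block in np.unique(idx, axis=0):
        pts = current[(idx == block).all(axis=1)]
        best = min(((dx, dy, dz) for dx in search for dy in search
                    for dz in search),
                   key=lambda t: tree.query(pts + np.array(t))[0].mean())
        vectors[tuple(block)] = best
    return vectors

ref = np.random.default_rng(0).random((1000, 3)) * 64   # toy reference frame
cur = ref + np.array([1.0, 0.0, 0.0])                   # shifted current frame
print(list(estimate_motion(cur, ref, M=16).items())[:3])  # mostly (-1, 0, 0)
```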
Remotely sensed, spatially continuous and high spatiotemporal resolution (hereafter referred to as high resolution) land surface temperature (LST) is a key parameter for studying the thermal environment and has important applications in many fields. However, difficult atmospheric conditions, sensor malfunctioning and scanning gaps between orbits frequently introduce spatial discontinuities into satellite-retrieved LST products. For a single sensor, there is also a trade-off between temporal and spatial resolution and, therefore, it is impossible to obtain high temporal and spatial resolution simultaneously. In recent years the reconstruction and spatiotemporal fusion of LST products have become active research topics that aim at overcoming this limitation. They are two of the most investigated approaches in thermal remote sensing and attract increasing attention, which has resulted in a number of different algorithms. However, to the best of our knowledge, no review currently exists that describes and summarizes the available LST reconstruction and spatiotemporal fusion methods and algorithms. This paper introduces the principles and theories behind LST reconstruction and spatiotemporal fusion and provides an overview of the published research and algorithms. We summarize three kinds of reconstruction methods for missing pixels (spatial, temporal and spatiotemporal methods), two kinds of reconstruction methods for cloudy pixels (Satellite Passive Microwave (PMW)-based and Surface Energy Balance (SEB)-based methods) and three kinds of spatiotemporal fusion methods (weighted function-based, unmixing-based and hybrid methods). The review concludes by summarizing validation methods and by identifying some promising future research directions for generating spatially continuous and high resolution LST products. | physics |
Automated brain tumour segmentation has the potential to bring about a massive improvement in disease diagnosis, surgery, monitoring and surveillance. However, this task is extremely challenging. Here, we describe our automated segmentation method using 2D CNNs that are based on U-Net. To deal with class imbalance effectively, we have used a weighted Dice loss function. We found that increasing the depth of the 'U' shape beyond a certain level results in a decrease in performance, so it is essential to choose an optimum depth. We also found that 3D contextual information cannot be captured by a single 2D network that is trained with patches extracted from multiple views, whereas an ensemble of three 2D networks trained in multiple views can effectively capture the information and deliver much better performance. We obtained Dice scores of 0.79 for enhancing tumour, 0.90 for whole tumour, and 0.82 for tumour core on the BraTS 2018 validation set. Our method using 2D networks requires far less time and memory, and is much simpler and easier to implement than the state-of-the-art methods that use 3D networks; still, it manages to achieve comparable performance to those methods. | electrical engineering and systems science |
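A minimal sketch of a class-weighted Dice loss of the kind mentioned above, assuming one-hot targets and per-class weights; the tensor shapes and weighting scheme are illustrative, not the authors' exact implementation.

```python
# Weighted soft Dice loss: per-class Dice scores are combined with class
# weights so that rare classes (e.g. small tumour regions) dominate the loss.
import torch

def weighted_dice_loss(probs, target_onehot, weights, eps=1e-6):
    """probs, target_onehot: (batch, classes, H, W); weights: (classes,)."""
    dims = (0, 2, 3)                          # sum over batch and pixels
    inter = (probs * target_onehot).sum(dims)
    card = probs.sum(dims) + target_onehot.sum(dims)
    dice = (2 * inter + eps) / (card + eps)   # soft Dice score per class
    return 1 - (weights * dice).sum() / weights.sum()
```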
Context. The GRAVITY instrument on the ESO VLTI is setting a new mark in the landscape of optical interferometers. Long exposures are possible for the first time in this wavelength domain, delivering a dramatic improvement for astrophysics. In particular, faint objects can be observed at the angular resolution of the VLTI, with exposures exceeding by many orders of magnitude the coherence time of atmospheric turbulence. Aims. Modern interferometers, especially those that combine light collected by large telescopes such as the Unit Telescopes of the VLT, benefit from partial correction of atmospheric turbulence. We investigate in this paper the influence of atmospheric turbulence on the maximum field of view of interferometers such as GRAVITY, where wavefronts are filtered with single-mode fibres. The basic question is whether the maximum field of view is restricted to the diffraction limit of single apertures or if it is wider in practice. We discuss in particular the case of the field around Sgr A* , with an emphasis on the detectability of the massive main-sequence star S2. Methods. We theoretically investigated the interferometric and photometric lobes of the interferometer and took atmospheric turbulence into account. We computed the lobe functions with and without partial correction for atmospheric turbulence. Results. The main result of the paper is that the field of view of the interferometer is widened by tip-tilt residues if higher orders of atmospheric turbulence are corrected for. As a particular case, the S2 star can be detected in interferometric frames centred on Sgr A* even when the distance between the two objects is up to about twice the diffraction limit of a single aperture. We also show that the visibilities are not biased in this case if the pointing accuracies of the fibres are of the order of 10 mas. | astrophysics |
Sources of entanglement are an enabling resource in quantum technology, and pushing the limits of generation rate and quality of entanglement is a necessary pre-requisite towards practical applications. Here, we present an ultra-bright source of polarization-entangled photon pairs based on time-reversed Hong-Ou-Mandel interference. By superimposing four pair-creation possibilities on a polarization beam splitter, pairs of identical photons are separated into two spatial modes without the usual requirement for wavelength distinguishability or non-collinear emission angles. Our source yields high-fidelity polarization entanglement and high pair-generation rates without any requirement for active interferometric stabilization, which makes it an ideal candidate for a variety of applications, in particular those requiring indistinguishable photons. | quantum physics |
The hull of a linear code over a finite field is the intersection of the code and its dual, a notion introduced by Assmus and Key. In this paper, we develop a method to construct linear codes with trivial hull (LCD codes) and one-dimensional hull by employing the positive characteristic analogues of Gauss sums. These codes are quasi-abelian, and sometimes doubly circulant. Some sufficient conditions for a linear code to be an LCD code (resp. a linear code with one-dimensional hull) are presented. It is worth mentioning that we present a lower bound on the minimum distances of the constructed linear codes. As an application, using these conditions, we obtain some optimal or almost optimal LCD codes (resp. linear codes with one-dimensional hull) with respect to the online database of Grassl. | computer science |
Recommender systems have been extensively used by the entertainment industry, business marketing and the biomedical industry. In addition to their capacity to provide preference-based recommendations as an unsupervised learning methodology, they have also been proven useful in sales forecasting, product introduction and other production-related businesses. Since some consumers and companies need a recommendation or prediction for future budget, labor and supply chain coordination, dynamic recommender systems for precise forecasting have become extremely necessary. In this article, we propose a new recommendation method, namely the dynamic tensor recommender system (DTRS), which aims particularly at forecasting future recommendations. The proposed method utilizes a tensor-valued function of time to integrate time and contextual information, and creates a time-varying coefficient model for temporal tensor factorization through a polynomial spline approximation. Major advantages of the proposed method include competitive future recommendation predictions and effective prediction interval estimations. In theory, we establish the convergence rate of the proposed tensor factorization and the asymptotic normality of the spline coefficient estimator. The proposed method is applied to simulations and IRI marketing data. Numerical studies demonstrate that the proposed method outperforms existing methods in terms of future time forecasting. | statistics |
Variational algorithms for strongly correlated chemical and materials systems are one of the most promising applications of near-term quantum computers. We present an extension to the variational quantum eigensolver that approximates the ground state of a system by solving a generalized eigenvalue problem in a subspace spanned by a collection of parametrized quantum states. This allows for the systematic improvement of a logical wavefunction ansatz without a significant increase in circuit complexity. To minimize the circuit complexity of this approach, we propose a strategy for efficiently measuring the Hamiltonian and overlap matrix elements between states parametrized by circuits that commute with the total particle number operator. We also propose a classical Monte Carlo scheme to estimate the uncertainty in the ground state energy caused by a finite number of measurements of the matrix elements. We explain how this Monte Carlo procedure can be extended to adaptively schedule the required measurements, reducing the number of circuit executions necessary for a given accuracy. We apply these ideas to two model strongly correlated systems, a square configuration of H$_4$ and the $\pi$-system of Hexatriene (C$_6$H$_8$). | quantum physics |
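Once the Hamiltonian and overlap matrix elements have been measured, the subspace step described above reduces to a small classical generalized eigenvalue problem H c = E S c. A hedged sketch with invented 2x2 matrices:

```python
# Solve the generalized eigenvalue problem over a subspace of parametrized
# states; H and S stand in for measured Hamiltonian and overlap elements.
import numpy as np
from scipy.linalg import eigh

H = np.array([[-1.10, -0.45],
              [-0.45, -0.60]])   # H_ij = <psi_i|H|psi_j> (assumed values)
S = np.array([[1.00, 0.30],
              [0.30, 1.00]])     # S_ij = <psi_i|psi_j>   (assumed values)

energies, coeffs = eigh(H, S)    # solves H c = E S c
print("ground-state estimate:", energies[0])
print("subspace coefficients:", coeffs[:, 0])
```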
This paper proposes a novel framework of resource allocation in multi-cell intelligent reflecting surface (IRS) aided non-orthogonal multiple access (NOMA) networks, where an IRS is deployed to enhance the wireless service. The problem of joint user association, subchannel assignment, power allocation, phase shifts design, and decoding order determination is formulated for maximizing the achievable sum rate. The challenging mixed-integer non-linear problem is decomposed into an optimization subproblem (P1) with continuous variables and a matching subproblem (P2) with integer variables. In an effort to tackle the non-convex optimization problem (P1), iterative algorithms are proposed for allocating transmission power, designing the reflection matrix, and determining the decoding order by invoking relaxation methods such as convex upper bound substitution, successive convex approximation, and semidefinite relaxation. In terms of the combinatorial problem (P2), swap matching-based algorithms are developed for achieving a two-sided exchange-stable state among users, base stations (BSs) and subchannels. Numerical results demonstrate that: 1) the sum rate of multi-cell NOMA networks is capable of being increased by 35% with the aid of the IRS; 2) the proposed algorithms for multi-cell IRS-aided NOMA networks can enjoy 22% higher energy efficiency than conventional NOMA counterparts; 3) the trade-off between spectrum efficiency and coverage area can be tuned by judiciously selecting the location of the IRS. | electrical engineering and systems science |
We propose a systematic methodology to derive the regularized thirteen-moment equations in rarefied gas dynamics for a general class of linearized collision models. Detailed expressions of the moment equations are written down for all inverse power law models as well as the hard-sphere model. By linear analysis, we show that the equations are stable near equilibrium. The models are tested on shock structure problems to show their capability to capture the correct flow structure in strong nonequilibrium. | physics |
Ceramic dual-phase oxygen transport membranes with the composition 60 wt.% Ce$_{0.9}$Pr$_{0.1}$O$_{2-\delta}$-40 wt.% Pr$_{0.6}$Sr$_{0.4}$Fe$_{1-x}$Al$_x$O$_{3-\delta}$ (x = 0.05, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0) (60CPO-40PSF$_{1-x}$A$_x$O), based on Al-doped 60Ce$_{0.9}$Pr$_{0.1}$O$_{2-\delta}$-40Pr$_{0.6}$Sr$_{0.4}$FeO$_{3-\delta}$, were successfully synthesized through a modified Pechini method. The crystal structure, surface microtopography and oxygen permeability are investigated systematically. The cell parameters of the perovskite phase first increase and then decrease with increasing Al content, which is related to the radius of Al$^{3+}$ and the formation of an impurity phase. As x ranges from 0.1 to 0.8, the oxygen permeability of the materials first increases and then decreases; the maximum oxygen permeation rate for 60CPO-40PSF$_{1-x}$A$_x$O membranes of 0.4 mm thickness at 1000 $^{\circ}$C is 1.12 mL min$^{-1}$ cm$^{-2}$, reached at x = 0.4. XRD measurements revealed the high-temperature stability and CO$_2$ tolerance of the dual-phase composites. The partial replacement of Fe$^{3+}$/Fe$^{4+}$ by Al$^{3+}$ not only gives the material good stability but also increases the oxygen permeability of the membranes. | physics |
M-theory on a Calabi-Yau threefold admitting a small resolution gives rise to an Abelian vector multiplet and a charged hypermultiplet. We introduce into this picture a procedure to construct threefolds that naturally host matter with electric charges up to six. These are built as families of Du Val ADE surfaces (or ALE spaces), and the possible charges correspond to the Dynkin labels of the adjoint of the ADE algebra. In the case of charge two, we give a new derivation of the answer originally obtained by Curto and Morrison, and explicitly relate this construction to the Morrison-Park geometry. We also give a procedure for constructing higher-charge cases, which can often be applied to F-theory models. | high energy physics theory |
In this work, we present a new large-scale quantum circuit simulator based on the tensor network contraction technique to represent quantum circuits. We propose a novel parallelization algorithm based on step slicing. In this paper, we push the requirement on the size of a quantum computer that will be needed to demonstrate the advantage of quantum computation with the Quantum Approximate Optimization Algorithm (QAOA). We computed 210-qubit QAOA circuits with 1,785 gates on 1,024 nodes of the Cray XC 40 supercomputer Theta. To the best of our knowledge, this constitutes the largest QAOA quantum circuit simulation reported to date. | quantum physics |
Photons produced in the pre-equilibrium/pre-hydro stage of the quark-gluon plasma produced in relativistic heavy-ion collisions were computed using parton distribution functions obtained from solutions of the Boltzmann equation. The effect of the initial gluon momentum anisotropy $\xi$ and the dependence on the saturation momentum $Q_s$ was investigated. We see that a small $Q_s$ results in a photon yield enhancement, whereas a larger $Q_s$ results in a pre-equilibrium photon suppression, owing to the strict constraint of matching the experimental energy density. | high energy physics phenomenology |
We propose to add independent pseudo quantization noise to model parameters during training to approximate the effect of a quantization operator. This method, DiffQ, is differentiable with respect to both the unquantized parameters and the number of bits used. Given a single hyper-parameter expressing the desired balance between the quantized model size and accuracy, DiffQ can optimize the number of bits used per individual weight or groups of weights, in a single training. We experimentally verify that our method outperforms state-of-the-art quantization techniques on several benchmarks and architectures for image classification, language modeling, and audio source separation. For instance, on the Wikitext-103 language modeling benchmark, DiffQ compresses a 16-layer transformer model by a factor of 8, equivalent to 4 bits precision, while losing only 0.5 points of perplexity. Code is available at: https://github.com/facebookresearch/diffq | statistics |
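A hedged sketch of the pseudo quantization noise idea as the abstract describes it, not the actual DiffQ implementation (see the linked repository for that): during training, additive uniform noise at the scale of the quantization step stands in for rounding while keeping the loss differentiable.

```python
# Differentiable proxy for uniform quantization: train-time additive noise
# with the magnitude of the quantization step implied by `bits`.
import torch

def pseudo_quant(weight, bits, training=True):
    span = weight.detach().max() - weight.detach().min()
    step = span / (2 ** bits - 1)                # quantization step size
    if training:
        noise = (torch.rand_like(weight) - 0.5) * step
        return weight + noise                    # differentiable in weight
                                                 # (and in bits if it is a tensor)
    return torch.round(weight / step) * step     # hard quantization at inference
```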
We prove that, for any positive integer $m$, a segment may be partitioned into $m$ possibly degenerate or empty segments with equal values of a continuous function $f$ of a segment, assuming that $f$ may take positive and negative values, but its value on degenerate or empty segments is zero. | mathematics |
Fifth Generation (5G) networks are envisioned to be fully autonomous in accordance with the ETSI-defined Zero touch network and Service Management (ZSM) concept. To this end, purpose-specific Machine Learning (ML) models can be used to manage and control physical as well as virtual network resources in a way that is fully compliant with slice Service Level Agreements (SLAs), while also boosting the revenue of the underlying physical network operator(s). This is because specially designed and trained ML models can be both proactive and very effective against slice management issues that can induce significant SLA penalties or runtime costs. However, reaching that point is very challenging. 5G networks will be highly dynamic and complex, offering a wide range of heterogeneous, sophisticated and resource-demanding 5G services as network slices. This raises a need for a well-defined, generic and step-wise roadmap to designing, building and deploying efficient ML models as collaborative components of what can be defined as Cognitive Network and Slice Management (CNSM) 5G systems. To address this need, we take a use case-driven approach to design and present a novel Integrated Methodology for CNSM in virtualized 5G networks based on a concrete eHealth use case, and elaborate on it to derive a generic approach for 5G slice management use cases. The three fundamental components that comprise our proposed methodology are (i) a 5G Cognitive Workflow model that conditions everything from the design up to the final deployment of ML models; (ii) a four-stage approach to Cognitive Slice Management with an emphasis on anomaly detection; and (iii) a Proactive Control Scheme for the collaboration of different ML models targeting different slice life-cycle management problems. | computer science |
Renormalization of the inverse square potential usually breaks its classical conformal invariance. In a strongly attractive potential, the scaling symmetry is broken to a discrete subgroup while, in a strongly repulsive potential, it is preserved at quantum level. In the intermediate, weak-medium range of the coupling, an anomalous length scale appears due to a flow of the renormalization group away from a critical point. We show that potentials with couplings in the strongly-repulsive and in the weak-medium ranges can be related by a dynamical supersymmetry. Imposing SUSY invariance unifies these two ranges, and fixes the anomalous scale to zero, thus restoring the continuous scaling symmetry. | high energy physics theory |
The quantum-to-classical transition is a fundamental open question at the frontier of physics. Quantum decoherence theory points out that the inevitable interaction with the environment is a sink carrying away quantum coherence, which is responsible for the suppression of quantum superposition in open quantum systems. Recently, quantum Darwinism theory further extended the role of the environment, serving as a communication channel, to explain the classical objectivity emerging in the quantum measurement process. Here, we used a six-photon quantum simulator to investigate classical and quantum information proliferation in the quantum Darwinism process. In the simulation, many environmental photons are scattered from an observed quantum system, and they are collected and used to infer the system's state. We observed redundancy of the system's classical information and suppression of quantum correlations in the fragments of environmental photons. Our results experimentally show that the classical objectivity of a quantum system can be established through the quantum Darwinism mechanism. | quantum physics |
Nitrogen dioxide (NO$_2$) on Earth today has biogenic and anthropogenic sources. During the COVID-19 pandemic, observations of global NO$_2$ emissions have shown a significant decrease in urban areas. Drawing upon this example of NO$_2$ as an industrial byproduct, we use a one-dimensional photochemical model and synthetic spectral generator to assess the detectability of NO$_2$ as an atmospheric technosignature on exoplanets. We consider cases of an Earth-like planet around Sun-like, K-dwarf and M-dwarf stars. We find that NO$_2$ concentrations increase on planets around cooler stars due to fewer short-wavelength photons that can photolyze NO$_2$. In cloud-free results, present Earth-level NO$_2$ on an Earth-like planet around a Sun-like star at 10 pc can be detected with SNR ~5 within ~400 hours with a 15 meter LUVOIR-like telescope when observed in the 0.2-0.7 micron range, where NO$_2$ has strong absorption. However, clouds and aerosols can reduce the detectability and could mimic the NO$_2$ feature. Historically, global NO$_2$ levels were 3x higher, indicating the capability of detecting a 40-year-old Earth-level civilization. Transit and direct imaging observations to detect infrared spectral signatures of NO$_2$ on habitable planets around M-dwarfs would need several hundred hours of observation time, both due to weaker NO$_2$ absorption in this region and because of masking by the dominant H$_2$O and CO$_2$ bands in the infrared part of the spectrum. Non-detection at these levels could be used to place upper limits on the prevalence of NO$_2$ as a technosignature. | astrophysics |
The electronic anomalous Hall effect (AHE), where charge carriers acquire a velocity component orthogonal to an applied electric field, is one of the most fundamental and widely studied phenomena in physics. Several different AHE mechanisms are known, and material examples are highly sought after; however, in the highly conductive (skew-scattering) regime the focus has centered on ferromagnetic metals. Here we report the observation of a giant extrinsic AHE in KV$_3$Sb$_5$, an exfoliable Dirac semimetal with a Kagome layer of vanadium atoms. Although there have been no reports of magnetic ordering down to 0.25 K, the anomalous Hall conductivity (AHC) reaches $\approx$ 15,507 $\Omega^{-1}$cm$^{-1}$ with an anomalous Hall ratio (AHR) of $\approx$ 1.8$\%$, an order of magnitude larger than that of Fe. Defying expectations from skew-scattering theory, KV$_3$Sb$_5$ shows an enhanced skew-scattering effect that scales quadratically, not linearly, with the longitudinal conductivity ($\sigma_{xx}$), opening the possibility of reaching an anomalous Hall angle (AHA) of 90$^{\circ}$ in metals, an effect thought to be reserved for quantum anomalous Hall insulators. This observation raises fundamental questions about the AHE and opens a new frontier for AHE (and correspondingly SHE) exploration, stimulating investigation of a new direction of materials, including metallic geometrically frustrated magnets, spin-liquid candidates, and cluster magnets. | condensed matter |
In this paper we consider unitary highest weight irreducible representations of the `Large' $\mathcal{N}=4$ superconformal algebra $A_\gamma$ in the Ramond sector as infinite-dimensional graded modules of its zero mode subalgebra, $\mathfrak{su}(2|2)$. We describe how representations of $\mathfrak{su}(2|2)$ may be classified using Young supertableaux, and use the decomposition of $A_\gamma$ as an $\mathfrak{su}(2|2)$ module to discuss the states which contribute to the supersymmetric index $I_1$, previously proposed in the literature by Gukov, Martinec, Moore and Strominger. | high energy physics theory |
The missing data issue is ubiquitous in health studies. Variable selection in the presence of both missing covariates and outcomes is an important statistical research topic, but it has been less studied. Existing literature focuses on parametric regression techniques that provide direct parameter estimates of the regression model. In practice, parametric regression models are often sub-optimal for variable selection because they are susceptible to misspecification. Machine learning methods considerably weaken the parametric assumptions and increase modeling flexibility, but do not provide as naturally defined a variable importance measure as the covariate effects native to parametric models. We investigate a general variable selection approach when both the covariates and outcomes can be missing at random and have general missing data patterns. This approach exploits the flexibility of machine learning modeling techniques and bootstrap imputation, which is amenable to nonparametric methods in which the covariate effects are not directly available. We conduct extensive simulations investigating the practical operating characteristics of the proposed variable selection approach, when combined with four tree-based machine learning methods, XGBoost, Random Forests, Bayesian Additive Regression Trees (BART) and Conditional Random Forests, and two commonly used parametric methods, lasso and backward stepwise selection. Numeric results show that XGBoost and BART have the overall best performance across various settings. Guidance for choosing methods appropriate to the structure of the analysis data at hand is discussed. We further demonstrate the methods via a case study of risk factors for the 3-year incidence of metabolic syndrome with data from the Study of Women's Health Across the Nation. | statistics |
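As a rough sketch of how bootstrap imputation can be combined with a tree-based learner for variable selection (the selection rule, imputer and threshold below are assumptions, not the paper's exact procedure):

```python
# Bootstrap-impute-fit loop: a covariate is selected if it looks important
# in at least a given fraction of bootstrap replicates.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

def bootstrap_select(X, y, n_boot=50, keep_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    hits = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # bootstrap resample
        Xb = IterativeImputer(random_state=0).fit_transform(X[idx])
        yb = y[idx]
        obs = ~np.isnan(yb)                            # outcome may be missing
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(Xb[obs], yb[obs])
        hits += model.feature_importances_ > 1.0 / p   # crude importance cut
    return np.where(hits / n_boot >= keep_frac)[0]     # selected covariates
```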
We present MeerKAT 1.28 GHz total-intensity, polarization, and spectral-index images covering the giant (projected length $l \approx 1.57$~Mpc) X-shaped radio source PKS~2014$-$55 with an unprecedented combination of brightness sensitivity and angular resolution. They show the clear "double boomerang" morphology of hydrodynamical backflows from the straight main jets deflected by the large and oblique hot-gas halo of the host galaxy PGC~064440. The magnetic field orientation in PKS~2014$-$55 follows the flow lines from the jets through the secondary wings. The radio source is embedded in faint ($T_\mathrm{b} \approx 0.5 \mathrm{\,K}$) cocoons having the uniform brightness temperature and sharp outer edges characteristic of subsonic expansion into the ambient intra-group medium. The position angle of the much smaller ($l \sim 25$~kpc) restarted central source is within $5^\circ$ of the main jets, ruling out models that invoke jet re-orientation or two independent jets. Compression and turbulence in the backflows probably produce the irregular and low polarization bright region behind the apex of each boomerang as well as several features in the flow with bright heads and dark tails. | astrophysics |
Diffuse filaments connect galaxy clusters to form the cosmic web. Detecting these filaments could yield information on the magnetic field strength, cosmic ray population and temperature of intercluster gas; yet the faint and large-scale nature of these bridges makes direct detections very challenging. Using multiple independent all-sky radio and X-ray maps, we stack pairs of luminous red galaxies as tracers for cluster pairs. For the first time, we detect an average surface brightness between the clusters from synchrotron (radio) and thermal (X-ray) emission with $\gtrsim 5\sigma$ significance, on physical scales larger than observed to date ($\geq 3\,$Mpc). We obtain a synchrotron spectral index of $\alpha \simeq -1.0$ and estimates of the average magnetic field strength of $30 \leq B \leq 60\,$nG, derived from both equipartition and Inverse Compton arguments, implying a 5 to 15$\,$per cent degree of field regularity when compared with Faraday rotation measure estimates. While the X-ray detection is in line with predictions, the average radio signal comes out higher than predicted by cosmological simulations and dark matter annihilation and decay models. This discovery demonstrates that there are connective structures between mass concentrations that are significantly magnetised, and the presence of sufficient cosmic rays to produce detectable synchrotron radiation. | astrophysics |
We propose an RF deflector in the THz regime to measure the bunch length of ultrashort GeV-scale electron beams using a dielectric-lined circular waveguide (DLW) structure. We show the design of the deflector and the possible resolution at the attosecond scale with a reasonable THz input pulse energy. We investigate the effect of the short-range wakefield in the DLW on the time resolution using an analytical model based on the eigenmode calculation, and show the scaling law in terms of the beam size and the bunch length. We find that the ideal resolution of the deflector can reach order ${\cal O}(100)$ as with a pulse energy of several mJ and a negligible wakefield effect. Example calculations are given for structures with vacuum hole radii of 0.5, 0.4, and 0.3 mm, a dielectric constant of 3.75, and operating frequencies of 0.2, 0.4, and 0.6 THz, respectively. | physics |
In this paper we develop the concept of information transfer between the Borel-measurable sets for a dynamical system described by a measurable space and a non-singular transformation. The concept is based on how Shannon entropy is transferred between the measurable sets as the dynamical system evolves. We show that the proposed definition of information transfer satisfies the usual notions of information transfer and causality, namely, zero transfer and transfer asymmetry. Furthermore, we show how the information transfer measure can be used to classify ergodicity and mixing. We also develop computational methods for information transfer and apply the framework to the optimal placement of actuators and sensors for control of non-equilibrium dynamics. | electrical engineering and systems science |
Partially Fe-filled multi-walled carbon nanotubes (MWCNTs) were grown by chemical vapor deposition with propane at 850 $^{\circ}$C using a simple mixture of iron(III) acetylacetonate (Fe(acac)$_3$) powder and conventional photoresist. Scanning electron microscopy revealed that catalytic nanoparticles with an average diameter of 70 nm are formed on the Si substrate, which governs the diameter of the MWCNTs. Transmission electron microscopy shows that the nanotubes have a multi-walled structure with partial Fe filling. Site-selective growth of partially Fe-filled MWCNTs is achieved by a simple photolithographic route. | physics |
Probability samples are the preferred method for providing inferences that are generalizable to a larger population. However, when a small (or rare) subpopulation is the group of interest, this approach is unlikely to yield a sample size large enough to produce precise inferences. Non-probability (or convenience) sampling often provides the necessary sample size to yield efficient estimates, but selection bias may compromise the generalizability of results to the broader population. Motivating the exposition is a survey of military caregivers; our interest is focused on unpaid caregivers of wounded, ill, or injured servicemembers and veterans who served in the US armed forces following September 11, 2001. An extensive probability sampling effort yielded only 72 caregivers from this subpopulation. Therefore, we consider supplementing the probability sample with a convenience sample from the same subpopulation, and we develop novel methods of statistical weighting that may be used to combine (or blend) the samples. Our analyses show that the subpopulation of interest endures greater hardships than caregivers of veterans with earlier dates of service, and these conclusions are discernibly stronger when blended samples with the proposed weighting schemes are used. We conclude with simulation studies that illustrate the efficacy of the proposed techniques, examine the bias-variance trade-off encountered when using inadequately blended data, and show that the gain in precision provided by the convenience sample is lower in circumstances where the outcome is strongly related to the auxiliary variables used for blending. | statistics |
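One standard way to blend such samples, sketched here under loud assumptions: this is generic propensity "pseudo-weighting" in the spirit of Elliott and Valliant, not the paper's proposed schemes, and all variable names are illustrative.

```python
# Fit a propensity model for membership in the convenience sample and assign
# inverse-odds pseudo-weights to its cases so they mimic the target population.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_weights(X_prob, w_prob, X_conv):
    """X_prob/w_prob: covariates and design weights of the probability sample;
    X_conv: covariates of the convenience sample."""
    X = np.vstack([X_prob, X_conv])
    z = np.r_[np.zeros(len(X_prob)), np.ones(len(X_conv))]   # 1 = convenience
    sw = np.r_[w_prob, np.ones(len(X_conv))]                 # design-weighted fit
    p = LogisticRegression().fit(X, z, sample_weight=sw).predict_proba(X_conv)[:, 1]
    return (1 - p) / p    # inverse-odds weights for the convenience cases
```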
Non-equilibrium states of quantum systems in contact with thermal baths help tell apart environments with different temperatures or different statistics. We extend these studies to a more generic problem that consists in discriminating between two baths with disparate constituents at unequal temperatures. Notably, there exist temperature regimes in which the presence of coherence in the initial state preparation is beneficial for the discrimination capability. We also find that non-equilibrium states are not universally optimal, and detail the conditions in which it becomes convenient to wait for complete thermalisation of the probe. These concepts are illustrated in a linear optical simulation. | quantum physics |
Recent work on overfitting Bayesian mixtures of distributions offers a powerful framework for clustering multivariate data using a latent Gaussian model which resembles the factor analysis model. The flexibility provided by overfitting mixture models yields a simple and efficient way to estimate the unknown number of clusters and model parameters by Markov chain Monte Carlo (MCMC) sampling. The present study extends this approach by considering a set of eight parameterizations, giving rise to parsimonious representations of the covariance matrix per cluster. A Gibbs sampler combined with a prior parallel tempering scheme is implemented in order to approximately sample from the posterior distribution of the overfitting mixture. The parameterization and number of factors are selected according to the Bayesian Information Criterion. Identifiability issues related to label switching are dealt with by post-processing the simulated output with the Equivalence Classes Representatives algorithm. The contributed method and software are demonstrated and compared to similar models estimated using the Expectation-Maximization algorithm on simulated and real datasets. The software is available online at https://CRAN.R-project.org/package=fabMix. | statistics |
We study if eternal inflation is realized while satisfying the recently proposed string Swampland criteria concerning the range of scalar field excursion, $|\Delta \phi| < \mathcal{D} \cdot M_{\rm P}$, and the potential gradient, $|\nabla V| > c \cdot V/M_{\rm P}$, where $\mathcal{D}$ and $c$ are constants of order unity, and $M_{\rm P}$ is the reduced Planck mass. We find that only the eternal inflation of chaotic type is possible for $c \sim {\cal O}(0.01)$ and $1/\mathcal{D} \sim {\cal O}(0.01)$, and that the Hubble parameter during the eternal inflation is parametrically close to the Planck scale, and is in the range of $2 \pi c \lesssim H_{\rm inf}/M_{\rm P} < 1/\sqrt{3}$. | high energy physics theory |
The purpose of this letter is to explore the relation between gauge fields, which are at the base of our understanding of fundamental interactions, and the quantum entanglement. To this end, we investigate the case of ${\rm SU}(2)$ gauge fields. It is first argued that holonomies of the ${\rm SU}(2)$ gauge fields are naturally associated with maximally entangled two-particle states. Then, we provide some evidence that the notion of such gauge fields can be deduced from the transformation properties of maximally entangled two-particle states. This new insight unveils a possible relation between gauge fields and spin systems, as well as contributes to understanding of the relation between tensor networks (such as MERA) and spin network states considered in loop quantum gravity. In consequence, our results turn out to be relevant in the context of the emerging Entanglement/Gravity duality. | high energy physics theory |
High-concentration episodes of NO$_2$ are increasingly dealt with by authorities through traffic restrictions, which are activated when air quality deteriorates beyond certain thresholds. Foreseeing the probability that pollutant concentrations reach those thresholds thus becomes a necessity. Probabilistic forecasting is a family of techniques that allow for the prediction of the expected distribution function instead of a single value. In the case of NO$_2$, it allows for the calculation of future chances of exceeding thresholds and for the detection of pollution peaks. We thoroughly compared 10 state-of-the-art probabilistic predictive models, using them to predict the distribution of NO$_2$ concentrations in an urban location for a set of forecasting horizons (up to 60 hours). Quantile gradient boosted trees show the best performance, yielding the best results for both the expected value and the full forecast distribution. Furthermore, we show how this approach can be used to detect pollution peaks. | statistics |
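A minimal illustration of the winning approach, quantile gradient boosting: one model per quantile approximates the predictive distribution, from which threshold-exceedance probabilities can be read off. The synthetic data and quantile grid are assumptions.

```python
# Probabilistic forecasting with per-quantile gradient boosted trees.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 24, size=(500, 1))                    # hour-of-day feature
y = 40 + 10 * np.sin(X[:, 0] / 24 * 2 * np.pi) + rng.normal(0, 5, 500)

quantiles = [0.05, 0.5, 0.95]
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
          for q in quantiles}

x_new = np.array([[18.0]])
print({q: round(float(m.predict(x_new)[0]), 1) for q, m in models.items()})
# P(concentration > threshold) follows from where the threshold falls among
# the estimated quantiles.
```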
We apply modern multiloop methods to the calculation of the total cross sections of electron-positron annihilation to 2 and 3 photons, exactly in $s/m^2$, with accuracy $O(\alpha^3)$. Examining the asymptotics of our results, we find agreement with Ref. [Andreassi_et_al_1962] and discover mistakes in the results of Refs. [Eidelman&Kuraev_1978, Berends&Kleiss_1981]. These mistakes are due to terms omitted in the differential cross section in Refs. [Eidelman&Kuraev_1978, Berends&Kleiss_1981], which are peaked in the kinematic region where all three photons are quasi-parallel to the collision axis. After restoring these terms, we find agreement of the corrected result of Ref. [Berends&Kleiss_1981] with our result. | high energy physics phenomenology |
Beam and trap methods find incompatible results for the lifetime of the neutron: the former delivers a value which is about $8.7\pm2.1$ s longer than the latter. Very recently (1906.10024) it has been proposed that the inverse Zeno effect (IZE) could be responsible for the shorter lifetime in trap experiments. Here, we compare two different models of measurement, one based on bang-bang measurements and one on a continuous measurement: the IZE turns out to be very similar in both cases, showing that the results do not depend on the details of the measurement process. | high energy physics phenomenology |
We prove a lower bound for the modulus of the amplitude for a two-body process at large scattering angle. This is based on the interplay of the analyticity of the amplitude and the positivity properties of its absorptive part. The assumptions are minimal, namely those of local quantum field theory (in the case when dispersion relations hold). In Appendix A, lower bounds for the forward particle-particle and particle-antiparticle amplitudes are obtained. This is of independent interest. | high energy physics theory |
We consider covert communication, i.e., hiding the presence of communication from an adversary, for multiple-input multiple-output (MIMO) additive white Gaussian noise (AWGN) channels. We characterize the maximum covert coding rate under a variety of settings, including different regimes where either the number of transmit antennas or the blocklength is scaled up. We show that a non-zero covert capacity can be achieved in the massive MIMO regime, in which the number of transmit antennas scales up, but under specific conditions. Under such conditions, we show that the covert capacity of MIMO AWGN channels converges to the capacity of MIMO AWGN channels. Furthermore, we derive the order-optimal scaling of the number of covert bits in the regime where the covert capacity is zero. We provide an insightful comparative analysis of different cases in which secrecy and energy-undetectability constraints are imposed separately or jointly. | electrical engineering and systems science |
An excess of low-energy electronic recoil events over known backgrounds was recently observed in the XENON1T detector, where $285$ events are observed compared to an expected $232 \pm 15$ events from the background-only fit to the data in the energy range 1-7 keV. This could be due to the beta decay of an unexpected tritium component, or possibly to new physics. One plausible new physics explanation for the excess is absorption of hidden photon dark matter relics with mass around $2.8$ keV and kinetic mixing of about $10^{-15}$, which can also explain cooling excesses in horizontal-branch (HB) stars. Such small gauge boson masses and couplings can naturally arise from type-IIB low scale string theory. We provide a fit of the XENON1T excess in terms of a minimal low scale type-IIB string theory parameter space and present some benchmark points which provide a good fit to the data. It is also demonstrated how the required transformation properties of the massless spectrum are obtained in intersecting D-brane models. | high energy physics phenomenology |
We present simple and predictive realizations of neutrino masses in theories based on the $SU(6)$ grand unifying group. At the level of the lowest-dimension operators, this class of models predicts a skew-symmetric flavor structure for the Dirac mass term of the neutrinos. In the case that neutrinos are Dirac particles, the lowest-order prediction of this construction is then one massless neutrino and two degenerate massive neutrinos. Higher-dimensional operators suppressed by the Planck scale perturb this spectrum, allowing a good fit to the observed neutrino mass matrix. A firm prediction of this construction is an inverted neutrino mass spectrum with the lightest neutrino hierarchically lighter than the other two, so that the sum of neutrino masses lies close to the lower bound for an inverted hierarchy. In the alternate case that neutrinos are Majorana particles, the mass spectrum can be either normal or inverted. However, the lightest neutrino is once again hierarchically lighter than the other two, so that the sum of neutrino masses is predicted to lie close to the corresponding lower bound for the normal or inverted hierarchy. Near future cosmological measurements will be able to test the predictions of this scenario for the sum of neutrino masses. In the case of Majorana neutrinos that exhibit an inverted hierarchy, future neutrinoless double beta experiments can provide a complementary probe. | high energy physics phenomenology |
Horava gravity breaks Lorentz symmetry by introducing a dynamical timelike scalar field (the khronon), which can be used as a preferred time coordinate (thus selecting a preferred space-time foliation). Adopting the khronon as the time coordinate, the theory is invariant only under time reparametrizations and spatial diffeomorphisms. In the infrared limit, this theory is sometimes referred to as khronometric theory. Here, we explicitly construct a generalization of khronometric theory, which avoids the propagation of Ostrogradski modes as a result of a suitable degeneracy condition (although stability of the latter under radiative corrections remains an open question). While this new theory does not have a general-relativistic limit and does not yield a Friedmann-Robertson-Walker-like cosmology on large scales, it still passes, for suitable choices of its coupling constants, local tests on Earth and in the solar system, as well as gravitational-wave tests. We also comment on the possible usefulness of this theory as a toy model of quantum gravity, as it could be completed in the ultraviolet into a 'degenerate Horava gravity' theory that could be perturbatively renormalizable without imposing any projectability condition. | high energy physics theory |
Minijets provide useful information on parton interactions in the low transverse-momentum (low-$p_T$) region. Because minijets produce clusters, we study the clustering properties of produced particles in high-energy $pp$ collisions as a first step to identify minijets. We develop an algorithm to find clusters by using the k-means clustering method, in conjunction with a k-number (cluster number) selection principle, in the space of pseudorapidity and azimuthal angle. We test the clustering algorithm using events generated by PYTHIA 8.1 for $pp$ collisions at $\sqrt{s}=200$ GeV. We find that clustering of low-$p_T$ hadrons occurs in high-multiplicity events. However, similar clustering properties are also present for particles produced randomly in a finite pseudorapidity and azimuthal-angle space. To distinguish the dynamics from random generations of events, it is necessary to examine the correlations between particles and between clusters. We find that the correlations between clusters may provide a useful tool to distinguish the underlying dynamics of the reaction mechanism. | high energy physics phenomenology
We simulate the formation and evolution of Oort clouds around the 200 nearest stars (within 16 pc according to the Gaia DR2 database). This study is performed by numerically integrating the planets and minor bodies in orbit around the parent star and in the Galactic potential. The calculations start 1 Gyr ago and continue for 100 Myr into the future. In this time frame, we simulate how asteroids (and planets) are ejected from the star's vicinity and settle in an Oort cloud, and how they escape the local stellar gravity to form tidal streams. A fraction of 0.0098 to 0.026 of the asteroids remains bound to their parent star. The orbits of these asteroids isotropize and circularize due to the influence of the Galactic tidal field, eventually forming an Oort cloud between $10^4$ and $2\times10^5$ au. We estimate that 6% of the nearby stars may have a planet in their Oort cloud. The majority of asteroids (and some of the planets) become unbound from the parent star and become free floating in the Galactic potential. These soli lapides remain in a similar orbit around the Galactic center as their host star, forming dense streams of rogue interstellar asteroids and planets. The Solar system occasionally passes through such tidal streams, potentially giving rise to occasional close encounters with objects in these streams. The two recently discovered objects, 1I/2017 U1 ('Oumuamua) and 2I/2019 Q4 (Borisov), may be such objects. Although the direction from which an individual solus lapis originated cannot easily be traced back to the original host, multiple such objects coming from the same source might help to identify their origin. At the moment the Solar system is in the bow or wake of the tidal streams of 10 of the nearby stars, which might contribute considerably to the interaction rate. (abridged) | astrophysics
This paper deals with some qualitative properties of entropy solutions to hyperbolic conservation laws. In [11] the jump set of entropy solutions to conservation laws was introduced. We find an entropy solution to a scalar conservation law for which the jump set is not closed; in particular, it is dense in a space-time domain. In the later part of this article, we obtain a similar result for hyperbolic systems. We give two different approaches, for scalar conservation laws and for hyperbolic systems, to obtain the results. For the scalar case, the obtained solutions are calculated more explicitly. | mathematics
Despite theoretical predictions for a Cherenkov-type radiation of spin waves (magnons) by various propagating magnetic perturbations, fast-enough moving magnetic field stimuli have not been available so far. Here, we experimentally realize the Cherenkov radiation of spin waves in a Co-Fe magnonic conduit by fast-moving (>1 km/s) magnetic flux quanta (Abrikosov vortices) in an adjacent Nb-C superconducting strip. The radiation is evidenced by the microwave detection of spin waves propagating a distance of 2 micrometers from the superconductor and it is accompanied by a magnon Shapiro step in its current-voltage curve. The spin-wave excitation is unidirectional and monochromatic, with sub-40 nm wavelengths determined by the period of the vortex lattice. The phase-locking of the vortex lattice with the excited spin wave limits the vortex velocity and reduces the dissipation in the superconductor. | condensed matter |
We study the vacuum interaction of a scalar field and two concentric spheres defined by a singular potential on their surfaces. The potential is a linear combination of the Dirac-$\delta$ and its derivative. The presence of the delta prime term in the potential causes it to behave differently when seen from the inside or from the outside of the sphere. We study different cases for positive and negative values of the delta prime coupling, keeping the coupling of the delta positive. As a consequence, we find regions in the space of couplings where the energy is positive, negative, or zero. Moreover, the sign of the $\delta'$ couplings causes different behavior of the Casimir energy for different values of the radii. This potential gives rise to general boundary conditions, with limiting cases defining Dirichlet and Robin boundary conditions, which allows us to simulate purely electric or purely magnetic spheres. | high energy physics theory
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP). Efficient exploration in this problem requires the agent to identify the regions in which estimating the model is more difficult and then exploit this knowledge to collect more samples there. In this paper, we formalize this problem, introduce the first algorithm to learn an $\epsilon$-accurate estimate of the dynamics, and provide its sample complexity analysis. While this algorithm enjoys strong guarantees in the large-sample regime, it tends to perform poorly in the early stages of exploration. To address this issue, we propose an algorithm based on maximum weighted entropy, a heuristic that stems from common sense and our theoretical analysis. The main idea here is to cover the entire state-action space with weight proportional to the noise in the transitions. Using a number of simple domains with heterogeneous noise in their transitions, we show that our heuristic-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime, while achieving similar asymptotic performance as that of the original algorithm. | statistics
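The weighted-entropy objective described in the abstract above can be written down compactly; the sketch below evaluates one plausible form of it on a toy MDP. The noise estimates and the exact functional form are illustrative assumptions, not the paper's precise definition.

```python
# Sketch of a weighted-entropy exploration objective: cover the state-action
# space with weight proportional to the (estimated) transition noise.
import numpy as np

def weighted_entropy(visitation, noise):
    """H_w(d) = -sum_{s,a} w(s,a) d(s,a) log d(s,a), with w ~ noise."""
    w = noise / noise.sum()
    d = np.clip(visitation, 1e-12, None)
    return -(w * d * np.log(d)).sum()

# Toy MDP with 3 states and 2 actions; 'noise' stands in for a per-(s,a)
# estimate of transition stochasticity (an assumption for this demo).
noise = np.array([[0.1, 0.5], [1.0, 0.2], [0.05, 0.8]])
uniform = np.full((3, 2), 1.0 / 6.0)
skewed = np.array([[0.05, 0.25], [0.35, 0.05], [0.05, 0.25]])
print(weighted_entropy(uniform, noise), weighted_entropy(skewed, noise))
```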
Fragmentation methods applied to multireference wave functions constitute a road towards the application of highly accurate ab initio wave function calculations to large molecules and solids. However, it is important for reproducibility and transferability that a fragmentation scheme be well-defined with minimal dependence on initial orbital guesses or user-designed ad hoc fragmentation schemes. One way to improve this sort of robustness is to ensure the energy obeys a variational principle; i.e., that the active orbitals and active space wave functions minimize the electronic energy in a certain ansatz for the molecular wave function. We extended the theory of the localized active space self-consistent field, LASSCF, method (JCTC 2019, 15, 972) to fully minimize the energy with respect to all orbital rotations, rendering it truly variational. The new method, called vLASSCF, substantially improves the robustness and reproducibility of the LAS wave function compared to LASSCF. We analyze the storage and operation cost scaling of vLASSCF compared to orbital optimization using a standard CASSCF approach and we show results of vLASSCF calculations on some simple test systems. We show that vLASSCF is energetically equivalent to CASSCF in the limit of one active subspace, and that vLASSCF significantly improves upon the reliability of LASSCF energy differences, allowing for more meaningful and subtle analysis of potential energy curves of dissociating molecules. We also show that all forms of LASSCF have a lower operation cost scaling than the orbital-optimization part of CASSCF. | physics |
We introduce Ensemble Rejection Sampling, a scheme for exact simulation from the posterior distribution of the latent states of a class of non-linear non-Gaussian state-space models. Ensemble Rejection Sampling relies on a proposal for the high-dimensional state sequence built using ensembles of state samples. Although this algorithm can be interpreted as a rejection sampling scheme acting on an extended space, we show under regularity conditions that the expected computational cost to obtain an exact sample increases cubically with the length of the state sequence instead of exponentially for standard rejection sampling. We demonstrate this methodology by sampling exactly state sequences according to the posterior distribution of a stochastic volatility model and a non-linear autoregressive process. We also present an application to rare event simulation. | statistics |
Employing the finite element and computational fluid dynamics methods, we have determined the conditions for the fragmentation of space bodies or preservation of their integrity when they penetrate into the Earth's atmosphere. The origin of forces contributing to the fragmentation of space iron bodies during the passage through the dense layers of the planetary atmosphere has been studied. It was shown that the irregular shape of the surface can produce transverse aerodynamic forces capable of causing deformation stress in the body exceeding the tensile strength threshold of iron. | astrophysics |
We study the worldsheet scattering theory of the $\eta$ deformation of the AdS$_\mathsf{5} \times $S$^\mathsf{5}$ superstring corresponding to the purely fermionic Dynkin diagram. This theory is a Weyl-invariant integrable deformation of the AdS$_\mathsf{5} \times $S$^\mathsf{5}$ superstring, with trigonometric quantum-deformed symmetry. We compute the two-body worldsheet S matrix of this string in the light-cone gauge at tree level to quadratic order in fermions. The result factorizes into two elementary blocks, and solves the classical Yang-Baxter equation. We also determine the corresponding exact factorized S matrix, and show that its perturbative expansion matches our tree-level results, once we correctly identify the deformed light-cone symmetry algebra of the string. Finally, we briefly revisit the computation of the corresponding S matrix for the $\eta$ deformation based on the distinguished Dynkin diagram, finding a tree-level S matrix that factorizes and solves the classical Yang-Baxter equation, in contrast to previous results. | high energy physics theory |
In this work we formulate the satisfaction of a (syntactically co-safe) linear temporal logic specification on a physical plant through a recent hybrid dynamical systems formalism. In order to solve this problem, we introduce an extension to such a hybrid system framework of the so-called eventuality property, which suitably matches the condition for the satisfaction of such a temporal logic specification. The eventuality property can be established through barrier certificates, which we derive for the considered hybrid system framework. Using a hybrid barrier certificate, we propose a solution to the original problem. Simulations illustrate the effectiveness of the proposed method. | electrical engineering and systems science
Strong correlations between the signal and idler beams imprinted during their generation dominantly determine the properties of twin beams. They are also responsible for the waves in intensity coherence observed in the wave-vector space of a twin beam propagating in a nonlinear crystal in the regime with pump depletion. These waves start to develop at a certain twin-beam intensity and move from the signal and idler beam centers towards their tails. They manifest themselves via a change of the coherence volume monitored in the far field by the measurement of the local modified $ \bar{g}^{(2)} $ function, which acts as a sensitive and stable tool for investigating field intensity coherence. | quantum physics
Standard power systems are modeled using differential-algebraic equations (DAE). Following a transient event, voltage collapse can occur as a bifurcation of the transient load flow solutions which is marked by the system trajectory reaching a singular surface in state space where the voltage causality is lost. If the system is under such a risk, preventive control decisions such as changes in AVR setpoints need to be taken to enhance the stability. In this regard, the knowledge of sensitivity of critical clearing time (CCT) to controllable system parameters can be of great help. The stability boundary of DAE systems is more complicated than ODE systems where in addition to stable manifolds of unstable equilibrium points (UEP) and periodic orbits, singular surfaces play an important role. In the present work, we derive the expressions for CCT sensitivity for a generic DAE model using trajectory sensitivities with applications to power system transient stability analysis (TSA) and preventive control. The results are illustrated for multiple test systems which are then validated against computationally intensive time-domain simulations (TDS). | electrical engineering and systems science |
Active learning aims to reduce labeling costs by selecting only the most informative samples in a dataset. Few existing works have addressed active learning for object detection. Most of these methods are based on multiple models or are straightforward extensions of classification methods, and hence estimate an image's informativeness using only the classification head. In this paper, we propose a novel deep active learning approach for object detection. Our approach relies on mixture density networks that estimate a probabilistic distribution for the output of each localization and classification head. We explicitly estimate the aleatoric and epistemic uncertainty in a single forward pass of a single model. Our method uses a scoring function that aggregates these two types of uncertainties for both heads to obtain every image's informativeness score. We demonstrate the efficacy of our approach on the PASCAL VOC and MS-COCO datasets. Our approach outperforms single-model based methods and performs on par with multi-model based methods at a fraction of the computing cost. | computer science
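One common way to split a mixture density head's output into aleatoric and epistemic parts is the law of total variance; the sketch below applies it to a single predicted coordinate. The additive aggregation into a final score is an assumption here, as the paper's exact scoring function is not reproduced.

```python
# Sketch: decomposing a mixture density output (weights pi, means mu,
# variances var) into aleatoric and epistemic uncertainty.
import numpy as np

def mdn_uncertainties(pi, mu, var):
    mean = np.sum(pi * mu)
    aleatoric = np.sum(pi * var)               # expected within-component variance
    epistemic = np.sum(pi * (mu - mean) ** 2)  # spread of the component means
    return aleatoric, epistemic

pi = np.array([0.6, 0.4])
mu = np.array([0.2, 0.9])    # two hypotheses for, say, a box coordinate
var = np.array([0.01, 0.05])
alea, epis = mdn_uncertainties(pi, mu, var)
print(alea, epis, alea + epis)  # simple additive informativeness score (assumed)
```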
Mechanical systems facilitate the development of a new generation of hybrid quantum technology comprising electrical, optical, atomic and acoustic degrees of freedom. Entanglement is the essential resource that defines this new paradigm of quantum enabled devices. Continuous variable (CV) entangled fields, known as Einstein-Podolsky-Rosen (EPR) states, are spatially separated two-mode squeezed states that can be used to implement quantum teleportation and quantum communication. In the optical domain, EPR states are typically generated using nondegenerate optical amplifiers, and at microwave frequencies Josephson circuits can serve as a nonlinear medium. It is an outstanding goal to deterministically generate and distribute entangled states with a mechanical oscillator. Here we observe stationary emission of path-entangled microwave radiation from a parametrically driven 30 micrometer long silicon nanostring oscillator, squeezing the joint field operators of two thermal modes by 3.40(37) dB below the vacuum level. This mechanical system correlates up to 50 photons/s/Hz, giving rise to a quantum discord that is robust with respect to microwave noise. Such generalized quantum correlations of separable states are important for quantum enhanced detection and provide direct evidence for the non-classical nature of the mechanical oscillator without directly measuring its state. This noninvasive measurement scheme allows one to infer information about otherwise inaccessible objects with potential implications in sensing, open system dynamics and fundamental tests of quantum gravity. In the near future, similar on-chip devices can be used to entangle subsystems on vastly different energy scales such as microwave and optical photons. | quantum physics
Coupled-channel dynamics for scattering and production processes in partial-wave amplitudes is discussed from a perspective that emphasizes unitarity and analyticity. We elaborate on several methods that have led to important results in hadron physics, either by themselves or in conjunction with effective field theory. We also develop, and compare with, the use of the Lippmann-Schwinger equation in near-threshold scattering. The final(initial)-state interactions are discussed in detail for the elastic and coupled-channel cases. Emphasis has been put on the derivation and discussion of the methods presented, with some applications examined as important examples of their usage. | high energy physics phenomenology
Noble metals with well-defined crystallographic orientation constitute an appealing class of materials for controlling light-matter interactions on the nanoscale. Nonlinear optical processes, being particularly sensitive to anisotropy, are a natural and versatile probe of crystallinity in nano-optical devices. Here we study the nonlinear optical response of monocrystalline gold flakes, revealing a polarization dependence in second-harmonic generation from the {111} surface that is markedly absent in polycrystalline films. Apart from suggesting an approach for directional enhancement of nonlinear response in plasmonic systems, we anticipate that our findings can be used as a rapid and non-destructive method for characterization of crystal quality and orientation that may be of significant importance in future applications. | physics |
We present a thorough phenomenological analysis of the experimental data from Belle Collaboration for the transverse $\Lambda$ and $\bar\Lambda$ polarisation, measured in $e^+e^-$ annihilation processes, for the case of inclusive (plus a jet) and associated production with a light charged hadron. This allows for the first ever extraction of the quark polarising fragmentation function for a $\Lambda$ hyperon, a transverse momentum dependent distribution giving the probability that an unpolarised quark fragments into a transversely polarised spin-1/2 hadron. | high energy physics phenomenology |
This paper extends our previous quantum Fisher information (QFI) based analysis of the problem of separating a pair of equal-brightness incoherent point sources in three dimensions to the case of a pair of sources that are unequally bright. When the pair's geometric center is perfectly known in advance, QFI with respect to the estimation of the three separation coordinates remains independent of the degree of brightness asymmetry. For the experimentally more relevant case of perfect prior knowledge of the pair's brightness centroid, however, such QFI becomes dependent on the pair separation vector in a way that is controlled by the degree of its brightness asymmetry. This study yields potentially useful insights into the analysis of a more general superresolution imaging problem involving extended incoherent sources with nontrivial brightness distributions. | quantum physics |
In the presence of symmetries, one-dimensional quantum systems can exhibit topological order, which in many cases can be characterized by a quantized value of the many-body geometric Zak or Berry phase. We establish that this topological Zak phase is directly related to the Zak phase of an elementary quasiparticle excitation in the system. By considering various systems, we establish this connection for a number of different interacting phases, including the Su-Schrieffer-Heeger model, p-wave topological superconductors, and the Haldane chain. Crucially, in contrast to the bulk many-body Zak phase associated with the ground state of such systems, the topological invariants associated with quasiparticle excitations (above this ground state) exhibit a more natural route for direct experimental detection. To this end, we build upon recent work [Nature Communications 7, 11994 (2016)] and demonstrate that mobile quantum impurities can be used, in combination with Ramsey interferometry and Bloch oscillations, to directly measure these quasiparticle topological invariants. Finally, a concrete experimental realization of our protocol for dimerized Mott insulators in ultracold atomic systems is discussed and analyzed. | condensed matter
Analysis of the data presented in the paper -- Unveiling the double-well energy landscape in a ferroelectric layer, by M. Hoffmann, et al., Nature 565, 464 (2019) -- suggests that the claims of a lack of hysteresis and of an s-curve trajectory are unfounded. | condensed matter
In the present paper, the definition of a new metric space with neutrosophic numbers is given. Several topological and structural properties have been investigated. Analogues of the Baire Category Theorem and the Uniform Convergence Theorem are given for neutrosophic metric spaces. | mathematics
Deep reinforcement learning algorithms have shown an impressive ability to learn complex control policies in high-dimensional tasks. However, despite the ever-increasing performance on popular benchmarks, policies learned by deep reinforcement learning algorithms can struggle to generalize when evaluated in remarkably similar environments. In this paper we propose a protocol to evaluate generalization in reinforcement learning through different modes of Atari 2600 games. With that protocol we assess the generalization capabilities of DQN, one of the most traditional deep reinforcement learning algorithms, and we provide evidence suggesting that DQN overspecializes to the training environment. We then comprehensively evaluate the impact of dropout and $\ell_2$ regularization, as well as the impact of reusing learned representations to improve the generalization capabilities of DQN. Despite regularization being largely underutilized in deep reinforcement learning, we show that it can, in fact, help DQN learn more general features. These features can be reused and fine-tuned on similar tasks, considerably improving DQN's sample efficiency. | computer science |
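As a concrete sketch of the two regularizers evaluated in the abstract above, the PyTorch snippet below adds dropout to a DQN-style Q-network and applies $\ell_2$ regularization through the optimizer's weight-decay term; the architecture and hyperparameters are typical Atari defaults, assumed here for illustration rather than taken from the paper.

```python
# Sketch: a DQN-style Q-network with dropout; l2 regularization is applied
# via the optimizer's weight_decay. Architecture details are illustrative.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_actions, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, 512), nn.ReLU(),
            nn.Dropout(p_drop),              # dropout before the output layer
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        return self.net(x)

net = QNetwork(n_actions=18)
opt = torch.optim.RMSprop(net.parameters(), lr=2.5e-4, weight_decay=1e-4)
q = net(torch.zeros(1, 4, 84, 84))           # 4 stacked 84x84 Atari frames
print(q.shape)                               # torch.Size([1, 18])
```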
We propose a novel generalisation to the Student-t Probabilistic Principal Component methodology which: (1) accounts for an asymmetric distribution of the observation data; (2) is a framework for grouped and generalised multiple-degree-of-freedom structures, which provides a more flexible approach to modelling groups of marginal tail dependence in the observation data; and (3) separates the tail effect of the error terms and factors. The new feature extraction methods are derived in an incomplete data setting to efficiently handle the presence of missing values in the observation vector. We discuss various special cases of the algorithm being a result of simplified assumptions on the process generating the data. The applicability of the new framework is illustrated on a data set that consists of crypto currencies with the highest market capitalisation. | statistics |
In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose a gaze-based browser using a two-step selection policy with variable dwell time. In the first step, a command, e.g. "back" or "select", is chosen from a menu using a dwell time that is constant across the different commands. In the second step, if the "select" command is chosen, the user selects a hyperlink using a dwell time that varies between different hyperlinks. We assign shorter dwell times to more likely hyperlinks and longer dwell times to less likely hyperlinks. In order to infer the likelihood that each hyperlink will be selected, we have developed a probabilistic model of natural gaze behavior while surfing the web. We have evaluated a number of heuristic and probabilistic methods for varying the dwell times using both simulation and experiment. Our results demonstrate that varying dwell time improves the user experience in comparison with fixed dwell time, resulting in fewer errors and increased speed. While all of the methods for varying dwell time resulted in improved performance, the probabilistic models yielded much greater gains than the simple heuristics. The best performing model reduces the error rate by 50% compared to a 100 ms uniform dwell time while maintaining a similar response time. It reduces the response time by 60% compared to a 300 ms uniform dwell time while maintaining a similar error rate. | computer science
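A simple way to realize the variable dwell time is to make it decrease with the estimated selection probability of each hyperlink; the mapping and constants below are illustrative assumptions, not the interface's calibrated values.

```python
# Sketch: mapping a hyperlink's estimated selection probability to a dwell
# time, clamped between a fast and a slow extreme.
import math

def dwell_time_ms(p, t_min=100.0, t_max=600.0, scale=80.0):
    """Less likely links (small p) receive longer dwell times."""
    t = t_min + scale * (-math.log(max(p, 1e-6)))
    return min(max(t, t_min), t_max)

for p in (0.5, 0.1, 0.01):
    print(f"p={p}: dwell {dwell_time_ms(p):.0f} ms")
```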
When simultaneously testing multiple hypotheses, the usual approach in the context of confirmatory clinical trials is to control the familywise error rate (FWER), which bounds the probability of making at least one false rejection. In many trial settings, these hypotheses will additionally have a hierarchical structure that reflects the relative importance and links between different clinical objectives. The graphical approach of Bretz et al. (2009) is a flexible and easily communicable way of controlling the FWER while respecting complex trial objectives and multiple structured hypotheses. However, the FWER can be a very stringent criterion that leads to procedures with low power, and may not be appropriate in exploratory trial settings. This motivates controlling generalised error rates, particularly when the number of hypotheses tested is no longer small. We consider the generalised familywise error rate (k-FWER), which is the probability of making k or more false rejections, as well as the tail probability of the false discovery proportion (FDP), which is the probability that the proportion of false rejections is greater than some threshold. We also consider asymptotic control of the false discovery rate (FDR), which is the expectation of the FDP. In this paper, we show how to control these generalised error rates when using the graphical approach and its extensions. We demonstrate the utility of the resulting graphical procedures on three clinical trial case studies. | statistics |
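For concreteness, the single-step generalised Bonferroni procedure of Lehmann and Romano controls the k-FWER by rejecting $H_i$ whenever $p_i \le k\alpha/m$; a small sketch follows. This is one standard k-FWER procedure shown for illustration, not the graphical procedure developed in the paper.

```python
# Sketch: single-step k-FWER control (generalised Bonferroni): reject H_i
# whenever p_i <= k * alpha / m.
import numpy as np

def kfwer_single_step(pvals, alpha=0.05, k=2):
    m = len(pvals)
    return np.asarray(pvals) <= k * alpha / m

pvals = [0.001, 0.004, 0.012, 0.03, 0.2, 0.6]
print(kfwer_single_step(pvals, alpha=0.05, k=2))  # boolean rejection mask
```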
Boundaries in three-dimensional $\mathcal{N}=2$ superconformal theories may preserve one half of the original bulk supersymmetry. There are two possibilities which are characterized by the chirality of the leftover supercharges. Depending on the choice, the remaining $2d$ boundary algebra exhibits $\mathcal{N}=(0,2)$ or $\mathcal{N}=(1,1)$ supersymmetry. In this work we focus on correlation functions of chiral fields for both types of supersymmetric boundaries. We study a host of correlators using superspace techniques and calculate superconformal blocks for two- and three-point functions. For $\mathcal{N}=(1,1)$ supersymmetry, some of our results can be analytically continued in the spacetime dimension while keeping the codimension fixed. This opens the door for a bootstrap analysis of the $\epsilon$-expansion in supersymmetric BCFTs. Armed with our analytically-continued superblocks, we prove that in the free theory limit two-point functions of chiral (and antichiral) fields are unique. The first order correction, which already describes interactions, is universal up to two free parameters. As a check of our analysis, we study the Wess-Zumino model with a supersymmetric boundary using Feynman diagrams, and find perfect agreement between the perturbative and bootstrap results. | high energy physics theory |
To address the large amount of energy wasted by blockchains, we propose a decentralized consensus protocol for blockchains in which the computation can be used to search for good approximate solutions to any optimization problem. Our protocol allows the wasted energy to be used for finding approximate solutions to problems submitted by any node (called a client). Our protocol works in a similar way to proof-of-work, and it makes nodes evaluate a large number of solution candidates to add a new block to the chain. A client provides a search program that implements any search algorithm that finds a good solution by evaluating a large number of solution candidates. The node that finds the best approximate solution is rewarded by the client. Our analysis shows that the probability of a fork and the variance in the block time with our protocol are lower than those in proof-of-work. | computer science
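The consensus idea sketched above can be caricatured in a few lines: every node spends its compute on evaluating candidate solutions, and the node holding the best candidate wins the block. All interfaces below are hypothetical stand-ins, not the protocol's actual message format or validation rules.

```python
# Conceptual sketch: proof-of-search style mining, where work is spent on
# evaluating candidate solutions to a client-submitted problem.
import random

def client_objective(x):                 # stand-in for the client's problem
    return (x - 3.7) ** 2

def mine(n_candidates, rng):
    best = min((rng.uniform(-10, 10) for _ in range(n_candidates)),
               key=client_objective)
    return best, client_objective(best)

nodes = {f"node{i}": mine(10_000, random.Random(i)) for i in range(5)}
winner = min(nodes, key=lambda n: nodes[n][1])
print(winner, nodes[winner])             # winner appends the block, gets reward
```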
Spiral acquisitions are preferred in real-time MRI because of their time efficiency. A fundamental limitation of spirals is image blurring due to off-resonance, which degrades image quality significantly at air-tissue boundaries. Here, we demonstrate a simple CNN-based deblurring method for spiral real-time MRI of human speech production. We show the CNN-based deblurring is capable of restoring blurred vocal tract tissue boundaries, without a need for exam-specific field maps. Deblurring performance is superior to a current auto-calibrated method, and slightly inferior to ideal reconstruction with perfect knowledge of the field maps. | electrical engineering and systems science |
We investigate the feasibility of quantum seals. A quantum seal is a state provided by Alice to Bob along with information which Bob can use to make a measurement, "break the seal," and read the classical message stored inside. There are two success criteria for a seal: the probability Bob can successfully read the message without any further information from Alice must be high, and if Alice asks for the state back from Bob, the probability Alice can tell if Bob broke the seal without permission must be high. We build on the work of [Chau, PRA 2007], which gave optimal bounds on these criteria, showing that they are mutually exclusive for high probability. We weaken the assumptions of this previous work by providing Bob with only a classical description of a prescribed measurement, rather than classical descriptions of the possible seal states. We show that this weakening does not affect the bounds but does simplify the analysis. We also prove upper and lower bounds on an alternative operational metric for measuring the success criteria. | quantum physics |
Deep neural networks (DNNs) have become one of the enabling technologies in many safety-critical applications, e.g., autonomous driving and medical image analysis. DNN systems, however, suffer from various kinds of threats, such as adversarial example attacks and fault injection attacks. While there are many defense methods proposed against maliciously crafted inputs, solutions against faults present in the DNN system itself (e.g., in parameters and calculations) are far less explored. In this paper, we develop a novel lightweight fault-tolerant solution for DNN-based systems, namely DeepDyve, which employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification. The key to enabling such lightweight checking is that the smaller neural network only needs to produce approximate results for the initial task without sacrificing fault coverage much. We develop efficient and effective architecture and task exploration techniques to achieve an optimized risk/overhead trade-off in DeepDyve. Experimental results show that DeepDyve can reduce 90% of the risks at around 10% overhead. | computer science
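The dynamic-verification idea can be illustrated with a toy pair of models: a deployed network and a much smaller checker that re-runs each input and flags disagreements. Everything below (the linear models, the compression, the fallback) is a hypothetical stand-in for the paper's DNN architectures.

```python
# Toy sketch of dynamic verification: a small checker approximates the
# deployed model; a mismatch flags a possible fault in the large model.
import numpy as np

rng = np.random.default_rng(0)
W_big = rng.normal(size=(256, 10))       # stand-in for the deployed DNN
P = rng.normal(size=(256, 8)) / 16.0     # checker's feature compression
H = P.T @ W_big                          # checker head: only 8x10 parameters

def deployed_model(x):                   # model under potential fault injection
    return int(np.argmax(x @ W_big))

def checker_model(x):                    # far smaller approximate verifier
    return int(np.argmax((x @ P) @ H))

x = rng.normal(size=256)
if deployed_model(x) != checker_model(x):
    print("mismatch: flag possible fault, re-run or fall back")
else:
    print("verified prediction:", deployed_model(x))
```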
There has been a growing interest in shape analysis in recent years, and in this paper we present a novel contour-based shape representation named the Beltrami signature for 2D bounded simply connected domains. The proposed representation is based on conformal welding. With suitable normalization, the uniqueness of the welding is guaranteed up to a rotation. It can then be extended to a harmonic function, and finally quasi-conformal theory removes the only remaining uncertainty by computing the Beltrami coefficient of the harmonic extension. The benefits of the proposed signature are that it remains invariant under simple transformations such as scaling, translation, and rotation, and that it is robust under slight deformation and distortion. Experiments demonstrate the above properties and also show excellent classification performance. | computer science
The paper presents a distributed algorithm, called Prediction-based Opportunistic Sensing for Resilient and Efficient Sensor Networks (POSE.R), where the sensor nodes utilize predictions of the target's position to probabilistically control their multi-modal operating states to track the target. There are two desired features of the algorithm: energy-efficiency and resilience. If the target is traveling through a high node density area, then an optimal sensor selection approach is employed that maximizes a joint cost function of remaining energy and geometric diversity around the target's position. This provides energy-efficiency and increases the network lifetime while preventing redundant nodes from tracking the target. On the other hand, if the target is traveling through a low node density area or in a coverage gap (e.g., formed by node failures or non-uniform node deployment), then a potential game is played amongst the surrounding nodes to optimally expand their sensing ranges via minimizing energy consumption and maximizing target coverage. This provides resilience, that is, the self-healing capability to track the target in the presence of low node densities and coverage gaps. The algorithm is comparatively evaluated against existing approaches through Monte Carlo simulations which demonstrate its superiority in terms of tracking performance, network resilience and network lifetime. | electrical engineering and systems science
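To make the sensor selection step concrete, the sketch below greedily picks nodes by a joint score of remaining energy and angular (geometric) diversity around the target's predicted position; the weighting and the diversity measure are illustrative assumptions, not the paper's exact cost function.

```python
# Sketch: greedy sensor selection trading off remaining energy against
# geometric diversity around the target's predicted position.
import numpy as np

def select_nodes(positions, energies, target, n_select=3, w_e=0.5):
    angles = np.arctan2(positions[:, 1] - target[1],
                        positions[:, 0] - target[0])
    chosen = [int(np.argmax(energies))]          # seed with the richest node

    def score(i):
        # smallest angular gap to the already-chosen nodes (diversity term)
        gap = min(abs(np.angle(np.exp(1j * (angles[i] - angles[j]))))
                  for j in chosen)
        return w_e * energies[i] / energies.max() + (1 - w_e) * gap / np.pi

    while len(chosen) < n_select:
        rest = [i for i in range(len(positions)) if i not in chosen]
        chosen.append(max(rest, key=score))
    return chosen

pos = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
eng = np.array([5.0, 3.0, 4.0, 2.0, 1.0])
print(select_nodes(pos, eng, target=np.zeros(2)))
```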
Liouville theorems for scaling invariant nonlinear parabolic problems in the whole space and/or the halfspace (saying that the problem does not possess positive bounded solutions defined for all times $t\in(-\infty,\infty)$) guarantee optimal estimates of solutions of related initial-boundary value problems in general domains. We prove an optimal Liouville theorem for the linear equation in the halfspace complemented by the nonlinear boundary condition $\partial u/\partial\nu=u^q$, $q>1$. | mathematics
In this paper, we provide a detailed description on our approach designed for CVPR 2019 Workshop and Challenge on Learned Image Compression (CLIC). Our approach mainly consists of two proposals, i.e. deep residual learning for image compression and sub-pixel convolution as up-sampling operations. Experimental results have indicated that our approaches, Kattolab, Kattolabv2 and KattolabSSIM, achieve 0.972 in MS-SSIM at the rate constraint of 0.15bpp with moderate complexity during the validation phase. | electrical engineering and systems science |
While traditional methods for instruction-following typically assume prior linguistic and perceptual knowledge, many recent works in reinforcement learning (RL) have proposed learning policies end-to-end, typically by training neural networks to map joint representations of observations and instructions directly to actions. In this work, we present a novel framework for learning to perform temporally extended tasks using spatial reasoning in the RL framework, by sequentially imagining visual goals and choosing appropriate actions to fulfill imagined goals. Our framework operates on raw pixel images, assumes no prior linguistic or perceptual knowledge, and learns via intrinsic motivation and a single extrinsic reward signal measuring task completion. We validate our method with a robot arm in two simulated interactive 3D environments. Our method outperforms two flat architectures with raw-pixel and ground-truth states, and a hierarchical architecture with ground-truth states, on object arrangement tasks. | computer science
Spatiotemporal spin dynamics under spin-orbit interaction is investigated in a (001) GaAs two-dimensional electron gas using magneto-optical Kerr rotation microscopy. Spin-polarized electrons diffuse away from the excitation position, resulting in spin precession because of the diffusion-induced spin-orbit field. Near the cancellation between the spin-orbit field and the external magnetic field, the induced spin precession frequency depends nonlinearly on the diffusion velocity, which is unexpected from the conventional linear relation between the spin-orbit field and the electron velocity. This behavior originates from an enhancement of the spin relaxation anisotropy by the electron velocity perpendicular to the diffusion direction. We demonstrate that the spin relaxation anisotropy, which has been regarded as a material constant, can be controlled via diffusive electron motion. | condensed matter
For all the amides detected in the interstellar medium (ISM), the corresponding nitriles or isonitriles have also been detected in the ISM, some of which have relatively high abundances. Among the abundant nitriles for which the corresponding amide has not yet been detected is cyanoacetylene (HCCCN), whose amide counterpart is propiolamide (HCCC(O)NH$_2$). With the aim of supporting searches for this amide in the ISM, we provide a complete rotational study of propiolamide from 6 GHz to 440 GHz using rotational spectroscopic techniques in the frequency and time domain. We identified and measured more than 5500 distinct frequency lines of propiolamide and obtained accurate sets of spectroscopic parameters for the ground state and the three low-lying excited vibrational states. We used the ReMoCA spectral line survey performed with the Atacama Large Millimeter/submillimeter Array toward the star-forming region Sgr B2(N) to search for propiolamide. We report the nondetection of propiolamide toward the hot cores Sgr B2(N1S) and Sgr B2(N2). We find that propiolamide is at least 50 and 13 times less abundant than acetamide in Sgr B2(N1S) and Sgr B2(N2), respectively, indicating that the abundance difference between both amides is more pronounced by at least a factor of 8 and 2, respectively, than for their corresponding nitriles. Although propiolamide has yet to be included in astrochemical modeling networks, the observed upper limit to the ratio of propiolamide to acetamide seems consistent with the ratios of related species as determined from past simulations. | astrophysics |
At a nonzero temperature T, a constant field $\overline{A}_0 \sim T/g$ generates nontrivial eigenvalues of the thermal Wilson line. We discuss contributions to the free energy of such a holonomous plasma when the coupling constant, $g$, is weak. We review the computation to $\sim g^2$ by several alternate methods, and show that gauge invariant sources, which are nonlinear in the gauge potential $A_0$, generate novel contributions to the gluon self energy at $\sim g^2$. These ensure the gluon self energy remains transverse to $\sim g^2$, and are essential in computing contributions to the free energy at $\sim g^3$ for small holonomy, $\overline{A}_0 \sim T$. We show that the contribution $\sim g^3$ from off-diagonal gluons is discontinuous as the holonomy vanishes. The contribution from diagonal gluons is continuous as the holonomy vanishes, but sharply constrains the possible sources which generate nonzero holonomy, and must involve an infinite number of Polyakov loops. | high energy physics phenomenology |
We study 3d $\mathcal{N}=2$ Chern-Simons (CS) quiver theories on $S^3$ and ${\Sigma}_{\mathfrak{g}}\times S^1$. Using localization results, we examine their partition functions in the large rank limit and requiring the resulting matrix models to be local, find a large class of quiver theories that include quivers in one-to-one correspondence with the $\widehat{ADE}$ Dynkin diagrams. We compute explicitly the partition function on $S^3$ for $\widehat{D}$ quivers and that on ${\Sigma}_{\mathfrak{g}}\times S^1$ for $\widehat{AD}$ quivers, which lead to certain predictions for their holographic duals. We also provide a new and simple proof of the "index theorem", extending its applicability to a larger class of theories than considered before in the literature. | high energy physics theory |
Dynamical charge transfer processes at molecule-metal interfaces proceed on the few-fs time scale, which renders them highly relevant to electronic excitations in optoelectronic devices. Yet, knowledge thereof is limited when electronic ground-state situations are considered that involve charge transfer directly at the Fermi energy. Here we show that such processes can be accessed by means of vibrational excitations, with non-adiabatic electron-vibron coupling leading to distinct asymmetric line shapes. Thereby the characteristic time scale of this interfacial dynamical charge transfer can be derived by using the vibrational oscillation period as an internal clock reference. | condensed matter
The ocean is filled with microscopic microalgae called phytoplankton, which together are responsible for as much photosynthesis as all plants on land combined. Our ability to predict their response to the warming ocean relies on understanding how the dynamics of phytoplankton populations is influenced by changes in environmental conditions. One powerful technique to study the dynamics of phytoplankton is flow cytometry, which measures the optical properties of thousands of individual cells per second. Today, oceanographers are able to collect flow cytometry data in real-time onboard a moving ship, providing them with fine-scale resolution of the distribution of phytoplankton across thousands of kilometers. One of the current challenges is to understand how these small and large scale variations relate to environmental conditions, such as nutrient availability, temperature, light and ocean currents. In this paper, we propose a novel sparse mixture of multivariate regressions model to estimate the time-varying phytoplankton subpopulations while simultaneously identifying the specific environmental covariates that are predictive of the observed changes to these subpopulations. We demonstrate the usefulness and interpretability of the approach using both synthetic data and real observations collected on an oceanographic cruise conducted in the north-east Pacific in the spring of 2017. | statistics |
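At the heart of fitting such a mixture of multivariate regressions is an E-step that assigns each measured cell to a subpopulation given covariate-dependent means; a small sketch is below. The toy dimensions, the shared covariance, and the absence of the sparsity penalty are simplifying assumptions relative to the paper's model.

```python
# Sketch: E-step responsibilities for a mixture of multivariate regressions,
# Y ~ sum_k pi_k N(X B_k, Sigma). Toy dimensions, no sparsity penalty.
import numpy as np
from scipy.stats import multivariate_normal

def responsibilities(Y, X, B, pis, cov):
    """Y: (n,d) cytometry signals; X: (n,p) covariates; B: (K,p,d) coefs."""
    mvn = multivariate_normal(mean=np.zeros(Y.shape[1]), cov=cov)
    logp = np.stack([np.log(pis[k]) + mvn.logpdf(Y - X @ B[k])
                     for k in range(len(pis))], axis=1)
    logp -= logp.max(axis=1, keepdims=True)      # stabilise the softmax
    R = np.exp(logp)
    return R / R.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n, p, d, K = 6, 2, 3, 2
X = rng.normal(size=(n, p))
B = rng.normal(size=(K, p, d))
Y = X @ B[0] + rng.normal(scale=0.1, size=(n, d))  # generated from component 0
print(responsibilities(Y, X, B, pis=[0.5, 0.5], cov=np.eye(d)).round(2))
```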