text (string, lengths 11 to 9.77k) | label (string, lengths 2 to 104) |
---|---|
We consider a quasi-periodically identified conical spacetime, like that of a cosmic string or disclination, to investigate the effects of nonzero averaged quantum vacuum fluctuations on the energy-momentum tensor and the induced current density associated with a charged scalar field. We obtain exact closed analytical expressions for the two-point Wightman function and the vacuum expectation value of the field squared, as well as for all components of the energy-momentum tensor. As to the induced current density, due to the quasi-periodic condition used, the only nonzero averaged component found is the azimuthal one. We also compare our results with previous ones found in the literature. | high energy physics theory |
We define the group analogue of birational sheets, a construction performed by Losev for reductive Lie algebras. For G semisimple simply connected, we describe birational sheets in terms of Lusztig-Spaltenstein induction and we prove that they form a partition of G, and that they are unibranch varieties with smooth normalization by means of a local study. | mathematics |
We propose an on-line supervisory control scheme for discrete event systems (DESs), where a control specification is described by a fragment of linear temporal logic. On the product automaton of the DES and an acceptor for the specification, we define a ranking function that returns the minimum number of steps required to reach an accepting state from each state. In addition, we introduce a permissiveness function that indicates a time-varying permissive level. At each step during the on-line control scheme, the supervisor refers to the permissiveness function as well as the ranking function in order to guarantee the control specification while handling the tradeoff between its permissiveness and acceptance of the specification. The proposed scheme is demonstrated in a surveillance problem for a mobile robot. | electrical engineering and systems science |
Homomorphic encryption has been an area of study in classical computing for decades. The fundamental goal of homomorphic encryption is to enable (untrusted) Oscar to perform a computation for Alice without Oscar knowing the input to the computation or the output from the computation. Alice encrypts the input before sending it to Oscar, and Oscar performs the computation directly on the encrypted data, producing an encrypted result. Oscar then sends the encrypted result of the computation back to Alice, who can decrypt it. We describe an approach to homomorphic encryption for quantum annealing based on spin reversal transformations and show that it comes with little or no performance penalty. This is in contrast to approaches to homomorphic encryption for classical computing, which incur a significant additional computational cost. This implies that the performance gap between quantum annealing and classical computing is reduced when both paradigms use homomorphic encryption. Further, homomorphic encryption is critical for quantum annealing because quantum annealers are native to the cloud -- a third party (such as untrusted Oscar) performs the computation. If sensitive information, such as health-related data subject to the Health Insurance Portability and Accountability Act, is to be processed with quantum annealers, such a technique could be useful. | quantum physics |
We present OtoWorld, an interactive environment in which agents must learn to listen in order to solve navigational tasks. The purpose of OtoWorld is to facilitate reinforcement learning research in computer audition, where agents must learn to listen to the world around them to navigate. OtoWorld is built on three open source libraries: OpenAI Gym for environment and agent interaction, PyRoomAcoustics for ray-tracing and acoustics simulation, and nussl for training deep computer audition models. OtoWorld is the audio analogue of GridWorld, a simple navigation game. OtoWorld can be easily extended to more complex environments and games. To solve one episode of OtoWorld, an agent must move towards each sounding source in the auditory scene and "turn it off". The agent receives no other input than the current sound of the room. The sources are placed randomly within the room and can vary in number. The agent receives a reward for turning off a source. We present preliminary results on the ability of agents to win at OtoWorld. OtoWorld is open-source and available. | computer science |
We propose a procedure for assigning a relevance measure to each explanatory variable in a complex predictive model. We assume that we have a training set to fit the model and a test set to check the out-of-sample performance. First, the individual relevance of each variable is computed by comparing the predictions in the test set given by the model that includes all the variables with those of another model in which the variable of interest is substituted by its ghost variable, defined as the prediction of this variable using the rest of the explanatory variables. Second, we check the joint effects among the variables by using the eigenvalues of a relevance matrix that is the covariance matrix of the vectors of individual effects. It is shown that in simple models, such as linear or additive models, the proposed measures are related to standard measures of significance of the variables, and that in neural network models (and in other algorithmic prediction models) the procedure provides information about the joint and individual effects of the variables that is not usually available by other methods. The procedure is illustrated with simulated examples and the analysis of a large real data set. | statistics |
The success of photonic crystal fibres relies largely on the endless variety of two-dimensional photonic crystals in the cross-section. Here, we propose a topological bandgap fibre whose bandgaps along in-plane directions are opened by generalized Kekul\'e modulation of a Dirac lattice with a vortex phase. Then, the existence of mid-gap defect modes is guaranteed to guide light at the core of this Dirac-vortex fibre, where the number of guiding modes equals the winding number of the spatial vortex. The single-vortex design provides a single-polarization single mode for a bandwidth as large as one octave. | physics |
In this paper, we explicitly construct the smooth compact base threefold for the elliptic Calabi-Yau fourfold with the largest known $h^{1,1}=303\,148$. It is generated by blowing up a smooth toric "seed" base threefold with $(E_8,E_8,E_8)$ collisions. The 4d F-theory compactification model over it has the largest geometric gauge group, $E_8^{2\,561}\times F_4^{7\,576}\times G_2^{20\,168}\times SU(2)^{30\,200}$, and the largest number of axions, $181\,820$, in the known 4d $\mathcal{N}=1$ supergravity landscape. We also prove that there are at least $1100^{15\,048}\approx 7.5\times 10^{45\,766}$ different flip and flop phases of this base threefold. Moreover, we find that many other base threefolds with large $h^{1,1}$ in the 4d F-theory landscape can be constructed in a similar way as well. | high energy physics theory |
We present a materials informatics approach to search for superconducting hydrogen compounds, which is based on a genetic algorithm and genetic programming. This method consists of four stages: (i) search for stable crystal structures of materials by a genetic algorithm, (ii) collection of physical and chemical property data by first-principles calculations, (iii) development of a superconductivity predictor based on the database by genetic programming, and (iv) discovery of potential candidates by regression analysis. By repeatedly performing the process (i) $\rightarrow$ (ii) $\rightarrow$ (iii) $\rightarrow$ (iv) $\rightarrow$ (i) $\rightarrow$ $\dots$, the superconductivity of the discovered candidates is validated by first-principles calculations, and the database and predictor are further improved, which leads to an efficient search for superconducting materials. We applied this method to hypothetical ternary hydrogen compounds and predicted KScH$_{12}$ with a modulated hydrogen cage, showing a superconducting critical temperature of 122 K at 300 GPa, and GaAsH$_{6}$, showing 98 K at 180 GPa. | condensed matter |
In this paper, we study the parallel simulation of the magnetohydrodynamic (MHD) dynamo in a rapidly rotating spherical shell with pseudo-vacuum magnetic boundary conditions. A second-order finite volume scheme based on a collocated quasi-uniform cubed-sphere grid is applied to the spatial discretization of the MHD dynamo equations. To ensure the solenoidal condition of the magnetic field, we adopt a widely-used approach whereby a pseudo-pressure is introduced into the induction equation. The temporal integration is split by a second-order approximate factorization approach, resulting in two linear algebraic systems both solved by a preconditioned Krylov subspace iterative method. A multi-level restricted additive Schwarz preconditioner based on domain decomposition and multigrid method is then designed to improve the efficiency and scalability. Accurate numerical solutions of two benchmark cases are obtained with our code, comparable to the existing local method results. Several large-scale tests performed on the Sunway TaihuLight supercomputer show good strong and weak scalabilities and a noticeable improvement from the multi-level preconditioner with up to 10368 processor cores. | physics |
Simulation of quantum systems is expected to be one of the most important applications of quantum computing, with much of the theoretical work so far having focused on fermionic and spin-$\frac{1}{2}$ systems. Here, we instead consider encodings of $d$-level (i.e. qudit) quantum operators into multi-qubit operators, studying resource requirements for approximating operator exponentials by Trotterization. We primarily focus on spin-$s$ and truncated bosonic operators in second quantization, observing desirable properties for approaches based on the Gray code, which to our knowledge has not been used in this context previously. After outlining a methodology for implementing an arbitrary encoding, we investigate the interplay between Hamming distances, sparsity patterns, bosonic truncation, and other properties of local operators. Finally, we obtain resource counts for five common Hamiltonian classes used in physics and chemistry, while modeling the possibility of converting between encodings within a Trotter step. The most efficient encoding choice is heavily dependent on the application and highly sensitive to $d$, although clear trends are present. These operation count reductions are relevant for running algorithms on near-term quantum hardware because the savings effectively decrease the required circuit depth. Results and procedures outlined in this work may be useful for simulating a broad class of Hamiltonians on qubit-based digital quantum computers. | quantum physics |
Radially excited $U(1)$ gauged $Q$-balls are studied using both analytical and numerical methods. Unlike the nongauged case, there exists only a finite number of radially excited gauged $Q$-balls at given values of the model's parameters. Similarly to the unexcited gauged $Q$-ball, the radially excited one cannot possess the Noether charge exceeding some limiting value. This limiting Noether charge decreases with an increase in the radial excitation of the gauged $Q$-ball. For $n$-th radial excitation, there is a maximum allowable value of the gauge coupling constant, and the existence of the $n$-th radially excited gauged $Q$-ball becomes impossible if the gauge coupling constant exceeds this limiting value. Similarly to the limiting Noether charge, the limiting gauge coupling constant decreases with an increase in the radial excitation. At a fixed Noether charge, the energy of the gauged $Q$-ball increases with an increase in the radial excitation, and thus the radially excited gauged $Q$-ball is unstable against transit into a less excited or unexcited one. | high energy physics theory |
Being a lithophile element at ambient pressure, magnesium has long been believed to be immiscible with iron. A recent study by Gao et al. [1] showed that pressure turns magnesium into a siderophile element and can produce unconventional Fe-Mg compounds. Here, we extend the investigation to exoplanetary pressure conditions using an adaptive genetic algorithm-based variable-composition structural prediction approach. We identify several Fe-Mg phases up to 3 TPa. Our cluster alignment analysis reveals that most of the predicted Fe-Mg compounds prefer a BCC packing motif at terapascal pressures. This study provides a more comprehensive structure database to support future investigations of the high-pressure structural behavior of Fe-Mg and ternary, quaternary, etc. compounds involving these elements. | condensed matter |
We use a hierarchy of numerical models (a 3-D Global Climate Model, a 1-D radiative-convective model and a 2-D Mantle Dynamics model) to explore the environmental effects of very large impacts on the atmosphere, surface and interior of early Mars. Using a combination of 1-D and 3-D climate simulations, we show that the environmental effects of the largest impact events recorded on Mars are characterized by: (i) a short impact-induced warm period; (ii) a low amount of hydrological cycling of water; (iii) deluge-style precipitation; and (iv) precipitation patterns that are uncorrelated with the observed regions of valley networks. We show that the impact-induced stable runaway greenhouse state predicted by Segura et al. 2012 is physically inconsistent. We confirm the results of Segura et al. 2008 and Urata & Toon 2013 that water ice clouds can significantly extend the duration of the post-impact warm period, and even for cloud coverage lower than predicted in Ramirez & Kasting 2017. However, the range of cloud microphysical properties for which this scenario works is very narrow. Using 2-D Mantle Dynamics simulations we find that large impacts can raise the near-surface internal heat flux up to several hundreds of mW/m$^2$ (i.e. up to $\sim$ 10 times the ambient flux) for several million years at the edges of the impact crater. However, such an internal heat flux is insufficient to keep the martian surface above the melting point of water. Our numerical results support the prediction of Palumbo & Head 2018 that very large impact-induced rainfall could have caused degradation of craters and formed smooth plains, potentially erasing much of the previously visible morphological surface history. Such hot rainfalls may have also led to the formation of aqueous alteration products on Noachian-aged terrains. | astrophysics |
Nanolaminated materials are important because of their exceptional properties and wide range of applications. Here, we demonstrate a general approach to synthesize a series of Zn-based MAX phases and Cl-terminated MXenes originating from the replacement reaction between the MAX phase and the late transition metal halides. The approach is a top-down route that enables the late transitional element atom (Zn in the present case) to occupy the A site in the pre-existing MAX phase structure. Using this replacement reaction between Zn element from molten ZnCl2 and Al element in MAX phase precursors (Ti3AlC2, Ti2AlC, Ti2AlN, and V2AlC), novel MAX phases Ti3ZnC2, Ti2ZnC, Ti2ZnN, and V2ZnC were synthesized. When employing excess ZnCl2, Cl terminated MXenes (such as Ti3C2Cl2 and Ti2CCl2) were derived by a subsequent exfoliation of Ti3ZnC2 and Ti2ZnC due to the strong Lewis acidity of molten ZnCl2. These results indicate that A-site element replacement in traditional MAX phases by late transition metal halides opens the door to explore MAX phases that are not thermodynamically stable at high temperature and would be difficult to synthesize through the commonly employed powder metallurgy approach. In addition, this is the first time that exclusively Cl-terminated MXenes were obtained, and the etching effect of Lewis acid in molten salts provides a green and viable route to prepare MXenes through an HF-free chemical approach. | condensed matter |
We derive the potential modular symmetries of heterotic string theory. For a toroidal compactification with Wilson line modulus, we obtain the Siegel modular group $\mathrm{Sp}(4,\mathbb{Z})$ that includes the modular symmetries $\mathrm{SL}(2,\mathbb{Z})_T$ and $\mathrm{SL}(2,\mathbb{Z})_U$ (of the "geometric" moduli $T$ and $U$) as well as mirror symmetry. In addition, string theory provides a candidate for a CP-like symmetry that enhances the Siegel modular group to $\mathrm{GSp}(4,\mathbb{Z})$. | high energy physics theory |
For short DNA molecules in crowded environments, we evaluate macroscopic parameters such as the average end-to-end distance and the twist conformation by tuning the strength of the site-specific confinement driven by the crowders. The ds-DNA is modeled by a mesoscopic Hamiltonian which accounts for the three dimensional helical structure and incorporates fluctuational effects at the level of the base pair. The computational method assumes that the base pair fluctuations are temperature dependent trajectories whose amplitudes can be spatially modulated according to the crowders' distribution. We show that the molecule elongation, as measured by the end-to-end distance, varies non-monotonically with the strength of the confinement. Furthermore it is found that, if the crowders mostly confine the DNA mid-chain, the helix over-twists and its end-to-end distance grows in the strong confinement regime. Instead, if the crowders mostly pin one chain end, the helix untwists while the molecule stretches for large confinement strengths. Thus, our results put forward a peculiar relation between stretching and twisting which significantly depends on the crowders' profile. The method could be applied to design specific DNA shapes by controlling the environment that constrains the molecule. | condensed matter |
The phase of the wave function of charged matter is sensitive to the value of the electric potential, even when the matter never enters any region with non-vanishing electromagnetic fields. Despite its fundamental character, this archetypal electric Aharonov-Bohm effect has evidently never been observed. We propose an experiment to detect the electric potential through its coupling to the superconducting order parameter. A potential difference between two superconductors will induce a relative phase shift that is observable via the DC Josephson effect even when no electromagnetic fields ever act on the superconductors, and even if the potential difference is later reduced to zero. This is a type of electromagnetic memory effect, and would directly demonstrate the physical significance of the electric potential. | high energy physics theory |
Semi-supervised learning methods are motivated by the availability of large datasets with unlabeled features in addition to labeled data. Unlabeled data is, however, not guaranteed to improve classification performance and has in fact been reported to impair the performance in certain cases. A fundamental source of error arises from restrictive assumptions about the unlabeled features, which result in unreliable classifiers that underestimate their prediction error probabilities. In this paper, we develop a semi-supervised learning approach that relaxes such assumptions and is capable of providing classifiers that reliably quantify the label uncertainty. The approach is applicable using any generative model with a supervised learning algorithm. We illustrate the approach using both handwritten digit and cloth classification data where the labels are missing at random. | statistics |
We argue that conformal invariance is a common thread linking several scalar effective field theories that appear in the double copy and scattering equations. For a derivatively coupled scalar with a quartic ${\cal O}(p^4)$ vertex, classical conformal invariance dictates an infinite tower of additional interactions that coincide exactly with Dirac-Born-Infeld theory analytically continued to spacetime dimension $D=0$. For the case of a quartic ${\cal O}(p^6)$ vertex, classical conformal invariance constrains the theory to be the special Galileon in $D=-2$ dimensions. We also verify the conformal invariance of these theories by showing that their amplitudes are uniquely fixed by the conformal Ward identities. In these theories, conformal invariance is a much more stringent constraint than scale invariance. | high energy physics theory |
Imaging circumstellar disks in the near-infrared provides unprecedented information about the formation and evolution of planetary systems. However, current post-processing techniques for high-contrast imaging using ground-based telescopes have a limited sensitivity to extended signals, whose recovered morphology is often plagued by strong distortions. Moreover, it is challenging to disentangle planetary signals from the disk when the two components are close or intertwined. We propose a pipeline that is capable of detecting a wide variety of disks and preserving their shapes and flux distributions. By construction, our approach separates planets from disks. After analyzing the distortions induced by the current angular differential imaging (ADI) post-processing techniques, we establish a direct model of the different components constituting a temporal sequence of high-contrast images. In an inverse problem framework, we jointly estimate the starlight residuals and the potential extended sources and point sources hidden in the images, using low-complexity priors for each signal. To verify and estimate the performance of our approach, we tested it on VLT/SPHERE-IRDIS data, in which we injected synthetic disks and planets. We also applied our approach to observations containing real disks. Our technique makes it possible to detect disks from ADI datasets of a contrast above $3\times10^{-6}$ with respect to the host star. As no specific shape of the disks is assumed, we are capable of extracting a wide diversity of disks, including face-on disks. The intensity distribution of the detected disk is accurately preserved and point sources are distinguished, even close to the disk. | astrophysics |
A formal analysis is conducted on the exactness of various forms of unitary coupled cluster (UCC) theory based on particle-hole excitation and de-excitation operators. Both the conventional single exponential UCC parameterization and a disentangled (factorized) version are considered. We formulate a differential cluster analysis to determine the UCC amplitudes corresponding to a general quantum state. The exactness of conventional UCC (ability to represent any state) is explored numerically and it is formally shown to be determined by the structure of the critical points of the UCC exponential mapping. A family of disentangled UCC wave functions are shown to exactly parameterize any state, thus showing how to construct Trotter-error-free parameterizations of UCC for applications in quantum computing. From these results, we derive an exact disentangled UCC parameterization that employs an infinite sequence of particle-hole or general one- and two-body substitution operators. | physics |
This work proposes a novel adaptive background compensation scheme for frequency interleaved digital-to-analog converters (FI-DACs). The technique is applicable to high speed digital transceivers such as those used in coherent optical communications. Although compensation of FI-DACs has been discussed before in the technical literature, adaptive background techniques have not yet been reported. The importance of the latter lies in their capability to automatically compensate errors caused by process, voltage, and temperature variations in the technology (e.g., CMOS, SiGe, etc.) implementations of the data converters, and therefore ensure high manufacturing yield. The key ingredients of the proposed technique are a multiple-input multiple-output (MIMO) equalizer and the backpropagation algorithm used to adapt the coefficients of the aforementioned equalizer. Simulations show that the impairments of the analog signal path are accurately compensated and their effect is essentially eliminated, resulting in a high performance transmitter system. | electrical engineering and systems science |
We consider a downlink multiuser massive MIMO system comprising multiple heterogeneous base stations with hybrid precoding architectures. To enhance the energy efficiency of the network, we propose a novel coordinated hybrid precoding technique, where the coordination between the base stations is aimed at exploiting interference as opposed to mitigating it as per conventional approaches. We formulate an optimization problem to compute the coordinated hybrid precoders that minimize the total transmit power while fulfilling the required quality of service at each user. Furthermore, we devise a low-complexity suboptimal precoding scheme to compute approximate solutions of the precoding problem. The simulation results reveal that the proposed coordinated hybrid precoding yields superior performance when compared to the conventional hybrid precoding schemes. | electrical engineering and systems science |
The motion control of a levitated nanoparticle plays a central role in optical levitation for fundamental studies and practical applications. Here, we present a digital parametric feedback cooling scheme based on switching between two trapping laser intensity levels with square-wave modulations. The effects of modulation depth and modulation signal phase on the cooling result were investigated in detail. With such a digital parametric feedback method, the centre-of-mass temperature of all three motional degrees of freedom can be cooled to dozens of milli-Kelvin, which paves the way to fully controlling the motion of the levitated nanoparticle with a programmable digital process for a wide range of applications. | physics |
The aim of this paper is to investigate possible advances obtained by the implementation of the framework of the Fr\'echet mean, and the generalized sense of mean that it offers, in the field of statistical process monitoring and control. In particular, the case of non-linear profiles which are described by data in functional form is considered, and a framework combining the notion of the Fr\'echet mean and deformation models is developed. The proposed monitoring approach is applied to the intra-day air pollution monitoring task in the city of Athens, where the capabilities and advantages of the method are illustrated. | statistics |
The Schr{\"o}dinger inequality is known to underlie the Kennard-Robertson inequality, which is the standard expression of quantum uncertainty for the product of variances of two observables $A$ and $B$, in the sense that the latter is derived from the former. In this paper we point out that, albeit more subtle, there is yet another inequality which underlies the Schr{\"o}dinger inequality in the same sense. The key component of this observation is the use of the weak-value operator $A_{\rm w}(B)$ introduced in our previous works (named after Aharonov's weak value), which was shown to act as the proxy operator for $A$ when $B$ is measured. The lower bound of our novel inequality supplements that of the Schr{\"o}dinger inequality by a term representing the discord between $A_{\rm w}(B)$ and $A$. In addition, the decomposition of the Schr{\"o}dinger inequality, which was also obtained in our previous works by making use the weak-value operator, is examined more closely to analyze its structure and the minimal uncertainty states. Our results are exemplified with some elementary spin 1 and 3/2 models as well as the familiar case of $A$ and $B$ being the position and momentum of a particle. | quantum physics |
Adversarial training is actively studied for learning robust models against adversarial examples. A recent study finds that adversarially trained models show degraded generalization performance on adversarial examples when their weight loss landscape, i.e., how the loss changes with respect to the weights, is sharp. Unfortunately, it has been experimentally shown that adversarial training sharpens the weight loss landscape, but this phenomenon has not been theoretically clarified. Therefore, we theoretically analyze this phenomenon in this paper. As a first step, this paper proves that adversarial training with L2 norm constraints sharpens the weight loss landscape in the linear logistic regression model. Our analysis reveals that the sharpness of the weight loss landscape is caused by the noise aligned in the direction of increasing the loss, which is used in adversarial training. We theoretically and experimentally confirm that the weight loss landscape becomes sharper as the magnitude of the noise of adversarial training increases in the linear logistic regression model. Moreover, we experimentally confirm the same phenomenon in ResNet18 with softmax as a more general case. | statistics |
We discuss the three different classes of CP-symmetries that can be realized in a two-Higgs-doublet model, CP1, CP2 and CP3. We express conditions for realizing these symmetries in terms of masses and couplings of the model, thereby providing a way of verifying which, if any, of these symmetries is realized by nature. | high energy physics phenomenology |
By modulating the intensity of laser light before the rotating ground glass, the well-known pseudothermal light source can be modified into a superbunching pseudothermal light source, in which the degree of second-order coherence of the scattered light is larger than 2. With the modulated intensities following a binary distribution, we experimentally observed degrees of second- and third-order coherence equaling 20.45 and 227.07, which are much larger than the values for thermal or pseudothermal light, 2 and 6, respectively. Numerical simulation predicts that the degree of second-order coherence can be further improved by tuning the parameters of the binary distribution. It is also predicted that the quality of temporal ghost imaging can be improved with this superbunching pseudothermal light. This simple and efficient superbunching pseudothermal light source provides an interesting alternative for studying the second- and higher-order interference of light in scenarios where thermal or pseudothermal light sources were employed. | physics |
We study the impact of classical short-range nonlinear interactions on transport in lattices with no dispersion. The single particle band structure of these lattices contains flat bands only, and cages non-interacting particles into compact localized eigenstates. We demonstrate that there always exist local unitary transformations that detangle such lattices into decoupled sites in dimension one. Starting from a detangled representation, inverting the detangling into entangling unitary transformations and extending to higher lattice dimensions, we arrive at an All-Bands-Flat generator for single particle states in any lattice dimension. The entangling unitary transformations are parametrized by sets of angles. For a given member of the set of all-bands-flat, additional short-range nonlinear interactions destroy caging in general, and induce transport. However, fine-tuned subsets of the unitary transformations allow caging to be completely restored. We derive the necessary and sufficient fine-tuning conditions for nonlinear caging, and provide computational evidence of our conclusions for one-dimensional systems. | condensed matter |
Autoencoders have seen wide success in domains ranging from feature selection to information retrieval. Despite this success, designing an autoencoder for a given task remains a challenging undertaking due to the lack of firm intuition on how the backing neural network architectures of the encoder and decoder impact the overall performance of the autoencoder. In this work we present a distributed system that uses an efficient evolutionary algorithm to design a modular autoencoder. We demonstrate the effectiveness of this system on the tasks of manifold learning and image denoising. The system beats random search by nearly an order of magnitude on both tasks while achieving near linear horizontal scaling as additional worker nodes are added to the system. | computer science |
One of the main challenges hampering an accurate measurement of the double parton scattering (DPS) cross sections is the difficulty in separating the DPS from the leading twist (LT) contributions. We argue that such a separation can be achieved, and the cross section of DPS measured, by exploiting the different centrality dependence of DPS and LT processes in proton-nucleus scattering. We developed a Monte Carlo implementation of the DPS processes which includes realistic nucleon-nucleon (NN) correlations in nuclei, an accurate description of the transverse geometry of both hard and soft NN collisions, as well as fluctuations of the strength of the interaction of a nucleon with the nucleus (color fluctuation effects). Our method allows the calculation of probability distributions of single and double dijet events as a function of centrality, also distinguishing double hard scatterings originating from a single target nucleon and from two different nucleons. We present numerical results for the rate of DPS as a function of centrality, which relates the number of wounded nucleons and the sum of transverse energy of hadrons produced at large negative (along the nucleus direction) rapidities, which is experimentally measurable. We suggest a new quantity which allows one to test the geometry of DPS, and we argue that it is a universal function of centrality for different DPS processes. This quantity can be tested by analyzing existing LHC data. The method developed in this work can be extended to the search for triple parton interactions. | high energy physics phenomenology |
The electronic density of states (DOS) quantifies the distribution of the energy levels that can be occupied by electrons in a quasiparticle picture, and is central to modern electronic structure theory. It also underpins the computation and interpretation of experimentally observable material properties such as optical absorption and electrical conductivity. We discuss the challenges inherent in the construction of a machine-learning (ML) framework aimed at predicting the DOS as a combination of local contributions that depend in turn on the geometric configuration of neighbours around each atom, using quasiparticle energy levels from density functional theory as training data. We present a challenging case study that includes configurations of silicon spanning a broad set of thermodynamic conditions, ranging from bulk structures to clusters, and from semiconducting to metallic behavior. We compare different approaches to represent the DOS, and the accuracy of predicting quantities such as the Fermi level, the DOS at the Fermi level, or the band energy, either directly or as a side-product of the evaluation of the DOS. The performance of the model depends crucially on the smoothening of the DOS, and there is a tradeoff to be made between the systematic error associated with the smoothening and the error in the ML model for a specific structure. We demonstrate the usefulness of this approach by computing the density of states of a large amorphous silicon sample, for which it would be prohibitively expensive to compute the DOS by direct electronic structure calculations, and show how the atom-centred decomposition of the DOS that is obtained through our model can be used to extract physical insights into the connections between structural and electronic features. | condensed matter |
In this work, we present a per-instant pose optimization method that can generate configurations that achieve specified pose or motion objectives as best as possible over a sequence of solutions, while also simultaneously avoiding collisions with static or dynamic obstacles in the environment. We cast our method as a multi-objective, non-linear constrained optimization-based IK problem where each term in the objective function encodes a particular pose objective. We demonstrate how to effectively incorporate environment collision avoidance as a single term in this multi-objective, optimization-based IK structure, and provide solutions for how to spatially represent and organize external environments such that data can be efficiently passed to a real-time, performance-critical optimization loop. We demonstrate the effectiveness of our method by comparing it to various state-of-the-art methods in a testbed of simulation experiments and discuss the implications of our work based on our results. | computer science |
It has recently been reported [\textit{PNAS} \textbf{114}, 2303 (2017)] that, under an operational definition of time, quantum clocks would get entangled through gravitational effects. Here we study an alternative scenario: the clocks have different masses and energy gaps, which would produce a time difference via gravitational interaction. The proposal of quantum clock synchronization for the gravity-induced time difference is discussed. We illustrate how the stability of the measurement probability in the quantum clock synchronization proposal is influenced by the gravitational interaction induced by the clocks themselves. It is found that the precision of clock synchronization depends on the energy gaps of the clocks, and the improvement of precision in quantum metrology is in fact an indicator of entanglement generation. We also present the quantum-enhanced estimation of the time difference and find that the quantum Fisher information is very sensitive to the distance between the clocks. | quantum physics |
Causal mediation analysis seeks to investigate how the treatment effect of an exposure on outcomes is mediated through intermediate variables. Although many applications involve longitudinal data, the existing methods are not directly applicable to settings where the mediator and outcome are measured on sparse and irregular time grids. We extend the existing causal mediation framework from a functional data analysis perspective, viewing the sparse and irregular longitudinal data as realizations of underlying smooth stochastic processes. We define causal estimands of direct and indirect effects accordingly and provide corresponding identification assumptions. For estimation and inference, we employ a functional principal component analysis approach for dimension reduction and use the first few functional principal components instead of the whole trajectories in the structural equation models. We adopt the Bayesian paradigm to accurately quantify the uncertainties. The operating characteristics of the proposed methods are examined via simulations. We apply the proposed methods to a longitudinal data set from a wild baboon population in Kenya to investigate the causal relationships between early adversity, strength of social bonds between animals, and adult glucocorticoid hormone concentrations. We find that early adversity has a significant direct effect (a 9-14% increase) on females' glucocorticoid concentrations across adulthood, but find little evidence that these effects were mediated by weak social bonds. | statistics |
Random walks with stochastic resetting provide a tractable framework to study interesting features of central-place motion. In this work, we introduce non-instantaneous resetting as a two-state model that combines an exploring state, where the walker moves randomly according to a propagator, and a returning state, where the walker performs ballistic motion with constant velocity towards the origin. We study the emerging transport properties for two types of reset time probability density functions (PDFs): exponential and Pareto. In the first case, we find the stationary distribution and a general expression for the stationary mean square displacement (MSD) in terms of the propagator. We find that the stationary MSD may increase, decrease or remain constant with the returning velocity. This depends on the moments of the propagator. Regarding the Pareto resetting PDF, we also study the stationary distribution and the asymptotic scaling of the MSD for diffusive motion. In this case, we see that the resetting modifies the transport regime, making the overall transport sub-diffusive and even reaching a stationary MSD, i.e., a stochastic localization. This phenomenon is also observed in diffusion under instantaneous Pareto resetting. We check the main results with stochastic simulations of the process. | condensed matter |
Machine learning models, which are frequently used in self-driving cars, are trained by matching the captured images of the road and the measured angle of the steering wheel. The angle of the steering wheel is generally fetched from a steering angle sensor, which is tightly coupled to the physical aspects of the vehicle at hand. Therefore, a model-agnostic autonomous car kit is very difficult to develop, and autonomous vehicles need more training data. The proposed vision-based steering angle estimation system takes a new approach that matches the images of the road captured by an outdoor camera with the images of the steering wheel from an onboard camera, avoiding the burden of collecting model-dependent training data and the use of any other electromechanical hardware. | computer science |
The neutron star (NS) as a dark matter (DM) probe has gained broad attention recently, either through heating due to DM annihilation or through its stability in the presence of DM. In this work, we investigate spin-$1/2$ fermionic DM $\chi$ charged under the $U(1)_{X}$ in the dark sector. The massive gauge boson $V$ of the $U(1)_{X}$ gauge group can be produced in the NS via DM annihilation. The produced gauge boson can decay into Standard Model (SM) particles before it exits the NS, despite its tiny couplings to SM particles. Thus, we perform a systematic study of $\chi\bar{\chi}\to2V\to4{\rm SM}$ as a new heating mechanism for the NS, in addition to $\chi\bar{\chi}\to2{\rm SM}$ and kinetic heating from DM-baryon scattering. The self-trapping due to $\chi V$ scattering is also considered. We assume the general framework in which both kinetic and mass mixing terms between $V$ and the SM gauge bosons are present. This allows both vector and axial-vector couplings between $V$ and SM fermions even for $m_V\ll m_Z$. Notably, the contribution from the axial-vector coupling is not negligible when particles scatter relativistically. We point out that the above approaches to DM-induced NS heating have not yet been adopted in recent analyses. The detectability of the aforementioned effects on the NS surface temperature with future telescopes is discussed as well. | high energy physics phenomenology |
Dwarf spheroidal galaxies (dSphs) are the most compact dark matter-dominated objects observed so far. The Pauli exclusion principle limits the number of fermionic dark matter particles that can compose a dSph halo. This results in a well-known lower bound on their particle mass. So far, such bounds were obtained from the analysis of individual dSphs. In this paper, we model dark matter halo density profiles via the semi-analytical approach and analyse the data from eight `classical' dSphs assuming the same mass of dark matter fermion in each object. First, we find out that modelling of Carina dSph results in a much worse fitting quality compared to the other seven objects. From the combined analysis of the kinematic data of the remaining seven `classical' dSphs, we obtain a new $2\sigma$ lower bound of $m\gtrsim 190$ eV on the dark matter fermion mass. In addition, by combining a sub-sample of four dSphs -- Draco, Fornax, Leo I and Sculptor -- we conclude that 220 eV fermionic dark matter appears to be preferred over the standard CDM at about 2$\sigma$ level. However, this result becomes insignificant if all seven objects are included in the analysis. Future improvement of the obtained bound requires more detailed data, both from `classical' and ultra-faint dSphs. | astrophysics |
The near-horizon region of Neveu-Schwarz fivebranes provides interesting examples of gauge/gravity duality. We revisit the structure of wrapped and/or intersecting fivebranes using the tools of null-gauged WZW models in worldsheet string theory, revealing the effective geometry of the fivebrane throat in a variety of examples. Variant gaugings yield linear dilaton fivebrane throats with AdS3 caps, providing a wealth of information about the near-BPS structure of the corresponding spacetime CFT duals. | high energy physics theory |
We provide an overview of the methods that can be used for prediction under uncertainty and data fitting of dynamical systems, and of the fundamental challenges that arise in this context. The focus is on SIR-like models, that are being commonly used when attempting to predict the trend of the COVID-19 pandemic. In particular, we raise a warning flag about identifiability of the parameters of SIR-like models; often, it might be hard to infer the correct values of the parameters from data, even for very simple models, making it non-trivial to use these models for meaningful predictions. Most of the points that we touch upon are actually generally valid for inverse problems in more general setups. | statistics |
A highly granular silicon-tungsten electromagnetic calorimeter (SiW-ECAL) is the reference design of the ECAL for the International Large Detector (ILD) concept, one of the two detector concepts for the detector(s) at the future International Linear Collider. Prototypes for this type of detector are developed within the CALICE Collaboration. The final detector will comprise about $10^{8}$ calorimeter cells that have to be integrated in a volume of at most 20 cm in depth. Detector components that already come close to the specifications for future large-scale experiments in terms of size and channel density are progressively being developed. This contribution will report on the performance of a new 1.2 mm thick 9-layer PCB with wire-bonded ASICs, and comparisons with PCBs with BGA-packaged ASICs will be presented. A volume of about $6\times18\times0.2$ cm$^{3}$ is available for the digital readout and the power supply of the individual detector layers that feature up to 10000 calorimeter cells. We will present newly developed electronic cards that meet these constraints. | physics |
We compute explicitly the two-dimensional version of Basso-Dixon type integrals for the planar four-point correlation functions given by conformal fishnet Feynman graphs. These diagrams are represented by a fragment of a regular square lattice of power-like propagators, arising in the recently proposed integrable bi-scalar fishnet CFT. The formula is derived from first principles, using the formalism of separated variables in integrable SL(2,C) spin chain. It is generalized to anisotropic fishnet, with different powers for propagators in two directions of the lattice. | high energy physics theory |
We consider the problem of deploying a quantum network on an existing fiber infrastructure, where quantum repeaters and end nodes can only be housed at specific locations. We propose a method based on integer linear programming (ILP) to place the minimal number of repeaters on such an existing network topology, such that requirements on end-to-end entanglement-generation rate and fidelity between any pair of end-nodes are satisfied. While ILPs are generally difficult to solve, we show that our method performs well in practice for networks of up to 100 nodes. We illustrate the behavior of our method both on randomly-generated network topologies, as well as on a real-world fiber topology deployed in the Netherlands. | quantum physics |
The ductile fracture process in porous metals due to growth and coalescence of micron scale voids is not only affected by the imposed stress state but also by the distribution of the voids and the material size effect. The objective of this work is to understand the interaction of the inter-void spacing (or ligaments) and the resultant gradient induced material size effect on void coalescence for a range of imposed stress states. To this end, three dimensional finite element calculations of unit cell models with a discrete void embedded in a strain gradient enhanced material matrix are performed. The calculations are carried out for a range of initial inter-void ligament sizes and imposed stress states characterised by fixed values of the stress triaxiality and the Lode parameter. Our results show that in the absence of strain gradient effects on the material response, decreasing the inter-void ligament size results in an increase in the propensity for void coalescence. However, in a strain gradient enhanced material matrix, the strain gradients harden the material in the inter-void ligament and decrease the effect of inter-void ligament size on the propensity for void coalescence. | physics |
We have carried out magnetization, heat capacity, electrical and magnetoresistance measurements (2-300 K) for the polycrystalline form of the intermetallic compounds R2RhSi3 (R= Gd, Tb, and Dy), forming in an AlB2-derived hexagonal structure with a triangular R network. This work was primarily motivated by a revival of interest in Gd2PdSi3 after about two decades in the field of the Topological Hall Effect due to magnetic skyrmions. We report here that these compounds are characterized by double antiferromagnetic transitions (T_N= 13.5 and 12 K for Gd, 13.5 and 6.5 K for Tb; 6.5 and 2.5 K for Dy), but the antiferromagnetism seems to be complex. The most notable observations common to all these compounds are: (i) There are many features in the data mimicking those seen for Gd2PdSi3, including the two field-induced changes in isothermal magnetization as though there are two metamagnetic transitions well below T_N. In view of such a resemblance of the properties, we speculate that these Rh-based materials offer a good playground to study the topological Hall effect in a centrosymmetric structure, with its origin lying in the triangular lattice of magnetic R ions; (ii) There is an increasing contribution of electronic scattering with decreasing temperature towards T_N in all cases, similar to Gd2PdSi3, thereby serving as examples for a theoretical prediction of a classical spin-liquid phase in metallic systems due to geometrical frustration. | condensed matter |
Random forests are a powerful method for non-parametric regression, but are limited in their ability to fit smooth signals, and can show poor predictive performance in the presence of strong, smooth effects. Taking the perspective of random forests as an adaptive kernel method, we pair the forest kernel with a local linear regression adjustment to better capture smoothness. The resulting procedure, local linear forests, enables us to improve on asymptotic rates of convergence for random forests with smooth signals, and provides substantial gains in accuracy on both real and simulated data. We prove a central limit theorem valid under regularity conditions on the forest and smoothness constraints, and propose a computationally efficient construction for confidence intervals. Moving to a causal inference application, we discuss the merits of local regression adjustments for heterogeneous treatment effect estimation, and give an example on a dataset exploring the effect word choice has on attitudes to the social safety net. Last, we include simulation results on real and generated data. | statistics |
The capability to switch between grid-connected and islanded modes has promoted the adoption of microgrid technology for powering remote locations. Stabilizing frequency during the islanding event, however, is a challenging control task, particularly under high penetration of converter-interfaced sources. In this paper, a numerical optimal control (NOC)-based control synthesis methodology is proposed for preparedness of microgrid islanding that ensures guaranteed performance. The key feature of the proposed paradigm is near real-time centralized scheduling for real-time decentralized execution. For tractable computation, linearized models are used in the problem formulation. To accommodate the linearization errors, interval analysis is employed to compute the linearization-induced uncertainty as numerical intervals, so that the NOC problem can be formulated as a robust mixed-integer linear program. The proposed control is verified on the full nonlinear model in Simulink. The simulation results show the effectiveness of the proposed control paradigm and the necessity of considering linearization-induced uncertainty. | electrical engineering and systems science |
Recent work on 6D superconformal field theories (SCFTs) has established an intricate correspondence between certain Higgs branch deformations and nilpotent orbits of flavor symmetry algebras associated with T-branes. In this paper, we return to the stringy origin of these theories and show that many aspects of these deformations can be understood in terms of simple combinatorial data associated with multi-pronged strings stretched between stacks of intersecting 7-branes in F-theory. This data lets us determine the full structure of the nilpotent cone for each semi-simple flavor symmetry algebra, and it further allows us to characterize symmetry breaking patterns in quiver-like theories with classical gauge groups. An especially helpful feature of this analysis is that it extends to "short quivers" in which the breaking patterns from different flavor symmetry factors are correlated. | high energy physics theory |
We study a model system with nematic and magnetic orders, within a channel geometry modelled by an interval, $[-D, D]$. The system is characterised by a tensor-valued nematic order parameter $\mathbf{Q}$ and a vector-valued magnetisation $\mathbf{M}$, and the observable states are modelled as stable critical points of an appropriately defined free energy. In particular, the full energy includes a nemato-magnetic coupling term characterised by a parameter $c$. We (i) derive $L^\infty$ bounds for $\mathbf{Q}$ and $\mathbf{M}$; (ii) prove a uniqueness result in parameter regimes defined by $c$, $D$ and material- and temperature-dependent correlation lengths; (iii) analyse order reconstruction solutions, possessing domain walls, and their stabilities as a function of $D$ and $c$ and (iv) perform numerical studies that elucidate the interplay of $c$ and $D$ for multistability. | mathematics |
Quantifying the quantum coherence of a given system not only plays an important role in quantum information science but also promotes our understanding of some basic problems, such as quantum phase transitions. Conventional quantum coherence measures, such as the $l_1$ norm of coherence and the relative entropy of coherence, have been widely used to study quantum phase transitions, but they are usually basis-dependent. The recent quantum version of the Jensen-Shannon divergence meets all the requirements of a good coherence measure. It is not only a metric but can also be basis-independent. Here, based on the quantum renormalization group method, we propose an analysis of the critical behavior of two types of Ising systems in terms of the distribution of quantum coherence. We directly obtain the trade-off relation, critical phenomena, singular behavior, and scaling behavior for both quantum block spin systems. Furthermore, the monogamy relation in the multipartite system is also studied in detail. These new results extend the finding that quantum coherence can be decomposed into various contributions, as well as enlarge the applications of using basis-independent quantum coherence to reflect quantum critical phenomena. | quantum physics |
The Cassini mission offered us the opportunity to monitor the seasonal evolution of Titan's atmosphere from 2004 to 2017, i.e. half a Titan year. The lower part of the stratosphere (pressures greater than 10 mbar) is a region of particular interest as there are few available temperature measurements, and because its thermal response to the seasonal and meridional insolation variations undergone by Titan remains poorly known. In this study, we measure temperatures in Titan's lower stratosphere between 6 mbar and 25 mbar using Cassini/CIRS spectra covering the whole duration of the mission (from 2004 to 2017) and the whole latitude range. We can thus characterize the meridional distribution of temperatures in Titan's lower stratosphere, and how it evolves from northern winter (2004) to summer solstice (2017). Our measurements show that Titan's lower stratosphere undergoes significant seasonal changes, especially at the South pole, where temperature decreases by 19 K at 15 mbar in 4 years. | astrophysics |
Dust plays a central role in several astrophysical processes, hence the need for dust/gas numerical solutions and for analytical problems to benchmark them. In the seminal dustywave problem, we discover a regime where sound waves cannot propagate through the mixture above a large critical dust fraction. We characterise this regime analytically, making it useful for testing the accuracy of numerical solvers at large dust fractions. | astrophysics
The task of RGBT tracking aims to take the complementary advantages from visible spectrum and thermal infrared data to achieve robust visual tracking, and has received increasing attention in recent years. Existing works focus on modality-specific information integration by introducing modality weights to achieve adaptive fusion or learning robust feature representations of different modalities. Although these methods could effectively deploy the modality-specific properties, they ignore the potential values of modality-shared cues as well as instance-aware information, which are crucial for effective fusion of different modalities in RGBT tracking. In this paper, we propose a novel Multi-Adapter convolutional Network (MANet) to jointly perform modality-shared, modality-specific and instance-aware feature learning in an end-to-end trained deep framework for RGBT tracking. We design three kinds of adapters within our network. Specifically, the generality adapter extracts shared object representations, the modality adapter encodes modality-specific information to deploy their complementary advantages, and the instance adapter models the appearance properties and temporal variations of a certain object. Moreover, to reduce the computational complexity for the real-time demands of visual tracking, we design a parallel structure of the generality adapter and the modality adapter. Extensive experiments on two RGBT tracking benchmark datasets demonstrate the outstanding performance of the proposed tracker against other state-of-the-art RGB and RGBT tracking algorithms. | computer science
Bound electron-hole excitonic states are generally not expected to form with charges of negative effective mass. We identify such excitons in a single layer of the semiconductor WSe2, where they give rise to narrow-band upconverted photoluminescence in the UV, at an energy of 1.66 eV above the first band-edge excitonic transition. Negative band curvature and strong electron-phonon coupling result in a cascaded phonon progression with equidistant peaks in the photoluminescence spectrum, resolvable to ninth order. Ab initio GW-BSE calculations with full electron-hole correlations unmask and explain the admixture of upper conduction-band states to this complex many-body excitation: an optically bright, bound exciton in resonance with the semiconductor continuum. This exciton is responsible for atomic-like quantum-interference phenomena such as electromagnetically induced transparency. Since band curvature can be tuned by pressure or strain, synthesis of exotic quasiparticles such as flat-band excitons with infinite reduced mass becomes feasible. | condensed matter |
One of the lowest-order corrections to Gaussian quantum mechanics in infinite-dimensional Hilbert spaces is given by Airy functions: a uniformization of the stationary phase method applied in the path integral perspective. We introduce a ``periodized stationary phase method'' for discrete Wigner functions of systems with odd prime dimension and show that the $\frac{\pi}{8}$ gate is the discrete analog of the Airy function. We then establish a relationship between the stabilizer rank of states and the number of quadratic Gauss sums necessary in the periodized stationary phase method. This allows us to develop a classical strong simulation of a single qutrit marginal on $t$ qutrit $\frac{\pi}{8}$ gates that are followed by Clifford evolution, and show that this only requires $3^{\frac{t}{2}+1}$ quadratic Gauss sums. This outperforms the best alternative qutrit algorithm (based on Wigner negativity and scaling as $\sim\hspace{-3pt} 3^{0.8 t}$ for $10^{-2}$ precision) for any number of $\frac{\pi}{8}$ gates to full precision. | quantum physics
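For reference, the quadratic Gauss sums invoked above take the standard number-theoretic form for an odd prime $p$ and $a\not\equiv 0 \pmod p$ (the paper's normalization within the periodized stationary phase method may differ, so this is only the textbook form each of the $3^{t/2+1}$ sums is assumed to take):
$$G(a,b;p)=\sum_{n=0}^{p-1}e^{2\pi i\,(an^{2}+bn)/p}=\left(\frac{a}{p}\right)\varepsilon_{p}\,\sqrt{p}\;e^{-2\pi i\,\overline{4a}\,b^{2}/p},\qquad \varepsilon_{p}=\begin{cases}1,&p\equiv 1\ (\mathrm{mod}\ 4)\\ i,&p\equiv 3\ (\mathrm{mod}\ 4)\end{cases}$$
where $(\tfrac{a}{p})$ is the Legendre symbol and $\overline{4a}$ is the inverse of $4a$ modulo $p$; in particular $|G(a,b;p)|=\sqrt{p}$, so each sum can be evaluated in closed form rather than summed term by term.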
We reveal the rich magnon topology in honeycomb bilayer ferromagnets (HBF) induced by the combined effect of interlayer exchange, Dzyaloshinskii-Moriya interaction (DMI), and electrostatic doping (ED). In particular, we present a systematic study of the Hamiltonian non-adiabatic evolution in the HBF parametric space, spanned by the symmetry-breaking terms (DMI and ED) and interlayer exchange. We determine the band closure manifolds which are found to divide the parametric space into six distinct regions, matched with five distinct topological phases and one topologically trivial phase. The characteristic Chern numbers and thermal Hall conductivities are calculated for the topological phases. Edge spectra, dictated by the bulk-edge correspondence, are also analyzed in the nanoribbon version of the model. Both bulk and edge spectra are found to be nonreciprocal as a consequence of ED and edge magnons are observed to counter propagate on opposite edges. The predicted results offer new insights on the manipulation of magnonic Chern numbers and magnon topological transport via experimentally tunable parameters. | condensed matter |
In this paper, we consider the following three dimensional defocusing cubic nonlinear Schr\"odinger equation (NLS) with partial harmonic potential \begin{equation*}\tag{NLS} i\partial_t u + \left(\Delta_{\mathbb{R}^3 }-x^2 \right) u = |u|^2 u, \quad u|_{t=0} = u_0. \end{equation*} Our main result shows that the solution $u$ scatters for any given initial data $u_0$ with finite mass and energy. The main new ingredient in our approach is to approximate (NLS) in the large-scale case by a relevant dispersive continuous resonant (DCR) system. The proof of global well-posedness and scattering of the new (DCR) system is greatly inspired by the fundamental works of Dodson \cite{D3,D1,D2} in his study of scattering for the mass-critical nonlinear Schr\"odinger equation. The analysis of (DCR) system allows us to utilize the additional regularity of the smooth nonlinear profile so that the celebrated concentration-compactness/rigidity argument of Kenig and Merle applies. | mathematics |
Magnetic impurities in metallic superconductors are important for both fundamental and applied sciences. In this study, we focus on dilute Mn-doped aluminum (AlMn) films, which are common superconducting materials used to make transition edge sensors (TES). We developed a multi-energy ion-implantation technique to make AlMn films. Compared with frequently used sputtering techniques, ion implantation provides more precise and reliable control of the Mn doping concentration in the AlMn films. The ion implantation also enables us to quantitatively analyze the superconducting transition temperature as a function of the Mn doping concentration. We found that Mn dopants act as magnetic impurities and that the suppression of superconductivity is counteracted by the antiferromagnetic Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction among Mn dopants. The RKKY interaction can be tuned through defect engineering in the ion-implantation process and through post-implantation annealing. | condensed matter
We present a publicly available library of model atmospheres with radiative-convective equilibrium Pressure-Temperature ($P$-$T$) profiles fully consistent with equilibrium chemical abundances, and the corresponding emission and transmission spectra with R$\sim$5000 at 0.2 $\mu$m decreasing to R$\sim$35 at 30 $\mu$m, for 89 hot Jupiter exoplanets, for four re-circulation factors, six metallicities and six C/O ratios. We find that the choice of condensation process (local/rainout) alters the $P$-$T$ profile and thereby the spectrum substantially, potentially detectable by JWST. We find H$^-$ opacity can contribute to form a strong temperature inversion in ultra-hot Jupiters for C/O ratios $\geq$ 1 and can make transmission spectral features flat in the optical, alongside altering the entire emission spectrum. We highlight how adopting different model choices, such as thermal ionisation, opacities, line-wing profiles and the methodology of varying the C/O ratio, affects the $P$-$T$ structure and the spectrum. We show the role of Fe opacity in forming primary/secondary inversions in the atmosphere. We use WASP-17b and WASP-121b as test cases to demonstrate the effect of grid parameters across their full range, while highlighting some important findings concerning the overall atmospheric structure, chemical transition regimes and their observables. Finally, we apply this library to the current transmission and emission spectra observations of WASP-121b, which show H$_2$O and tentative evidence for VO at the limb, and an H$_2$O emission feature indicative of an inversion on the dayside, with very low energy redistribution, thereby demonstrating the applicability of the library for planning and interpreting observations of transmission and emission spectra. | astrophysics
When the crystalline symmetries that protect a higher-order topological phase are not preserved at the boundaries of the sample, gapless hinge modes or in-gap corner states cannot be stabilized. Therefore, careful engineering of the sample termination is required. Similarly, magnetic textures, whose quantum fluctuations determine the supported magnonic excitations, tend to relax to new configurations that may also break crystalline symmetries when boundaries are introduced. Here we uncover that antiskyrmion crystals provide an experimentally accessible platform to realize a magnonic topological quadrupole insulator, whose hallmark signature are robust magnonic corner states. Furthermore, we show that tuning an applied magnetic field can trigger the self-assembly of antiskyrmions carrying a fractional topological charge along the sample edges. Crucially, these fractional antiskyrmions restore the symmetries needed to enforce the emergence of the magnonic corner states. Using the machinery of nested Wilson loops, adapted to magnonic systems supported by noncollinear magnetic textures, we demonstrate the quantization of the bulk quadrupole moment, edge dipole moments, and corner charges. | condensed matter |
Block-matching stereo is commonly used to obtain disparity (depth) estimates in applications with limited computational power. However, research on simple stereo methods has been less extensive than on the energy-based counterparts, which promise a better quality depth map with more potential for future improvements. Semi-global matching (SGM) methods offer good performance and easy implementation but suffer from a very high memory footprint because they work on the full disparity space image. On the other hand, block-matching stereo needs much less memory. In this paper, we introduce a novel multi-scale hierarchical block-matching approach using a pyramidal variant of depth and cost functions which drastically improves the results of standard block-matching stereo techniques while preserving the low memory footprint and further reducing the complexity of standard block matching. We tested our new multi-scale block-matching scheme on the Middlebury stereo benchmark, where we obtain results that are only slightly worse than state-of-the-art SGM implementations. | computer science
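To make the coarse-to-fine idea concrete, the following is a minimal NumPy sketch of generic pyramidal SAD block matching on a rectified grayscale pair. It is not the authors' exact scheme (their pyramidal depth/cost functions are not reproduced), and the window radius, disparity range and refinement window are illustrative choices.

```python
import numpy as np

def sad_match(left, right, y, x, radius, n_cand, offset=0):
    """Return the disparity (offset + d) minimizing the sum of absolute
    differences over a (2*radius+1)^2 window, for d = 0 .. n_cand."""
    patch_l = left[y - radius:y + radius + 1, x - radius:x + radius + 1]
    best_d, best_cost = 0, np.inf
    for d in range(n_cand + 1):
        xr = x - (offset + d)          # rectified pair: match lies to the left
        if xr - radius < 0:
            break
        patch_r = right[y - radius:y + radius + 1, xr - radius:xr + radius + 1]
        cost = np.abs(patch_l - patch_r).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return offset + best_d

def pyramid_block_matching(left, right, levels=3, radius=3, max_disp=64, refine=2):
    """Coarse-to-fine block matching: full search only at the coarsest level,
    then a small +/- refine search around the upsampled estimate."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    pyr_l, pyr_r = [left], [right]
    for _ in range(levels - 1):        # naive decimation (no pre-smoothing)
        pyr_l.append(pyr_l[-1][::2, ::2])
        pyr_r.append(pyr_r[-1][::2, ::2])
    disp = None
    for lvl in reversed(range(levels)):
        L, R = pyr_l[lvl], pyr_r[lvl]
        h, w = L.shape
        new_disp = np.zeros((h, w), dtype=np.int32)
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                if disp is None:       # coarsest level: search the full range
                    new_disp[y, x] = sad_match(L, R, y, x, radius, max_disp >> lvl)
                else:                  # finer levels: refine the coarse guess
                    cy = min(y // 2, disp.shape[0] - 1)
                    cx = min(x // 2, disp.shape[1] - 1)
                    guess = 2 * int(disp[cy, cx])
                    new_disp[y, x] = sad_match(L, R, y, x, radius, 2 * refine,
                                               offset=max(guess - refine, 0))
        disp = new_disp
    return disp
```

The design point this illustrates is that only the coarsest level pays for an exhaustive disparity search; every finer level searches a constant-size window around the upsampled estimate, which is what keeps both the memory footprint and the complexity low compared to full disparity-space methods.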
We consider three fundamental issues in quantum gravity: (a) the black hole information paradox (b) the unboundedness of entropy that can be stored inside a black hole horizon (c) the relation between the black hole horizon and the cosmological horizon. With help from the small corrections theorem, we convert each of these issues into a sharp conflict. We then argue that all three conflicts can be resolved by the following hypothesis: {\it the vacuum wavefunctional of quantum gravity contains a `vecro' component made of virtual fluctuations of configurations of the same type that arise in the fuzzball structure of black hole microstates}. Further, if we assume that causality holds to leading order in gently curved spacetime, then we {\it must} have such a vecro component in order to resolve the above conflicts. The term vecro stands for `Virtual Extended Compression-Resistant Objects', and characterizes the nature of the vacuum fluctuations that resolve the puzzles. It is interesting that puzzle (c) may relate the role of quantum gravity in black holes to observations in the sky. | high energy physics theory |
The last decades have witnessed the breakthrough of autonomous vehicles (AVs), and the perception capabilities of AVs have been dramatically improved. Various sensors installed on AVs, including, but not limited to, LiDAR, radar, camera and stereovision, will be collecting massive data and perceiving the surrounding traffic states continuously. In fact, a fleet of AVs can serve as floating (or probe) sensors, which can be utilized to infer traffic information while cruising around the roadway networks. In contrast, conventional traffic sensing methods rely on fixed traffic sensors such as loop detectors, cameras and microwave vehicle detectors. Due to the high cost of conventional traffic sensors, traffic state data are usually obtained in a low-frequency and sparse manner. In view of this, this paper leverages the rich data collected through AVs to propose a high-resolution traffic sensing framework. The proposed framework estimates the fundamental traffic state variables, namely flow, density and speed, in high spatio-temporal resolution, and it is developed for different levels of AV perception capabilities and low AV market penetration rates. The Next Generation Simulation (NGSIM) data are adopted to examine the accuracy and robustness of the proposed framework. Experimental results show that the proposed estimation framework achieves high accuracy even with a low AV market penetration rate. A sensitivity analysis regarding the AV penetration rate, sensor configuration, and perception accuracy is also conducted. This study will help policymakers and private-sector companies (e.g., Uber, Waymo) understand the value of AVs, especially the value of the massive data collected by AVs, in traffic operation and management. | electrical engineering and systems science
We derive a universal formula for the asymptotic growth of the mean value of the three-point coefficient for Warped Conformal Field Theories (WCFTs), and provide a holographic calculation in BTZ black holes. WCFTs are two dimensional quantum field theories featuring a chiral Virasoro and U(1) Kac-Moody algebra, and are conjectured to be holographically dual to quantum gravity on asymptotically AdS$_3$ spacetime with Comp\`ere-Song-Strominger boundary conditions. The WCFT calculation amounts to the calculation of one-point functions on the torus, whose high temperature limit can be approximated by using the modular covariance of WCFTs, similar to the derivation of the Cardy formula. The bulk process is given by a tadpole diagram, in which a massive spinning particle propagates from infinity to the horizon and splits into a particle and an antiparticle, which annihilate after going around the horizon of the BTZ black hole. The agreement between the bulk and WCFT calculations indicates that the black hole geometries in asymptotically AdS$_3$ spacetimes can emerge upon coarse-graining over microstates in WCFTs, similar to the results of Kraus and Maloney in the context of AdS/CFT [arXiv:1608.03284]. | high energy physics theory
Fault-tolerant implementation of quantum gates is one of the preconditions for realizing quantum computation. The platform of Rydberg atoms is one of the most promising candidates for achieving quantum computation. We propose to implement a controlled-$Z$ gate on Rydberg atoms where an amplitude-modulated field is employed to induce Rydberg antiblockade. Gate robustness against fluctuations in the Rydberg-Rydberg interaction can be greatly enhanced by adjusting the amplitude-modulated field. Furthermore, we introduce a Landau-Zener-St\"{u}ckelberg transition on the target atom so as to improve the gate resilience to deviations in the gate time and drifts in the pulse amplitude. With feasible experimental parameters, one can achieve the gate with small fidelity errors caused by atomic decay, the interatomic dipole-dipole force, and Doppler effects. Finally, we generalize the gate scheme to multiqubit cases, where resilient multiqubit phase gates can be obtained in one step with an unchanged gate time as the number of qubits increases. | quantum physics
We study the sub-grid scale characteristics of a vorticity-transport-based approach for large-eddy simulations. In particular, we consider a multi-dimensional upwind scheme for the vorticity transport equations and establish its properties in the under-resolved regime. The asymptotic behavior of key turbulence statistics of velocity gradients, vorticity, and invariants is studied in detail. Modified equation analysis indicates that dissipation can be controlled locally via non-linear limiting of the gradient employed for the vorticity reconstruction on the cell face such that low numerical diffusion is obtained in well-resolved regimes and high numerical diffusion is realized in under-resolved regions. The enstrophy budget highlights the remarkable ability of the truncation terms to mimic the true sub-grid scale dissipation and diffusion. The modified equation also reveals diffusive terms that are similar to several commonly employed sub-grid scale models including tensor-gradient and hyper-viscosity models. Investigations on several canonical turbulence flow cases show that large-scale features are adequately represented and remain consistent in terms of spectral energy over a range of grid resolutions. Numerical dissipation in under-resolved simulations is consistent and can be characterized by diffusion terms discovered in the modified equation analysis. A minimum state of scale separation necessary to obtain asymptotic behavior is characterized using metrics such as effective Reynolds number and effective grid spacing. Temporally-evolving jet simulations, characterized by large-scale vortical structures, demonstrate that high Reynolds number vortex-dominated flows are captured when these criteria are met, and that diffusive non-linear limiting of the vorticity reconstruction must be employed to realize accuracy in under-resolved simulations. | physics
We present a pilot study on crea.blender, a novel co-creative game designed for large-scale, systematic assessment of distinct constructs of human creativity. Co-creative systems are systems in which humans and computers (often with Machine Learning) collaborate on a creative task. This human-computer collaboration raises questions about the relevance and level of human creativity and involvement in the process. We expand on, and explore aspects of these questions in this pilot study. We observe participants play through three different play modes in crea.blender, each aligned with established creativity assessment methods. In these modes, players "blend" existing images into new images under varying constraints. Our study indicates that crea.blender provides a playful experience, affords players a sense of control over the interface, and elicits different types of player behavior, supporting further study of the tool for use in a scalable, playful, creativity assessment. | computer science |
The collective dynamics of topological structures have been of great interest from both fundamental and applied perspectives. For example, the studies of dynamical properties of magnetic vortices and skyrmions not only deepened the understanding of many-body physics but also led to potential applications in data processing and storage. Topological structures constructed from electrical polarization rather than spin have recently been realized in ferroelectric superlattices, promising for ultrafast electric-field control of topological orders. However, little is known about the dynamics of such complex extended nanostructures which in turn underlies their functionalities. Using terahertz-field excitation and femtosecond x-ray diffraction measurements, we observe ultrafast collective polarization dynamics that are unique to polar vortices, with orders of magnitude higher frequencies and smaller lateral size than those of experimentally realized magnetic vortices. A previously unseen soft mode, hereafter referred to as a vortexon, emerges as transient arrays of nanoscale circular patterns of atomic displacements, which reverse their vorticity on picosecond time scales. Its frequency is significantly reduced at a critical strain, indicating a condensation of structural dynamics. First-principles-based atomistic calculations and phase-field modeling reveal the microscopic atomic arrangements and frequencies of the vortex modes. The discovery of subterahertz collective dynamics in polar vortices opens up opportunities for applications of electric-field driven data processing in topological structures with ultrahigh speed and density. | condensed matter |
Anomaly detection techniques are growing in importance at the Large Hadron Collider (LHC), motivated by the increasing need to search for new physics in a model-agnostic way. In this work, we provide a detailed comparative study between a well-studied unsupervised method called the autoencoder (AE) and a weakly-supervised approach based on the Classification Without Labels (CWoLa) technique. We examine the ability of the two methods to identify a new physics signal at different cross sections in a fully hadronic resonance search. By construction, the AE classification performance is independent of the amount of injected signal. In contrast, the CWoLa performance improves with increasing signal abundance. When integrating these approaches with a complete background estimate, we find that the two methods have complementary sensitivity. In particular, CWoLa is effective at finding diverse and moderately rare signals while the AE can provide sensitivity to very rare signals, but only with certain topologies. We therefore demonstrate that both techniques are complementary and can be used together for anomaly detection at the LHC. | high energy physics phenomenology |
Applying neural networks as controllers in dynamical systems has shown great promises. However, it is critical yet challenging to verify the safety of such control systems with neural-network controllers in the loop. Previous methods for verifying neural network controlled systems are limited to a few specific activation functions. In this work, we propose a new reachability analysis approach based on Bernstein polynomials that can verify neural-network controlled systems with a more general form of activation functions, i.e., as long as they ensure that the neural networks are Lipschitz continuous. Specifically, we consider abstracting feedforward neural networks with Bernstein polynomials for a small subset of inputs. To quantify the error introduced by abstraction, we provide both theoretical error bound estimation based on the theory of Bernstein polynomials and more practical sampling based error bound estimation, following a tight Lipschitz constant estimation approach based on forward reachability analysis. Compared with previous methods, our approach addresses a much broader set of neural networks, including heterogeneous neural networks that contain multiple types of activation functions. Experiment results on a variety of benchmarks show the effectiveness of our approach. | electrical engineering and systems science |
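For context, the Bernstein polynomials underlying this abstraction are the classical operators on $[0,1]$ (the paper's multivariate construction and its specific theoretical and sampling-based error bounds generalize this and may take a different form):
$$B_{n}(f)(x)=\sum_{k=0}^{n}f\!\left(\tfrac{k}{n}\right)\binom{n}{k}x^{k}(1-x)^{n-k},\qquad \big|B_{n}(f)(x)-f(x)\big|\le L\sqrt{\tfrac{x(1-x)}{n}}\le \frac{L}{2\sqrt{n}}$$
for any $L$-Lipschitz $f$, which illustrates how a Lipschitz constant of the neural-network controller translates directly into a polynomial-abstraction error that shrinks as the degree $n$ grows.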
We study a quantum Stirling cycle which extracts work using quantized energy levels of a potential well. The work and the efficiency of the engine depend on the length of the potential well, and the Carnot efficiency is approached in a low temperature limiting case. We show that the lack of information about the position of the particle inside the potential well can be converted into useful work without resorting to any measurement. In the low temperature limit, we calculate the amount of work extractable from distinguishable particles, fermions, and bosons. | quantum physics |
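For orientation, the standard ingredients behind such a cycle, assuming an infinite square well of width $L$ (the paper's potential well and bath conventions may differ), are
$$E_{n}(L)=\frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}},\qquad Z(T,L)=\sum_{n=1}^{\infty}e^{-E_{n}(L)/k_{B}T},\qquad W_{\mathrm{iso}}=-\Delta F=k_{B}T\,\ln\frac{Z(T,L_{\mathrm{f}})}{Z(T,L_{\mathrm{i}})},$$
so a Stirling cycle consists of two such quasi-static isothermal strokes at $T_{h}$ and $T_{c}$ joined by two strokes at fixed well length; in the low-temperature limit only the lowest levels contribute to $Z$, which is where the dependence of the work on the quantized spectrum of the well becomes explicit.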
Markov models lie at the interface between statistical independence in a probability distribution and graph separation properties. We review model selection and estimation in directed and undirected Markov models with Gaussian parametrization, emphasizing the main similarities and differences. These two model classes are similar but not equivalent, although they share a common intersection. We present the existing results from a historical perspective, taking into account the amount of literature existing from both the artificial intelligence and statistics research communities, where these models were originated. We cover classical topics such as maximum likelihood estimation and model selection via hypothesis testing, but also more modern approaches like regularization and Bayesian methods. We also discuss how the Markov models reviewed fit in the rich hierarchy of other, higher level Markov model classes. Finally, we close the paper overviewing relaxations of the Gaussian assumption and pointing out the main areas of application where these Markov models are nowadays used. | statistics |
This article presents a general master equation formalism for the interaction between travelling pulses of quantum radiation and localized quantum systems. Traveling fields populate a continuum of free space radiation modes and the Jaynes-Cummings model, valid for a discrete eigenmode of a cavity, does not apply. We develop a complete input-output theory to describe the driving of quantum systems by arbitrary incident pulses of radiation and the quantum state of the field emitted into any desired outgoing temporal mode. Our theory is applicable to the transformation and interaction of pulses of radiation by their coupling to a wide class of material quantum systems. We discuss the most essential differences between quantum interactions with pulses and with discrete radiative eigenmodes and present examples relevant to quantum information protocols with optical, microwave and acoustic waves. | quantum physics |
Identification of habitable planets beyond our solar system is a key goal of current and future space missions. Yet habitability depends not only on the stellar irradiance, but equally on constituent parts of the planetary atmosphere. Here we show, for the first time, that radiatively active mineral dust will have a significant impact on the habitability of Earth-like exoplanets. On tidally-locked planets, dust cools the day-side and warms the night-side, significantly widening the habitable zone. Independent of orbital configuration, we suggest that airborne dust can postpone planetary water loss at the inner edge of the habitable zone, through a feedback involving decreasing ocean coverage and increased dust loading. The inclusion of dust significantly obscures key biomarker gases (e.g. ozone, methane) in simulated transmission spectra, implying an important influence on the interpretation of observations. We demonstrate that future observational and theoretical studies of terrestrial exoplanets must consider the effect of dust. | astrophysics |
We examine various scenarios in which the Standard Model is extended by a light leptoquark state to explain one or both $B$-physics anomalies. Combining low-energy constraints and direct searches at the LHC, we confirm that the only single leptoquark model that can explain both anomalies at the same time is a vector leptoquark, known as $U_1$. Focusing on $U_1$, we highlight the complementarity between LHC and low--energy constraints, and argue that improving the experimental bound on $\mathcal{B}(B\to K\mu\tau)$ by two orders of magnitude could compromise its viability as a solution to the $B$-physics anomalies. | high energy physics phenomenology |
To take into account nuclear quantum effects on the dynamics of atoms, the path integral molecular dynamics (PIMD) method used since the 1980s is based on the formalism developed by R. P. Feynman. However, the huge computation time required for PIMD reduces its range of applicability. Another drawback is the requirement of additional techniques to access time correlation functions (ring polymer MD or centroid MD). We developed an alternative technique based on a quantum thermal bath (QTB) which reduces the computation time by a factor of ~20. The QTB approach consists of a classical Langevin dynamics in which the white-noise random force is replaced by a Gaussian random force having the power spectral density given by the quantum fluctuation-dissipation theorem. The method has yielded satisfactory results for weakly anharmonic systems: the quantum harmonic oscillator, the heat capacity of a MgO crystal, and isotope effects in $^7$LiH and $^7$LiD. Unfortunately, the QTB is subject to the problem of zero-point energy leakage (ZPEL) in highly anharmonic systems, which is inherent in the use of classical mechanics. Indeed, a part of the energy of the high-frequency modes is transferred to the low-frequency modes, leading to a wrong energy distribution. We have shown that in order to reduce or even eliminate ZPEL, it is sufficient to increase the value of the frictional coefficient. Another way to solve the ZPEL problem is to combine the QTB and PIMD techniques. It requires the modification of the power spectral density of the random force within the QTB. This combination can also be seen as a way to speed up the PIMD. | physics
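Concretely, the QTB dynamics referred to above is of the Langevin form (standard formulation with generic symbols $m$, $\gamma$, $F(x)$, $R(t)$; the paper's notation may differ):
$$m\ddot{x}=F(x)-m\gamma\dot{x}+R(t),\qquad S_{RR}(\omega)=2m\gamma\,\theta(\omega,T),\qquad \theta(\omega,T)=\hbar\omega\left[\tfrac{1}{2}+\frac{1}{e^{\hbar\omega/k_{B}T}-1}\right],$$
which reduces to the classical white-noise limit $S_{RR}=2m\gamma k_{B}T$ when $\hbar\omega\ll k_{B}T$; the friction coefficient $\gamma$ appearing here is the knob mentioned above for suppressing zero-point energy leakage.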
We provide a compact and unified treatment of power spectrum observables for the effective field theory (EFT) of inflation with the complete set of operators that lead to second-order equations of motion in metric perturbations in both space and time derivatives, including Horndeski and GLPV theories. We relate the EFT operators in ADM form to the four additional free functions of time in the scalar and tensor equations. Using the generalized slow roll formalism, we show that each power spectrum can be described by an integral over a single source that is a function of its respective sound horizon. With this correspondence, existing model independent constraints on the source function can be simply reinterpreted in the more general inflationary context. By expanding these sources around an optimized freeze-out epoch, we also provide characterizations of these spectra in terms of five slow-roll hierarchies whose leading order forms are compact and accurate as long as EFT coefficients vary only on timescales greater than an efold. We also clarify the relationship between the unitary gauge observables employed in the EFT and the comoving gauge observables of the post-inflationary universe. | high energy physics theory |
Cine cardiac magnetic resonance imaging (MRI) is widely used for diagnosis of cardiac diseases thanks to its ability to present cardiovascular features in excellent contrast. As compared to computed tomography (CT), MRI, however, requires a long scan time, which inevitably induces motion artifacts and causes patients' discomfort. Thus, there has been a strong clinical motivation to develop techniques to reduce both the scan time and motion artifacts. Given its successful applications in other medical imaging tasks such as MRI super-resolution and CT metal artifact reduction, deep learning is a promising approach for cardiac MRI motion artifact reduction. In this paper, we propose a recurrent neural network to simultaneously extract both spatial and temporal features from under-sampled, motion-blurred cine cardiac images for improved image quality. The experimental results demonstrate substantially improved image quality on two clinical test datasets. Also, our method enables data-driven frame interpolation at an enhanced temporal resolution. Compared with existing methods, our deep learning approach gives a superior performance in terms of structural similarity (SSIM) and peak signal-to-noise ratio (PSNR). | electrical engineering and systems science |
This paper provides a tutorial introduction to disk margins. These are robust stability measures that account for simultaneous gain and phase perturbations in a feedback system. The paper first reviews the classical (gain-only and phase-only) margins and their limitations. This motivates the use of disk margins which are defined using a set of perturbations that have simultaneous gain and phase variations. A necessary and sufficient condition is provided to compute the disk margin for a single-input, single-output feedback system. Frequency-dependent disk margins can also be computed yielding additional insight. The paper concludes with a discussion of stability margins for multiple-input, multiple output (MIMO) feedback systems. A typical approach is to assess robust stability "loop-at-a-time" with a perturbation introduced into a single channel and all other channels held at their nominal values. MIMO disk margins provide a useful extension to consider simultaneous variations in multiple channels. This multiple-loop analysis can provide a more accurate robustness assessment as compared to the loop-at-a-time approach. | electrical engineering and systems science |
We propose a control approach for a class of nonlinear mechanical systems to stabilize the system under study while ensuring that the oscillations of the transient response are reduced. The approach is twofold: (i) we apply our technique for linear viscous damping identification of the system to improve the accuracy of the selected control technique, and (ii) we implement a passivity-based controller to stabilize and reduce the oscillations by selecting the control parameters properly in accordance with the identified damping. Moreover, we provide an analysis for a particular passivity-based control approach that has been shown successfully for reducing such oscillations. Also, we validate the methodology by implementing it experimentally in a planar manipulator. | electrical engineering and systems science |
Key challenges for the deployment of reinforcement learning (RL) agents in the real world are the discovery, representation and reuse of skills in the absence of a reward function. To this end, we propose a novel approach to learn a task-agnostic skill embedding space from unlabeled multi-view videos. Our method learns a general skill embedding independently from the task context by using an adversarial loss. We combine a metric learning loss, which utilizes temporal video coherence to learn a state representation, with an entropy regularized adversarial skill-transfer loss. The metric learning loss learns a disentangled representation by attracting simultaneous viewpoints of the same observations and repelling visually similar frames from temporal neighbors. The adversarial skill-transfer loss enhances re-usability of learned skill embeddings over multiple task domains. We show that the learned embedding enables training of continuous control policies to solve novel tasks that require the interpolation of previously seen skills. Our extensive evaluation with both simulation and real world data demonstrates the effectiveness of our method in learning transferable skills from unlabeled interaction videos and composing them for new tasks. Code, pretrained models and dataset are available at http://robotskills.cs.uni-freiburg.de | computer science |
We study the ground state of two interacting bosonic particles confined in a ring-shaped lattice potential and subjected to a synthetic magnetic flux. The system is described by the Bose-Hubbard model and solved exactly through a plane-wave Ansatz of the wave function. We obtain energies and correlation functions of the system both for repulsive and attractive interactions. In contrast with the one-dimensional continuous theory described by the Lieb-Liniger model, in the lattice case we prove that the center of mass of the two particles is coupled with its relative coordinate. Distinctive features clearly emerge in the persistent current of the system. While for repulsive bosons the persistent current displays a periodicity given by the standard flux quantum for any interaction strength, in the attractive case the flux quantum becomes fractionalized in a manner that depends on the interaction. We also study the density after the long time expansion of the system which provides an experimentally accessible route to detect persistent currents in cold atom settings. Our results can be used to benchmark approximate schemes for the many-body problem. | condensed matter |
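The model in question is the two-particle Bose-Hubbard ring threaded by a synthetic flux, which in the standard Peierls convention (assumed here; the paper's normalization of the phase may differ) reads
$$H=-J\sum_{j=1}^{L}\left(e^{i\phi/L}\,b_{j}^{\dagger}b_{j+1}+\mathrm{h.c.}\right)+\frac{U}{2}\sum_{j=1}^{L}n_{j}\left(n_{j}-1\right),\qquad b_{L+1}\equiv b_{1},$$
with $L$ sites, hopping $J$, on-site interaction $U$ and total flux $\phi$ in units of the flux quantum; the persistent current is then obtained as $I(\phi)\propto-\partial E_{0}/\partial\phi$, whose periodicity in $\phi$ is the quantity whose fractionalization is discussed above.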
Progress in video anomaly detection research is currently slowed by small datasets that lack a wide variety of activities as well as flawed evaluation criteria. This paper aims to help move this research effort forward by introducing a large and varied new dataset called Street Scene, as well as two new evaluation criteria that provide a better estimate of how an algorithm will perform in practice. In addition to the new dataset and evaluation criteria, we present two variations of a novel baseline video anomaly detection algorithm and show they are much more accurate on Street Scene than two state-of-the-art algorithms from the literature. | computer science |
Hypergraphs offer a natural modeling language for studying polyadic interactions between sets of entities. Many polyadic interactions are asymmetric, with nodes playing distinctive roles. In an academic collaboration network, for example, the order of authors on a paper often reflects the nature of their contributions to the completed work. To model these networks, we introduce \emph{annotated hypergraphs} as natural polyadic generalizations of directed graphs. Annotated hypergraphs form a highly general framework for incorporating metadata into polyadic graph models. To facilitate data analysis with annotated hypergraphs, we construct a role-aware configuration null model for these structures and prove an efficient Markov Chain Monte Carlo scheme for sampling from it. We proceed to formulate several metrics and algorithms for the analysis of annotated hypergraphs. Several of these, such as assortativity and modularity, naturally generalize dyadic counterparts. Other metrics, such as local role densities, are unique to the setting of annotated hypergraphs. We illustrate our techniques on six digital social networks, and present a detailed case-study of the Enron email data set. | physics |
Exchange coupling between localized spins and band or topological states accounts for giant magnetotransport and magnetooptical effects as well as determines spin-spin interactions in magnetic insulators and semiconductors. However, even in archetypical dilute magnetic semiconductors such as Cd$_{1-x}$Mn$_x$Te and Hg$_{1-x}$Mn$_x$Te the evolution of this coupling with the wave vector is not understood. A series of experiments has demonstrated that the exchange-induced splitting of the magnetooptical spectra of Cd$_{1-x}$Mn$_x$Te and Zn$_{1-x}$Mn$_x$Te at the L points of the Brillouin zone is, in contradiction to the existing theories, more than one order of magnitude smaller than its value at the zone center and can show an unexpected sign of the effective Land\'e factors. We elucidate the origin of these findings quantitatively by combining: (i) relativistic first-principles density functional calculations; (ii) a tight-binding approach that carefully takes into account the k-dependence of the potential and kinetic sp-d exchange interactions; (iii) a theory of magnetic circular dichroism (MCD) for $E_1$ and $E_1$ + $\Delta_1$ optical transitions, developed here within the envelope function $kp$ formalism for the L point of the Brillouin zone in zinc-blende crystals. This combination of methods leads to the conclusion that the physics of MCD at the boundary of the Brillouin zone is strongly affected by the strength of two relativistic effects in particular compounds: (i) the mass-velocity term that controls the distance of the conduction band at the L point to the upper Hubbard band of Mn ions and, thus, the relative magnitude and sign of the exchange splittings in the conduction and valence bands; (ii) the spin-momentum locking by spin-orbit coupling that reduces the exchange splitting depending on the orientation of particular L valleys with respect to the magnetization direction. | condensed matter
We propose a three-track detection system for two dimensional magnetic recording (TDMR) in which a local area influence probabilistic (LAIP) detector works with a trellis-based Bahl-Cocke-Jelinek-Raviv (BCJR) detector to remove intersymbol interference (ISI) and intertrack interference (ITI) among coded data bits as well as media noise due to magnetic grain-bit interactions. Two minimum mean-squared error (MMSE) linear equalizers with different response targets are employed before the LAIP and BCJR detectors. The LAIP detector considers local grain-bit interactions and passes coded bit log-likelihood ratios (LLRs) to the channel decoder, whose output LLRs serve as a priori information to the BCJR detector, which is followed by a second channel decoding pass. Simulation results under 1-shot decoding on a grain-flipping-probability (GFP) media model show that the proposed LAIP/BCJR detection system achieves density gains of 6.8% for center-track detection and 1.2% for three-track detection compared to a standard BCJR/1D-PDNP. The proposed system's BCJR detector bit error rates (BERs) are lower than those of a recently proposed two-track BCJR/2D-PDNP system by factors of (0.55, 0.08) for tracks 1 and 2 respectively. | electrical engineering and systems science |
Recent observations of rotationally supported galaxies show a tight correlation between the observed radial acceleration at every radius and the Newtonian acceleration generated by the baryonic mass distribution, the so-called radial acceleration relation (RAR). The rotation curves (RCs) of the SPARC sample of disk galaxies with different morphologies, masses, sizes and gas fractions are investigated in the context of modified Newtonian dynamics (MOND). We include the effect of cold dark baryons by scaling the measured mass in the atomic form by a factor of $c$ in the mass budget of galaxies. In addition to the standard interpolating function, we also fit the RCs and the RAR with the empirical RAR-inspired interpolating function. Slightly better fits for about $47\%$ of galaxies in our sample are achieved in the presence of dark baryons ($c>1$) with the mean value of $c = 2.4\pm 1.3$. Although the MOND fits are not significantly improved by including dark baryons, it results in a decrease in the characteristic acceleration $g_\dag$ by $40\%$. We find no correlation between the MOND critical acceleration $a_0$ and the central surface brightness of the stellar disk, $\mu_{3.6}$. This supports $a_0$ being a universal constant for all galaxies. | astrophysics |
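For reference, the RAR-inspired and standard MOND interpolating functions referred to above are commonly written as (the paper's parameterization may differ slightly)
$$g_{\mathrm{obs}}=\frac{g_{\mathrm{bar}}}{1-e^{-\sqrt{g_{\mathrm{bar}}/g_{\dagger}}}},\qquad g_{\mathrm{obs}}=g_{\mathrm{bar}}\,\nu\!\left(\frac{g_{\mathrm{bar}}}{a_{0}}\right),\quad \nu(y)=\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{1}{y}},$$
where the second form corresponds to the "simple" interpolating function; scaling the atomic gas mass by the factor $c$ enters these relations only through $g_{\mathrm{bar}}$, which is how the dark-baryon fits modify the inferred $g_{\dagger}$ and $a_{0}$.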
High energy efficiency and low latency have always been significant goals pursued by designers of wireless networks. One efficient way to achieve these goals is cross-layer scheduling based on the system states in different layers, such as the queuing state and the channel state. However, most existing works on cross-layer design focus on scheduling based on a discrete channel state; little attention has been paid to the continuous channel state, which is closer to practical scenarios. Therefore, in this paper, we study the optimal cross-layer scheduling policy for data transmission in a single communication link with a continuous channel state space. The aim of the scheduling is to minimize the average power for data transmission when the average delay is constrained. More specifically, the optimal cross-layer scheduling problem is formulated as a variational problem. Based on the variational problem, we show the optimality of the deterministic scheduling policy through a constructive proof. The optimality of the deterministic policy allows us to compress the search space from probabilistic policies to deterministic policies. | electrical engineering and systems science
We derive a universal classical-quantum superposition code and universal classical-quantum multiple access channel codes by using generalized packing lemmas for the type method. Using our classical-quantum universal superposition code, we establish the capacity region of a classical-quantum compound broadcast channel with degraded message sets. Our universal classical-quantum multiple access channel codes are of two types: one uses joint decoding and the other uses separate decoding. The former universally achieves corner points of the capacity region and the latter universally achieves general points of the capacity region. Combining the latter universal code with the existing result of Quantum Inf. Process. 18, 246 (2019), we establish a single-letterized formula for the capacity region of a classical-quantum compound multiple access channel. | quantum physics
Progress toward the solution of the strongly correlated electron problem has been stymied by the exponential complexity of the wave function. Previous work established an exact two-body exponential product expansion for the ground-state wave function. By developing a reduced density matrix analogue of Dalgarno-Lewis perturbation theory, we prove here that (i) the two-body exponential product expansion is rapidly and globally convergent with each operator representing an order of a renormalized perturbation theory, (ii) the energy of the expansion converges quadratically near the solution, and (iii) the expansion is exact for both ground and excited states. The two-body expansion offers a reduced parametrization of the many-particle wave function as well as the two-particle reduced density matrix with potential applications on both conventional and quantum computers for the study of strongly correlated quantum systems. We demonstrate the result with the exact solution of the contracted Schr\"odinger equation for the molecular chains H$_{4}$ and H$_{5}$. | quantum physics |
Fluid-structure simulations of slender inextensible filaments in a viscous fluid are often plagued by numerical stiffness. Recent coarse-graining studies have reduced the computational requirements of simulating such systems, though have thus far been limited to the motion of planar filaments. In this work we extend such frameworks to filament motion in three dimensions, identifying and circumventing coordinate-system singularities introduced by filament parameterisation via repeated changes of basis. The resulting methodology enables efficient and rapid study of the motion of flexible filaments in three dimensions, and is readily extensible to a wide range of problems, including filament motion in confined geometries, large-scale active matter simulations, and the motility of mammalian spermatozoa. | physics |
This paper proposes a novel learning method for multi-task applications. Multi-task neural networks can learn to transfer knowledge across different tasks by using parameter sharing. However, sharing parameters between unrelated tasks can hurt performance. To address this issue, we propose a framework to learn fine-grained patterns of parameter sharing. Assuming that the network is composed of several components across layers, our framework uses learned binary variables to allocate components to tasks in order to encourage more parameter sharing between related tasks, and discourage parameter sharing otherwise. The binary allocation variables are learned jointly with the model parameters by standard back-propagation thanks to the Gumbel-Softmax reparametrization method. When applied to the Omniglot benchmark, the proposed method achieves a 17% relative reduction of the error rate compared to state-of-the-art. | computer science |
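The Gumbel-Softmax reparametrization mentioned above can be sketched as follows in NumPy. This is illustrative only: the paper's allocation variables, temperature schedule, and straight-through details are not reproduced, and in practice one would use an autodiff framework so gradients reach the logits.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, hard=False, rng=None):
    """Relaxed (differentiable) sample of a one-hot allocation vector.

    logits : unnormalized log-probabilities, e.g. over {share, don't share}
             for one (task, component) pair.
    tau    : temperature; smaller tau -> samples closer to one-hot.
    hard   : if True, discretize to one-hot (with autodiff one would use
             the straight-through estimator here).
    """
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.uniform(low=1e-12, high=1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                  # Gumbel(0, 1) noise
    z = (np.asarray(logits) + g) / tau
    y = np.exp(z - z.max())                  # numerically stable softmax
    y /= y.sum()
    if hard:
        onehot = np.zeros_like(y)
        onehot[np.argmax(y)] = 1.0
        return onehot
    return y

# toy usage: soft "share / don't share" decision for one component-task pair
allocation_logits = np.array([0.3, -0.1])    # [share, don't share]
print(gumbel_softmax(allocation_logits, tau=0.5))
```

Roughly, one such relaxed categorical sample per (task, component) pair decides whether that component's parameters are used by that task, and the samples become increasingly one-hot as the temperature is annealed, recovering discrete allocation decisions at the end of training.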
Context. Globular clusters (GCs) are witnesses of the past accretion events onto the Milky Way (MW). In particular, the GCs of the Sagittarius (Sgr) dwarf galaxy are important probes of an on-going merger. Aims. Our main goal is to search for new GC members of this dwarf galaxy using the VISTA Variables in the Via Lactea Extended Survey (VVVX) near-infrared database combined with the Gaia Early Data Release 3 (EDR3) optical database. Methods. We investigated all VVVX-enabled discoveries of GC candidates in a region covering about 180 sq. deg. toward the bulge and the Sgr dwarf galaxy. We used multiband point-spread function photometry to obtain deep color-magnitude diagrams (CMDs) and luminosity functions (LFs) for all GC candidates, complemented by accurate Gaia-EDR3 proper motions (PMs) to select Sgr members and variability information to select RR Lyrae which are potential GC members. Results. After applying a strict PM cut to discard foreground bulge and disk stars, the CMDs and LFs for some of the GC candidates exhibit well defined red giant branches and red clump giant star peaks. We selected the best Sgr GCs, estimating their distances, reddenings, and associated RR Lyrae. Conclusions. We discover 12 new Sgr GC members, more than doubling the number of GCs known in this dwarf galaxy. In addition, there are 11 other GC candidates identified that are uncertain, awaiting better data for confirmation. | astrophysics |
We review the closed-forms of the partial Fourier sums associated with $HP_k(n)$ and create an asymptotic expression for $HP(n)$ as a way to obtain formulae for the full Fourier series (if $b$ is such that $|b|<1$, we get a surprising pattern, $HP(n) \sim H(n)-\sum_{k\ge 2}(-1)^k\zeta(k)b^{k-1}$). Finally, we use the found Fourier series formulae to obtain the values of the Lerch transcendent function, $\Phi(e^m,k,b)$, and by extension the polylogarithm, $\mathrm{Li}_{k}(e^{m})$, at the positive integers $k$. | mathematics |
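The special functions named at the end are, for reference,
$$\Phi(z,s,a)=\sum_{n=0}^{\infty}\frac{z^{n}}{(n+a)^{s}},\qquad \mathrm{Li}_{s}(z)=\sum_{n=1}^{\infty}\frac{z^{n}}{n^{s}}=z\,\Phi(z,s,1),$$
with the series converging for $|z|<1$ and $a\neq 0,-1,-2,\dots$ (elsewhere by analytic continuation); the values $\Phi(e^{m},k,b)$ and $\mathrm{Li}_{k}(e^{m})$ at positive integer $k$ obtained in the paper are of exactly this type.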
In [M. Walter et al., Science 340, 1205, 7 June (2013)], the authors gave a sufficient condition for genuinely entangled pure states and discussed SLOCC classification via polytopes and the eigenvalues of the single-particle states. In this paper, for $4n$ qubits, we show the invariance of the algebraic multiplicities (AMs) and geometric multiplicities (GMs) of eigenvalues and the invariance of the sizes of Jordan blocks (JBs) of the coefficient matrices under SLOCC. We explore properties of spectra, eigenvectors, generalized eigenvectors, standard Jordan normal forms (SJNFs), and Jordan chains of the coefficient matrices. The properties and invariance permit a reduction of the SLOCC classification of $4n$ qubits to the integer partitions (in number theory) of the number $2^{2n}-k$ and the AMs. | quantum physics
We introduce tramp, standing for TRee Approximate Message Passing, a python package for compositional inference in high-dimensional tree-structured models. The package provides a unifying framework to study several approximate message passing algorithms previously derived for a variety of machine learning tasks such as generalized linear models, inference in multi-layer networks, matrix factorization, and reconstruction using non-separable penalties. For some models, the asymptotic performance of the algorithm can be theoretically predicted by the state evolution, and the measurement entropy estimated by the free entropy formalism. The implementation is modular by design: each module, which implements a factor, can be composed at will with other modules to solve complex inference tasks. The user only needs to declare the factor graph of the model: the inference algorithm, state evolution and entropy estimation are fully automated. | statistics
We apply the recently developed formalism by Kosower, Maybee and O'Connell (KMO) to analyse the soft electromagnetic and soft gravitational radiation emitted by particles without spin in four and higher dimensions. We use this formalism in conjunction with quantum soft theorems to derive the radiative electromagnetic and gravitational fields in a low-frequency expansion and to next-to-leading order in the coupling. We show that in all dimensions, the classical limit of the sub-leading soft (photon and graviton) theorems is consistent with the classical soft theorems proved by Sen et al. in a series of papers. In particular, Saha, Sahoo and Sen proved classical soft theorems for electromagnetic and gravitational radiation in four dimensions. For the class of scattering processes that can be analyzed using the KMO formalism, we show that the classical limit of quantum soft theorems is consistent with these classical soft theorems, paving the way for their proof from scattering amplitudes. | high energy physics theory