We present SPUX, a modular framework for Bayesian inference enabling uncertainty quantification and propagation in linear and nonlinear, deterministic and stochastic models, and supporting Bayesian model selection. SPUX can be coupled to any serial or parallel application written in any programming language (e.g. Python, R, Julia, C/C++, Fortran, or Java) or provided as a binary executable, scales effortlessly from serial runs on a personal computer to parallel high performance computing clusters, and aims to provide a platform particularly suited to support and foster reproducibility in computational science. We illustrate the capabilities of SPUX for a simple yet representative random walk model, describe how to couple different types of user applications, and showcase several readily available examples from environmental sciences. In addition to the available state-of-the-art numerical inference algorithms, including EMCEE, PMCMC (PF) and SABC, the open source nature of the SPUX framework and the explicit description of the hierarchical parallel SPUX executors should also greatly simplify the implementation and usage of other inference and optimization techniques.
statistics
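The SPUX interfaces themselves are not shown in the abstract; as a hedged illustration of the kind of inference SPUX orchestrates, the sketch below applies the EMCEE ensemble sampler (one of the algorithms listed above) to the step-size parameter of a simple Gaussian random walk, echoing the representative model mentioned. The model setup and all names are illustrative assumptions, not the SPUX API.

```python
# Minimal sketch (not the SPUX API): infer the step-size parameter of a
# Gaussian random walk with the EMCEE ensemble sampler mentioned above.
import numpy as np
import emcee

rng = np.random.default_rng(0)
true_sigma = 0.5
walk = np.cumsum(rng.normal(0.0, true_sigma, size=200))  # observed walk
increments = np.diff(walk)

def log_posterior(theta):
    (log_sigma,) = theta
    sigma = np.exp(log_sigma)
    # Gaussian likelihood of the increments, flat prior on log(sigma)
    return -0.5 * np.sum(increments**2 / sigma**2 + 2.0 * np.log(sigma))

ndim, nwalkers = 1, 8
p0 = rng.normal(0.0, 0.1, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 2000, progress=False)
sigma_samples = np.exp(sampler.get_chain(discard=500, flat=True))
print(sigma_samples.mean())  # should land near true_sigma = 0.5
```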
We study out-of-equilibrium dynamics in the quantum Ising model with power-law interactions and positional disorder. For arbitrary dimension $d$ and interaction range $\alpha \geq d$ we analytically find a stretched exponential decay with stretch power $\beta = d/\alpha$ for the global magnetization and ensemble-averaged single-spin purity in the thermodynamic limit. We reveal numerically that glassy behavior persists for finite system sizes and sufficiently strong disorder. We conclude that the magnetization decay is due to interaction-induced dephasing while entanglement builds up at a smaller rate evident from the decay of single-spin purity, thus providing a microscopic understanding of glassy dynamics in disordered closed quantum systems.
quantum physics
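A compact restatement of the quoted decay law, in notation assumed here rather than taken from the paper: denoting the disorder/ensemble average by an overline, the global magnetization relaxes as

$$\overline{\langle \hat{S}^{x}(t) \rangle} \sim \exp\!\left[-(t/\tau)^{\beta}\right], \qquad \beta = \frac{d}{\alpha},$$

so that $\alpha \gg d$ gives a small stretch power and hence strongly stretched, glass-like relaxation, while $\alpha \to d$ approaches a simple exponential.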
For data sets with similar features, for example highly correlated features, most existing stability measures behave in an undesired way: They consider features that are almost identical but have different identifiers as different features. Existing adjusted stability measures, that is, stability measures that take into account the similarities between features, have major theoretical drawbacks. We introduce new adjusted stability measures that overcome these drawbacks. We compare them to each other and to existing stability measures based on both artificial and real sets of selected features. Based on the results, we suggest using one new stability measure that considers highly similar features as exchangeable.
statistics
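The paper's new adjusted measures are not specified in the abstract; the sketch below contrasts a standard unadjusted Jaccard-based stability (which treats near-duplicate features as distinct, exactly the shortcoming described above) with an illustrative similarity-adjusted variant. The scoring rule is an assumption for demonstration only.

```python
# Illustrative sketch (not the paper's specific measures): plain Jaccard
# stability vs. a similarity-adjusted variant that credits a selected
# feature for its closest match in the other set.
from itertools import combinations
import numpy as np

def jaccard_stability(feature_sets):
    """feature_sets: list of sets of selected feature indices.
    Returns the mean pairwise Jaccard index."""
    pairs = list(combinations(feature_sets, 2))
    return float(np.mean([len(a & b) / len(a | b) for a, b in pairs]))

def adjusted_stability(feature_sets, sim):
    """sim: 2D array with sim[i][j] in [0, 1]; identical features score 1."""
    def score(a, b):
        a, b = sorted(a), sorted(b)
        # credit each feature in a for its best match in b, and vice versa
        s_ab = np.mean([max(sim[i][j] for j in b) for i in a])
        s_ba = np.mean([max(sim[i][j] for i in a) for j in b])
        return 0.5 * (s_ab + s_ba)
    pairs = list(combinations(feature_sets, 2))
    return float(np.mean([score(a, b) for a, b in pairs]))
```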
Transmission electron microscopy has become a major characterisation tool, with an ever increasing variety of methods being applied in a wide range of scientific fields. However, probably the most famous pitfall in related workflows is the preparation of high-quality electron-transparent lamellae enabling the extraction of valuable and reliable information. Particularly in the fields of solid state physics and materials science, it is often required to study the surface of a macroscopic specimen in plan-view orientation. Nevertheless, despite tremendous advances in instrumentation, i.e. the focused ion beam, the yield of existing plan-view lamella preparation techniques is relatively low compared to cross-sectional extraction methods. Furthermore, techniques relying on mechanical treatments, i.e. conventional preparation, compromise site-specificity. In this paper, we demonstrate that by combining a mechanical grinding step prior to backside lift-out in the focused ion beam, plan-view lamella preparation becomes considerably easier. The suggested strategy combines site-specificity with micrometer precision as well as the possible investigation of pristine surfaces with a field of view of several hundred square micrometers.
physics
Data-driven spatial filtering algorithms optimize scores such as the contrast between two conditions to extract oscillatory brain signal components. Most machine learning approaches for filter estimation, however, disregard within-trial temporal dynamics and are extremely sensitive to changes in training data and involved hyperparameters. This leads to highly variable solutions and impedes the selection of a suitable candidate for, e.g., neurotechnological applications. Fostering component introspection, we propose to embrace this variability by condensing the functional signatures of a large set of oscillatory components into homogeneous clusters, each representing specific within-trial envelope dynamics. The proposed method is exemplified by and evaluated on a complex hand force task with a rich within-trial structure. Based on electroencephalography data of 18 healthy subjects, we found that the components' distinct temporal envelope dynamics are highly subject-specific. On average, we obtained seven clusters per subject, which were strictly confined regarding their underlying frequency bands. As the analysis method is not limited to a specific spatial filtering algorithm, it could be utilized for a wide range of neurotechnological applications, e.g., to select and monitor functionally relevant features for brain-computer interface protocols in stroke rehabilitation.
electrical engineering and systems science
This paper presents some new criteria for partial exponential stability of a slow-fast nonlinear system with a fast scalar variable using periodic averaging methods. Unlike classical averaging techniques, we construct an averaged system by averaging over this fast scalar variable instead of the time variable. We then show that partial exponential stability of the averaged system implies partial exponential stability of the original one. As some intermediate results, we also obtain a new converse Lyapunov theorem and some perturbation theorems for partially exponentially stable systems. We then apply our established criteria to study remote synchronization of Kuramoto-Sakaguchi oscillators coupled by a star network with two peripheral nodes. We analytically show that detuning the natural frequency of the central mediating oscillator can increase the robustness of the remote synchronization against phase shifts.
electrical engineering and systems science
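A schematic of the construction in generic notation (the paper's precise system and hypotheses are not reproduced here): for a slow-fast system with a fast scalar phase $\theta$ and constant rotation rate,

$$\dot{x} = f(x, \theta), \qquad \dot{\theta} = \frac{\omega}{\varepsilon}, \qquad \bar{f}(x) = \frac{1}{2\pi} \int_{0}^{2\pi} f(x, \theta)\, \mathrm{d}\theta,$$

the average is taken over the fast variable $\theta$ rather than over time, and the criteria above then transfer partial exponential stability of the averaged system $\dot{\bar{x}} = \bar{f}(\bar{x})$ back to the original system for sufficiently small $\varepsilon$.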
Data collected by wearable devices in sports provide valuable information about an athlete's behavior such as their activity, performance, and ability. These time series data can be studied with approaches such as hidden Markov and semi-Markov models (HMM and HSMM) for varied purposes including activity recognition and event detection. HSMMs extend the HMM by explicitly modeling the time spent in each state. In a discrete-time HSMM, the duration in each state can be modeled with a zero-truncated Poisson distribution, where the duration parameter may be state-specific but constant in time. We extend the HSMM by allowing the state-specific duration parameters to vary in time and model them as a function of known covariates derived from the wearable device and observed over a period of time leading up to a state transition. In addition, we propose a data subsampling approach given that high-frequency data from wearable devices can violate the conditional independence assumption of the HSMM. We apply the model to wearable device data collected on a soccer referee in a Major League Soccer game. We model the referee's physiological response to the game demands and identify important time-varying effects of these demands associated with the duration in each state.
statistics
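The duration model described above admits a small sketch; the link function, covariate summary, and parameter values below are assumptions for illustration, not the fitted model.

```python
# Sketch of a state-specific, time-varying duration model: a zero-truncated
# Poisson whose rate depends on wearable-device covariates observed over a
# window leading up to the state transition.
import numpy as np

def sample_zt_poisson(rate, rng):
    """Zero-truncated Poisson via rejection (durations must be >= 1)."""
    while True:
        d = rng.poisson(rate)
        if d >= 1:
            return d

def duration_rate(covariate_window, beta, intercept):
    """Log-linear link: rate = exp(b0 + x' beta), with x summarising the
    covariates over the window before the transition (mean, here)."""
    x = covariate_window.mean(axis=0)
    return np.exp(intercept + x @ beta)

rng = np.random.default_rng(1)
window = rng.normal(size=(50, 3))   # 50 time points, 3 covariates
rate = duration_rate(window, beta=np.array([0.2, -0.1, 0.05]), intercept=2.0)
print(sample_zt_poisson(rate, rng))
```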
The two-dimensional Active Brownian Particle (ABP) system is meant to be composed of hard disks that show excluded-volume interactions, usually simulated via molecular dynamics using purely repulsive potentials. We show that the softness of the chosen potential plays a role in the result of the simulation, focusing on the case of the emergence of Motility Induced Phase Separation (MIPS). In a pure hard-sphere system with no translational diffusion, the phase diagram should be completely determined by the density and P\'eclet number. However, we have found two additional effects that affect the phase diagram in the ABP model we simulate: the relative strength of the translational diffusion compared to the propulsion term and the overlapping of the particles. As we show, the second effect can be strongly mitigated if we use, instead of the standard Weeks-Chandler-Andersen potential, a harder one, the pseudo-hard-sphere potential. Moreover, in determining the boundary of our phase space, we have tried different approaches to detect MIPS and concluded that observing dynamical features, via the non-Gaussian parameter, is more efficient than observing structural ones, via the local density distribution function. We also demonstrate that the Vogel-Fulcher equation successfully reproduces the decay of the diffusion as a function of density, except for very high density cases. Thus, the ABP system behaves similarly to a fragile glass in this regard.
condensed matter
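The two repulsive pair potentials compared above can be written down directly; the cut-and-shifted 50-49 Mie form below is the common pseudo-hard-sphere parametrization (of Jover et al.), which I take to be the one meant, and the parameter values are illustrative.

```python
# Sketch of the two repulsive pair potentials compared above: the standard
# WCA potential and the harder pseudo-hard-sphere (cut-and-shifted 50-49
# Mie) potential.
import numpy as np

def wca(r, eps=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen: LJ truncated at its minimum and shifted."""
    rc = 2.0 ** (1.0 / 6.0) * sigma
    u = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6) + eps
    return np.where(r < rc, u, 0.0)

def pseudo_hard_sphere(r, eps=1.0, sigma=1.0):
    """Cut-and-shifted 50-49 Mie potential; much steeper than WCA, so
    particle overlaps are strongly suppressed."""
    rc = (50.0 / 49.0) * sigma
    pref = 50.0 * (50.0 / 49.0) ** 49
    u = pref * eps * ((sigma / r) ** 50 - (sigma / r) ** 49) + eps
    return np.where(r < rc, u, 0.0)

r = np.linspace(0.95, 1.15, 5)
print(wca(r), pseudo_hard_sphere(r))  # PHS rises far more steeply near contact
```

Near contact the pseudo-hard-sphere potential climbs far more steeply than WCA, which is why overlaps, and their effect on the MIPS boundary, are mitigated.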
Fluctuation theorems are fundamental extensions of the second law of thermodynamics for small nonequilibrium systems. While work and heat are equally important forms of energy exchange, fluctuation relations have not been experimentally assessed for the generic situation of simultaneous mechanical and thermal changes. Thermal driving is indeed generally slow and more difficult to realize than mechanical driving. We here use feedback cooling techniques to implement fast and controlled temperature variations of an underdamped levitated microparticle that are one order of magnitude faster than the equilibration time. Combining mechanical and thermal control, we verify the validity of a fluctuation theorem that accounts for both contributions, well beyond the range of linear response theory. Our system allows the investigation of general far-from-equilibrium processes in microscopic systems that involve fast mechanical and thermal changes at the same time.
condensed matter
In the majority of population synthesis calculations of close binary stars, the common envelope (CE) phase is modeled using a standard prescription based upon the conservation of energy, known as the alpha prescription. In this prescription, the orbital separation of the secondary and giant core at the end of the CE phase is taken to be the orbital separation when the envelope becomes unbound. However, recent observations of planetary nebulae with binary cores (BPNe), believed to be the immediate products of CE evolution, indicate orbital periods that are significantly shorter than predicted by population synthesis models using the alpha prescription. We argue that unbinding the envelope provides a necessary, but not sufficient, condition to escape a merger during CE evolution. The spiral-in of the secondary must also be halted. This requires the additional dynamical constraint that the frictional torque on the secondary be reduced to approximately zero. In this paper, we undertake a preliminary examination of the effect of adding this dynamical constraint in population synthesis calculations of BPNe. We assume that the frictional torque will be sufficiently reduced when the secondary enters a region within the giant where the mass-radius profile is flat. We crudely estimate the location of this region as a function of the core mass based upon existing stellar models of AGB stars between 1 and 7 solar masses. We calculate a theoretical orbital period distribution of BPNe using a population synthesis code that incorporates this dynamical constraint along with the alpha prescription.
astrophysics
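For context, the alpha prescription referred to above balances the envelope binding energy against the change in orbital energy (standard notation assumed; $\lambda$ parametrizes the envelope structure):

$$\alpha_{\mathrm{CE}} \left( \frac{G M_{\mathrm{c}} M_2}{2 a_{\mathrm{f}}} - \frac{G M_1 M_2}{2 a_{\mathrm{i}}} \right) = \frac{G M_1 M_{\mathrm{env}}}{\lambda R_1},$$

with $M_1 = M_{\mathrm{c}} + M_{\mathrm{env}}$ the giant's mass, $M_2$ the secondary, and $a_{\mathrm{i}}$, $a_{\mathrm{f}}$ the initial and final orbital separations; the dynamical constraint proposed above supplements this energy budget rather than replacing it.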
We present 3D core-collapse supernova simulations of massive Pop-III progenitor stars at the transition to the pulsational pair instability regime. We simulate two progenitor models with initial masses of $85\,\mathrm{M}_{\odot}$ and $100\,\mathrm{M}_\odot$ with the LS220, SFHo, and SFHx equations of state. The $85\,\mathrm{M}_{\odot}$ progenitor experiences a pair instability pulse coincident with core collapse, whereas the $100\,\mathrm{M}_{\odot}$ progenitor has already gone through a sequence of four pulses 1,500 years before collapse in which it ejected its H and He envelope. The $85\,\mathrm{M}_{\odot}$ models experience shock revival and then delayed collapse to a black hole (BH) due to ongoing accretion within hundreds of milliseconds. The diagnostic energy of the incipient explosion reaches up to $2.7\times10^{51}\,\mathrm{erg}$ in the SFHx model. Due to the high binding energy of the metal core, BH collapse by fallback is eventually unavoidable, but partial mass ejection may be possible. The $100\,\mathrm{M}_\odot$ models have not achieved shock revival or undergone BH collapse by the end of the simulation. All models exhibit relatively strong gravitational-wave emission both in the high-frequency g-mode emission band and at low frequencies. The SFHx and SFHo models show clear emission from the standing accretion shock instability. For our models, we estimate maximum detection distances of up to $\sim 46\,\mathrm{kpc}$ with LIGO and $\sim 850\,\mathrm{kpc}$ with Cosmic Explorer.
astrophysics
MATHUSLA is a proposed displaced vertex detector for neutral long-lived particle (LLP) decays. It was proposed with general specifications of the size of its decay volume and its location. In this study, different simplified models containing LLPs are investigated using Monte Carlo event generators, and LLP decay probability maps are generated. Specific optimal configurations for the detector are found for each model according to the available land around the CMS detector. We demonstrate that the placement and dimensions of a proposed 10,000 m$^2$ engineering benchmark can be modified so that an improvement in acceptance (up to 12% more LLP decays) is observed. Also, it is found that the engineering benchmark would observe about 80% of the number of LLP decays that the earlier MATHUSLA200 physics sensitivity benchmark with four times the area would observe.
high energy physics phenomenology
For ultracold neutrons with a kinetic energy below 10 neV, strong scattering, characterized by $2\pi l_{c} / \lambda\leq 1$, can be obtained in metamaterials of C and $^7$Li. Here $l_{c}$ and $\lambda$ are the coherent scattering mean free path and the neutron wavelength, respectively. UCN interferometry and high-resolution spectroscopy (nano-electronvolt to pico-electronvolt resolution) in parallel waveguide arrays of neutronic metamaterials are given as examples of new experimental possibilities.
physics
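As a quick check of the quoted regime, the sketch below evaluates the neutron wavelength entering the strong-scattering criterion $2\pi l_{c} / \lambda \leq 1$ for a 10 neV neutron; only standard constants are used, with no claim about the paper's metamaterial parameters.

```python
# De Broglie wavelength of a 10 neV ultracold neutron: lambda = h / sqrt(2 m E).
import math

h = 6.62607015e-34            # Planck constant, J s
m_n = 1.67492749804e-27       # neutron mass, kg
E = 10e-9 * 1.602176634e-19   # 10 neV in J

lam = h / math.sqrt(2 * m_n * E)
print(f"lambda = {lam * 1e9:.0f} nm")  # ~ 290 nm, i.e. a few hundred nanometres
```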
Bayesian inference involves two main computational challenges. First, in estimating the parameters of some model for the data, the posterior distribution may well be highly multi-modal: a regime in which the convergence to stationarity of traditional Markov Chain Monte Carlo (MCMC) techniques becomes incredibly slow. Second, in selecting between a set of competing models the necessary estimation of the Bayesian evidence for each is, by definition, a (possibly high-dimensional) integration over the entire parameter space; again this can be a daunting computational task, although new Monte Carlo (MC) integration algorithms offer solutions of ever increasing efficiency. Nested sampling (NS) is one such contemporary MC strategy targeted at calculation of the Bayesian evidence, but which also enables posterior inference as a by-product, thereby allowing simultaneous parameter estimation and model selection. The widely-used MultiNest algorithm presents a particularly efficient implementation of the NS technique for multi-modal posteriors. In this paper we discuss importance nested sampling (INS), an alternative summation of the MultiNest draws, which can calculate the Bayesian evidence at up to an order of magnitude higher accuracy than `vanilla' NS with no change in the way MultiNest explores the parameter space. This is accomplished by treating as a (pseudo-)importance sample the totality of points collected by MultiNest, including those previously discarded under the constrained likelihood sampling of the NS algorithm. We apply this technique to several challenging test problems and compare the accuracy of Bayesian evidences obtained with INS against those from vanilla NS.
astrophysics
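As a reference point for the INS refinement described above, here is a minimal sketch of the textbook 'vanilla' nested-sampling evidence accumulation (this is not MultiNest's implementation; the deterministic prior-volume shrinkage approximation is assumed):

```python
# Vanilla nested-sampling evidence: at iteration i the prior volume shrinks
# as X_i ~ exp(-i / nlive), and Z = sum_i L_i * (X_{i-1} - X_i).
import numpy as np

def ns_evidence(discarded_likelihoods, nlive):
    """discarded_likelihoods: likelihoods of points in the order they were
    discarded (increasing under the constrained-likelihood sampling)."""
    i = np.arange(1, len(discarded_likelihoods) + 1)
    X = np.exp(-i / nlive)                    # expected prior volumes
    w = -np.diff(np.concatenate(([1.0], X)))  # shrinkage weights X_{i-1} - X_i
    return float(np.sum(np.asarray(discarded_likelihoods) * w))
```

INS differs by additionally treating the totality of collected points as a pseudo-importance sample, rather than discarding their information.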
We develop a quantum theory of atomic Rayleigh scattering. Scattering is considered as a relaxation of incident photons from a selected mode of free space to the reservoir of the other free-space modes. The additional excitations of the reservoir states that appear are treated as scattered light. We show that an entangled state of the excited atom and the incident photon is formed during the scattering. Due to entanglement, a photon is never completely absorbed by the atom. We show that even if the selected mode frequency is incommensurable with any atomic transition frequency, the scattered light spectrum has a maximum at the frequency of the selected mode. The linewidth of the scattered light is much smaller than that of the spontaneous emission of a single atom; therefore, the process can be considered elastic. The developed theory does not use the phenomenological concept of a virtual level.
quantum physics
Standard lore uses local anomalies to check the kinematic consistency of gauge theories coupled to chiral fermions, e.g. Standard Models (SM). Based on a systematic cobordism classification, we examine constraints from invertible quantum anomalies (including all perturbative local and nonperturbative global anomalies) for gauge theories. We also clarify the different uses of these anomalies: including (1) anomaly cancellations of dynamical gauge fields, (2) 't Hooft anomaly matching conditions of background fields of global symmetries, and others. We apply several 4d $\mathbb{Z}_{n}$ anomaly constraints of $n=16,4,2$ classes, beyond the familiar Feynman-graph perturbative $\mathbb{Z}$ class local anomalies. As an application, for (SU(3)$\times$SU(2)$\times$U(1))/$\mathbb{Z}_q$ SM (with $q=1,2,3,6$) and SU(5) Grand Unification with 15n chiral Weyl fermions and with a discrete baryon minus lepton number $X=5({\bf B}- {\bf L})-4Y$ preserved, we discover a new hidden gapped sector previously unknown to the SM and Georgi-Glashow model. The gapped sector at low energy contains either (1) 4d non-invertible topological quantum field theory (TQFT, above the energy gap with heavy fractionalized anyon excitations from 1d particle worldline and 2d string worldsheet, inaccessible directly from Dirac or Majorana mass gap of the 16th Weyl fermions [i.e., right-handed neutrinos], but accessible via a topological quantum phase transition), or (2) 5d invertible TQFT in extra dimensions. Above a higher energy scale, the discrete $X$ becomes dynamically gauged, the entangled Universe in 4d and 5d is mediated by Topological Force. Our model potentially resolves puzzles, surmounting sterile neutrinos and dark matter, in fundamental physics.
high energy physics theory
Working within the polynomial quadratic family, we introduce a new point of view on bifurcations which naturally allows one to see the seat of bifurcations as the projection of a Julia set of a complex dynamical system in dimension three. We expect our approach to be extendable to other holomorphic families of dynamical systems.
mathematics
We use the method of Laplace transformation to determine the dynamics of a wave packet that passes a barrier by tunneling. We investigate the transmitted wave packet and find that it can be resolved into a sequence of subsequent wave packets. This result sheds new light on the Hartmann effect for the tunneling time and gives a possible explanation for an experimental result obtained by Spielmann et al.
quantum physics
We assume that $M$ is a closed symplectic manifold, diffeomorphic to a complete intersection of complex dimension $4m$, having a Hamiltonian $S^{1}$-action with finite fixed point set. We show that $M$ is diffeomorphic to a $\mathbb{CP}^{4m}$, a quadric $Q \subset \mathbb{CP}^{4m+1}$ or an intersection of $2$ quadrics $Q_{1} \cap Q_{2} \subset \mathbb{CP}^{4m+2}$. We also prove the statement under a weaker assumption on the fixed point set, namely if the components of the fixed point set are isolated points or submanifolds with real dimension not divisible by $4$.
mathematics
Intermetallic compounds are key materials for the energy transition as they form reversible hydrides that can be used for solid state hydrogen storage or as anodes in batteries. AB$_y$ compounds (A = rare earth (RE); B = transition metal; $2 < y < 5$) are good candidates to fulfill the required properties for practical applications. They can be described as a stacking of AB$_5$ and AB$_2$ sub-units along the $c$ crystallographic axis. The latter sub-unit brings a larger capacity, while the former provides a better cycling stability. However, AB$_y$ binaries do not show good enough properties for applications. Upon hydrogenation, they exhibit multiplateau behavior and poor reversibility, attributed to H-induced amorphization. These drawbacks can be overcome by chemical substitutions on the A and/or the B sites, leading to stabilized reversible hydrides. The present work focuses on the pseudo-binary Sm$_2$Mn$_x$Ni$_{7-x}$ system ($0 \leq x < 0.5$). The structural, thermodynamic and corrosion properties are analyzed and interpreted by means of X-ray diffraction, chemical analysis, scanning electron microscopy, thermogravimetric analysis and magnetic measurements. Unexpected cell parameter variations are reported and interpreted regarding the possible formation of structural defects and an uneven Mn distribution within the Ni sublattice. The reversible capacity is improved for $x > 0.3$, leading to larger and flatter isotherm curves and allowing for a reversible capacity $>1.4$ wt.%. Regarding corrosion, the binary compound corrodes in alkaline medium to form rare earth hydroxide and nanoporous nickel. As for the Mn-substituted compounds, a new corrosion product is formed in addition to those mentioned above, as manganese initiates a sacrificial anode mechanism taking place at the early corrosion stage.
condensed matter
The main purpose of the paper is to give a characterization of all compactly supported dual windows of a Gabor frame. As an application, we consider an iterative procedure for approximation of the canonical dual window via compactly supported dual windows at every step. In particular, the procedure allows one to obtain approximations of the canonical dual window by dual windows from certain modulation spaces or from the Schwartz space.
mathematics
A $k$-geodetic digraph with minimum out-degree $d$ has excess $\epsilon $ if it has order $M(d,k) + \epsilon $, where $M(d,k)$ represents the Moore bound for out-degree $d$ and diameter $k$. For given $\epsilon $, it is simple to show that any such digraph must be out-regular with degree $d$ for sufficiently large $d$ and $k$. However, proving in-regularity is in general non-trivial. It has recently been shown that any digraph with excess $\epsilon = 1$ must be diregular. In this paper we prove that digraphs with minimum out-degree $d = 2$ and excess $\epsilon = 2$ are diregular for $k \geq 2$.
mathematics
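For reference, the Moore bound used above is the standard count of the maximum number of vertices reachable within distance $k$ from any vertex of an out-degree-$d$ digraph (textbook material, not specific to the paper):

$$M(d,k) = 1 + d + d^{2} + \cdots + d^{k} = \frac{d^{k+1} - 1}{d - 1} \qquad (d \geq 2),$$

so a digraph of excess $\epsilon$ has exactly $\epsilon$ vertices more than the Moore bound for out-degree $d$ and diameter $k$.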
We study quantum effects in higher curvature extensions of general relativity using the functional renormalisation group. New flow equations are derived for general classes of models involving Ricci scalar, Ricci tensor, and Riemann tensor interactions. Our method is applied to test the asymptotic safety conjecture for quantum gravity with polynomial Riemann tensor interactions of the form $\sim\int \sqrt{g} \,(R_{\mu\nu\sigma\tau}R^{\mu\nu\sigma\tau})^n$ and $\sim\int \sqrt{g} \, R\cdot(R_{\mu\nu\sigma\tau}R^{\mu\nu\sigma\tau})^n$, and functions thereof. Interacting fixed points, universal scaling dimensions, gaps in eigenvalue spectra, quantum equations of motion, and de Sitter solutions are identified by combining high order polynomial approximations, Pad\'e resummations, and full numerical integration. Most notably, we discover that quantum-induced shifts of scaling dimensions can lead to a four-dimensional ultraviolet critical surface. Increasingly higher-dimensional interactions remain irrelevant and show near-Gaussian scaling and signatures of weak coupling. Moreover, a new equal weight condition is put forward to identify stable eigenvectors to all orders in the expansion. Similarities and differences with results from the Einstein-Hilbert approximation, $f(R)$ approximations, and $f(R,{\rm Ric}^2)$ models are highlighted and the relevance of findings for quantum gravity and the asymptotic safety conjecture is discussed.
high energy physics theory
The quantum logic gates used in the design of a quantum computer should be both universal, meaning arbitrary quantum computations can be performed, and fault-tolerant, meaning the gates keep errors from cascading out of control. A number of no-go theorems constrain the ways in which a set of fault-tolerant logic gates can be universal. These theorems are very restrictive, and conventional wisdom holds that a universal fault-tolerant logic gate set cannot be implemented natively, requiring us to use costly distillation procedures for quantum computation. Here, we present a general framework for universal fault-tolerant logic with stabiliser codes, together with a no-go theorem that reveals the very broad conditions constraining such gate sets. Our theorem applies to a wide range of stabiliser code families, including concatenated codes and conventional topological stabiliser codes such as the surface code. The broad applicability of our no-go theorem provides a new perspective on how the constraints on universal fault-tolerant gate sets can be overcome. In particular, we show how non-unitary implementations of logic gates provide a general approach to circumvent the no-go theorem, and we present a rich landscape of constructions for logic gate sets that are both universal and fault-tolerant. That is, rather than restricting what is possible, our no-go theorem provides a signpost to guide us to new, efficient architectures for fault-tolerant quantum computing.
quantum physics
We use the extended GALEX Arecibo SDSS Survey (xGASS) to quantify the relationship between atomic hydrogen (HI) reservoir and current star formation rate (SFR) for central disk galaxies. This is primarily motivated by recent claims for the existence, in this sample, of a large population of passive disks harbouring HI reservoirs as large as those observed in main sequence galaxies. Across the stellar mass range 10$^{9}<$M$_{*}$/M$_{\odot}<$10$^{11}$, we practically find no passive ($\gtrsim$2$\sigma$ below the star-forming main sequence) disk galaxies with HI reservoirs comparable to those typical of star-forming systems. Even including HI non detections at their upper limits, passive disks typically have $\geq$0.5 dex less HI than their active counterparts. We show that previous claims are due to the use of aperture-corrected SFR estimates from the MPA/JHU SDSS DR7 catalog, which do not provide a fair representation of the global SFR of HI-rich galaxies with extended star-forming disks. Our findings confirm that the bulk of the passive disk population in the local Universe is HI-poor. These also imply that the reduction of star formation, even in central disk galaxies, has to be accompanied by a reduction in their HI reservoir.
astrophysics
In this paper we investigate surfaces in $\mathbb C P^2$ without complex points and characterize the minimal surfaces without complex points and the minimal Lagrangian surfaces by Ruh-Vilms type theorems. We also discuss the liftability of an immersion from a surface to $\mathbb C P^2$ into $S^5$ in Appendix A.
mathematics
Cross-flow, or vertical-axis, turbines are a promising technology for capturing kinetic energy in wind or flowing water and their inherently unsteady fluid mechanics present unique opportunities for control optimization of individual rotors or arrays. To explore the potential for beneficial interactions between turbines in an array, coherent structures in the wake of a single two-bladed cross-flow turbine are examined using planar stereo particle image velocimetry in a water channel. First, the mean wake structure of this high chord-to-radius ratio rotor is described, compared to previous studies, and a simple explanation for observed wake deflection is presented. Second, the unsteady flow is then analyzed via the triple decomposition, with the periodic component extracted using a combination of traditional techniques and a novel implementation of the optimized dynamic mode decomposition. The latter method is shown to outperform conditional averaging and Fourier methods, as well as uncover frequencies suggesting a transition to bluff-body shedding in the far wake. Third, vorticity and finite-time Lyapunov exponents are then employed to further analyze the oscillatory wake component. Vortex streets on both sides of the wake are identified, and their formation mechanisms and effects on the mean flow are discussed. Strong axial (vertical) flow is observed in vortical structures shed on the retreating side of the rotor where the blades travel downstream. Time-resolved tracking of these vortices is performed, which demonstrates that vortex trajectories have significant rotation-to-rotation variation within one diameter downstream. This variability suggests it would be challenging to harness or avoid such structures at greater downstream distances.
physics
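The optimized dynamic mode decomposition used in the study is a nonconvex refinement not reproduced here; the sketch below is the standard exact DMD baseline against which such variants are compared (rank and data layout are illustrative).

```python
# Standard (exact) DMD on a snapshot sequence, e.g. stacked PIV fields.
import numpy as np

def dmd(snapshots, rank):
    """snapshots: (n_space, n_time) array, columns ordered in time.
    Returns eigenvalues and modes of the best-fit linear operator."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s   # operator projected on POD modes
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W / eigvals    # exact DMD modes
    return eigvals, modes

rng = np.random.default_rng(0)
eigvals, modes = dmd(rng.normal(size=(200, 60)), rank=8)
print(np.abs(eigvals))  # |eigenvalue| ~ 1 indicates a persistent oscillation
```

The eigenvalue phases give the oscillation frequencies, which is how shedding frequencies such as the bluff-body transition noted above can be extracted.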
Galactic discs are known to have a complex multilayer structure. An in-depth study of the stellar population properties of the thin and thick components can elucidate the formation and evolution of disc galaxies. Even though thick discs are ubiquitous, their origin is still debated. Here we probe the thick disc formation scenarios by investigating NGC7572, an enormous edge-on galaxy having $R_{25}\approx 25$ kpc and $V_{\rm rot} \approx 370$ km s$^{-1}$, which substantially exceeds the Milky Way size and mass. We analysed DECaLS archival imaging and found that the disc of NGC7572 contains two flaring stellar discs (a thin and a thick disc) with similar radial scales. We collected deep long-slit spectroscopic data using the 6m Russian BTA telescope and analysed them with a novel technique. We first reconstructed a non-parametric stellar line-of-sight velocity distribution along the radius of the galaxy and then fitted it with two kinematic components accounting for the orbital distribution of stars in thin and thick discs. The old thick disc turned out to be 2.7 times as massive as the intermediate-age thin component, $1.6\times 10^{11}$ $\textrm{M}_{\odot}$ vs. $5.9\times10^{10}$ $\textrm{M}_{\odot}$, which is very unusual. The different duration of the formation epochs evidenced by the [Mg/Fe] values of +0.3 and +0.15 dex for the thick and thin discs respectively, their kinematics and the mass ratio suggest that in NGC7572 we observe a rapidly formed very massive thick disc and an underdeveloped thin disc, whose growth ended prematurely due to the exhaustion of the cold gas likely because of environmental effects.
astrophysics
Let $k=k_0(\sqrt[3]{d})$ be a cubic Kummer extension of $k_0=\mathbb{Q}(\zeta_3)$ with $d>1$ a cube-free integer and $\zeta_3$ a primitive third root of unity. Denote by $C_{k,3}^{(\sigma)}$ the $3$-group of ambiguous classes of the extension $k/k_0$ with relative group $G=\operatorname{Gal}(k/k_0)=\langle\sigma\rangle$. The aims of this paper are to characterize all extensions $k/k_0$ with cyclic $3$-group of ambiguous classes $C_{k,3}^{(\sigma)}$ of order $3$, to investigate the multiplicity $m(f)$ of the conductors $f$ of these abelian extensions $k/k_0$, and to classify the fields $k$ according to the cohomology of their unit groups $E_{k}$ as Galois modules over $G$. The techniques employed for reaching these goals are relative $3$-genus fields, Hilbert norm residue symbols, quadratic $3$-ring class groups modulo $f$, the Herbrand quotient of $E_{k}$, and central orthogonal idempotents. All theoretical achievements are underpinned by extensive computational results.
mathematics
We suggest a novel stochastic approximation algorithm to compute a Symmetric Nash Equilibrium strategy in a general queueing game with a finite action space. The algorithm involves a single simulation of the queueing process with dynamic updating of the strategy at regeneration times. Under mild assumptions regarding the regenerative structure of the process, the algorithm converges to a symmetric equilibrium strategy almost surely. This yields a powerful tool that can be used to approximate equilibrium strategies in a broad range of strategic queueing models in which direct analysis is impracticable.
mathematics
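The paper's queueing-specific payoffs and update rule are not given in the abstract; the sketch below is only a generic regenerative stochastic-approximation step of the Robbins-Monro type, to illustrate the flavour of updating a mixed strategy at regeneration times.

```python
# Generic sketch (assumptions, not the paper's algorithm): at the n-th
# regeneration, nudge the mixed strategy towards the best response observed
# over the last regeneration cycle, with diminishing step sizes a_n = 1/n.
import numpy as np

def update_strategy(strategy, observed_payoffs, n):
    """strategy: probabilities over the finite action space;
    observed_payoffs: per-action payoff estimates from the last cycle."""
    target = np.eye(len(strategy))[np.argmax(observed_payoffs)]
    a_n = 1.0 / n                            # Robbins-Monro step size
    new = (1.0 - a_n) * strategy + a_n * target
    return new / new.sum()
```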
In this paper we present full-wave signal models for magnetic and electric field measurements in magnetic resonance imaging (MRI). Our analysis is based on a scattering formalism in which the presence of an object or body is taken into account via an electric scattering source. We show that these signal models can be evaluated, provided the Green's tensors of the background field are known along with the dielectric parameters of the object and the magnetization within the excited part of the object. Furthermore, explicit signal expressions are derived in the case of a small homogeneous ball that is embedded in free space and for which the quasi-static Born approximation can be applied. The conductivity and permittivity of the ball appear as explicit parameters in the resulting signal models and allow us to study the sensitivity of the measured signals with respect to these dielectric parameters. Moreover, for free induction decay signals we show that under certain conditions it is possible to retrieve the dielectric parameters of the ball from noise-contaminated induction decay signals that are based on electric or magnetic field measurements.
physics
The development and application of models that take the evolution of network dynamics into account are receiving increasing attention. We contribute to this field and focus on a profile likelihood approach to model time-stamped event data for a large-scale dynamic network. We investigate the collaboration of inventors using EU patent data. As an event, we consider the submission of a joint patent, and we explore the driving forces for collaboration between inventors. We propose a flexible semiparametric model, which includes external and internal covariates, where the latter are built from the network history.
statistics
It is a celebrated result in early combinatorics that, in bipartite graphs, the size of a maximum matching is equal to the size of a minimum vertex cover. K\H{o}nig's proof of this fact gave an algorithm for finding a minimum vertex cover from a maximum matching. In this paper, we revisit the connection this algorithm induces between the two types of structures. We show that all minimum vertex covers can be found by applying this algorithm to some matching, and we then classify which matchings give minimum vertex covers when the algorithm is applied to them.
mathematics
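The algorithm referred to above has a compact form; the sketch below implements the classical K\H{o}nig construction (standard textbook material, with the same alternating-path reachability the paper revisits; the data layout is an assumption of this sketch).

```python
# Konig construction: from a maximum matching of a bipartite graph (L, R),
# alternating-path reachability Z from the unmatched L-vertices yields the
# minimum vertex cover (L \ Z) union (R intersect Z).
def konig_cover(L, R, adj, matching):
    """adj: dict u in L -> set of neighbours in R; matching: dict holding
    both directions (u -> v and v -> u) for each matched pair."""
    unmatched = {u for u in L if u not in matching}
    Z, frontier = set(unmatched), list(unmatched)
    while frontier:
        u = frontier.pop()
        for v in adj[u]:                 # edges out of L
            if v not in Z:
                Z.add(v)
                w = matching.get(v)      # matching edge back into L
                if w is not None and w not in Z:
                    Z.add(w)
                    frontier.append(w)
    return (set(L) - Z) | (set(R) & Z)

L, R = {'a', 'b'}, {'x', 'y'}
adj = {'a': {'x'}, 'b': {'x', 'y'}}
matching = {'a': 'x', 'x': 'a', 'b': 'y', 'y': 'b'}
print(konig_cover(L, R, adj, matching))  # a minimum vertex cover of size 2
```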
In many biomedical, science, and engineering problems, one must sequentially decide which action to take next so as to maximize rewards. One general class of algorithms for optimizing interactions with the world, while simultaneously learning how the world operates, is the multi-armed bandit setting and, in particular, the contextual bandit case. In this setting, for each executed action, one observes rewards that are dependent on a given 'context', available at each interaction with the world. The Thompson sampling algorithm has recently been shown to enjoy provable optimality properties for this set of problems, and to perform well in real-world settings. It facilitates generative and interpretable modeling of the problem at hand. Nevertheless, the design and complexity of the model limit its application, since one must both sample from the distributions modeled and calculate their expected rewards. We here show how these limitations can be overcome by using variational inference to approximate complex models, applying to the reinforcement learning case advances developed for the inference case in the machine learning community over the past two decades. We consider contextual multi-armed bandit applications where the true reward distribution is unknown and complex, which we approximate with a mixture model whose parameters are inferred via variational inference. We show how the proposed variational Thompson sampling approach is accurate in approximating the true distribution, and attains reduced regrets even with complex reward distributions. The proposed algorithm is valuable for practical scenarios where restrictive modeling assumptions are undesirable.
statistics
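The paper's variational mixture-model machinery is not reproduced here; the sketch below is plain Thompson sampling in the conjugate Beta-Bernoulli case, the exact-posterior baseline that variational approximations generalise to complex reward models.

```python
# Plain Thompson sampling for Bernoulli arms: sample a success probability
# from each arm's Beta posterior, play the argmax, update conjugately.
import numpy as np

def thompson_bernoulli(reward_probs, n_rounds, seed=0):
    rng = np.random.default_rng(seed)
    k = len(reward_probs)
    alpha, beta = np.ones(k), np.ones(k)   # Beta(1, 1) priors
    total = 0.0
    for _ in range(n_rounds):
        theta = rng.beta(alpha, beta)      # one posterior sample per arm
        a = int(np.argmax(theta))          # act greedily on the sample
        r = float(rng.random() < reward_probs[a])
        alpha[a] += r                      # conjugate posterior update
        beta[a] += 1.0 - r
        total += r
    return total

print(thompson_bernoulli([0.3, 0.5, 0.7], 5000))  # concentrates on the best arm
```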
A top-down approach to the flavor problem motivated from string theory leads to the concept of eclectic flavor groups that combine traditional and modular flavor symmetries. To make contact with models constructed in the bottom-up approach, we analyze a specific example based on the eclectic flavor group Omega(1) (a nontrivial combination of the traditional flavor group Delta(54) and the finite modular group T') in order to extract general lessons from the eclectic scheme. We observe that this scheme is highly predictive since it severely restricts the possible group representations and modular weights of matter fields. Thereby, it controls the structure of the Kaehler potential and the superpotential, which we discuss explicitly. In particular, both Kaehler potential and superpotential are shown to transform nontrivially, but combine to an invariant action. Finally, we find that discrete R-symmetries are intrinsic to eclectic flavor groups.
high energy physics phenomenology
We introduce the electromagnetic-gravitational coupling in the Ho\v{r}ava-Lifshitz framework, in $3+1$ dimensions, by considering the Ho\v{r}ava-Lifshitz gravity theory in $4+1$ dimensions at the kinetic conformal point and then performing a Kaluza-Klein reduction to $3+1$ dimensions. The action of the theory is second order in time derivatives and the potential contains only higher order spacelike derivatives up to $z=4$, $z$ being the critical exponent. These terms include also higher order derivative terms of the electromagnetic field. The propagating degrees of freedom of the theory are exactly the same as in the Einstein-Maxwell theory. We obtain the Hamiltonian, the field equations and show consistency of the constraint system. The kinetic conformal point is protected from quantum corrections by a second class constraint. At low energies the theory depends on two coupling constants, $\beta$ and $\alpha$. We show that the anisotropic field equations for the gauge vector is a deviation of the covariant Maxwell equations by a term depending on $\beta-1$. Consequently, for $\beta=1$, Maxwell equations arise from the anisotropic theory at low energies. We also prove that the anisotropic electromagnetic-gravitational theory at the IR point $\beta=1$, $\alpha=0$, is exactly the Einstein-Maxwell theory in a gravitational gauge used in the ADM formulation of General Relativity.
high energy physics theory
Recently, semantic video segmentation has gained high attention, especially for supporting autonomous driving systems. Deep learning methods have made it possible to implement real-time segmentation and object identification algorithms on videos. However, most of the available approaches process each video frame independently, disregarding their sequential relation in time. Therefore, their results suddenly miss some of the object segments in some frames, even if they were detected properly in the earlier frames. Herein, we propose two sequential probabilistic video frame analysis approaches to improve the segmentation performance of existing algorithms. Our experiments show that, by using the information of past frames, we increase the performance and consistency of state-of-the-art algorithms.
electrical engineering and systems science
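The paper's two specific approaches are not detailed in the abstract; this generic recursive smoothing of per-frame class probabilities illustrates the idea that a segment missed in one frame can be carried over from its recent history (the decay factor is an assumption).

```python
# Illustrative temporal smoothing of per-pixel class probabilities across
# video frames, so detections persist through isolated per-frame misses.
import numpy as np

def smooth_probs(prob_frames, decay=0.8):
    """prob_frames: (T, H, W, n_classes) per-frame softmax outputs.
    Returns temporally smoothed, renormalised probabilities."""
    out = np.empty_like(prob_frames)
    state = prob_frames[0]
    out[0] = state
    for t in range(1, len(prob_frames)):
        state = decay * state + (1.0 - decay) * prob_frames[t]
        state = state / state.sum(axis=-1, keepdims=True)
        out[t] = state
    return out
```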
The entropic uncertainty relation (EUR) is of significant importance in the security proof of continuous-variable quantum key distribution under coherent attacks. The parameter estimation in the EUR method contains the estimation of the covariance matrix (CM), as well as the max-entropy. The discussions in previous works have not involved the effect of finite size on estimating the CM, which will further affect the estimation of leakage information. In this work, we address this issue by adapting the parameter estimation technique to the EUR analysis method under composable security frameworks. We also use the double-data modulation method to improve the parameter estimation step, where all the states can be exploited for both parameter estimation and key generation; thus, the statistical fluctuation in estimating the max-entropy disappears. The result shows that the adapted method can effectively estimate parameters in EUR analysis. Moreover, the double-data modulation method can, to a large extent, reduce the key consumption, which further improves the performance in practical implementations of the EUR.
quantum physics
We review complicated problems in the Lifshitz theory describing the Casimir force between real material plates made of metals and dielectrics including different approaches to their resolution. It has been shown that both for metallic plates with perfect crystal lattices and for any dielectric plates the Casimir entropy calculated in the framework of the Lifshitz theory violates the Nernst heat theorem when the well approved dielectric functions are used in computations. The respective theoretical Casimir forces are excluded by the measurement data of numerous precision experiments. In the literature this situation received the names of the Casimir puzzle and the Casimir conundrum for the cases of metallic and dielectric plates, respectively. The review presents a summary of the main facts on this subject on both theoretical and experimental sides. Next, we discuss the main approaches proposed in the literature in order to bring the Lifshitz theory in agreement with the measurement data and with the laws of thermodynamics. Special attention is paid to the recently suggested spatially nonlocal Drude-like response functions which take into account the relaxation properties of conduction electrons, as does the standard Drude model, but lead to the theoretical results in agreement with both thermodynamics and the measurement data through the alternative response to quantum fluctuations off the mass shell. Further advances and trends in this field of research are discussed.
quantum physics
We seek to understand how the representations of individual tokens and the structure of the learned feature space evolve between layers in deep neural networks under different learning objectives. We focus on Transformers for our analysis, as they have been shown to be effective on various tasks, including machine translation (MT), standard left-to-right language modeling (LM) and masked language modeling (MLM). Previous work used black-box probing tasks to show that the representations learned by the Transformer differ significantly depending on the objective. In this work, we use canonical correlation analysis and mutual information estimators to study how information flows across Transformer layers and how this process depends on the choice of learning objective. For example, as we go from the bottom to the top layers, information about the past in left-to-right language models vanishes and predictions about the future get formed. In contrast, for MLM, representations initially acquire information about the context around the token, partially forgetting the token identity and producing a more generalized token representation. The token identity then gets recreated at the top MLM layers.
computer science
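As an illustration of the analysis tool named above, the sketch below computes a linear CCA similarity between token representations drawn from two layers (the paper also uses mutual information estimators, not shown here; the data are synthetic).

```python
# Linear CCA similarity between two layers' token representations.
import numpy as np
from sklearn.cross_decomposition import CCA

def mean_cca_similarity(reps_a, reps_b, n_components=16):
    """reps_a, reps_b: (n_tokens, d) activations from two layers."""
    cca = CCA(n_components=n_components, max_iter=1000)
    za, zb = cca.fit_transform(reps_a, reps_b)
    corrs = [np.corrcoef(za[:, i], zb[:, i])[0, 1] for i in range(n_components)]
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
layer_low = rng.normal(size=(512, 64))
layer_high = layer_low @ rng.normal(size=(64, 64)) + 0.1 * rng.normal(size=(512, 64))
print(mean_cca_similarity(layer_low, layer_high))  # near 1: information preserved
```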
A general approach is presented to describing nonlinear classical Maxwell electrodynamics with conformal symmetry. We introduce generalized nonlinear constitutive equations, expressed in terms of constitutive tensors dependent on conformal-invariant functionals of the field strengths. This allows a characterization of Lagrangian and non-Lagrangian theories. We obtain a general formula for possible Lagrangian densities in nonlinear conformal-invariant electrodynamics. This generalizes the standard Lagrangian of classical linear electrodynamics so as to preserve the conformal symmetry.
high energy physics theory
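One standard way to phrase the constraint discussed above (conventions assumed; this is a known four-dimensional result, not a quote from the paper): writing the two field-strength invariants as $F = F_{\mu\nu}F^{\mu\nu}$ and $G = F_{\mu\nu}\tilde{F}^{\mu\nu}$, a Lagrangian density $\mathcal{L}(F, G)$ is conformally invariant precisely when it is homogeneous of degree one,

$$\mathcal{L}(\lambda F, \lambda G) = \lambda\, \mathcal{L}(F, G), \qquad \lambda > 0,$$

of which $\mathcal{L} \propto \sqrt{F^{2} + G^{2}}$ is the simplest genuinely nonlinear example, and the Maxwell Lagrangian $\mathcal{L} \propto F$ is the linear one.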
In some models of thermal relic dark matter, the relic abundance may be set by inelastic scattering processes (rather than annihilations) becoming inefficient as the universe cools down. This effect has been called coscattering. We present a procedure to numerically solve the full momentum-dependent Boltzmann equations in coscattering, which allows for a precise calculation of the dark matter relic density including the effects of early kinetic decoupling. We apply our method to a simple model, containing a fermionic SU(2) triplet and a fermionic singlet with electroweak-scale masses, at small triplet-singlet mixing. The relic density can be set by either coannihilation or, at values of the mixing angle $\theta\lesssim 10^{-5}$, by coscattering. We identify the parameter ranges which give rise to the observed relic abundance. As a special case, we study bino-like dark matter in split supersymmetry at large $\mu$.
high energy physics phenomenology
The partial entanglement entropy (PEE) $s_{\mathcal{A}}(\mathcal{A}_i)$ characterizes how much the subset $\mathcal{A}_i$ of $\mathcal{A}$ contributes to the entanglement entropy $S_{\mathcal{A}}$. We find one additional physical requirement for $s_{\mathcal{A}}(\mathcal{A}_i)$, which is the invariance under a permutation. The partial entanglement entropy proposal satisfies all the physical requirements. We show that for Poincar\'e invariant theories the physical requirements are enough to uniquely determine the PEE (or the entanglement contour) to satisfy a general formula. This is the first time we find that the PEE can be uniquely determined. Since the solution of the requirements is unique and the \textit{PEE proposal} is a solution, the \textit{PEE proposal} is justified for Poincar\'e invariant theories.
high energy physics theory
We address regularised versions of the Expectation-Maximisation (EM) algorithm for Generalised Linear Mixed Models (GLMM) in the context of panel data (measured on several individuals at different time-points). A random response y is modelled by a GLMM, using a set X of explanatory variables and two random effects. The first one introduces the dependence within individuals, on which data is repeatedly collected, while the second one embodies the serially correlated time-specific effect shared by all the individuals. The variables in X are assumed to be many and redundant, so that regression demands regularisation. In this context, we first propose an L2-penalised EM algorithm, and then a supervised component-based regularised EM algorithm as an alternative.
statistics
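The GLMM details are not given in the abstract; as a minimal sketch of the L2-penalised idea, here is a ridge-penalised weighted least-squares update of the kind that appears as an M-step inside an EM iteration (the weighting and working response are assumptions of this sketch, not the paper's algorithm).

```python
# Ridge-penalised weighted least squares: argmin ||W^(1/2)(y - X b)||^2
# + lam * ||b||^2, solved in closed form.
import numpy as np

def penalised_m_step(X, working_response, weights, lam):
    XtW = X.T * weights                       # X^T W with diagonal W
    A = XtW @ X + lam * np.eye(X.shape[1])    # ridge-regularised normal matrix
    return np.linalg.solve(A, XtW @ working_response)
```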
We investigate the decay properties of some beauty and charm mesons with a phenomenological potential model. First, we consider the nonrelativistic Hamiltonian of the mesonic system with Coulomb plus exponential terms and study the wave function and the energy of the system using the variational approach. Thereby, we compute the masses, the decay constants, the leptonic branching fractions of heavy-light mesons and the mixing mass parameter $\Delta {m_{{B_q}}}$. We study the radiative leptonic decay widths of ${D_s} \to \gamma \ell \bar \nu $, ${D^ - } \to \gamma \ell \bar \nu $ and the semileptonic decay widths of ${\bar B_{(s)}} \to {D_{(s)}}\ell \bar \nu $, ${\bar B_{(s)}} \to D_{(s)}^*\ell \bar \nu $. Using Isgur-Wise functions, we calculate the branching ratios of $B \to {D^{(*)}}\pi $ and two-body nonleptonic decay of $D \to K\pi $. Our results are consistent with other theoretical models and the experimental results.
high energy physics phenomenology
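The paper's Coulomb-plus-exponential quark potential is not reproduced here; as a generic illustration of the variational approach mentioned, the sketch below minimises the trial energy of hydrogen in atomic units, where the exact answer is known.

```python
# Variational method illustration: trial wavefunction exp(-a r) for hydrogen
# in atomic units gives <H>(a) = a^2/2 - a, minimised at a = 1 with E = -0.5.
from scipy.optimize import minimize_scalar

res = minimize_scalar(lambda a: 0.5 * a**2 - a, bounds=(0.1, 5.0), method="bounded")
print(res.x, res.fun)   # ~1.0 and ~-0.5, the exact ground state
```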
Recent studies classify the topology of proteins by analysing the distribution of their projections using knotoids. The approximation of this distribution depends on the number of projection directions that are sampled. Here we investigate the relation between knotoids differing only by small perturbations of the direction of projection. Since such knotoids are connected by at most a single forbidden move, we characterise forbidden moves in terms of equivariant band attachment between strongly invertible knots and of strand passages between $\theta$-curves. This allows for the determination of the optimal sample size needed to produce a well approximated knotoid distribution. Based on that and on topological properties of the distribution, we propose a numerical measure for the determination of deeply knotted proteins that does not require the computationally expensive method of subchain analysis.
mathematics
CALLISTO is a radio spectrometer designed to monitor the transient radio emissions/bursts originating from the solar corona in the frequency range $45-870$ MHz. At present, there are $\gtrsim 150$ stations (which together form the e-CALLISTO network) around the globe, continuously monitoring the Sun 24 hours a day. We have developed pyCallisto, a Python library to process the CALLISTO data observed by all stations of the e-CALLISTO network. In this article, we demonstrate various useful functions that are routinely used to process the CALLISTO data with suitable examples. This library is not only efficient in processing the data but also plays a significant role in developing automatic classification algorithms for different types of solar radio bursts.
astrophysics
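The pyCallisto API itself is not shown in the abstract; the sketch below instead reads a CALLISTO spectrogram directly with astropy under the common e-CALLISTO FITS layout (image HDU with the dynamic spectrum, binary-table HDU with the axes). The file name is hypothetical and the layout is an assumption.

```python
# Reading a CALLISTO spectrogram with astropy (not the pyCallisto API).
from astropy.io import fits

# Hypothetical file name; e-CALLISTO files are gzipped FITS spectrograms.
with fits.open("STATION_20200101_120000_59.fit.gz") as hdul:
    dyn_spectrum = hdul[0].data      # assumed (n_freq, n_time) digit array
    axes = hdul[1].data              # assumed binary table of axis arrays
    time_s = axes["TIME"][0]         # seconds since observation start
    freq_mhz = axes["FREQUENCY"][0]  # channel frequencies in MHz
print(dyn_spectrum.shape, freq_mhz.min(), freq_mhz.max())
```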
The greedy spanner is arguably the simplest and most well-studied spanner construction. Experimental results demonstrate that it is at least as good as any other spanner construction, in terms of both the size and weight parameters. However, a rigorous proof for this statement has remained elusive. In this work we fill in the theoretical gap via a surprisingly simple observation: The greedy spanner is \emph{existentially optimal} (or existentially near-optimal) for several important graph families, in terms of both the size and weight. Roughly speaking, the greedy spanner is said to be existentially optimal (or near-optimal) for a graph family $\mathcal G$ if the worst performance of the greedy spanner over all graphs in $\mathcal G$ is just as good (or nearly as good) as the worst performance of an optimal spanner over all graphs in $\mathcal G$. Focusing on the weight parameter, the state-of-the-art spanner constructions for both general graphs (due to Chechik and Wulff-Nilsen [SODA'16]) and doubling metrics (due to Gottlieb [FOCS'15]) are complex. Plugging our observation into these results, we conclude that the greedy spanner achieves near-optimal weight guarantees for both general graphs and doubling metrics, thus resolving two longstanding conjectures in the area. Further, we observe that approximate-greedy spanners are existentially near-optimal as well. Consequently, we provide an $O(n \log n)$-time construction of $(1+\epsilon)$-spanners for doubling metrics with constant lightness and degree. Our construction improves Gottlieb's construction, whose runtime is $O(n \log^2 n)$ and whose number of edges and degree are unbounded, and remarkably, it matches the state-of-the-art Euclidean result (due to Gudmundsson et al.\ [SICOMP'02]) in all the involved parameters (up to dependencies on $\epsilon$ and the dimension).
computer science
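For reference, the greedy construction analysed above is short enough to state in full; this is the classical greedy $t$-spanner (standard material, not the paper's optimized variants).

```python
# Greedy t-spanner: scan edges by increasing weight; keep an edge only if
# the spanner built so far cannot connect its endpoints within t * weight.
import heapq

def greedy_spanner(n, edges, t):
    """edges: list of (w, u, v) with vertices 0..n-1; returns kept edges."""
    adj = {u: [] for u in range(n)}
    kept = []
    for w, u, v in sorted(edges):
        if bounded_dist(adj, u, v, limit=t * w) > t * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            kept.append((w, u, v))
    return kept

def bounded_dist(adj, src, dst, limit):
    """Dijkstra distance from src to dst, abandoning paths beyond limit."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")) or d > limit:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")) and nd <= limit:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```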
In Phys. Rev. D 98, 103023 (2018), a novel scenario was proposed to probe the interactions between dark matter (DM) particles and electrons via hydrogen-atmosphere pulsating white dwarfs (DAVs) in globular clusters. The estimation showed that the scenario could hopefully test the parameter space: $5 \mathrm{GeV} \le m_{\chi} \le 10^{4} \mathrm{GeV}$ and $\sigma_{\chi,e} \ge 10^{-40} \mathrm{cm}^{2}$, where $m_{\chi}$ is the DM particle's mass and $\sigma_{\chi,e}$ is the elastic scattering cross section between DM and electron. In this comment, we have determined the exact lower limit of the testable DM particle mass, $\sim 1.38 - 1.58 \mathrm{GeV}$, which depends on $\sigma_{\chi,e}$. This gives us a credible lower limit of the testable DM particle mass in the above scenario, and provides a clear upper limit of the DM particle mass which we should consider in future research.
high energy physics phenomenology
We present RobustTP, an end-to-end algorithm for predicting future trajectories of road-agents in dense traffic with noisy sensor input trajectories obtained from RGB cameras (either static or moving) through a tracking algorithm. In this case, we consider noise as the deviation from the ground truth trajectory. The amount of noise depends on the accuracy of the tracking algorithm. Our approach is designed for dense heterogeneous traffic, where the road-agents correspond to a mixture of buses, cars, scooters, bicycles, and pedestrians. RobustTP is an approach that first computes trajectories using a combination of a non-linear motion model and a deep learning-based instance segmentation algorithm. Next, these noisy trajectories are trained using an LSTM-CNN neural network architecture that models the interactions between road-agents in dense and heterogeneous traffic. Our trajectory prediction algorithm outperforms state-of-the-art methods for end-to-end trajectory prediction using sensor inputs. We achieve an improvement of up to 18% in average displacement error and an improvement of up to 35.5% in final displacement error at the end of the prediction window (5 seconds) over the next best method. All experiments were set up on an Nvidia Titan Xp GPU. Additionally, we release a software framework, TrackNPred. The framework consists of implementations of state-of-the-art tracking and trajectory prediction methods and tools to benchmark and evaluate them on real-world dense traffic datasets.
computer science
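The RobustTP architecture itself is not specified in the abstract; as a minimal sketch of an LSTM-CNN trajectory predictor of the kind described, the toy model below encodes each agent's noisy past track with an LSTM and mixes neighbouring agents' states with a convolution. All hyperparameters and the layout are assumptions.

```python
# Toy LSTM-CNN trajectory predictor (illustrative, not RobustTP).
import torch
import torch.nn as nn

class TrajPredictor(nn.Module):
    def __init__(self, hidden=64, horizon=50):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.mix = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)  # across agents
        self.head = nn.Linear(hidden, horizon * 2)
        self.horizon = horizon

    def forward(self, tracks):                  # tracks: (n_agents, T_obs, 2)
        _, (h, _) = self.encoder(tracks)        # h: (1, n_agents, hidden)
        mixed = self.mix(h[-1].T.unsqueeze(0))  # treat agents as a 1D sequence
        out = self.head(mixed.squeeze(0).T)     # (n_agents, horizon * 2)
        return out.view(-1, self.horizon, 2)    # predicted future offsets

preds = TrajPredictor()(torch.randn(6, 20, 2))  # 6 agents, 20 observed steps
print(preds.shape)                              # (6, 50, 2)
```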
Liquid crystal elastomers and glasses can have significant shape change determined by their director patterns. Cones deformed from circular director patterns have non-trivial Gaussian curvature localised at tips, curved interfaces, and intersections of interfaces. We employ a generalised metric compatibility condition to characterize two families of interfaces between circular director patterns -- hyperbolic and elliptical interfaces, and find that the deformed interfaces are geometrically compatible. We focus on hyperbolic interfaces to design complex topographies and non-isometric origami, including n-fold intersections, symmetric and irregular tilings. The large design space of three-fold and four-fold tiling is utilized to quantitatively inverse design an array of pixels to display target images. Taken together, our findings provide comprehensive design principles for the design of actuators, displays, and soft robotics in liquid crystal elastomers and glasses.
condensed matter
Many decays of light baryons consisting of light $u,d,s$ quarks have been measured, and these measurements will help us to understand the decay properties of light baryons. In this work, we study two-body nonleptonic weak decays of the light baryon octet $(T_8)$ and baryon decuplet $(T_{10})$ by the topological diagram approach (TDA) under the SU(3) flavor symmetry for the first time. We find that (1) the TDA and the SU(3) irreducible representation approach (IRA) match consistently in $T_{10}\to T^{(')}_{8,10}P_8$ ($P_8$ is the light pseudoscalar meson octet); (2) almost all relevant not-yet-measured $\mathcal{B}(T_{10}\to T_8 P_8)$ may be predicted by using three experimental data of $\mathcal{B}(\Omega^-\to \Xi^0\pi^-,\Xi^-\pi^0,\Lambda^0K^-)$, and the upper limits of $\mathcal{B}(T_{10}\to T^{'}_{10}\pi^-)$ may be obtained from the experimental upper limit of $\mathcal{B}(\Omega^-\to \Xi^{*0}\pi^-)$ by both the TDA and the IRA together; nevertheless, all newly predicted branching ratios are too small to be reached in current experiments; (3) $T_8\to T'_8 P_8$ decays are quite complex in terms of the TDA, and we find that W-exchange diagrams give large and even dominant contributions by using relevant experimental data and the isospin relations.
high energy physics phenomenology
Approximate Bayesian computation allows for inference of complicated probabilistic models with intractable likelihoods using model simulations. The Markov chain Monte Carlo implementation of approximate Bayesian computation is often sensitive to the tolerance parameter: low tolerance leads to poor mixing and large tolerance entails excess bias. We consider an approach using a relatively large tolerance for the Markov chain Monte Carlo sampler to ensure its sufficient mixing, and post-processing the output leading to estimators for a range of finer tolerances. We introduce an approximate confidence interval for the related post-corrected estimators, and propose an adaptive approximate Bayesian computation Markov chain Monte Carlo, which finds a `balanced' tolerance level automatically, based on acceptance rate optimisation. Our experiments show that post-processing based estimators can perform better than a direct Markov chain targeting a fine tolerance, that our confidence intervals are reliable, and that our adaptive algorithm leads to reliable inference with little user specification.
statistics
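As a concrete illustration of the idea in the abstract above, here is a minimal Python sketch of ABC-MCMC run at a deliberately coarse tolerance, with the recorded distances post-processed into estimators for finer tolerances. The function names (`simulate`, `distance`, `prior_logpdf`) are user-supplied placeholders, and the sketch omits the paper's confidence intervals and adaptive tolerance selection.

```python
import numpy as np

def abc_mcmc(simulate, distance, prior_logpdf, theta0, eps, n_iters,
             prop_sd=0.5, rng=None):
    """ABC-MCMC with a (deliberately coarse) tolerance eps.
    The simulated distances are stored alongside the states so the
    output can later be post-processed for a range of finer tolerances.
    Assumes distance(simulate(theta0)) <= eps for the initial state."""
    rng = np.random.default_rng() if rng is None else rng
    theta, d = theta0, distance(simulate(theta0, rng))
    thetas, dists = [], []
    for _ in range(n_iters):
        prop = theta + prop_sd * rng.standard_normal()  # symmetric proposal
        d_prop = distance(simulate(prop, rng))
        log_alpha = prior_logpdf(prop) - prior_logpdf(theta)
        if d_prop <= eps and np.log(rng.uniform()) < log_alpha:
            theta, d = prop, d_prop  # accept
        thetas.append(theta)
        dists.append(d)
    return np.array(thetas), np.array(dists)

def post_corrected_estimate(thetas, dists, fine_eps, f=lambda t: t):
    """Post-correction: restrict the coarse-tolerance output to states
    whose recorded distance also satisfies the finer tolerance."""
    keep = dists <= fine_eps
    return f(thetas[keep]).mean()
```

Running the chain once at a coarse tolerance and then sweeping `fine_eps` over a grid yields the whole family of estimators described in the abstract from a single sampler run.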
Three-dimensional Topologically Massive Gravity at its critical point has been conjectured to be holographically dual to a Logarithmic CFT. However, many details of this correspondence are still lacking. In this work, we study the 1-loop partition function of Critical Cosmological Topologically Massive Gravity, previously derived by Gaberdiel, Grumiller and Vassilevich, and show that it can be usefully rewritten as a Bell polynomial expansion. We also show that there is a relationship between this Bell polynomial expansion and the Plethystic Exponential. Our reformulation allows us to match the TMG partition function to states on the CFT side, including the multi-particle states of t (the logarithmic partner of the CFT stress tensor) which had previously been elusive. We also discuss the appearance of a ladder action between the different multi-particle sectors in the partition function, which induces an interesting sl(2) structure on the n-particle components of the partition function.
high energy physics theory
The full two-body problem concerns the dynamics of two spatially extended rigid bodies (e.g. rocky asteroids) subject to mutual gravitational interaction. In this note we deduce the Euler-Poincaré and Hamiltonian equations of motion using the geometric mechanics formalism.
physics
We study the quasiparticle excitation and quench dynamics of the one-dimensional transverse-field Ising model with power-law ($1/r^{\alpha}$) interactions. We find that long-range interactions give rise to a confining potential, which couples pairs of domain walls (kinks) into bound quasiparticles, analogous to mesonic bound states in high-energy physics. We show that these quasiparticles have signatures in the dynamics of order parameters following a global quench, and that the Fourier spectrum of these order parameters can be exploited as a direct probe of the masses of the confined quasiparticles. We introduce a two-kink model to qualitatively explain the phenomenon of long-range-interaction induced confinement, and to quantitatively predict the masses of the bound quasiparticles. Furthermore, we illustrate that these quasiparticle states can lead to slow thermalization of one-point observables for certain initial states. Our work is readily applicable to current trapped-ion experiments.
condensed matter
In typical examples of the AdS/CFT correspondence, the world-sheet theory with holes in the presence of D-branes is assumed to be equivalent in a low-energy limit to a world-sheet theory without holes for a different background such as $AdS_5 \times S^5$. In the case of the bosonic string, we claim under the assumption of this equivalence that open string field theory on $N$ coincident D-branes can be used to provide a nonperturbative definition of closed string theory based on the fact that the $1/N$ expansion of correlation functions of gauge-invariant operators reproduces the world-sheet theory with holes where the moduli space of Riemann surfaces is precisely covered.
high energy physics theory
Space-based X-ray detectors are subject to significant fluxes of charged particles in orbit, notably energetic cosmic ray protons, contributing a significant background. We develop novel machine learning algorithms to detect charged particle events in next-generation X-ray CCDs and DEPFET detectors, with initial studies focusing on the Athena Wide Field Imager (WFI) DEPFET detector. We train and test a prototype convolutional neural network algorithm and find that charged particle and X-ray events are identified with a high degree of accuracy, exploiting correlations between pixels to improve performance over existing event detection algorithms. 99 per cent of frames containing a cosmic ray are identified and the neural network is able to correctly identify up to 40 per cent of the cosmic rays that are missed by current event classification criteria, showing potential to significantly reduce the instrumental background, and unlock the full scientific potential of future X-ray missions such as Athena, Lynx and AXIS.
astrophysics
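For illustration, a minimal PyTorch sketch of the kind of frame-patch classifier described above; the architecture, 32x32 patch size, and two-class layout are assumptions made for the example, not the WFI study's actual network.

```python
import torch
import torch.nn as nn

class EventClassifier(nn.Module):
    """Tiny CNN labelling a detector patch as X-ray (class 0) or
    charged-particle/cosmic-ray (class 1). Input: 1 x 32 x 32 pixels.
    Illustrative architecture only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),                 # 32 channels x 8 x 8 after pooling
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),             # logits for the two classes
        )

    def forward(self, x):
        return self.head(self.features(x))

# Example forward pass on a batch of 4 random patches
logits = EventClassifier()(torch.randn(4, 1, 32, 32))
```

The convolutional layers are what let such a model exploit correlations between neighbouring pixels, which is the mechanism the abstract credits for outperforming per-pixel event detection criteria.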
Long ago, Newman and Janis showed that a complex deformation $z\rightarrow z+i a$ of the Schwarzschild solution produces the Kerr solution. The underlying explanation for this relationship has remained obscure. The complex deformation has an electromagnetic counterpart: by shifting the Coulomb potential, we obtain the EM field of a certain rotating charge distribution which we term $\sqrt{\rm Kerr}$. In this note, we identify the origin of this shift as arising from the exponentiation of spin operators for the recently defined "minimally coupled" three-particle amplitudes of spinning particles coupled to gravity, in the large-spin limit. We demonstrate this by studying the impulse imparted to a test particle in the background of the heavy spinning particle. We first consider the electromagnetic case, where the impulse due to $\sqrt{\rm Kerr}$ is reproduced by a charged spinning particle; the shift of the Coulomb potential is matched to the exponentiated spin-factor appearing in the amplitude. The known impulse due to the Kerr black hole is then trivially derived from the gravitationally coupled spinning particle via the double copy.
high energy physics theory
While machine learning (ML) has shown increasing effectiveness in optimizing materials properties under known physics, its application in challenging conventional wisdom and discovering new physics still remains challenging due to its interpolative nature. In this work, we demonstrate the potential of using ML for such applications by implementing an adaptive ML-accelerated search process that can discover unexpected lattice thermal conductivity ($\kappa_l$) enhancement, instead of reduction, in aperiodic superlattices (SLs) as compared to periodic superlattices. We use non-equilibrium molecular dynamics (NEMD) simulations for high-fidelity calculations of $\kappa_l$ for a small fraction of SLs in the search space, along with a convolutional neural network (CNN) which can rapidly predict $\kappa_l$ for a large number of structures. To ensure accurate prediction by the CNN for the target unknown structures, we iteratively identify aperiodic SLs containing structural features which lead to locally enhanced thermal transport, and include them as additional training data for the CNN in each iteration. As a result, our CNN can accurately predict the high $\kappa_l$ of aperiodic SLs that are absent from the initial training dataset, which allows us to identify the previously unseen exceptional structures. The identified random multilayer (RML) structures exhibit an increased coherent phonon contribution to thermal conductivity owing to the presence of closely spaced interfaces. Our work describes a general-purpose machine learning approach for identifying low-probability-of-occurrence exceptional solutions within an extremely large subspace and discovering the underlying physics.
condensed matter
Predictive wavefront control is an important and rapidly developing field of adaptive optics (AO). Through the prediction of future wavefront effects, the inherent AO system servo-lag caused by the measurement, computation, and application of the wavefront correction can be significantly mitigated. This lag can impact the final delivered science image, including reduced Strehl ratio and contrast, and inhibits our ability to reliably use faint guide stars. We summarize here a novel method for training deep neural networks for predictive control based on an adversarial prior. Unlike previous methods in the literature, which have shown results based on previously generated data or for open-loop systems, we demonstrate our network's performance simulated in closed loop. Our models are able to both reduce effects induced by servo-lag and push the faint end of reliable control with natural guide stars, improving K-band Strehl performance compared to classical methods by over 55% for 16th-magnitude guide stars on an 8-meter telescope. We further show that LSTM-based approaches may be better suited in high-contrast scenarios where servo-lag error is most pronounced, while traditional feed-forward models are better suited for high-noise scenarios. Finally, we discuss future strategies for implementing our system in real time and on astronomical telescope systems.
astrophysics
We compute the two-loop helicity amplitudes for the production of three photons at hadron colliders in QCD at leading-color. Using the two-loop numerical unitarity method coupled with analytic reconstruction techniques, we obtain the decomposition of the two-loop amplitudes in terms of master integrals in analytic form. These expressions are valid to all orders in the dimensional regulator. We use them to compute the two-loop finite remainders, which are given in a form that can be efficiently evaluated across the whole physical phase space. We further package these results in a public code which assembles the helicity-summed squared two-loop remainders, whose numerical stability across phase-space is demonstrated. This is the first time that a five-point two-loop process is publicly available for immediate phenomenological applications.
high energy physics phenomenology
We quantitatively differentiate between the spreads of discrete-time quantum and classical random walks on a cyclic graph. Due to the closed nature of any cyclic graph, there is additional "collision"-like interference in the quantum random walk along with the usual interference in any such walk on any graph, closed or otherwise. We find that the quantum walker remains localized in comparison to the classical one, even in the absence of disorder, a phenomenon that is potentially attributable to the additional interference in the quantum case. This is to be contrasted with the situation on open graphs, where the quantum walker, being effectively denied the collision-like interference, garners a much higher spread than its classical counterpart. We use the Shannon entropy of the position probability distribution to quantify the spread of the walker in both the quantum and classical cases. We find that for a given number of vertices on a cyclic graph, the entropy with respect to the number of steps for the quantum walker saturates, on average, to a value lower than that for the corresponding classical one. We also analyze variations of the entropies with respect to system size, and look at the corresponding asymptotic growth rates.
quantum physics
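The comparison described above can be reproduced in miniature: the sketch below evolves a coined (Hadamard) discrete-time quantum walk and an unbiased classical walk on the same cycle and returns the Shannon entropy (in bits) of their position distributions. The symmetric initial coin state is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

def quantum_walk_entropy(n_vertices, n_steps):
    """Coined (Hadamard) discrete-time quantum walk on a cycle;
    returns the Shannon entropy of the final position distribution."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = np.zeros((2, n_vertices), dtype=complex)  # state[coin, vertex]
    state[:, 0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric start at vertex 0
    for _ in range(n_steps):
        state = np.einsum('ab,bv->av', H, state)      # coin toss
        state[0] = np.roll(state[0], 1)               # coin 0 steps right
        state[1] = np.roll(state[1], -1)              # coin 1 steps left
    p = (np.abs(state) ** 2).sum(axis=0)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def classical_walk_entropy(n_vertices, n_steps):
    """Unbiased classical random walk on the same cycle."""
    p = np.zeros(n_vertices)
    p[0] = 1.0
    for _ in range(n_steps):
        p = 0.5 * (np.roll(p, 1) + np.roll(p, -1))
    q = p[p > 0]
    return -(q * np.log2(q)).sum()

# Example: compare spreads on a 21-vertex cycle after 100 steps
print(quantum_walk_entropy(21, 100), classical_walk_entropy(21, 100))
```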
Thermal conduction in polymer nanocomposites depends on several parameters, including the thermal conductivity and geometrical features of the nanoparticles, the particle loading, their degree of dispersion, and the formation of a percolating network. Molecular junctions were previously proposed to enhance the efficiency of thermal contact between free-standing conductive nanoparticles. This work reports for the first time an investigation of molecular junctions within a graphene polymer nanocomposite. Molecular dynamics simulations were conducted to investigate the thermal transport efficiency of molecular junctions in tight contact with the polymer, and to quantify the contribution of the molecular junctions when graphene and the junctions are surrounded by polydimethylsiloxane (PDMS). The thermal conductance in the PDMS/graphene model was found to depend strongly on the junction, with the best performance obtained for short and conformationally rigid molecular junctions.
physics
Multi-horizon forecasting problems often contain a complex mix of inputs -- including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically -- without any prior information on how they interact with the target. While several deep learning models have been proposed for multi-step prediction, they typically comprise black-box models which do not account for the full range of inputs present in common scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) -- a novel attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, the TFT utilizes recurrent layers for local processing and interpretable self-attention layers for learning long-term dependencies. The TFT also uses specialized components for the judicious selection of relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of regimes. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and showcase three practical interpretability use-cases of TFT.
statistics
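As a rough illustration of the gating mechanism mentioned above, here is a sketch of a GLU-gated residual block in PyTorch. The exact layer arrangement in the TFT differs, so the dimensions and details here are assumptions made for the example rather than the paper's specification.

```python
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Sketch of a gating layer in the spirit of TFT-style architectures:
    a GLU-gated transformation with a residual connection and layer norm.
    Driving the sigmoid gate toward 0 lets the network suppress the
    transformed component and fall back on the residual path."""
    def __init__(self, d):
        super().__init__()
        self.fc1 = nn.Linear(d, d)
        self.fc2 = nn.Linear(d, d)
        self.glu = nn.Linear(d, 2 * d)  # halves recombined as value * sigmoid(gate)
        self.norm = nn.LayerNorm(d)

    def forward(self, x):
        h = self.fc2(torch.relu(self.fc1(x)))
        value, gate = self.glu(h).chunk(2, dim=-1)
        return self.norm(x + value * torch.sigmoid(gate))

# Example: gate a batch of 16 feature vectors of width 64
out = GatedResidualBlock(64)(torch.randn(16, 64))
```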
We report comprehensive temperature and doping dependences of the Raman scattering spectra for $\mathrm{BaFe_{2}}(\mathrm{As}_{1-x}\mathrm{P}_{x}\mathrm{)_{2}}$ ($x =$ 0, 0.07, 0.24, 0.32, and 0.38), focusing on the nematic fluctuation and the superconducting responses. With increasing $x$, the bare nematic transition temperature estimated from the Raman spectra reaches $T =$ 0 K at optimal doping, which indicates a quantum critical point (QCP) at this composition. In the superconducting compositions, in addition to the pair-breaking peaks observed in the $A_{\mathrm{1g}}$ and $B_{\mathrm{1g}}$ spectra, another strong $B_{\mathrm{1g}}$ peak appears below the superconducting transition temperature, which is ascribed to the nematic resonance peak. The observation of this peak indicates significant nematic correlations in the superconducting state near the QCP in this compound.
condensed matter
The preparation of quantum systems and the execution of quantum information tasks between distant users are always affected by gravitational and relativistic effects. In this work, we quantitatively analyze how the curved space-time background of the Earth affects the classical and quantum correlations between photon pairs that are initially prepared in a two-mode squeezed state. More specifically, considering the rotation of the Earth, the space-time around the Earth is described by the Kerr metric. Our results show that these correlations, which initially increase over a specific range of satellite orbital altitudes, gradually approach a finite value with increasing orbital height (when special relativistic effects become relevant). More importantly, our analysis demonstrates that the change in correlations generated by the total gravitational frequency shift can reach the level of <0.5$\%$ for satellite heights up to the geostationary Earth orbit.
quantum physics
In this study, we propose a two-stage procedure for hypothesis testing, where the first stage is conventional hypothesis testing and the second is an equivalence testing procedure using an introduced Empirical Equivalence Bound. In 2016, the American Statistical Association released a policy statement on P-values to clarify their proper use and interpretation in response to criticism of the reproducibility and replicability of scientific findings. A recent solution to improve reproducibility and transparency in statistical hypothesis testing is to integrate P-values (or confidence intervals) with practical or scientific significance. Similar ideas have been proposed via the equivalence test, where the goal is to infer equality under a presumption (null) of inequality of parameters. However, in these testing procedures the definition of scientific significance/equivalence can be subjective. To circumvent this drawback, we introduce a B-value and the Empirical Equivalence Bound, which are both estimated from the data. By performing a second-stage equivalence test, our procedure offers an opportunity to correct false positive discoveries and improve the reproducibility of findings across studies.
statistics
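The two-stage logic can be sketched in Python as follows, with a conventional one-sample t-test as stage one and a TOST-style equivalence test as stage two. The bound `delta` here is a user-specified placeholder standing in for the paper's data-estimated Empirical Equivalence Bound, whose construction is not reproduced.

```python
import numpy as np
from scipy import stats

def two_stage_test(x, mu0=0.0, alpha=0.05, delta=1.0):
    """Stage 1: conventional t-test of H0: mean = mu0.
    Stage 2: TOST equivalence test of H0: |mean - mu0| >= delta,
    with delta a placeholder for the Empirical Equivalence Bound."""
    x = np.asarray(x, dtype=float)
    n = x.size
    se = x.std(ddof=1) / np.sqrt(n)
    t1, p1 = stats.ttest_1samp(x, mu0)
    # Two one-sided tests; both must reject to declare equivalence.
    p_lo = stats.t.sf((x.mean() - (mu0 - delta)) / se, df=n - 1)
    p_hi = stats.t.cdf((x.mean() - (mu0 + delta)) / se, df=n - 1)
    p_equiv = max(p_lo, p_hi)
    return {
        "stage1_p": p1,
        "stage2_equivalence_p": p_equiv,
        # A stage-1 discovery survives only if equivalence is NOT established.
        "confirmed_difference": (p1 < alpha) and (p_equiv >= alpha),
    }
```

The second stage plays the corrective role described in the abstract: a statistically significant but practically negligible effect is flagged as equivalent to the null rather than reported as a discovery.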
We continue the search for rules that govern when off-shell 4D, $\cal N$ = 1 supermultiplets can be combined to form off-shell 4D, $\cal N$ = 2 supermultiplets. We study the ${\mathbb S}_8$ permutations and Height Yielding Matrix Numbers (HYMN) embedded within the adinkras that correspond to these putative off-shell 4D, $\cal N$ = 2 supermultiplets. Even though the HYMN definition was designed to distinguish between the raising and lowering of nodes in one-dimensional valise supermultiplets, they are shown to accurately select out which combinations of off-shell 4D, $\cal N$ = 1 supermultiplets correspond to off-shell 4D, $\cal N$ = 2 supermultiplets. Only the combinations of the chiral + vector and chiral + tensor are found to have valises in the same class. This is consistent with the well-known structure of 4D, $\cal N$ = 2 supermultiplets.
high energy physics theory
The design of a 9-channel microphone system for location recording of mainly atmospheres will be described. The key concept is matching the recording and reproduction angles of the individual sectors. The rig is designed for the AURO-3D 9-channel playback system (4 height speakers). An analysis of the reproduction layout will be included, as well as recording concepts like the Stereo Recording Angle (SRA), Williams curves, Scale Factors for reproduction angles other than 60 degrees, and diffuse field decorrelation. Finally, practical aspects like microphone mounts and windshields for such a system will be presented.
electrical engineering and systems science
We investigate the noise current in a thermally biased tunnel junction between two superconductors with different zero-temperature gaps. When the Josephson effect is suppressed, this structure can support a nonlinear thermoelectric effect due to the spontaneous breaking of electron-hole symmetry, as we recently predicted theoretically. We discuss the possibly relevant role played by the noise in the junction. While a moderate noise contribution assists the generation of the thermoelectric signal, further unveiling the spontaneous nature of the electron-hole symmetry breaking, a large noise contribution can induce switching between the two stationary thermoelectric values, thus hampering the detection of the effect and its application. We demonstrate that the thermoelectric effect is robust to the presence of noise for a wide range of parameters and that the spurious fluctuations of the thermoelectric signal can be lowered by increasing the capacitance of the junction, for instance by enlarging the junction's size. Our results pave the way to the future experimental observation of the thermoelectric effect in superconducting junctions, and to improved performance in quantum circuits designed for thermal management.
condensed matter
Data acquired at Ceres by the visible channel of the Visible and InfraRed mapping spectrometer (VIR) on board the NASA Dawn spacecraft are affected by the temperatures of both the visible (VIS) and the infrared (IR) sensors, which are respectively a CCD and a HgCdTe array. The variations of the visible channel temperatures measured during the acquisition sessions are correlated with variations in the spectral slope and shape for all the mission phases. The infrared channel (IR) temperature is more stable during the acquisitions; nonetheless, it is characterized by a bi-modal distribution depending on whether or not the cryocooler (and therefore the IR channel) is used during the visible channel operations. When the infrared channel temperature is high (175 K, i.e. not in use and with the cryocooler off), an additional negative slope and a distortion are observed in the spectra of the visible channel. We developed an empirical correction based on a reference spectrum for the whole data set; it is designed to correct the two issues related to the sensor temperatures that we have identified. The reference spectrum is calculated to be representative of the global surface of Ceres. It is also made of data acquired when the visible and infrared channel temperatures are equal to the ones measured during an observation of the Arcturus star by VIR, which is consistent with several ground-based observations. The developed correction allows reliable analysis and mapping to be performed by minimizing the artifacts induced by fluctuations of the VIS temperature. Thanks to this correction, a direct comparison between different mission phases during which VIR experienced different visible and infrared channel temperatures is now possible.
astrophysics
Relativistic fermionic systems have physical quantities calculated by well-established quantum electrodynamic prescriptions. In the last few years there has been enormous interest in condensed matter systems in which the fermions exhibit relativistic dispersion, such as Dirac fermions in graphene. We employ a non-perturbative method to obtain a non-perturbative correction to the quasiparticle velocity in graphene, and compare it with the experimental data. We find better agreement between measurements and the quasiparticle velocity with non-perturbative corrections than with the standard one-loop result. We also investigate the behavior of the beta function of the renormalization group theory, and find that the non-perturbative corrections do not alter the stability of the infrared fixed point found by the standard result.
condensed matter
Quantum spin liquids host novel emergent excitations, such as monopoles of an emergent gauge field. Here, we study the hierarchy of monopole operators that emerges at quantum critical points (QCPs) between a two-dimensional Dirac spin liquid and various ordered phases. This is described by a confinement transition of quantum electrodynamics in two spatial dimensions (QED3 Gross-Neveu theories). Focusing on a spin ordering transition, we get the scaling dimension of monopoles at leading order in a large-N expansion, where 2N is the number of Dirac fermions, as a function of the monopole's total magnetic spin. Monopoles with a maximal spin have the smallest scaling dimension while monopoles with a vanishing magnetic spin have the largest one, the same as in pure QED3. The organization of monopoles in multiplets of the QCP's symmetry group SU(2) x SU(N) is shown for general N.
condensed matter
The Schrödinger equation for a macroscopic number of particles is linear in the wave function, deterministic, and invariant under time reversal. In contrast, the concepts used and calculations done in statistical physics and condensed matter physics involve stochasticity, nonlinearities, irreversibility, top-down effects, and elements from classical physics. This paper analyzes several methods used in condensed matter physics and statistical physics and explains how they are in fundamental ways incompatible with the above properties of the Schrödinger equation. The problems posed by reconciling these approaches to unitary quantum mechanics are of a similar type as the quantum measurement problem. This paper therefore argues that rather than aiming at reconciling these contrasts one should use them to identify the limits of quantum mechanics. The thermal wave length and thermal time indicate where these limits are for (quasi-)particles that constitute the thermal degrees of freedom.
quantum physics
The insulator-to-metal transition (IMT) (T$_t$ $\sim$ 341 K) in VO2 is accompanied by a transition from an infrared (IR) transparent to an IR opaque phase. Tailoring the IMT and the associated IR switching behavior can offer potential thermochromic applications. Here we report on the effects of W and Tb doping on the IMT and the associated structural, electronic structure, and optical properties of VO2 thin films. Our results show that W doping significantly lowers the IMT temperature ($\sim$ 292 K to $\sim$ 247 K for 1.3\% W to 3.7\% W) by stabilizing the metallic rutile, $\it{R}$, phase, while Tb doping does not alter the IMT temperature much and retains the insulating monoclinic, $\it{M1}$, phase at room temperature. W doping, although it significantly reduces the IR switching temperature, is detrimental to the solar modulation ability, in contrast to Tb doping, for which a higher IR switching temperature and solar modulation ability are observed. The IMT behavior, electrical conductivity, and IR switching behavior in the W- and Tb-doped thin films are found to be directly associated with spectral changes in the V 3$\it{d_{\|}}$ states.
condensed matter
Two-dimensional resonant scanners have been utilized in a large variety of imaging modules due to their compact form, low power consumption, large angular range, and high speed. However, resonant scanners have problems with non-optimal and inflexible scanning patterns and inherent phase uncertainty, which limit practical applications. Here we propose methods for optimized design and control of the scanning trajectory of two-dimensional resonant scanners under various physical constraints, including high frame rate and limited actuation amplitude. First, we propose an analytical design rule for uniform spatial sampling. We demonstrate theoretically and experimentally that, by including non-repeating scanning patterns, the proposed designs outperform previous designs in terms of scanning range and fill factor. Second, we show that we can create flexible scanning patterns that allow focusing on user-defined Regions-of-Interest (RoI) by modulation of the scanning parameters, which are found by an optimization algorithm. In simulations, we demonstrate the benefits of these designs with standard metrics and higher-level computer vision tasks (LiDAR odometry and 3D object detection). Finally, we experimentally implement and verify both unmodulated and modulated scanning modes using a two-dimensional resonant MEMS scanner. Central to the implementations is high-bandwidth monitoring of the phase of the angular scans in both dimensions. This task is carried out with a position-sensitive photodetector combined with high-bandwidth electronics, enabling fast spatial sampling at a ~100 Hz frame rate.
electrical engineering and systems science
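A toy example of the kind of trajectory being designed: the sketch below generates a Lissajous scan for one frame, with both resonant frequencies taken as integer multiples of the frame rate. The specific frequency ratio and phase are illustrative assumptions, not the paper's design rule.

```python
import numpy as np

def lissajous_scan(n_x, n_y, frame_rate, n_samples, phase=np.pi / 2):
    """Lissajous trajectory of a two-axis resonant scanner over one frame.
    Driving frequencies are fx = n_x * frame_rate and fy = n_y * frame_rate;
    co-prime (n_x, n_y) with larger sums give denser spatial coverage."""
    t = np.linspace(0.0, 1.0 / frame_rate, n_samples, endpoint=False)
    x = np.sin(2 * np.pi * n_x * frame_rate * t)
    y = np.sin(2 * np.pi * n_y * frame_rate * t + phase)
    return x, y

# Example: a repeating pattern with fx/fy = 11/10 at a 100 Hz frame rate
x, y = lissajous_scan(11, 10, 100.0, 5000)
```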
Let $M_n$ be drawn uniformly from all $\pm 1$ symmetric $n \times n$ matrices. We show that the probability that $M_n$ is singular is at most $\exp(-c(n\log n)^{1/2})$, which represents a natural barrier in recent approaches to this problem. In addition to improving on the best-known previous bound of Campos, Mattos, Morris and Morrison of $\exp(-c n^{1/2})$ on the singularity probability, our method is different and considerably simpler.
mathematics
Surface alloying is a straightforward route to control and modify the structure and electronic properties of surfaces. Here, we present a systematic study of the structural and electronic properties of three novel rare earth-based intermetallic compounds, namely ReAu2 (Re = Tb, Ho, and Er), on Au(111), formed by directly depositing rare-earth metals onto the hot Au(111) surface. Scanning tunneling microscopy/spectroscopy measurements reveal very similar atomic structures and electronic properties, e.g. electronic states and surface work functions, for all these intermetallic compound systems, owing to the physical and chemical similarities among these rare-earth elements. Furthermore, these electronic properties are periodically modulated by the moir\'e structures caused by the lattice mismatches between ReAu2 and Au(111). These periodically modulated surfaces could serve as templates for the self-assembly of nanostructures. Besides, these two-dimensional rare earth-based intermetallic compounds provide platforms to investigate rare-earth-related catalysis, magnetism, etc., in lower dimensions.
condensed matter
We review recent theoretical and experimental developments concerning collective spin excitations in two-dimensional electron liquid (2DEL) systems, with particular emphasis on the interplay between many-body and spin-orbit effects, as well as the intrinsic dissipation due to the spin-Coulomb drag. Historically, the experimental realization of 2DELs in silicon inversion layers in the 60s and 70s created unprecedented opportunities to probe subtle quantum effects, culminating in the discovery of the quantum Hall effect. In the following years, high quality 2DELs were obtained in doped quantum wells made in typical semiconductors like GaAs or CdTe. These systems became important test beds for quantum many-body effects due to Coulomb interaction, spin dynamics, spin-orbit coupling, effects of applied magnetic fields, as well as dissipation mechanisms. Here we focus on the recent results involving chiral effects and intrinsic dissipation of collective spin modes: these are not only of fundamental interest but also important towards demonstrating new concepts in spintronics. Moreover, new realizations of 2DELs are emerging beyond traditional semiconductors, for instance in multilayer graphene, oxide interfaces, dichalcogenide monolayers, and many more. The concepts discussed in this review will be relevant also for these emerging systems.
condensed matter
Graph embedding (GE) methods embed nodes (and/or edges) of a graph into a low-dimensional semantic space, and have shown their effectiveness in modeling multi-relational data. However, existing GE models are not practical in real-world applications since they overlook the streaming nature of incoming data. To address this issue, we study the problem of continual graph representation learning, which aims to continually train a GE model on new data to learn incessantly emerging multi-relational data while avoiding catastrophic forgetting of old learned knowledge. Moreover, we propose a disentangle-based continual graph representation learning (DiCGRL) framework inspired by humans' ability to learn procedural knowledge. The experimental results show that DiCGRL can effectively alleviate the catastrophic forgetting problem and outperform state-of-the-art continual learning models.
computer science
In this paper, we derive the real roots of certain sets of matrices with real entries. We also demonstrate that real orthogonal matrices can have a real root or be involutory. Finally, we represent idempotent matrices in a block form.
mathematics
Photoacoustic Computed Tomography (PACT) is a major configuration of photoacoustic imaging, a hybrid noninvasive modality for both functional and molecular imaging. PACT has rapidly gained importance in the field of biomedical imaging due to its superior performance compared to conventional optical imaging counterparts. However, the overall cost of developing a PACT system is one of the challenges towards clinical translation of this novel technique. The cost of a typical commercial PACT system originates from the optical source, the ultrasound detector, and the data acquisition unit. With growing applications of photoacoustic imaging, there is a tremendous demand for reducing its cost. In this review article, we discuss various approaches to reduce the overall cost of a PACT system, and provide a cost estimate for building a low-cost PACT system.
physics
In recent years, the consolidation of deep neural network architectures for information extraction from document images has brought big improvements in the performance of each of the tasks involved in this process: text localization, transcription, and named entity recognition. However, this process is traditionally performed with separate methods for each task. In this work we propose an end-to-end model that combines a one-stage object detection network with branches for the recognition of text and named entities, respectively, in such a way that shared features can be learned simultaneously from the training error of each task. By doing so, the model jointly performs handwritten text detection, transcription, and named entity recognition at page level with a single feed-forward step. We exhaustively evaluate our approach on different datasets, discussing its advantages and limitations compared to sequential approaches. The results show that the model is capable of benefiting from shared features to simultaneously solve interdependent tasks.
computer science
To predict the radiative forcing of clouds it is necessary to know the rate at which ice homogeneously nucleates in supercooled water. This rate is often measured in drops to avoid the presence of impurities. At large supercooling, small (nanoscopic) drops must be used to prevent simultaneous nucleation events. The pressure inside such drops exceeds the atmospheric one by virtue of the Laplace equation. In this work, we take this pressure increase into account in order to predict the nucleation rate in droplets using the TIP4P/Ice water model. We start from a recent estimate of the maximum drop size that can be used at each supercooling while avoiding simultaneous nucleation events [Espinosa et al., J. Chem. Phys., 2016]. We then evaluate the pressure inside the drops with the Laplace equation. Finally, we obtain the rate as a function of the supercooling by interpolating our previous results for 1 and 2000 bar [Espinosa et al., Phys. Rev. Lett., 2016] using the Classical Nucleation Theory expression for the rate. This requires, in turn, interpolating the ice-water interfacial free energy and the chemical potential difference. The TIP4P/Ice rate curve thus obtained is in good agreement with most droplet-based experiments. In particular, we find good agreement with measurements performed using nanoscopic drops, which are currently under debate. The successful comparison between model and experiments suggests that TIP4P/Ice is a reliable model to study the water-to-ice transition and that Classical Nucleation Theory is a good framework to understand it.
condensed matter
We discuss the special holonomy metrics of Gibbons, Lu, Pope and Stelle, which were constructed as nilmanifold bundles over a line by uplifting supersymmetric domain wall solutions of supergravity to 11 dimensions. We show that these are dual to intersecting brane solutions, and considering these leads us to a more general class of special holonomy metrics. Further dualities relate these to non-geometric backgrounds involving intersections of branes and exotic branes. We discuss the possibility of resolving these spaces to give smooth special holonomy manifolds.
high energy physics theory
Whole brain segmentation using deep learning (DL) is a very challenging task, since the number of anatomical labels is very high compared to the number of available training images. To address this problem, previous DL methods proposed to use a single convolutional neural network (CNN) or a few independent CNNs. In this paper, we present a novel ensemble method based on a large number of CNNs processing different overlapping brain areas. Inspired by parliamentary decision-making systems, we propose a framework called AssemblyNet, made of two "assemblies" of U-Nets. Such a parliamentary system is capable of dealing with complex decisions and unseen problems and of reaching a consensus quickly. AssemblyNet introduces sharing of knowledge among neighboring U-Nets, an "amendment" procedure made by the second assembly at higher resolution to refine the decision taken by the first one, and a final decision obtained by majority voting. During our validation, AssemblyNet showed competitive performance compared to state-of-the-art methods such as U-Net, Joint label fusion, and SLANT. Moreover, we investigated the scan-rescan consistency and the robustness to disease effects of our method. These experiments demonstrated the reliability of AssemblyNet. Finally, we showed the benefit of using semi-supervised learning to improve the performance of our method.
electrical engineering and systems science
In this paper, we study cubic surfaces in characteristic two from the perspective of positive characteristic commutative algebra and completely classify those which are Frobenius split. In particular, we explicitly describe the finitely many non-$F$-pure cubics (up to projective change of coordinates in $\mathbb{P}^3$), exactly one of which is smooth. We also describe the configurations of lines on these cubic surfaces; a cubic surface in characteristic two is Frobenius split unless every pair of intersecting lines meets in an Eckardt point, which, in the smooth case, means no three lines form a "triangle".
mathematics
The flow through a porous medium strongly depends on the boundary conditions, very often assumed to be static. Here, we consider changes in the medium due to swelling and erosion and extend existing Lattice-Boltzmann models to include both. We study two boundary conditions: a constant pressure drop and a constant flow rate. For a constant flow rate, the steady state depends solely on the erosion dynamics while for a constant pressure drop it depends also on the timescale of swelling. We analyze the competition between swelling and erosion and identify a transition between regimes where either swelling or erosion dominate.
physics
We show that in 3-dimensional ideal magnetohydrodynamics there exist infinitely many bounded solutions that are compactly supported in space-time and have non-trivial velocity and magnetic fields. The solutions violate conservation of total energy and cross helicity, but preserve magnetic helicity. For the 2-dimensional case we show that, in contrast, no nontrivial compactly supported solutions exist in the energy space.
mathematics
We investigate mean field game systems under invariance conditions for the state space, otherwise called {\it viability conditions} for the controlled dynamics. First we analyze separately the Hamilton-Jacobi and the Fokker-Planck equations, showing how the invariance condition on the underlying dynamics yields the existence and uniqueness, respectively in $L^\infty$ and in $L^1$. Then we apply this analysis to mean field games. We investigate further the regularity of solutions proving, under some extra conditions, that the value function is (globally) Lipschitz and semiconcave. This latter regularity eventually leads the distribution density to be bounded, under suitable conditions. The results are not restricted to smooth domains.
mathematics
We propose a model-independent framework to classify and study neutrino mass models and their phenomenology. The idea is to introduce one particle beyond the Standard Model which couples to leptons and carries lepton number, together with an operator which violates lepton number by two units and contains this particle. This allows us to study processes which do not violate lepton number, while still working with an effective field theory. The contribution to neutrino masses translates to a robust upper bound on the mass of the new particle. We compare it to the stronger but less robust upper bound from Higgs naturalness and discuss several lower bounds. Our framework allows us to classify neutrino mass models into \emph{just} 20 categories, further reduced to 14 once nucleon decay limits are taken into account, and \emph{possibly} to 9 if Higgs naturalness considerations and direct searches are also considered.
high energy physics phenomenology
In this paper, we investigate the design of robust and secure transmission in intelligent reflecting surface (IRS) aided wireless communication systems. In particular, a multi-antenna access point (AP) communicates with a single-antenna legitimate receiver in the presence of multiple single-antenna eavesdroppers, where the artificial noise (AN) is transmitted to enhance the security performance. Besides, we assume that the cascaded AP-IRS-user channels are imperfect due to the channel estimation error. To minimize the transmit power, the beamforming vector at the transmitter, the AN covariance matrix, and the IRS phase shifts are jointly optimized subject to the outage rate probability constraints under the statistical cascaded channel state information (CSI) error model that usually models the channel estimation error. To handle the resulting non-convex optimization problem, we first approximate the outage rate probability constraints by using the Bernstein-type inequality. Then, we develop a suboptimal algorithm based on alternating optimization, the penalty-based and semidefinite relaxation methods. Simulation results reveal that the proposed scheme significantly reduces the transmit power compared to other benchmark schemes.
electrical engineering and systems science
This paper takes a step towards theoretical analysis of the relationship between word embeddings and context embeddings in models such as word2vec. We start from basic probabilistic assumptions on the nature of word vectors, context vectors, and text generation. These assumptions are well supported either empirically or theoretically by the existing literature. Next, we show that under these assumptions the widely-used word-word PMI matrix is approximately a random symmetric Gaussian ensemble. This, in turn, implies that context vectors are reflections of word vectors in approximately half the dimensions. As a direct application of our result, we suggest a theoretically grounded way of tying weights in the SGNS model.
statistics
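For reference, a minimal NumPy sketch of the word-word PMI matrix computed from a co-occurrence count matrix, with zero counts mapped to zero as is common practice; this is a generic computation, not the paper's code.

```python
import numpy as np

def pmi_matrix(cooc):
    """Word-word PMI from a co-occurrence count matrix cooc[i, j]:
    PMI(i, j) = log( p(i, j) / (p(i) * p(j)) ).
    Entries with zero counts are set to 0 rather than -inf."""
    cooc = np.asarray(cooc, dtype=float)
    total = cooc.sum()
    p_word = cooc.sum(axis=1, keepdims=True) / total    # marginal p(i)
    p_context = cooc.sum(axis=0, keepdims=True) / total  # marginal p(j)
    with np.errstate(divide='ignore', invalid='ignore'):
        pmi = np.log((cooc / total) / (p_word * p_context))
    pmi[~np.isfinite(pmi)] = 0.0
    return pmi

# Example on a toy 3-word co-occurrence matrix
print(pmi_matrix([[0, 2, 1], [2, 0, 3], [1, 3, 0]]))
```

The abstract's claim concerns the statistical structure of exactly this matrix: under the stated assumptions it behaves approximately like a random symmetric Gaussian ensemble.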
Many Reinforcement Learning (RL) approaches use joint control signals (positions, velocities, torques) as action space for continuous control tasks. We propose to lift the action space to a higher level in the form of subgoals for a motion generator (a combination of motion planner and trajectory executor). We argue that, by lifting the action space and by leveraging sampling-based motion planners, we can efficiently use RL to solve complex, long-horizon tasks that could not be solved with existing RL methods in the original action space. We propose ReLMoGen -- a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals. To validate our method, we apply ReLMoGen to two types of tasks: 1) Interactive Navigation tasks, navigation problems where interactions with the environment are required to reach the destination, and 2) Mobile Manipulation tasks, manipulation tasks that require moving the robot base. These problems are challenging because they are usually long-horizon, hard to explore during training, and comprise alternating phases of navigation and interaction. Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments. In all settings, ReLMoGen outperforms state-of-the-art Reinforcement Learning and Hierarchical Reinforcement Learning baselines. ReLMoGen also shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
computer science
Profiles of static solitons in one-dimensional scalar field theory satisfy the same equations as trajectories of a fictitious particle in multidimensional mechanics. We argue that the structure and properties of the solitons are essentially different if the respective mechanical motions are chaotic. This happens in multifield models and models with spatially dependent potential. We illustrate our findings using the one-field sine-Gordon model in an external Dirac comb potential. First, we show that the number of different "chaotic" solitons grows exponentially with their length, and the growth rate is related to the topological entropy of the mechanical system. Second, the field values of stable solitons form a fractal; we compute its box-counting dimension. Third, we demonstrate that the distribution of field values in the fractal is related to the metric entropy of the analogous mechanical system.
high energy physics theory
Objective: Individuals with spinal cord injury (SCI) report upper limb function as their top recovery priority. To accurately represent the true impact of new interventions on patient function and independence, evaluation should occur in a natural setting. Wearable cameras can be used to monitor hand function at home, using computer vision to automatically analyze the resulting videos (egocentric video). A key step in this process, hand detection, is difficult to do robustly and reliably, hindering deployment of a complete monitoring system in the home and community. We propose an accurate and efficient hand detection method that uses a simple combination of existing detection and tracking algorithms. Methods: Detection, tracking, and combination methods were evaluated on a new hand detection dataset, consisting of 167,622 frames of egocentric videos collected from 17 individuals with SCI performing activities of daily living in a home simulation laboratory. Results: The F1-scores for the best detector and tracker alone (SSD and Median Flow) were 0.90$\pm$0.07 and 0.42$\pm$0.18, respectively. The best combination method, in which a detector was used to initialize and reset a tracker, resulted in an F1-score of 0.87$\pm$0.07 while being two times faster than the fastest detector alone. Conclusion: The combination of the fastest detector and best tracker improved the accuracy over online trackers while improving the speed of detectors. Significance: The method proposed here, in combination with wearable cameras, will help clinicians directly measure hand function in a patient's daily life at home, enabling independence after SCI.
computer science
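A minimal sketch of the best-performing combination strategy described above (a detector initializing and periodically resetting a tracker), written with OpenCV's MedianFlow tracker from opencv-contrib-python. `detect` is a user-supplied hand detector (e.g. an SSD) and the reset interval is an illustrative assumption, not the paper's tuned value.

```python
import cv2

def detect_hands(frames, detect, reinit_every=10):
    """Hybrid detector/tracker loop: a slow, accurate detector initializes
    and periodically resets a fast tracker.
    detect(frame) -> (x, y, w, h) bounding box, or None if no hand found.
    Requires opencv-contrib-python for cv2.legacy.TrackerMedianFlow_create."""
    tracker, boxes = None, []
    for i, frame in enumerate(frames):
        if tracker is None or i % reinit_every == 0:
            box = detect(frame)                       # slow, accurate pass
            if box is not None:
                tracker = cv2.legacy.TrackerMedianFlow_create()
                tracker.init(frame, box)
            else:
                tracker = None
        else:
            ok, box = tracker.update(frame)           # fast tracking pass
            if not ok:
                tracker, box = None, None             # force re-detection
        boxes.append(box)
    return boxes
```

The speed gain in the abstract comes from running the expensive detector on only a fraction of frames while the tracker carries the box between detections.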
Galaxies in the reionization era have been shown to have prominent [OIII]+H$\beta$ emission. Little is known about the gas conditions and radiation field of this population, making it challenging to interpret the spectra emerging at $z\gtrsim6$. Motivated by this shortcoming, we have initiated a large MMT spectroscopic survey identifying rest-frame optical emission lines in 227 intense [OIII] emitting galaxies at $1.3<z<2.4$. This sample complements the MOSDEF and KBSS surveys, extending to much lower stellar masses ($10^7-10^8 M_\odot$) and larger specific star formation rates ($5-300$ Gyr$^{-1}$), providing a window on galaxies directly following a burst or recent upturn in star formation. The hydrogen ionizing production efficiency ($\xi_{\rm{ion}}$) is found to increase with the [OIII] EW, in a manner similar to that found in local galaxies by Chevallard et al. (2018). We describe how this relationship helps explain the anomalous success rate in identifying Ly$\alpha$ emission in $z\gtrsim7$ galaxies with strong [OIII]+H$\beta$ emission. We probe the impact of the intense radiation field on the ISM using O32 and Ne3O2, two ionization-sensitive indices. Both are found to scale with the [OIII] EW, revealing extreme ionization conditions not commonly seen in older and more massive galaxies. In the most intense line emitters, the indices have very large average values (O32 $=9.1$, Ne3O2 $=0.5$) that have been shown to be linked to ionizing photon escape. We discuss implications for the nature of galaxies most likely to have O32 values associated with significant LyC escape. Finally we consider the optimal strategy for JWST spectroscopic investigations of galaxies at $z\gtrsim10$ where the strongest rest-frame optical lines are no longer visible with NIRSpec.
astrophysics
We derive the nucleon-nucleon interaction from the Skyrme model using second order perturbation theory and the dipole approximation to skyrmion dynamics. Unlike previous derivations, our derivation accounts for the non-trivial kinetic and potential parts of the skyrmion-skyrmion interaction lagrangian and how they couple in the quantum calculation. We derive the eight low energy interaction potentials and compare them with the phenomenological Paris model, finding qualitative agreement in seven cases. This is a substantial improvement on previous calculations and serves as an excellent starting point for describing the nucleon-nucleon interaction from the Skyrme model.
high energy physics theory
We propose a sparse grids based adaptive noise reduction strategy for electrostatic particle-in-cell (PIC) simulations. Our approach is based on the key idea of relying on sparse grids instead of a regular grid in order to increase the number of particles per cell for the same total number of particles, as first introduced in Ricketson and Cerfon (Plasma Phys. and Control. Fusion, 59(2), 024002). Adopting a new filtering perspective for this idea, we construct the algorithm so that it can be easily integrated into high performance large-scale PIC code bases. Unlike the physical and Fourier domain filters typically used in PIC codes, our approach automatically adapts to mesh size, number of particles per cell, smoothness of the density profile and the initial sampling technique. Thanks to the truncated combination technique, we can reduce the larger grid-based error of the standard sparse grids approach for non-aligned and non-smooth functions. We propose a heuristic based on formal error analysis for selecting the optimal truncation parameter at each time step, and develop a natural framework to minimize the total error in sparse PIC simulations. We demonstrate its efficiency and performance by means of two test cases: the diocotron instability in two dimensions, and the three-dimensional electron dynamics in a Penning trap. Our run time performance studies indicate that our new scheme can provide significant speedup and memory reduction as compared to regular PIC for achieving comparable accuracy in the charge density deposition.
physics
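To make the combination idea concrete, here is a sketch of the classical (untruncated) 2D combination technique applied to deposited charge densities. `deposit(l1, l2)` is a user-supplied routine returning the density from a level-(l1, l2) anisotropic component grid interpolated onto a common fine grid; the paper's truncated, adaptively tuned variant is not reproduced here.

```python
def combination_technique_2d(deposit, n):
    """Classical 2D sparse-grid combination technique:
        rho_c = sum_{l1+l2=n} rho_{l1,l2} - sum_{l1+l2=n-1} rho_{l1,l2}.
    Each component grid is coarse in one direction, so it holds many more
    particles per cell than a regular grid with the same total particles,
    which is the source of the noise reduction."""
    rho = 0.0
    for l1 in range(n + 1):      # grids with l1 + l2 = n, weight +1
        rho = rho + deposit(l1, n - l1)
    for l1 in range(n):          # grids with l1 + l2 = n - 1, weight -1
        rho = rho - deposit(l1, n - 1 - l1)
    return rho
```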