One of the major challenges in stochastic thermodynamics is to compute the distributions of stochastic observables for small-scale systems for which fluctuations play a significant role. Hitherto, much theoretical and experimental research has focused on systems composed of passive Brownian particles. In this paper, we study the heat fluctuations in a system of interacting active particles. Specifically, we consider a one-dimensional harmonic chain of $N$ active Ornstein-Uhlenbeck particles, with the chain ends connected to heat baths of different temperatures. We compute the moment-generating function for the heat flow in the steady state. We employ our general framework to explicitly compute the moment-generating function for two example single-particle systems. Further, we analytically obtain the scaled cumulants for the heat flow of the chain. Numerical Langevin simulations confirm the long-time analytical expressions for the first and second cumulants of the heat flow for a two-particle chain.
condensed matter
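The abstract above mentions numerical Langevin simulations for a two-particle chain. A minimal, hedged sketch of such a check is given below; it is not the authors' code, and the overdamped dynamics, the Sekimoto heat convention, and all parameter values are assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters in reduced units (not taken from the paper)
gamma, k = 1.0, 1.0          # friction and spring constant
tau, D = 0.5, 0.2            # active OU correlation time and noise strength
T1, T2 = 2.0, 1.0            # bath temperatures at the two chain ends
dt, n_steps, n_traj = 1e-3, 100_000, 400

x = np.zeros((n_traj, 2))    # positions of the two particles
a = np.zeros((n_traj, 2))    # active Ornstein-Uhlenbeck forces
Q1 = np.zeros(n_traj)        # heat absorbed from the hot bath (particle 1)
T = np.array([T1, T2])

for _ in range(n_steps):
    f = np.stack([-k * (x[:, 0] - x[:, 1]) + a[:, 0],
                  -k * (x[:, 1] - x[:, 0]) + a[:, 1]], axis=1)
    xi = rng.normal(size=x.shape) * np.sqrt(2 * gamma * T * dt)
    dx = (f * dt + xi) / gamma               # overdamped Euler-Maruyama step
    # Sekimoto heat from bath 1: dQ = (xi - gamma*dx) o dx = -f o dx, with
    # the non-bath force evaluated at the Stratonovich midpoint
    x_mid = x + 0.5 * dx
    f1_mid = -k * (x_mid[:, 0] - x_mid[:, 1]) + a[:, 0]
    Q1 += -f1_mid * dx[:, 0]
    # update the active OU forces: tau * da/dt = -a + eta
    a += -(a / tau) * dt + rng.normal(size=a.shape) * np.sqrt(2 * D * dt) / tau
    x += dx

t_total = n_steps * dt
print("first scaled cumulant  <Q>/t   :", Q1.mean() / t_total)
print("second scaled cumulant var(Q)/t:", Q1.var() / t_total)
```

With T1 > T2 the first scaled cumulant should come out positive, i.e. heat flows on average from the hot end into the chain.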
The task of video prediction is to forecast the next frames given some previous frames. Despite much recent progress, this task remains challenging, mainly due to the high nonlinearity in the spatial domain. To address this issue, we propose a novel architecture, the Frequency Domain Transformer Network (FDTN), an end-to-end learnable model that estimates and uses the transformations of the signal in the frequency domain. Experimental evaluations show that this approach can outperform some widely used video prediction methods, such as the Video Ladder Network (VLN) and Predictive Gated Pyramids (PGP).
computer science
Topological data analysis computes and analyses topological features of a point cloud by constructing and studying a simplicial representation of the underlying topological structure. The enthusiasm that followed the initial successes of topological data analysis was curbed by the computational cost of constructing such simplicial representations. The lazy witness complex is a computationally feasible approximation of the underlying topological structure of a point cloud. It is built in reference to a subset of points, called landmarks, rather than considering all the points as in the \v{C}ech and Vietoris-Rips complexes. The choice and the number of landmarks dictate the effectiveness and efficiency of the approximation. We adopt the notion of an $\epsilon$-cover to define an $\epsilon$-net. We prove that an $\epsilon$-net, as a choice of landmarks, is an $\epsilon$-approximate representation of the point cloud and that the induced lazy witness complex is a $3$-approximation of the induced Vietoris-Rips complex. Furthermore, we propose three algorithms to construct $\epsilon$-net landmarks and establish their relationship with existing landmark selection algorithms. We empirically validate our theoretical claims, and comparatively evaluate the effectiveness, efficiency, and stability of the proposed algorithms on synthetic and real datasets.
computer science
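For context, below is a minimal sketch of one standard greedy (farthest-point) construction of an $\epsilon$-net; the paper proposes three algorithms, which this sketch does not reproduce:

```python
import numpy as np

def epsilon_net(points, eps, seed=0):
    """Greedy farthest-point construction of an eps-net: every point lies
    within eps of some landmark, and landmarks are pairwise > eps apart."""
    rng = np.random.default_rng(seed)
    n = len(points)
    landmarks = [rng.integers(n)]
    # distance from every point to its nearest chosen landmark
    d = np.linalg.norm(points - points[landmarks[0]], axis=1)
    while d.max() > eps:
        nxt = int(d.argmax())            # farthest point becomes a landmark
        landmarks.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return points[landmarks]

pts = np.random.default_rng(1).normal(size=(2000, 2))
lm = epsilon_net(pts, eps=0.5)
print(len(lm), "landmarks cover", len(pts), "points at scale 0.5")
```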
The Fermi Gamma-Ray Space Telescope has provided evidence for diffuse gamma-ray emission in the central parts of the Milky Way and the Andromeda galaxy. This excess has been interpreted either as dark matter annihilation emission or as emission from thousands of millisecond pulsars (MSPs). We have recently shown that old massive globular clusters may move towards the center of the Galaxy by dynamical friction and carry within them enough MSPs to account for the observed gamma-ray excess. In this paper we revisit the MSP scenario for the Andromeda galaxy, by modeling the formation and disruption of its globular cluster system. We find that our model predicts gamma-ray emission $\sim 2-3$ times larger than for the Milky Way, but still nearly an order of magnitude smaller than the observed Fermi excess in Andromeda. Our MSP model can reproduce the observed excess only by assuming a number of old clusters $\sim 8$ times larger than inferred from galaxy scaling relations. To explain the observations we require either that Andromeda deviates significantly from the scaling relations, or that a large part of its high-energy emission comes from additional sources.
astrophysics
We report on measurements of quantized conductance in gate-defined quantum point contacts in bilayer graphene that allow the observation of subband splittings due to spin-orbit coupling. The size of this splitting can be tuned from 40 to 80 $\mu$eV by the displacement field. We assign this gate-tunable subband-splitting to a gap induced by spin-orbit coupling of Kane-Mele type, enhanced by proximity effects due to the substrate. We show that this spin-orbit coupling gives rise to a complex pattern in low perpendicular magnetic fields, increasing the Zeeman splitting in one valley and suppressing it in the other one. In addition, we observe the existence of a spin-polarized channel of 6 e$^2$/h at high in-plane magnetic field and of signatures of interaction effects at the crossings of spin-split subbands of opposite spins at finite magnetic field.
condensed matter
Ferroelectrics are characterized by domain structures as are other ferroics. At the nanoscale though, ferroelectrics may exhibit non-trivial or exotic polarization configurations under proper electrostatic and elastic conditions. These polar states may possess emerging properties not present in the bulk compounds and are promising for technological applications. Here, the observation of rotational polarization topologies at the nanoscale by means of aberration-corrected scanning transmission electron microscopy is reported in BaTiO3/SrTiO3 superlattices grown on cubic SrTiO3(001). The transition from a highly homogeneous polarization state to the formation of rotational nanodomains has been achieved by controlling the superlattice period while maintaining compressive clamping of the superlattice to the cubic SrTiO3 substrate. The nanodomains revealed in BaTiO3 prove that its nominal tetragonal structure also allows rotational polar textures.
condensed matter
In contrast with scalar-weighted networks, where bipartite consensus can be achieved if and only if the underlying signed network is structurally balanced, structural balance is no longer graph-theoretically equivalent to bipartite consensus in the case of signed matrix-weighted networks. To re-establish the relationship between the network structure and bipartite consensus, we introduce the non-trivial balancing set: a set of edges whose sign negation transforms a structurally imbalanced network into a structurally balanced one, and whose associated weight matrices have a non-trivial intersection of null spaces. We show that necessary and/or sufficient conditions for bipartite consensus on matrix-weighted networks can be characterized by the uniqueness of the associated non-trivial balancing set; meanwhile, the contribution of the associated non-trivial intersection of null spaces to the steady state of the matrix-weighted network is examined. Moreover, for matrix-weighted networks with a positive-negative spanning tree, a necessary and sufficient condition for bipartite consensus using the non-trivial balancing set is obtained. Simulation examples are provided to demonstrate the theoretical results.
electrical engineering and systems science
We propose a categorical foundation of gradient-based machine learning algorithms in terms of lenses, parametrised maps, and reverse derivative categories. This foundation provides a powerful explanatory and unifying framework: it encompasses a variety of gradient descent algorithms such as ADAM, AdaGrad, and Nesterov momentum, as well as a variety of loss functions such as MSE and Softmax cross-entropy, shedding new light on their similarities and differences. Our approach also generalises beyond neural networks (modelled in categories of smooth maps), accounting for other structures relevant to gradient-based learning such as boolean circuits. Finally, we develop a novel implementation of gradient-based learning in Python, informed by the principles introduced by our framework.
computer science
Recurrent neural networks (RNNs) are powerful dynamical models for data with complex temporal structure. However, training RNNs has traditionally proved challenging due to exploding or vanishing gradients. RNN models such as LSTMs and GRUs (and their variants) significantly mitigate these issues associated with training by introducing various types of gating units into the architecture. While these gates empirically improve performance, how the addition of gates influences the dynamics and trainability of GRUs and LSTMs is not well understood. Here, we take the perspective of studying randomly initialized LSTMs and GRUs as dynamical systems, and ask how the salient dynamical properties are shaped by the gates. We leverage tools from random matrix theory and mean-field theory to study the state-to-state Jacobians of GRUs and LSTMs. We show that the update gate in the GRU and the forget gate in the LSTM can lead to an accumulation of slow modes in the dynamics. Moreover, the GRU update gate can poise the system at a marginally stable point. The reset gate in the GRU and the output and input gates in the LSTM control the spectral radius of the Jacobian, and the GRU reset gate also modulates the complexity of the landscape of fixed-points. Furthermore, for the GRU we obtain a phase diagram describing the statistical properties of fixed-points. We also provide a preliminary comparison of training performance to the various dynamical regimes realized by varying hyperparameters. Looking to the future, we have introduced a powerful set of techniques which can be adapted to a broad class of RNNs, to study the influence of various architectural choices on dynamics, and potentially motivate the principled discovery of novel architectures.
computer science
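A small numerical illustration of the central object above: the state-to-state Jacobian of a randomly initialized GRU and its spectral radius. The gain parameter g and the finite-difference estimate are conveniences of this sketch, not the paper's random-matrix/mean-field machinery:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(h, x, p):
    """One GRU update h -> h' for fixed input x and parameters p."""
    Uz, Ur, Uh, Wz, Wr, Wh = p
    z = sigmoid(Wz @ x + Uz @ h)          # update gate
    r = sigmoid(Wr @ x + Ur @ h)          # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

n, rng = 200, np.random.default_rng(0)
g = 1.5  # gain of the recurrent weights; a mean-field-style control parameter
p = [rng.normal(0, g / np.sqrt(n), (n, n)) for _ in range(3)] + \
    [rng.normal(0, 1 / np.sqrt(n), (n, n)) for _ in range(3)]
h0, x0 = rng.normal(0, 1, n), rng.normal(0, 1, n)

# state-to-state Jacobian d h_{t+1} / d h_t by central differences
eps, J = 1e-5, np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = eps
    J[:, j] = (gru_step(h0 + e, x0, p) - gru_step(h0 - e, x0, p)) / (2 * eps)

print("spectral radius of the Jacobian:", np.abs(np.linalg.eigvals(J)).max())
```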
Random sampling has been used to find low-rank structure and to build fast direct solvers for multiscale partial differential equations of various types. In this work, we design an accelerated Schwarz method for radiative transfer equations that makes use of approximate local solution maps constructed offline via a random sampling strategy. Numerical examples demonstrate the accuracy, robustness, and efficiency of the proposed approach.
mathematics
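As a hedged illustration of the random-sampling primitive mentioned in the first sentence, here is a standard randomized range finder for low-rank structure; the paper's offline construction of local solution maps for radiative transfer is more involved and is not reproduced here:

```python
import numpy as np

def randomized_range(A, rank, oversample=10, rng=None):
    """Halko-Martinsson-Tropp-style range finder: sample the column space
    of A with Gaussian test vectors, then orthonormalize."""
    rng = rng or np.random.default_rng(0)
    Y = A @ rng.normal(size=(A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(Y)
    return Q[:, :rank]

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 40)) @ rng.normal(size=(40, 500))  # exactly rank 40
Q = randomized_range(A, rank=40)
print("approximation error:", np.linalg.norm(A - Q @ (Q.T @ A)))  # ~ 0
```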
Beam search is an effective and widely used decoding algorithm in many sequence-to-sequence (seq2seq) text generation tasks. However, in open-ended text generation, beam search is often found to produce repetitive and generic texts, so sampling-based decoding algorithms like top-k sampling and nucleus sampling are usually preferred. Standard seq2seq models suffer from label bias due to their locally normalized probability formulation. This paper provides a series of empirical evidence that label bias is a major reason for such degenerate behaviors of beam search. By combining locally normalized maximum likelihood estimation and globally normalized sequence-level training, label bias can be reduced with almost no sacrifice in perplexity. To quantitatively measure label bias, we test the model's ability to discriminate the groundtruth text from a set of context-agnostic distractors. We conduct experiments on large-scale response generation datasets. Results show that beam search can produce more diverse and meaningful texts with our approach, in terms of both automatic and human evaluation metrics. Our analysis also suggests several directions for future work towards the grand challenge of open-ended text generation.
computer science
An axion-like particle (ALP) with mass $m_\phi \sim 10^{-15}$ eV oscillates with frequency $\sim$1 Hz. This mass scale lies in an open window of astrophysical constraints, and appears naturally as a consequence of grand unification (GUT) in string/M-theory. However, with a GUT-scale decay constant such an ALP overcloses the Universe, and cannot solve the strong CP problem. In this paper, we present a two-axion model in which the 1 Hz ALP constitutes the entirety of the dark matter (DM) while the QCD axion solves the strong CP problem but contributes negligibly to the DM relic density. The mechanism to achieve the correct relic densities relies on low-scale inflation ($m_\phi \lesssim H_{\rm inf}\lesssim 1$ MeV), and we present explicit realisations of such a model. The scale in the axion potential leading to the 1 Hz axion generates a value for the strong CP phase which oscillates around $\bar{\theta}_{\rm QCD}\sim 10^{-12}$, within reach of the proton storage ring electric dipole moment experiment. The 1 Hz axion is also within reach of near-future laboratory and astrophysical searches.
high energy physics phenomenology
We study the 3-parametric family of vertex operator algebras based on the unitary Grassmannian coset CFT $\mathfrak{u}(M+N)_k/(\mathfrak{u}(M)_k \times \mathfrak{u}(N)_k)$. This VOA serves as a basic building block for a large class of cosets and generalizes the $\mathcal{W}_\infty$ algebra. We analyze representations and their characters in detail and find surprisingly simple character formulas for the representations in the generic parameter regime that admit an elegant combinatorial formulation. We also discuss truncations of the algebra and give a conjectural formula for the complete set of truncation curves. We develop a theory of gluing for these algebras in order to build more complicated coset and non-coset algebras. We demonstrate the power of this technology with some examples and show in particular that the $\mathcal{N}=2$ supersymmetric Grassmannian can be obtained by gluing three bosonic Grassmannian algebras in a loop. We finally speculate about the tantalizing possibility that this algebra is a specialization of an even larger 4-parametric family of algebras exhibiting pentality symmetry. Specialization of this conjectural family should include both the unitary Grassmannian family as well as the Lagrangian Grassmannian family of VOAs which interpolates between the unitary and the orthosymplectic cosets.
high energy physics theory
The bulk photovoltaic effect (BPVE) refers to current generation due to illumination by light in a homogeneous bulk material lacking inversion symmetry. In addition to the intensively studied shift current, the ballistic current, which originates from asymmetric carrier generation due to scattering processes, also constitutes an important contribution to the overall kinetic model of the BPVE. In this letter, we use a perturbative approach to derive a formula for the ballistic current resulting from the intrinsic electron-phonon scattering in a form amenable to first-principles calculation. We then implement the theory and calculate the ballistic current of the prototypical BPVE material \ch{BaTiO3} using quantum-mechanical density functional theory. The magnitude of the ballistic current is comparable to that of shift current, and the total spectrum (shift plus ballistic) agrees well with the experimentally measured photocurrents. Furthermore, we show that the ballistic current is sensitive to structural change, which could benefit future photovoltaic materials design.
condensed matter
Squeezed light consists of optical beams with variance below the shot-noise level. Squeezed states are a key resource for photonic quantum technologies: they can be used to achieve better precision measurements, to improve security in quantum key distribution channels, and as a fundamental resource for quantum computation. To date, the majority of squeezed-light experiments have relied on non-linear crystals and discrete optical components, as the integration of quadrature squeezed states of light in a nanofabrication-friendly material is a challenging technological task. Here we measure 0.45 dB of GHz-broad quadrature squeezing produced by a ring resonator integrated on a Silicon Nitride photonic chip that we fabricated with CMOS compatible steps. The result corrected for the off-chip losses is estimated to be 1 dB below the shot-noise level. We identify and verify that the current results are limited by excess noise produced in the chip, and propose ways to reduce it. Calculations suggest that an improvement in the optical properties of the chip, achievable with existing technology, can enable scalable quantum technologies based on light.
quantum physics
We determine the asymptotic growth rate of the diameter of the random hyperbolic surfaces constructed by Brooks and Makover. This model consists of a uniform gluing of $2n$ hyperbolic ideal triangles along their sides followed by a compactification to get a random hyperbolic surface of genus roughly $n/2$. We show that the diameter of those random surfaces is asymptotic to $2 \log n$ in probability as $n \to \infty$.
mathematics
Two widely used but distinct approaches to the dynamics of open quantum systems are the Nakajima-Zwanzig and the time-convolutionless quantum master equations. Although both describe identical quantum evolutions with strong memory effects, the first uses a time-nonlocal memory kernel $\mathcal{K}$, whereas the second achieves the same using a time-local generator $\mathcal{G}$. Here we show that the two are connected by a simple yet general fixed-point relation: $\mathcal{G} = \hat{\mathcal{K}}[\mathcal{G}]$. This allows one to extract nontrivial relations between the two completely different ways of computing the time-evolution and combine their strengths. We first discuss the stationary generator, which enables a Markov approximation that is both nonperturbative and completely positive for a large class of evolutions. We show that this generator is not equal to the low-frequency limit of the memory kernel, but additionally "samples" it at nonzero characteristic frequencies. This clarifies the subtle roles of frequency dependence and semigroup factorization in existing Markov approximation strategies. Second, we prove that the fixed-point equation sums up the time-domain gradient / Moyal expansion for the time-nonlocal quantum master equation, providing nonperturbative insight into the generation of memory effects. Finally, we show that the fixed-point relation enables a direct iterative numerical computation of both the stationary and the transient generator from a given memory kernel. For the transient generator this produces non-semigroup approximations which are constrained to be both initially and asymptotically accurate at each iteration step.
quantum physics
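A scalar toy version of the fixed-point relation $\mathcal{G} = \hat{\mathcal{K}}[\mathcal{G}]$, with an assumed exponential memory kernel, illustrating the iterative computation of the stationary generator described above:

```python
import numpy as np

# Toy scalar "memory kernel" K(t) = -a * exp(-b t), whose transform is
# K_hat(g) = int_0^inf K(t) exp(g t) dt = -a / (b - g)   for g < b.
a, b = 1.0, 2.0
K_hat = lambda g: -a / (b - g)

g = K_hat(0.0)          # zeroth iterate: the low-frequency limit of the kernel
for _ in range(30):
    g = K_hat(g)        # fixed-point iteration  g <- K_hat(g)

exact = (b - np.sqrt(b * b + 4 * a)) / 2   # stable root of g(b - g) = -a
print(g, exact)         # iteration converges to the stationary generator
```

Note that the zeroth iterate is the low-frequency limit $-a/b = -0.5$, while the iteration converges to $\approx -0.414$: in this toy, too, the stationary generator "samples" the kernel away from zero frequency.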
We give a short review of gamma-ray burst (GRB) observations with the Konus-WIND (KW) experiment, which has provided continuous all-sky coverage in the 20 keV-15 MeV band from 1994 to the present. The recent results include a systematic study of GRBs with known redshifts and a search for ultra-long GRBs in the KW archival data. We also discuss the KW capabilities for multi-messenger astronomy.
astrophysics
The Marchenko method retrieves the responses to virtual sources in the subsurface, accounting for all orders of multiples. The method is based on two integral representations for focusing and Green's functions. In discretized form these integrals are represented by finite summations over the acquisition geometry. Consequently, the method requires ideal geometries of regularly sampled and co-located sources and receivers. However, a recent study showed that this restriction can, in theory, be relaxed by deconvolving the irregularly-sampled results with certain point spread functions (PSFs). The results are then reconstructed as if they were acquired using a perfect geometry. Here, the iterative Marchenko scheme is adapted to include these PSFs, showing how imperfect sampling can be accounted for in practical situations. Next, the new methodology is tested on a 2D numerical example. The results show a clear improvement between the proposed scheme and the standard iterative scheme. By removing the requirement for perfect geometries, the Marchenko method can be more widely applied to field data.
physics
Quantitative lung measures derived from computed tomography (CT) have been demonstrated to improve prognostication in coronavirus disease (COVID-19) patients, but are not part of the clinical routine since the required manual segmentation of lung lesions is prohibitively time-consuming. We propose a new fully automated deep learning framework for rapid quantification and differentiation between lung lesions in COVID-19 pneumonia from both contrast and non-contrast CT images using convolutional Long Short-Term Memory (ConvLSTM) networks. Utilizing the expert annotations, model training was performed 5 times with separate hold-out sets using 5-fold cross-validation to segment ground-glass opacity and high opacity (including consolidation and pleural effusion). The performance of the method was evaluated on CT data sets from 197 patients with a positive reverse transcription polymerase chain reaction test result for SARS-CoV-2. Strong agreement between expert manual and automatic segmentation was obtained for lung lesions, with a Dice score coefficient of 0.876 $\pm$ 0.005 and excellent correlations of 0.978 and 0.981 for ground-glass opacity and high opacity volumes, respectively. In the external validation set of 67 patients, the Dice score coefficient was 0.767 $\pm$ 0.009, with excellent correlations of 0.989 and 0.996 for ground-glass opacity and high opacity volumes. Computations for a CT scan comprising 120 slices were performed in under 2 seconds on a personal computer equipped with an NVIDIA Titan RTX graphics processing unit. Therefore, our deep learning-based method allows rapid fully-automated quantitative measurement of pneumonia burden from CT and may generate results with an accuracy similar to that of expert readers.
electrical engineering and systems science
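For reference, the agreement metric quoted above can be computed as follows; this is a minimal sketch of the Dice score between binary lesion masks on mock data, not the ConvLSTM model itself:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(0)
truth = rng.random((120, 256, 256)) > 0.7      # mock lesion mask, 120 slices
pred = truth.copy()
flip = rng.random(truth.shape) < 0.05          # corrupt 5% of voxels
pred[flip] = ~pred[flip]
print("Dice:", dice(pred, truth))
```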
For any $E_\infty$ ring spectrum $E$, we show that there is an algebra $\mathrm{Pow}(E)$ of stable power operations that acts naturally on the underlying spectrum of any $E$-algebra. Further, we show that there are maps of rings $E \to \mathrm{Pow}(E) \to \mathrm{End}(E)$, where the latter determines a restriction from power operations to stable operations in the cohomology of spaces. In the case where $E$ is the mod-$p$ Eilenberg-Mac Lane spectrum, this realizes a natural quotient from Mandell's algebra of generalized Steenrod operations to the mod-$p$ Steenrod algebra. More generally, this arises as part of a classification of endomorphisms of representable functors from an $\infty$-category $\mathcal{C}$ to spectra, with particular attention to the case where $\mathcal{C}$ is an $\mathcal{O}$-monoidal $\infty$-category.
mathematics
We propose here that the lithium decrease at super-solar metallicities observed in high resolution spectroscopic surveys can be explained by the interplay of mixed populations coming from the inner regions of the Milky Way disc. The lower lithium content of these stars is a consequence of inside-out disc formation plus radial migration. In this framework, local stars with super-solar metallicities would have migrated to the solar vicinity and depleted their original lithium during their travel time. To arrive at this result, we took advantage of the AMBRE catalog of lithium abundances combined with chemical evolution models which take into account the contribution to the lithium enrichment by different nucleosynthetic sources. A large proportion of migrated stars can explain the observed lower lithium abundance at super-solar metallicities. We stress that, at present, no stellar model is able to predict Li-depletion for such super-solar metallicity stars, and the solar Li-depletion has to be assumed. In addition, there is no solid quantitative estimate of the proportion of migrated stars in the solar neighborhood or of their travel time. Our results illustrate how important it is to properly include radial migration when comparing chemical evolution models to observations, and that in this case the lithium decrease at larger metallicities does not necessarily imply that stellar yields have to be modified, contrary to previous claims in the literature.
astrophysics
Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings. This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets---encoding the prior that variations in targets are driven by a compact set of underlying factors. As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond existing static classification applications to multivariate sequence forecasting, verifying their advantage on both linear and nonlinear recurrent architectures---thereby underscoring the further generality of this framework beyond feedforward instantiations.
statistics
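A minimal linear target-embedding autoencoder in PyTorch, sketching the joint objective described above (reconstruction of the high-dimensional target plus predictability of the latent code from features). Layer sizes, the loss weight lam, and the synthetic data are assumptions of this sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearTEA(nn.Module):
    """Linear target-embedding autoencoder: the latent code is jointly
    trained to reconstruct the target and to be predictable from features."""
    def __init__(self, d_x, d_y, d_z):
        super().__init__()
        self.encode = nn.Linear(d_y, d_z)    # target  -> latent
        self.decode = nn.Linear(d_z, d_y)    # latent  -> target
        self.predict = nn.Linear(d_x, d_z)   # feature -> latent

    def loss(self, x, y, lam=1.0):
        z = self.encode(y)
        recon = F.mse_loss(self.decode(z), y)    # auxiliary reconstruction task
        align = F.mse_loss(self.predict(x), z)   # predictability of the code
        return recon + lam * align

    def forward(self, x):                        # test time: no target needed
        return self.decode(self.predict(x))

d_x, d_y, d_z = 10, 100, 5
model = LinearTEA(d_x, d_y, d_z)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(256, d_x)
y = x @ torch.randn(d_x, d_y)                    # synthetic high-dim targets
for _ in range(200):
    opt.zero_grad()
    l = model.loss(x, y)
    l.backward()
    opt.step()
print("final joint loss:", float(l))
```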
We present all-multiplicity formulae for the tree-level S-matrix of gluons in self-dual radiative backgrounds, which are chiral, asymptotically flat gauge fields characterised by their free radiative data. Twistor theory captures the underlying integrability of these backgrounds, and the tree-level scattering amplitudes are written as integrals over the moduli space of holomorphic maps from the Riemann sphere to twistor space, with the degree of the map related to the helicity configuration of the external gluons. In the MHV sector, our formula is derived from first principles; for general helicities the formulae are motivated by twistor string theory and shown to pass several consistency tests. Unlike amplitudes on a trivial background, there are residual integrals due to the functional freedom in the self-dual scattering background, but for scattering of momentum eigenstates we are able to do many of these explicitly and even more is possible in the special case of plane wave backgrounds. In general, the number of these integrals is always less than expected from standard perturbation theory, but matches the number associated with space-time MHV rules in a self-dual background field, which we develop for self-dual plane wave backgrounds.
high energy physics theory
Geostatistical modeling for continuous point-referenced data has been extensively applied to neuroimaging because it produces efficient and valid statistical inference. However, diffusion tensor imaging (DTI), a neuroimaging modality characterizing brain structure, produces a positive definite (p.d.) matrix for each voxel. Current geostatistical modeling has not been extended to p.d. matrices because properly introducing spatial dependence among positive definite matrices is challenging. In this paper, we use the spatial Wishart process, a spatial stochastic process (random field) in which each p.d. matrix-variate marginally follows a Wishart distribution, and spatial dependence between random matrices is induced by latent Gaussian processes. This process is valid on an uncountable collection of spatial locations and is almost surely continuous, leading to a reasonable means of modeling spatial dependence. Motivated by a DTI dataset of cocaine users, we propose a spatial matrix-variate regression model based on the spatial Wishart process. A problematic issue is that the spatial Wishart process has no closed-form density function. Hence, we propose approximation methods to obtain a feasible working model. A local likelihood approximation method is also applied to achieve fast computation. The simulation studies and real data analysis demonstrate that the working model produces reliable inference and improved performance compared to other methods.
statistics
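A small sketch of the constructive definition above: latent Gaussian processes induce spatial dependence between matrices that are marginally Wishart. The exponential covariance kernel and all parameter values are assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, nu, ell = 100, 3, 5, 0.2       # sites, matrix dim, d.o.f., GP range

s = np.linspace(0, 1, n)
C = np.exp(-np.abs(s[:, None] - s[None, :]) / ell)   # exponential GP covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

# nu * p independent latent GPs over the n sites
g = np.einsum('ij,jkp->ikp', L, rng.normal(size=(n, nu, p)))

# W(s_i) = sum_k g_k(s_i) g_k(s_i)^T: marginally Wishart_p(nu, I) at each
# site, spatially dependent through the shared latent Gaussian processes
W = np.einsum('nkp,nkq->npq', g, g)
print(W.shape)                        # (100, 3, 3): one p.d. matrix per site
print(np.linalg.eigvalsh(W[0]))       # eigenvalues positive (a.s., nu >= p)
```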
We examine the mechanisms by which atmosphere can be eroded by giant impacts onto Earth-like planets with thin atmospheres, using 3D smoothed particle hydrodynamics simulations with sufficient resolution to directly model the fate of low-mass atmospheres. We present a simple scaling law to estimate the fraction lost for any impact angle and speed in this regime. In the canonical Moon-forming impact, only around 10% of the atmosphere would have been lost from the immediate effects of the collision. There is a gradual transition from removing almost none to almost all of the atmosphere for a grazing impact as it becomes more head-on or increases in speed, including complex, non-monotonic behaviour at low impact angles. In contrast, for head-on impacts, a slightly greater speed can suddenly remove much more atmosphere. Our results broadly agree with the application of 1D models of local atmosphere loss to the ground speeds measured directly from our simulations. However, previous analytical models of shock-wave propagation from an idealised point-mass impact significantly underestimate the ground speeds and hence the total erosion. The strong dependence on impact angle and the interplay of multiple non-linear and asymmetrical loss mechanisms highlight the need for 3D simulations in order to make realistic predictions.
astrophysics
We study horizon shells and soldering freedom for extreme black holes and how supertranslation-like Bondi-Metzner-Sachs (BMS) symmetries appear as soldering transformations. Further, for a null shell placed infinitesimally close to the horizon of an extreme Reissner-Nordstr\"{o}m (RN) black hole, we show that superrotation-like symmetries also arise as soldering freedom. Next, considering the interaction of impulsive gravitational waves supported at the horizon shell with test particles, we study how the "memory" (or the imprints) of BMS-like symmetries gets encoded in the geodesics (test particles) crossing the shell. Our study shows that timelike test particles get displaced from their initial plane when they cross the horizon shell. For a null geodesic congruence crossing the horizon shell, the optical tensors corresponding to the congruence suffer jumps. In both cases, the changes are induced by BMS parameters that constitute the gravity wave and matter degrees of freedom of the shell.
high energy physics theory
We present a sequence of driven-dissipative protocols for controlling cold atoms in tilted optical lattices. These experimentally accessible examples are templates that demonstrate how dissipation can be used to manipulate quantum many-body systems. We consider bosonic atoms trapped in a tilted optical lattice, immersed in a superfluid bath, and excited by coherent Raman lasers. With these ingredients, we are able to controllably transport atoms in the lattice and produce self-healing quantum states: a Mott insulator and the topologically ordered spin-1 AKLT state.
condensed matter
Several classification methods assume that the underlying distributions follow tree-structured graphical models. Indeed, trees capture statistical dependencies between pairs of variables, which may be crucial to attain low classification errors. The resulting classifier is linear in the log-transformed univariate and bivariate densities that correspond to the tree edges. In practice, however, observed data may not be well approximated by trees. Yet, motivated by the importance of pairwise dependencies for accurate classification, here we propose to approximate the optimal decision boundary by a sparse linear combination of the univariate and bivariate log-transformed densities. Our proposed approach is semi-parametric in nature: we non-parametrically estimate the univariate and bivariate densities, remove pairs of variables that are nearly independent using the Hilbert-Schmidt independence criterion, and finally construct a linear SVM on the retained log-transformed densities. We demonstrate using both synthetic and real data that our resulting classifier, denoted SLB (Sparse Log-Bivariate density), is competitive with popular classification methods.
statistics
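A compressed sketch of the described pipeline: kernel density estimates of class-conditional univariate and bivariate log-densities, HSIC-based screening of nearly independent pairs, and a linear SVM on the retained features. The bandwidths, the permutation screening rule, and the synthetic data are assumptions, not the authors' choices:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import LinearSVC
from sklearn.metrics.pairwise import rbf_kernel

def hsic(u, v):
    """Biased HSIC estimate between two 1-d samples (RBF kernels)."""
    n = len(u)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_kernel(u[:, None]), rbf_kernel(v[:, None])
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def dependent(u, v, n_perm=20, rng=None):
    """Keep a pair only if its HSIC exceeds every permutation-null value."""
    rng = rng or np.random.default_rng(0)
    return hsic(u, v) > max(hsic(u, rng.permutation(v)) for _ in range(n_perm))

def log_density_features(X_fit, y_fit, X, pairs, bw=0.4):
    cols = []
    for c in (0, 1):                              # class-conditional KDEs
        Xc = X_fit[y_fit == c]
        for j in range(X.shape[1]):               # univariate log-densities
            cols.append(KernelDensity(bandwidth=bw).fit(Xc[:, [j]])
                        .score_samples(X[:, [j]]))
        for i, j in pairs:                        # retained bivariate terms
            cols.append(KernelDensity(bandwidth=bw).fit(Xc[:, [i, j]])
                        .score_samples(X[:, [i, j]]))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n, d = 300, 4
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, 1] += 0.8 * X[:, 0] * (2 * y - 1)     # a class-dependent pairwise term

pairs = [(i, j) for i in range(d) for j in range(i + 1, d)
         if dependent(X[:, i], X[:, j])]
Phi = log_density_features(X, y, X, pairs)
clf = LinearSVC(max_iter=20000).fit(Phi, y)
print("pairs kept:", pairs, "| training accuracy:", clf.score(Phi, y))
```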
We consider the oscillation properties of the remainder term $\Delta(x)$ in the prime number formula for Beurling primes, and their relation to the distribution of the nontrivial zeroes of the Beurling zeta function $\zeta$. The two main directions in this study are obtaining correspondingly high oscillation given the existence of a certain zero $\rho$ of $\zeta$ (a direction of study initiated by Littlewood in 1937 for the Riemann zeta function), and describing a function-theoretical connection between the general oscillation behavior of $\Delta$ on the one hand and the general distribution of nontrivial zeroes on the other (a direction initiated by Phragm\'en and Ingham). As a first part of our study, here we describe results about the behavior, and in particular the fine distribution of zeroes, of $\zeta$. The analysis brings some new results, even for the classical case of the Riemann zeta function. Theorem 4 provides a zero density estimate, which is a complement to known results for the Selberg class; those rely on the Riemann-type functional equation, which we do not assume in the Beurling context. We deduce a variant of a well-known theorem of Tur\'an, extending its range of validity even to rectangles of height only h=2. We extend a zero clustering result of Ramachandra from the Riemann zeta case. A weaker result -- which, on the other hand, is a strong sharpening of the average result from the classic book of Montgomery -- was worked out by Diamond, Montgomery and Vorhauer.
mathematics
We establish a Lie theory for singular subalgebroids, objects which generalize singular foliations to the setting of Lie algebroids. First we carry out the longitudinal version of the theory. For the global one, a guiding example is provided by the holonomy groupoid, which carries a natural diffeological structure in the sense of Souriau. We single out a class of diffeological groupoids satisfying specific properties and introduce a differentiation-integration process under which they correspond to singular subalgebroids. In the regular case, we compare our procedure to the usual integration by Lie groupoids. We also specify the diffeological properties which distinguish the holonomy groupoid from the graph.
mathematics
NEMETODE, a network of low-light video cameras in and around the British Isles, operated in conjunction with the BAA Meteor Section and other groups, monitors the activity of meteors, enabling the precision measurement of radiant positions and the altitudes and geocentric velocities of meteoroids and their solar system orbits. The results from observations of the Quadrantid and December alpha Draconid meteor showers during 2012-2019 are presented and discussed.
astrophysics
We propose that the Ultra-Diffuse Galaxy (UDG) population represents a set of satellite galaxies born in $\sim10^{10}-10^{11}$ M$_\odot$ halos, similar to field dwarfs, which suffer a dramatic reduction in surface brightness due to tidal stripping and heating. This scenario is observationally motivated by the radial alignment of UDGs in Coma as well as the significant dependence of UDG abundance on cluster mass. As a test of this formation scenario, we apply a semi-analytic model describing the change in stellar mass and half-light radius of dwarf satellites, occupying either cored or cuspy halos, to cluster subhalos in the Illustris-dark simulation. Key to this model are results from simulations which indicate that galaxies in cored dark-matter halos expand significantly in response to tidal stripping and heating, whereas galaxies in cuspy halos experience limited size evolution. Our analysis indicates that a population of tidally-stripped dwarf galaxies, residing in cored halos (like those hosting low-surface brightness field dwarfs), is able to reproduce the observed sizes and stellar masses of UDGs in clusters remarkably well.
astrophysics
In recent years, we have witnessed a remarkable surge in the use of shared vehicles in our cities. Shared mobility offers a future of no congestion on busy city roads with increasing populations of travelers, passengers, and drivers. Given the behavioral decision-making of travelers and the shared vehicles' operators, however, the question is "how can we ensure a socially-acceptable assignment between travelers and vehicles?" In other words, how can we design a shared mobility system that assigns each traveler to the "right" vehicle? In this paper, we design a shared mobility market consisting of travelers and vehicles in a transportation network. We formulate a linear programming problem and derive the optimal assignment between travelers and vehicles. In addition, we provide the necessary and sufficient conditions for a stable traveler-vehicle profit allocation. Our goal is twofold: to maximize the social welfare of all travelers with the optimal assignment, and to ensure the feasibility and stability of the traveler-vehicle profit allocation while respecting the decision-making of both the travelers and the vehicles' operators.
electrical engineering and systems science
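A toy version of the optimal traveler-vehicle assignment: with one traveler per vehicle, the welfare-maximizing LP reduces to a linear assignment problem. The utility matrix below is synthetic:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_travelers, n_vehicles = 6, 6
# utility[i, j]: welfare generated when traveler i is matched to vehicle j
utility = rng.uniform(0, 10, (n_travelers, n_vehicles))

rows, cols = linear_sum_assignment(utility, maximize=True)
welfare = utility[rows, cols].sum()
print("optimal assignment:", list(zip(rows, cols)),
      "| total welfare:", round(welfare, 2))
```

For context, a classical result for assignment games (Shapley-Shubik) is that the optimal dual variables of this LP are exactly the stable (core) profit allocations, which is the connection to the stability conditions discussed in the abstract.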
Most often in chemical physics, long range van der Waals surface interactions are approximated by the exact asymptotic result at vanishing distance, the well known additive approximation of London dispersion forces due to Hamaker. However, the description of retardation effects, known since the time of Casimir, is completely neglected for lack of a tractable expression. Here we show that it is possible to describe surface van der Waals forces at arbitrary distances in one single simple equation. The result captures the long sought crossover from non-retarded (London) to retarded (Casimir) interactions, the effect of polarization in condensed media and the full suppression of retarded interactions at large distance. This is achieved with similar accuracy and the same material properties that are used to approximate the Hamaker constant in conventional applications. The results show that at ambient temperature, retardation effects significantly change the power law exponent of the conventional Hamaker result for distances of just a few nanometers.
condensed matter
We study the asymptotic behavior of deterministic, continuous-time imitation dynamics for population games over networks. The basic assumption of this learning mechanism -- encompassing the replicator dynamics -- is that players belonging to a single population exchange information through pairwise interactions, whereby they get aware of the actions played by the other players and the corresponding rewards. Using this information, they can revise their current action, imitating the one of the players they interact with. The pattern of interactions regulating the learning process is determined by a community structure. First, the set of equilibrium points of such network imitation dynamics is characterized. Second, for the class of potential games and for undirected and connected community networks, global asymptotic convergence is proved. In particular, our results guarantee convergence to a Nash equilibrium from every fully supported initial population state in the special case when the Nash equilibria are isolated and fully supported. Examples and numerical simulations are offered to validate the theoretical results and counterexamples are discussed for scenarios when the assumptions on the community structure are not verified.
electrical engineering and systems science
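A minimal special case of the dynamics studied above: the replicator dynamics for a single well-mixed population (i.e., a complete interaction network) in a potential game with a symmetric payoff matrix. The network/community structure of the paper is not modeled in this sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Payoff matrix of a 3-action game; a symmetric matrix yields a potential game
A = np.array([[0.0, 1.0, 0.4],
              [1.0, 0.0, 0.6],
              [0.4, 0.6, 0.2]])

def replicator(t, x):
    r = A @ x                    # reward of each action in population state x
    return x * (r - x @ r)       # imitation/replicator vector field

x0 = np.array([0.5, 0.3, 0.2])   # fully supported initial population state
sol = solve_ivp(replicator, (0, 60), x0)
print("limit state:", sol.y[:, -1].round(4))   # converges to an equilibrium
```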
The notion of the $n$-th indicator for a finite-dimensional Hopf algebra was introduced by Kashina, Montgomery and Ng as a gauge invariant of the monoidal category of its representations. The properties of these indicators were further investigated by Shimizu. In this short note, we show that the indicators appearing in positive characteristic all share the same sequence pattern if we assume the coradical of the Hopf algebra is a local Hopf subalgebra.
mathematics
Deep learning has brought great progress to sequential recommendation (SR) tasks. With advanced network architectures, sequential recommender models can be stacked with many hidden layers, e.g., up to 100 layers on real-world recommendation datasets. Training such a deep network is difficult because it can be computationally very expensive and takes much longer, especially in situations where there are tens of billions of user-item interactions. To deal with such a challenge, we present StackRec, a simple, yet very effective and efficient training framework for deep SR models by iterative layer stacking. Specifically, we first offer an important insight that hidden layers/blocks in a well-trained deep SR model have very similar distributions. Enlightened by this, we propose the stacking operation on the pre-trained layers/blocks to transfer knowledge from a shallower model to a deep model, then we perform iterative stacking so as to yield a much deeper but easier-to-train SR model. We validate the performance of StackRec by instantiating it with four state-of-the-art SR models in three practical scenarios with real-world datasets. Extensive experiments show that StackRec achieves not only comparable performance, but also substantial acceleration in training time, compared to SR models that are trained from scratch. Codes are available at https://github.com/wangjiachun0426/StackRec.
computer science
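A sketch of the core stacking operation: doubling depth by copying trained blocks, so the deeper model starts from the shallow model's weights. The block architecture here is a stand-in, and adjacent-block copying is only one of the stacking variants one could use:

```python
import copy
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
    def forward(self, x):
        return x + self.net(x)

def stack(blocks):
    """Double the depth by duplicating each trained block in place; the
    deeper model is then fine-tuned rather than trained from scratch."""
    doubled = []
    for blk in blocks:
        doubled += [blk, copy.deepcopy(blk)]
    return nn.ModuleList(doubled)

shallow = nn.ModuleList([ResBlock(64) for _ in range(4)])
# ... train `shallow` to convergence, then:
deep = stack(shallow)          # 8 blocks, initialized from the 4 trained ones
print(len(shallow), "->", len(deep), "blocks")
```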
We present an experimentally feasible proposal for synthesizing second-order topological superfluids that support Majorana corner modes in spin-orbit coupled Fermi gases. For this purpose, we consider a staggered spin-orbit coupling introduced in one direction. This results in a system consisting of two sublattices, providing an extra degree of freedom for the emergent higher-order topological state. We find topologically trivial superfluids, first-order topological superfluids and boundary-obstructed second-order topological superfluids, as well as different topological phase transitions among them with respect to the experimentally tunable parameters. In the weak-interaction regime, the phase transition is characterized by the Chern number and accompanied by the bulk gap closing and reopening. However, in the strong-interaction regime, we find the system can support boundary-obstructed topological superfluids with Majorana corner modes, where the topological phase transition does not involve a gap closing of the bulk bands. Instead, the transition is characterized by the quadrupole moment and signaled by the gap closing of the edge states. The proposal is based simply on the $s$-wave interaction and is readily feasible via existing experimental techniques, which suggests new possibilities in interacting spin-orbit coupled systems by unifying both first- and higher-order topological superfluids in a simple but realistic microscopic model.
condensed matter
The analysis of the results of a depletion code is often considered a tedious and delicate task, for it requires both the processing of large volumes of information (the time-dependent composition of up to thousands of isomeric states) and extensive experience of nuclear reactions and the associated nuclear data. From these observations, dedicated developments have been integrated into the upcoming version of the Monte Carlo depletion code VESTA 2.2 in order to implement an innovative representation of depletion problems. The aim is to provide users with an adapted and efficient framework that eases the analysis of the results of the code and facilitates their interpretation. This effort ultimately culminated in representing the isotope evolution of a given system as a directed graph. In this paper, it is shown that the Bateman equation encoded in the VESTA code possesses a natural interpretation in terms of directed cyclic graphs, and we explore some of the insight one can gain from the graph representation of a depletion problem. Starting from the new capabilities of the code, it is shown how one can build on the wealth of existing methods of graph theory in order to gain useful information about the nuclear reactions taking place in a material under irradiation. The graph representation of a depletion problem being especially simple in activation problems, for then only a limited number of nuclides and reactions are involved, the graph representation and its associated tools are used to study the evolution of the structural materials of a simplified model of the ITER fusion reactor.
physics
Tropical precipitation extremes are expected to strengthen with warming, but quantitative estimates remain uncertain because of a poor understanding of changes in convective dynamics. This uncertainty is addressed here by analyzing idealized convection-permitting simulations of radiative-convective equilibrium in long-channel geometry. Across a wide range of climates, the thermodynamic contribution to changes in instantaneous precipitation extremes follows near-surface moisture, and the dynamic contribution is positive and small, but sensitive to domain size. The shapes of mass flux profiles associated with precipitation extremes are determined by conditional sampling that favors strong vertical motion at levels where the vertical saturation specific humidity gradient is large, and mass flux profiles collapse to a common shape across climates when plotted in a moisture-based vertical coordinate. The collapse, robust to changes in microphysics and turbulence schemes, implies a thermodynamic contribution that scales with near-surface moisture despite substantial convergence aloft and allows the dynamic contribution to be defined by the pressure velocity at a single level. Linking the simplified dynamic mode to vertical velocities from entraining plume models reveals that the small dynamic mode in channel simulations ($\lesssim 2$ %/K) is caused by opposing height dependences of vertical velocity and density, together with the buffering influence of cloud-base buoyancies that vary little with surface temperature. These results reinforce an emerging picture of the response of extreme tropical precipitation rates to warming: a thermodynamic mode of about 7 %/K dominates, with a minor contribution from changes in dynamics.
physics
We propose and demonstrate, for the first time, a method for the adaptive wavefront correction of dynamic multimode fiber beams. The wavefront of the incident beam is reconstructed in real time from the complete modal information, which is obtained using the correlation-filter method of modal decomposition. As a proof of principle, both the modal decomposition and the wavefront correction are implemented using the same computer-generated hologram, encoded onto a phase-only spatial light modulator. We demonstrate the wavefront correction of a dynamic multimode beam at a rate of 5 Hz and achieve a 1.73-fold improvement in the average power-in-the-bucket. The experimental results indicate the feasibility of real-time wavefront correction for large-mode-area fiber lasers by adaptive optics.
physics
Using EUV images and line-of-sight magnetograms from the Solar Dynamics Observatory, we examine eight emerging bipolar magnetic regions (BMRs) in central-disk coronal holes for whether the emerging magnetic arch made any noticeable coronal jets directly, via reconnection with ambient open field as modeled by Yokoyama and Shibata (1995). During emergence, each BMR produced no obvious EUV coronal jet of normal brightness, but each produced one or more faint EUV coronal jets that are discernible in AIA 193 {\AA} images. The spires of these jets are much fainter and usually narrower than for typical EUV jets that have been observed to be produced by minifilament eruptions in quiet regions and coronal holes. For each of 26 faint jets from the eight emerging BMRs, we examine whether the faint spire was evidently made \`a la Yokoyama and Shibata (1995). We find: (1) 16 of these faint spires evidently originate from sites of converging opposite-polarity magnetic flux and show base brightenings like those in minifilament-eruption-driven coronal jets; (2) the 10 other faint spires may have been made by a burst of the external-magnetic-arcade-building reconnection of the emerging magnetic arch with the ambient open field, reconnection directly driven by the arch's emergence; but (3) none were unambiguously made by such emergence-driven reconnection. Thus, for these eight emerging BMRs the observations indicate that emergence-driven external reconnection of the emerging magnetic arch with ambient open field at most produces a jet spire that is much fainter than in previously reported, much more obvious coronal jets driven by minifilament eruptions.
astrophysics
Viscous dissipation, as a small perturbative effect about the Bondi flow, shrinks its sonic sphere. An Eulerian perturbation on the steady flow gives a wave equation and the corresponding dispersion relation. The perturbation is a high-frequency travelling acoustic wave, in which small dissipation is taken iteratively. The wave, propagating radially outwards against the bulk inflow, is blocked just within the sonic horizon, where the amplitude of the wave diverges because of viscosity. The blocked acoustic wave can still tunnel outward through the horizon with a viscosity-dependent decaying amplitude, scaled by the analogue Hawking temperature. The escape of acoustic waves (analogue Hawking phonons) through the sonic horizon is compatible with the radial contraction of the sonic sphere.
astrophysics
Anomalous X-ray pulsars and soft gamma repeaters are slowly rotating, young, and isolated neutron stars exhibiting sporadic outbursts and high X-ray quiescent luminosities. They are believed to be powered by ultrastrong magnetic fields, $B\sim10^{14}-10^{15}$ G, associated with `magnetars'. In the peculiar case of SGR 0418+5729, timing parameters imply a dipolar $B$-field of $6.1\times10^{12}$ G. This discovery has challenged the traditional picture of magnetars in terms of $B$-field strengths, evolutionary stages, and ages. Here we provide a novel approach to estimating a magnetar's age by considering the self-consistent time evolution of a plasma-filled oblique pulsar with state-of-the-art magnetospheric particle acceleration gaps. The rotational period of magnetars increases over time due to angular momentum extraction by gravitational-wave radiation, magnetic dipole radiation, and particle winds. These torques also change the obliquity angle between the magnetic and rotation axes. For SGR 0418+5729, we obtain a dipolar $B$-field of $1.0\times10^{14}$ G and a realistic age of $\sim18$ kyr, consistent with the magnetar paradigm.
astrophysics
When approaching problems in computer science, we often encounter situations where a subset of a finite set maximizing some utility function needs to be selected. Some such utility functions are known to be approximately submodular. For the problem of maximizing an approximately submodular function (ASFM problem), a greedy algorithm quickly finds good feasible solutions for many instances while guaranteeing a ($1-e^{-\gamma}$)-approximation ratio for a given submodular ratio $\gamma$. However, we still encounter applications that ask for more accurate or exactly optimal solutions within a reasonable computation time. In this paper, we present an efficient branch-and-cut algorithm for the non-decreasing ASFM problem based on its binary integer programming (BIP) formulation with an exponential number of constraints. To this end, we first derive a BIP formulation of the ASFM problem and then develop an improved constraint generation algorithm that starts from a reduced BIP problem with a small subset of constraints and repeatedly solves the reduced BIP problem while adding a promising set of constraints at each iteration. Moreover, we incorporate it into a branch-and-cut algorithm to attain good upper bounds while solving a smaller number of nodes of the search tree. The computational results for three types of well-known benchmark instances show that our algorithm performs better than conventional exact algorithms.
computer science
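For context, here is the greedy baseline mentioned in the abstract (the paper's contribution is the exact branch-and-cut, which is not reproduced here); a generic sketch with a cardinality constraint and a small coverage example:

```python
def greedy_max(f, ground_set, k):
    """Greedy maximization of a set function f under |S| <= k. For a
    non-decreasing gamma-approximately submodular f this guarantees a
    (1 - e^{-gamma}) fraction of the optimum."""
    S, value = set(), 0.0
    for _ in range(k):
        gains = {e: f(S | {e}) - value for e in ground_set - S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
        value += gains[best]
    return S, value

# Example: a coverage function (submodular, so gamma = 1)
sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c', 'd', 'e'}, 4: {'a'}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0.0
print(greedy_max(f, set(sets), k=2))   # -> ({1, 3}, 5.0)
```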
In the last 150 years, the CO2 concentration in the atmosphere has increased from 280 parts per million to 400 parts per million. This has caused an increase in average global temperatures of nearly 0.7 degrees centigrade due to the greenhouse effect. Notably, the most prosperous states are the highest emitters of greenhouse gases (especially CO2). This indicates a strong relationship between gaseous emissions and the gross domestic product (GDP) of the states. Such a relationship is highly volatile and nonlinear due to its dependence on technological advancements and constantly changing domestic and international regulatory policies and relations. To analyse such vastly nonlinear relationships, soft computing techniques have been quite effective, as they can predict a compact solution for multi-variable parameters without any explicit insight into the internal system functionalities. This paper reports a novel transfer learning based approach for GDP prediction, which we term Domain Adapted Transfer Learning for GDP Prediction. In the proposed approach, the per capita GDP of different nations is predicted from their CO2 emissions via a model trained on the data of any developed or developing economy. Results are presented comparatively for three well-known regression methods: the Generalized Regression Neural Network, the Extreme Learning Machine, and Support Vector Regression. The proposed approach is then used to reliably estimate the missing per capita GDP of some war-torn and isolated countries.
electrical engineering and systems science
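A heavily simplified sketch of the prediction step with one of the cited regressors (Support Vector Regression): train on one economy's emissions-to-GDP relation, then apply it to another's emissions. The data here are synthetic, and the paper's domain-adaptation step is not reproduced:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-ins for (CO2 per capita -> GDP per capita) of a "source"
# economy; real data would come from e.g. World Bank indicators.
co2_src = rng.uniform(0.5, 16.0, 200)[:, None]
gdp_src = 3.0 * co2_src.ravel() ** 0.8 + rng.normal(0, 0.5, 200)

model = SVR(kernel='rbf', C=10.0, epsilon=0.1).fit(co2_src, gdp_src)

# Transfer step: apply the source-trained model to another country's
# emissions to impute its missing per-capita GDP values.
co2_target = np.array([[1.2], [4.7], [9.3]])
print("imputed GDP (arbitrary units):", model.predict(co2_target).round(2))
```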
Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, there is a widely used and successful theory of "chemical reaction networks", which describes the interactions of molecules in a stochastic rather than quantum way. Computer science and population biology use the same ideas under a different name: "stochastic Petri nets". But if we look at these theories from the perspective of quantum theory, they turn out to involve creation and annihilation operators, coherent states and other well-known ideas - but in a context where probabilities replace amplitudes. We explain this connection as part of a detailed analogy between quantum mechanics and stochastic mechanics. We use this analogy to present new proofs of two major results in the theory of chemical reaction networks: the deficiency zero theorem and the Anderson-Craciun-Kurtz theorem. We also study the overlap of quantum mechanics and stochastic mechanics, which involves Hamiltonians that can generate either unitary or stochastic time evolution. These Hamiltonians are called "Dirichlet forms", and they arise naturally from electrical circuits made only of resistors.
quantum physics
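A small concrete instance of "quantum techniques for stochastic mechanics": the generator (stochastic "Hamiltonian") of a birth-death chemical reaction network, evolved by a matrix exponential. Consistent with the Anderson-Craciun-Kurtz theorem, the stationary distribution of this network is Poisson; the rates and state-space cutoff below are arbitrary choices for illustration:

```python
import numpy as np
from scipy.linalg import expm

kappa, delta, N = 2.0, 1.0, 40        # birth rate, per-molecule decay, cutoff

# Infinitesimal generator on states n = 0..N, assembled from the creation
# (birth) and annihilation (decay) parts; columns sum to zero.
H = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        H[n + 1, n] += kappa          # birth: n -> n+1
        H[n, n] -= kappa
    if n > 0:
        H[n - 1, n] += delta * n      # decay: n -> n-1 at rate delta * n
        H[n, n] -= delta * n

p0 = np.zeros(N + 1)
p0[0] = 1.0                           # start with no molecules
p = expm(10.0 * H) @ p0               # stochastic (not unitary) evolution
print("mean count:", p @ np.arange(N + 1))   # -> kappa/delta = 2 (Poisson)
```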
We study a large $N$ tensor model with $O(N)^3$ symmetry containing two flavors of Majorana fermions, $\psi_1^{abc}$ and $\psi_2^{abc}$. We also study its random counterpart consisting of two coupled Sachdev-Ye-Kitaev models, each one containing $N_{\rm SYK}$ Majorana fermions. In these models we assume tetrahedral quartic Hamiltonians which depend on a real coupling parameter $\alpha$. We find a duality relation between two Hamiltonians with different values of $\alpha$, which allows us to restrict the model to the range of $-1\leq \alpha\leq 1/3$. The scaling dimension of the fermion number operator $Q=i\psi_1^{abc} \psi_2^{abc}$ is complex and of the form $1/2 +i f(\alpha)$ in the range $-1\leq \alpha<0$, indicating an instability of the conformal phase. Using Schwinger-Dyson equations to solve for the Green functions, we show that in the true low-temperature phase this operator acquires an expectation value. This demonstrates the breaking of an anti-unitary particle-hole symmetry and other discrete symmetries. We also calculate spectra of the coupled SYK models for values of $N_{\rm SYK}$ where exact diagonalizations are possible. For negative $\alpha$ we find a gap separating the two lowest energy states from the rest of the spectrum; this leads to exponential decay of the zero-temperature correlation functions. For $N_{\rm SYK}$ divisible by $4$, the two lowest states have a small splitting. They become degenerate in the large $N_{\rm SYK}$ limit, as expected from the spontaneous breaking of a $\mathbb{Z}_2$ symmetry.
high energy physics theory
Online change-point detection (OCPD) is important for applications in various areas such as finance, biology, and the Internet of Things (IoT). However, OCPD faces major challenges due to high dimensionality, and it is still rarely studied in the literature. In this paper, we propose a novel, online, graph-based, change-point detection algorithm to detect changes of distribution in low- to high-dimensional data. We introduce a similarity measure, derived from the graph-spanning ratio, to test statistically whether a change has occurred. Through numerical studies using artificial online datasets, our data-driven approach demonstrates high detection power for high-dimensional data, while the false alarm rate (type I error) is controlled at a nominal significance level. In particular, our graph-spanning approach has desirable power with small and multiple scanning windows, which allows timely detection of change-points in the online setting.
statistics
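The graph-spanning ratio is the paper's own statistic; as a hedged illustration of the general graph-based idea, the sketch below computes a classical quantity in the same spirit -- the number of minimum-spanning-tree edges joining two scanning windows (a Friedman-Rafsky style edge count), which drops when the windows come from different distributions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def cross_edge_count(window_a, window_b):
    """Build an MST on the pooled sample and count edges joining points from
    different windows; a small count hints at a change of distribution."""
    pooled = np.vstack([window_a, window_b])
    labels = np.array([0] * len(window_a) + [1] * len(window_b))
    mst = minimum_spanning_tree(cdist(pooled, pooled)).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(50, 20))   # reference window
a2 = rng.normal(0.0, 1.0, size=(50, 20))  # null case: same distribution
b = rng.normal(0.8, 1.0, size=(50, 20))   # alternative: mean shift
print(cross_edge_count(a, a2), cross_edge_count(a, b))  # second count is smaller
```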
We find exact multi-instanton solutions to the selfdual Yang-Mills equation on a large class of curved spaces with $SO(3)$ isometry, generalizing the results previously found on $\mathbb{R}^4$. The solutions are characterized by explicit multi-centered expressions and topological properties. As examples, we demonstrate the approach on several different curved spaces, including the Einstein static universe and $\mathbb{R} \times$ dS$_3^E$, and show that exact multi-instanton solutions exist on these curved backgrounds.
high energy physics theory
We estimate the beam-normal single-spin asymmetry in elastic lepton-proton scattering without employing the ultrarelativistic approximation. Our calculation is relevant for analyses of muon scattering at energies of a few hundred MeV and below, where effects of the muon mass become essential. At such energies, the transverse polarization of the muon beam is expected to contribute significantly to the systematic uncertainty of precision measurements of elastic muon-proton scattering. We evaluate this systematic effect using the example of the MUSE experiment at PSI. The muon asymmetry is estimated at about 0.1\% in MUSE kinematics, and it is largest for scattering into the backward hemisphere.
high energy physics phenomenology
The multi-messenger astrophysics of compact objects presents a vast range of environments where neutrino flavor transformation may occur and may be important for nucleosynthesis, dynamics, and a detected neutrino signal. Development of efficient techniques for surveying flavor evolution solution spaces in these diverse environments, which augment and complement existing sophisticated computational tools, could accelerate progress in this field. To this end we continue our exploration of statistical data assimilation (SDA) to identify solutions to a small-scale model of neutrino flavor transformation. SDA is a machine learning (ML) formulation wherein a dynamical model is assumed to generate any measured quantities. Specifically, we use an optimization formulation of SDA wherein a cost function is extremized via the variational method. Regions of state space in which the extremization identifies the global minimum of the cost function correspond to parameter regimes in which a model solution can exist. Our example study seeks to infer the flavor transformation histories of two mono-energetic neutrino beams coherently interacting with each other and with a matter background. We require that the solution be consistent with measured neutrino flavor fluxes at the point of detection, and with constraints placed upon the flavor content at various locations along their trajectories, such as the point of emission and the locations of the Mikheyev-Smirnov-Wolfenstein (MSW) resonances. We show how the procedure efficiently identifies solution regimes and rules out regimes where solutions are infeasible. Overall, the results indicate the promise of this "variational annealing" methodology to efficiently probe an array of fundamental questions that traditional numerical simulation codes render difficult to access.
astrophysics
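For orientation, a minimal sketch of the cost function typically extremized in this type of variational SDA (the precise form used for the neutrino problem above may differ):

$$ A(\mathbf{x}_{0},\dots,\mathbf{x}_{T}) \;=\; \sum_{t\in\mathcal M} \frac{R_m}{2}\,\bigl\|\mathbf{y}_t - h(\mathbf{x}_t)\bigr\|^2 \;+\; \sum_{t=0}^{T-1} \frac{R_f}{2}\,\bigl\|\mathbf{x}_{t+1} - \mathbf{f}(\mathbf{x}_t)\bigr\|^2, $$

where the first sum penalizes mismatch with measurements $\mathbf{y}_t$ at measurement times $\mathcal M$ and the second penalizes deviations from the model dynamics $\mathbf{f}$; "variational annealing" refers to extremizing $A$ repeatedly while gradually increasing the model weight $R_f$.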
We present a first attempt to design a quantum circuit for the determination of the parton content of the proton through the estimation of parton distribution functions (PDFs), in the context of high energy physics (HEP). The growing interest in quantum computing and the recent development of new algorithms and quantum hardware devices motivate the study of such methodologies applied to HEP. In this work we identify architectures of variational quantum circuits suitable for PDF representation (qPDFs). We present experiments on the deployment of qPDFs on real quantum devices, taking into account current experimental limitations. Finally, we perform a global qPDF determination from collider data using quantum computer simulation on classical hardware, and we compare the obtained partons and related phenomenological predictions involving hadronic processes to modern PDFs.
high energy physics phenomenology
We investigate the influence of the different vertices on two-point correlation functions in the infrared regime of Yang-Mills theory using a phenomenological description. This regime is studied in the Landau gauge using perturbation theory within a phenomenological massive model. We perform a one-loop calculation of the two-point correlation functions, taking into account the different roles of the various interactions in the infrared. Our results show good agreement with the lattice data.
high energy physics theory
We obtain explicit formulas for the eight bosonic and eight fermionic fluctuations around the mixed-flux generalization of the Hofman-Maldacena giant magnon on AdS$_3 \times$S$^3 \times$T$^4$ and AdS$_3 \times$S$^3 \times$S$^3 \times$S$^1$. As a check of our results, we confirm that the semiclassical quantization of these fluctuations leads to a vanishing one-loop correction to the magnon energy, as expected from symmetry-based arguments.
high energy physics theory
We discuss a general five-dimensional completely anisotropic holographic model with three different spatial scale factors, characterized by a Van der Waals-like phase transition between small and large black holes. A peculiar feature of the model is the relation between the anisotropy of the background and the anisotropy of the colliding heavy-ion geometry. We calculate the holographic entanglement entropy (HEE) of a slab-shaped region, whose orientation relative to the beam line and the impact parameter is characterized by the Euler angles. We study the dependence of the HEE and its density on the thermodynamic (temperature, chemical potential) and geometric (anisotropy parameters, thickness, and orientation of the entangled region) parameters. As a particular case, we consider the model with two equal transverse scale factors, supported by the dilaton and two Maxwell fields. In this case we discuss the HEE and its density in detail: interesting features of this model are jumps of the entanglement entropy and its density near the line of the small/large black hole phase transition. These jumps depend on the anisotropy parameter, the chemical potential, and the orientation. We also discuss different definitions and the behavior of c-functions in this model. The c-function calculated in the Einstein frame decreases with increasing $\ell$ for all $\ell$ in the isotropic case (in regions of the $(\mu,T)$-plane far away from the line of the phase transition). We find non-monotonicity of the c-functions for several anisotropic configurations, which however does not contradict any of the existing c-theorems, since they all rely on Lorentz invariance.
high energy physics theory
We discuss the quark mass matrices in the $A_4$ modular symmetry, where an $A_4$ triplet of Higgs fields is introduced for each of the up-quark and down-quark sectors. The model has six real parameters and two complex parameters in addition to the modulus $\tau$. By inputting the six quark masses and three CKM mixing angles, we can predict the CP violation phase $\delta$ and the Jarlskog invariant $J_{CP}$. The predicted ranges of $\delta$ and $J_{CP}$ are consistent with the observed values. The absolute value of $V_{ub}$ is smaller than $0.0043$, while $V_{cb}$ is larger than $0.0436$. In conclusion, our quark mass matrices with the $A_4$ modular symmetry can reproduce the CKM mixing matrix completely with the observed quark masses.
high energy physics phenomenology
In this work we deal with vortices in Maxwell-Higgs or Chern-Simons-Higgs models that engender long-range tails. We find first-order differential equations that support minimum-energy solutions which solve the equations of motion. In the Maxwell scenario, we work with generalised magnetic permeabilities that lead to vortices whose solutions, magnetic field and energy density exhibit power-law tails extending farther than the standard exponential ones. We also find a way to obtain a Chern-Simons model with the same scalar and magnetic field profiles as the Maxwell case. By doing so, we again find vortices with the aforementioned long-range feature, which is also present in the electric field of the Chern-Simons model. The present results may motivate investigations of nonrelativistic models, in particular the case involving Rydberg atoms, which are known to present long-range interactions and relatively long lifetimes.
high energy physics theory
Understanding the mechanisms relating properties of the liquid and solid phases is crucial for fabricating new advanced solid materials, such as glasses, quasicrystals and high-entropy alloys. Here we address this issue for quasicrystal-forming Al-Cu-Fe alloys, which can serve as a model for studying microscopic mechanisms of quasicrystal formation. We study experimentally two structure-sensitive properties of the liquid -- viscosity and undercoolability -- and compare the results with \textit{ab initio} investigations of short-range order (SRO). We observe that SRO in Al-Cu-Fe melts is polytetrahedral and mainly represented by distorted Kasper polyhedra. However, topologically perfect icosahedra are almost absent, even at the stoichiometry of the icosahedral quasicrystal phase, which suggests that the topological structure of local polyhedra does not survive upon melting. It is shown that the main features of interatomic interaction in the Al-Cu-Fe system, extracted from the radial distribution function and the bond-angle distribution function, are the same in both the liquid and solid states. In particular, the system demonstrates pronounced repulsion between Fe and Cu as well as strong chemical interaction between Fe and Al, both of which are almost concentration-independent. We argue that SRO and structure-sensitive properties of a melt may serve as useful indicators of solid phase formation. In particular, in the concentration region corresponding to the composition of the icosahedral phase, a change in the chemical short-range order is observed, which leads to minima on the viscosity and undercoolability isotherms and has a noticeable effect on the initial stage of solidification.
physics
In this work, the quantum-mechanical properties of the strongly nonlinear quantum oscillator described by the Pöschl-Teller (PT) model are examined. This model is related to two well-known models, the free particle (FP) in a box and the harmonic oscillator (HO), as described in our previous works. Using the PT model, quantum-mechanical analogs of the Joule-Brayton (JB) cycle and the Otto cycle have been constructed through changes of both the width of the well and its quantum state. The efficiency of quantum engines based on the Pöschl-Teller-like potential is derived, analogous to that of a classical thermodynamic engine.
quantum physics
We answer four questions from a recent paper of Rao and Shinkar on Lipschitz bijections between functions from $\{0,1\}^n$ to $\{0,1\}$. (1) We show that there is no $O(1)$-bi-Lipschitz bijection from $\mathrm{Dictator}$ to $\mathrm{XOR}$ such that each output bit depends on $O(1)$ input bits. (2) We give a construction for a mapping from $\mathrm{XOR}$ to $\mathrm{Majority}$ which has average stretch $O(\sqrt{n})$, matching a previously known lower bound. (3) We give a 3-Lipschitz embedding $\phi : \{0,1\}^n \to \{0,1\}^{2n+1}$ such that $\mathrm{XOR}(x) = \mathrm{Majority}(\phi(x))$ for all $x \in \{0,1\}^n$. (4) We show that with high probability there is a $O(1)$-bi-Lipschitz mapping from $\mathrm{Dictator}$ to a uniformly random balanced function.
mathematics
Recently, consumer-facing health technologies such as Artificial Intelligence (AI)-based symptom checkers (AISCs) have sprung up in everyday healthcare practice. AISCs solicit symptom information from users and provide medical suggestions and possible diagnoses, a responsibility that people usually entrust to real-person authorities such as physicians and expert patients. Thus, the advent of AISCs raises the question of whether and how they transform the notion of medical authority in everyday healthcare practice. To answer this question, we conducted an interview study with thirty AISC users. We found that users assess the medical authority of AISCs using various factors, including the automated decisions and interaction design patterns of AISC apps, associations with established medical authorities like hospitals, and comparisons with other health technologies. We reveal how AISCs are used in healthcare delivery, discuss how AI transforms conventional understandings of medical authority, and derive implications for designing AI-enabled health technology.
computer science
Quantum state merging is one of the most important protocols in quantum information theory. In this task two parties aim to merge their parts of a pure tripartite state by making use of additional singlets while preserving correlations with a third party. We study a variation of this scenario where the shared state is not necessarily pure, and the merging parties have free access to local operations, classical communication, and PPT entangled states. We provide general conditions for a state to admit perfect merging, and present a family of fully separable states which cannot be perfectly merged if the merging parties have no access to additional singlets. We also show that free PPT entangled states do not give any advantage for merging of pure states, and the conditional entropy plays the same role as in standard quantum state merging quantifying the rate of additional singlets needed to perfectly merge the state.
quantum physics
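For reference, the conditional entropy invoked here is the standard von Neumann quantity

$$ S(A|B)_\rho \;=\; S(\rho_{AB}) - S(\rho_B), \qquad S(\sigma) = -\operatorname{Tr}\,(\sigma\log\sigma); $$

in standard quantum state merging, a positive $S(A|B)$ gives the rate of singlets that must be consumed, while a negative value means merging is possible while gaining singlets.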
We investigate the effects of producing dark matter by Hawking evaporation of primordial black holes (PBHs) in scenarios that may have a second well-motivated dark matter production mechanism, such as freeze-out, freeze-in, or gravitational production. We show that the interplay between PBHs and the alternative sources of dark matter can give rise to model-independent modifications to the required dark matter abundance from each production mechanism, which in turn affect the prospects for dark matter detection. In particular, we demonstrate that for the freeze-out mechanism, accounting for the evaporation of PBHs after freeze-out demands a larger annihilation cross section of dark matter particles than the canonical value for a thermal dark matter candidate. For mechanisms lacking thermalization due to a feeble coupling to the thermal bath, we show that the PBH contribution to the dark matter abundance leads to the requirement of an even feebler coupling. Moreover, we show that when a large initial abundance of PBHs causes an early matter-dominated epoch, PBH evaporation alone cannot explain the whole abundance of dark matter today. In this case, an additional production mechanism is required, in contrast to the case when PBHs are formed and evaporate during a radiation-dominated epoch.
high energy physics phenomenology
Based on the quantum origin of the universe, in this article we find that the universal wave function can be far richer than the superposition of many classical worlds studied by Everett. By analyzing the more general universal wave function and its unitary evolution, we find that on small scales we can obtain Newton's law of universal gravity, while on the scale of galaxies we naturally derive gravitational effects corresponding to dark matter, without modifying any physical principles or hypothesizing the existence of new elementary particles. We find that an auxiliary function having formal symmetry is very useful for predicting the evolution of the classical information in the universal wave function.
physics
Relational databases are among the most widely used architectures for storing massive amounts of data in the modern world. However, there is a barrier between these databases and the average user: the user often lacks knowledge of a query language such as SQL, which is required to interact with the database. The NL2SQL task aims at finding deep learning approaches that solve this problem by converting natural language questions into valid SQL queries. Given the sensitive nature of some databases and the growing need for data privacy, we present an approach with data privacy at its core. We pass RoBERTa embeddings and data-agnostic knowledge vectors into LSTM-based submodels to predict the final query. Although we have not achieved state-of-the-art results, we have eliminated the need for the table data, right from the training of the model, and have achieved a test set execution accuracy of 76.7%. By eliminating the table data dependency during training, we have created a model capable of zero-shot learning based on the natural language question and table schema alone.
computer science
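As a sketch of the embedding pipeline described above -- hedged, since the paper's exact submodel heads and knowledge vectors are not reproduced here -- the following shows how RoBERTa embeddings of a question/schema pair (with no table contents) might feed an LSTM submodel; the model name, example question, and schema string are illustrative assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

question = "What is the average salary of employees hired after 2015?"
schema = "employees: id, name, hire_date, salary"

# Encode the question together with the schema; table *contents* never appear.
inputs = tokenizer(question, schema, return_tensors="pt", truncation=True)
with torch.no_grad():
    embeddings = encoder(**inputs).last_hidden_state  # shape (1, seq_len, 768)

# One LSTM-based submodel (the paper uses several, each predicting a query part).
lstm = torch.nn.LSTM(input_size=768, hidden_size=256, batch_first=True)
output, _ = lstm(embeddings)
print(output.shape)
```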
In a recent paper, Lebedev et al. demonstrated that the efficiency of the energy transfer from the drive bunch to the witness bunch in the plasma wakefield accelerator has a limit due to the beam breakup (BBU) instability of the witness bunch. It was stated that the efficiency-instability relation is universal and thus should be considered a fundamental limit. In this note, we show that recent results on short-range wakefields indicate that this relation should be modified and that the conclusions of Lebedev et al. should be reconsidered. In particular, we argue that the efficiency-instability relation produces only a lower bound for the efficiency and thus does not constitute a fundamental limit.
physics
The resolution of Mansuripur's paradox that appears in numerous papers in the physics literature preserves the Lorentz force but depends on the concept of hidden momentum. Here I propose a different resolution based on the overlooked fact that the charge-magnetic dipole system contains linear and angular electromagnetic field momentum. The time rate of change of the field angular momentum in the frame through which the system is moving cancels that due to the charge-electric dipole interaction. From this point of view, hidden momentum is not needed to resolve the paradox.
physics
This is a white paper submitted to the Planetary Science and Astrobiology Decadal Survey. The deep atmosphere of Venus is largely unexplored and yet may harbor clues to the evolutionary pathways for a major silicate planet with implications across the solar system and beyond. In situ data is needed to resolve significant open questions related to the evolution and present-state of Venus, including questions of Venus' possibly early habitability and current volcanic outgassing. Deep atmosphere "probe-based" in situ missions carrying analytical suites of instruments are now implementable in the upcoming decade (before 2030), and will both reveal answers to fundamental questions on Venus and help connect Venus to exoplanet analogs to be observed in the JWST era of astrophysics.
astrophysics
While we have witnessed a rapid growth of ethics documents meant to guide AI development, the promotion of AI ethics has nonetheless proceeded with little input from AI practitioners themselves. Given the proliferation of AI for Social Good initiatives, this is an emerging gap that needs to be addressed in order to develop more meaningful ethical approaches to AI use and development. This paper offers a methodology, a shared fairness approach, aimed at identifying the needs of AI practitioners when it comes to confronting and resolving ethical challenges, and at finding a third space where their operational language can be married with that of the more abstract principles that presently remain at the periphery of their work experiences. We offer a grassroots approach to operational ethics based on dialog and mutualised responsibility. This methodology is centred around conversations intended to elicit practitioners' perceived ethical attribution and distribution over key value-laden operational decisions, to identify when these decisions arise and what ethical challenges they confront, and to engage in a language of ethics and responsibility which enables practitioners to internalise ethical responsibility. The methodology bridges responsibility imbalances that rest in structural decision-making power and elite technical knowledge, by commencing with personal, facilitated conversations, returning the ethical discourse to those meant to give it meaning at the sharp end of the ecosystem. Our primary contribution is to add to the recent literature seeking to bring AI practitioners' experiences to the fore by offering a methodology for understanding how ethics manifests as a relational and interdependent sociotechnical practice in their work.
computer science
Electron-proton (ep) colliders could provide particle collisions at TeV energies with large data rates while maintaining the clean and pile-up-free environment of lepton colliders, which makes them very attractive for heavy neutrino searches. Heavy (mainly sterile) neutrinos with masses around the electroweak scale are proposed in low-scale seesaw models for neutrino mass generation. In this paper, we analyse two of the most promising signatures of heavy neutrinos at ep colliders: the lepton-flavour violating (LFV) lepton-trijet signature and the displaced vertex signature. In the considered benchmark model, we find that for heavy neutrino masses around a few hundred GeV, the LFV lepton-trijet signature at ep colliders yields the best sensitivity of all heavy neutrino signatures discussed (and analysed at the reconstructed level) to date.
high energy physics phenomenology
Let $f(z,w)=(p(z),q(z,w))$ be a holomorphic skew product with a superattracting fixed point at the origin. In a previous paper we succeeded in specifying a dominant term of $q$ by the order of $p$ and the Newton polygon of $q$, and in constructing a B\"{o}ttcher coordinate on an invariant wedge. Using the same ideas and terminology, in this paper we give inequalities for the attraction rates of the vertical dynamics of $f$. The results hold not only for the superattracting case but for all the other cases as well.
mathematics
Cavity-polaritons in semiconductor microstructures have emerged as a promising system for exploring the nonequilibrium dynamics of many-body systems. Key advances in this field, including the observation of polariton condensation, superfluidity, the realization of topological photonic bands, and dissipative phase transitions, generically allow for a description based on a mean-field Gross-Pitaevskii formalism. While the observation of polariton intensity squeezing and the decoherence of a polarization-entangled photon pair by a polariton condensate provide counter-examples, quantum effects in these experiments show up at high polariton occupancy. Going beyond, into the regime of strongly correlated polaritons, requires the observation of a photon blockade effect where interactions are strong enough to suppress double occupancy of a photonic lattice site. Here, we report the observation of quantum correlations between polaritons in a fiber cavity which spatially confines polaritons to an area of 3 $\mu$m$^2$. Photon correlation measurements show that careful tuning of the coupled system allows for a modest photon blockade effect, as evidenced by a 5% reduction of the simultaneous two-polariton generation probability. Concurrently, our experiments provide an unequivocal measurement of the polariton interaction strength, thereby resolving the controversy stemming from recent experimental reports. Our findings constitute a first essential step towards the realization of strongly interacting photonic systems.
condensed matter
The Open Cluster Chemical Abundances and Mapping (OCCAM) survey aims to constrain key Galactic dynamical and chemical evolution parameters by the construction of a large, comprehensive, uniform, infrared-based spectroscopic data set of hundreds of open clusters. This fourth contribution from the OCCAM survey presents an analysis, using SDSS/APOGEE DR16, of a sample of 128 open clusters, 71 of which we designate as "high quality" based on the appearance of their color-magnitude diagrams. We find the APOGEE DR16 derived [Fe/H] abundances to be in good agreement with previous high-resolution spectroscopic open cluster abundance studies. Using the high quality sample, we measure Galactic abundance gradients in 16 elements, and find evolution of some of the [X/Fe] gradients as a function of age. We find an overall Galactic [Fe/H] vs. R_GC gradient of $-0.068 \pm 0.001$ dex kpc$^{-1}$ over the range $6 <$ R_GC $< 13.9$ kpc; however, we note that this result is sensitive to the distance catalog used, varying by as much as 15%. We formally derive the location of a break in the [Fe/H] abundance gradient as a free parameter in the gradient fit for the first time. We also measure significant Galactic gradients in O, Mg, S, Ca, Mn, Cr, Cu, Na, Al, and K, some of which are measured for the first time. Our large sample allows us to explore four well-populated age bins, to examine the time evolution of gradients for a large number of elements, and to comment on possible implications for Galactic chemical evolution and radial migration.
astrophysics
Feature learning in the presence of mixed-type variables, numerical and categorical, is an important issue for related modeling problems. For simple neighborhood queries in a mixed data space, standard practice is to consider numerical and categorical variables separately and to combine them based on suitable distance functions. Alternatives, such as kernel learning or principal component analysis, do not explicitly consider the inter-dependence structure among the mixed types of variables. In this work, we propose a novel strategy to explicitly model the probabilistic dependence structure among the mixed types of variables by an undirected graph. Spectral decomposition of the graph Laplacian provides the desired feature transformation. The eigenspectrum of the transformed feature space shows increased separability and more prominent clusterability among the observations. The main novelty of our paper lies in capturing the interactions of mixed feature types in an unsupervised framework using a graphical model. We numerically validate the implications of this feature learning strategy.
statistics
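A minimal sketch of the spectral step, under stated assumptions: the paper learns an undirected probabilistic graphical model over the mixed-type variables, whereas here the dependence graph is approximated by a simple correlation threshold, and the observations are then projected onto leading eigenvectors of the graph Laplacian.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))  # stand-in for suitably encoded mixed-type data

# Stand-in dependence graph: weight edges by absolute correlation above a
# threshold (the paper instead estimates a probabilistic dependence structure).
corr = np.corrcoef(X, rowvar=False)
W = np.where(np.abs(corr) > 0.1, np.abs(corr), 0.0)
np.fill_diagonal(W, 0.0)

# Graph Laplacian L = D - W and its spectral decomposition.
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = eigh(L)  # ascending eigenvalues

# Feature transformation: project observations onto leading eigenvectors.
X_new = X @ eigvecs[:, :4]
print(eigvals[:4], X_new.shape)
```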
Renewable energy is essential for energy security and global warming mitigation. However, power generation from renewable energy sources is uncertain due to volatile weather conditions and complex equipment operations. To improve equipment operating efficiency, it is important to understand and characterize the uncertainty in renewable power generation. In this paper, we propose a conditional kernel density estimation method to model the distribution of equipment power output given any weather condition. It explicitly accounts for the temporal dependence in the data stream and uses an iterative procedure to reduce the bias in kernel density estimation. Compared with the existing literature, our approach is especially useful for equipment condition monitoring or short-term renewable energy forecasting, where the data dependence plays a significant role. We demonstrate our method and compare it with alternatives through real applications.
statistics
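A minimal sketch of a conditional kernel density estimate in the Nadaraya-Watson style: weight each historical observation by its closeness in the weather covariate, then form a kernel density over power output. The paper's temporal-dependence handling and iterative bias reduction are omitted, and the bandwidths and synthetic wind-power data below are arbitrary assumptions.

```python
import numpy as np

def conditional_kde(x_query, y_grid, x_obs, y_obs, hx=1.0, hy=0.5):
    """Estimate f(y | x = x_query) on y_grid with Gaussian kernels: each
    observation contributes a kernel in y, weighted by its closeness in x."""
    wx = np.exp(-0.5 * ((x_obs - x_query) / hx) ** 2)
    ky = np.exp(-0.5 * ((y_grid[:, None] - y_obs[None, :]) / hy) ** 2)
    ky /= hy * np.sqrt(2.0 * np.pi)
    return (ky * wx).sum(axis=1) / wx.sum()

rng = np.random.default_rng(2)
wind = rng.uniform(3.0, 15.0, 500)                      # weather covariate
power = 10.0 * np.tanh(0.3 * wind) + rng.normal(0, 0.5, 500)
grid = np.linspace(0.0, 14.0, 200)
density = conditional_kde(10.0, grid, wind, power)
print(np.trapz(density, grid))  # integrates to roughly 1
```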
In this paper we connect selection principles on a topological space to corresponding selection principles on one of its hyperspaces. We unify techniques and generalize theorems from the known results about selection principles for common hyperspace constructions. This includes results of Lj.D.R. Ko\v{c}inac, Z. Li, and others. We use selection games to generalize selection principles and we work with strategies of various strengths for these games. The selection games we work with are primarily abstract versions of the selection principles of Rothberger, Menger, and Hurewicz type, as well as games of countable fan tightness and selective separability. The hyperspace constructions that we work with are the Vietoris and Fell topologies, both upper and full, generated by ideals of closed sets. Using a new technique we are able to extend straightforward connections between topological constructs to connections between selection games related to those constructs. This extension process works regardless of the length of the game, the kind of selection being performed, or the strength of the strategy being considered.
mathematics
We construct a complete basis of dimension-8 operators in the Low-Energy Effective Field Theory below the Electroweak Scale (LEFT). We find there are 35058 dimension-8 operators in the LEFT for two generations of up-type quarks and three generations of down-type quarks, charged leptons, and left-handed neutrinos. The existence of this operator basis is a necessary prerequisite for matching to the Standard Model Effective Field Theory at the dimension-8 level.
high energy physics phenomenology
Mapping of spatial hotspots, i.e., regions with significantly higher rates or probability density of generating certain events (e.g., disease or crime cases), is an important task in diverse societal domains, including public health, public safety, transportation, agriculture, and environmental science. Clustering techniques required by these domains differ from traditional clustering methods due to the high economic and social costs of spurious results (e.g., false alarms of crime clusters). As a result, statistical rigor is needed explicitly to control the rate of spurious detections. To address this challenge, techniques for statistically-robust clustering have been extensively studied by the data mining and statistics communities. In this survey we present an up-to-date and detailed review of the models and algorithms developed in this field. We first present a general taxonomy of the statistically rigorous clustering process, covering the key steps of data and statistical modeling, region enumeration and maximization, significance testing, and data update. We further discuss the different paradigms and methods within each of these key steps. Finally, we highlight research gaps and potential future directions, which may serve as stepping stones for generating new ideas in this growing field and beyond.
statistics
Recently, $O(\alpha^3)$ corrections to the muon decay rate and $O(\alpha_s^3)$ to heavy quark decays have been determined by Fael, Sch\"onwald, and Steinhauser. This is the first such perturbative improvement of these important quantities in more than two decades. We reveal and explain a symmetry pattern in these new corrections, and confirm the most technically difficult parts of their evaluation.
high energy physics phenomenology
We have performed an unbiased dense core survey toward the Orion A Giant Molecular Cloud in the C$^{18}$O ($J$=1--0) emission line taken with the Nobeyama Radio Observatory (NRO) 45-m telescope. The effective angular resolution of the map is 26", which corresponds to $\sim$ 0.05 pc at a distance of 414 pc. By using the Herschel-Planck H$_2$ column density map, we calculate the C$^{18}$O fractional abundance and find that it is roughly constant over the column density range of $\lesssim$ 5 $\times$ 10$^{22}$ cm$^{-2}$, although a trend of C$^{18}$O depletion is found toward higher column densities. Therefore, the C$^{18}$O intensity can follow the cloud structure reasonably well. The mean C$^{18}$O abundance in Orion A is estimated to be 5.7$\times$10$^{-7}$, which is about 3 times larger than the fiducial value. We identified 746 C$^{18}$O cores with astrodendro and classified 709 cores as starless cores. We compute the core masses by decomposing the Herschel-Planck dust column density using the relative proportions of the C$^{18}$O integrated intensities of the line-of-sight components. Applying this procedure, we attempt to remove the contribution of the background emission, i.e., the ambient gas outside the cores. We then derive the core mass function (CMF) for starless cores and find that it resembles the stellar initial mass function (IMF). The CMF for starless cores, $dN/dM$, is fitted with a power-law relation $M^\alpha$ with a power index of $\alpha = -$2.25$\pm$ 0.16 at the high-mass end ($\gtrsim$ 0.44 $M_\odot$). We also find that the ratio of each core mass to the total mass integrated along the line of sight is significantly large. Therefore, previous studies that derived core masses from dust images alone are likely to have overestimated those masses by at least a factor of a few, and accordingly may have underestimated the star formation efficiency of individual cores.
astrophysics
This paper considers the temporal discretization of an inverse problem subject to a time-fractional diffusion equation. Firstly, the convergence of the L1 scheme is established for an arbitrary sectorial operator of spectral angle $< \pi/2$, that is, an operator whose resolvent set contains $\{z\in\mathbb C\setminus\{0\}:\ |\operatorname{Arg} z|< \theta\}$ for some $\pi/2 < \theta < \pi$. The relationship between the time-fractional order $\alpha \in (0, 1)$ and the constants in the error estimates is precisely characterized, revealing that the L1 scheme is robust as $\alpha$ approaches $1$. Then an inverse problem for a fractional diffusion equation is analyzed, and a convergence analysis of its temporal discretization is given. Finally, numerical results are provided to confirm the theoretical results.
mathematics
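For reference, the L1 scheme referred to above is the standard piecewise-linear approximation of the Caputo derivative on a uniform grid $t_n = n\tau$:

$$ \partial_t^{\alpha} u(t_n) \;\approx\; \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{k=0}^{n-1} b_k \bigl( u^{n-k} - u^{n-k-1} \bigr), \qquad b_k = (k+1)^{1-\alpha} - k^{1-\alpha}, $$

where $u^m \approx u(t_m)$. The robustness statement concerns how the constants in the resulting error bounds behave as $\alpha \to 1^-$.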
Applications based on aggregates of magnetic nanoparticles are becoming increasingly widespread, ranging from hyperthermia to magnetic recording. However, although some uses require collective behavior while others need a more individual-like response, the conditions leading to either of these behaviors are still poorly understood. Here we use nanoscale-uniform binary random dense mixtures with different proportions of oxide magnetic nanoparticles with low$/$high anisotropy as a valuable tool to explore the crossover from individual to collective behavior. Two different anisotropy scenarios have been studied in two series of binary compacts: M1, comprising maghemite ($\gamma$-Fe$_2$O$_3$) nanoparticles of different sizes (9.0 nm $/$ 11.5 nm) with barely a factor of 2 between their anisotropy energies, and M2, mixing equally-sized pure maghemite (low-anisotropy) and Co-doped maghemite (high-anisotropy) nanoparticles with a large difference in anisotropy energy (ratio $>$ 8). Interestingly, while the M1 series exhibits collective behavior typical of strongly-coupled dipolar systems, the M2 series presents a more complex scenario in which different magnetic properties resemble either "individual-like" or "collective" behavior, crucially emphasizing that the collective character must be ascribed to specific properties and not to the system as a whole. The strong differences between the two series offer new insight (systematically supported by simulations) into the subtle interplay between dipolar interactions, local anisotropy and sample heterogeneity in determining the behavior of dense assemblies of magnetic nanoparticles.
condensed matter
We consider linear groups and Lie groups over a non-Archimedean local field $\mathbb F$ for which the power map $x\mapsto x^k$ has dense image or is surjective. We prove that the group of $\mathbb F$-points of such an algebraic group is a compact extension of a unipotent group, with the order of the compact group relatively prime to $k$. This in particular shows that surjectivity of the power map for all $k$ is possible only when the group is unipotent or trivial, depending on whether the characteristic of $\mathbb F$ is zero or positive. Similar results are proved for Lie groups via the adjoint representation. To a large extent, these results extend to linear groups over local fields and global fields.
mathematics
Accurate prognosis of Glioblastoma Multiforme (GBM) plays an essential role in planning related surgeries and treatments. Conventional models of survival prediction rely on radiomic features from magnetic resonance imaging (MRI). In this paper, we propose a radiogenomic overall survival (OS) prediction approach that incorporates gene expression data with radiomic features such as shape, geometry, and clinical information. We exploit The Cancer Genome Atlas (TCGA) dataset and synthesize the missing MRI modalities using a fully convolutional network (FCN) within a conditional Generative Adversarial Network (cGAN). Meanwhile, the same FCN architecture enables tumor segmentation from the available and the synthesized MRI modalities. The proposed FCN architecture comprises octave convolution (OctConv) and a novel decoder with skip connections in spatial and channel squeeze & excitation (skip-scSE) blocks. OctConv can process low- and high-frequency features individually and improves model efficiency by reducing channel-wise redundancy. Skip-scSE applies spatial and channel-wise excitation to emphasize the essential features and uses skip connections to reduce the sparsity in the learned parameters of deeper layers. The proposed approaches are evaluated in comparative experiments with state-of-the-art models in synthesis, segmentation, and OS prediction. We observe that adding the missing MRI modality improves the segmentation prediction, that expression levels of gene markers contribute strongly to GBM prognosis prediction, and that fused radiogenomic features boost the OS estimation.
electrical engineering and systems science
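For concreteness, here is a minimal PyTorch sketch of the standard concurrent spatial and channel squeeze & excitation (scSE) unit on which the paper's skip-scSE decoder block builds; the skip-connection variant and the reduction ratio below are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze & excitation (scSE)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel excitation: squeeze spatially, then re-weight each channel.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial excitation: squeeze channels, then re-weight each location.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

x = torch.randn(2, 64, 32, 32)
print(SCSE(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```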
Evaluation for many natural language understanding (NLU) tasks is broken: unreliable and biased systems score so highly on standard benchmarks that there is little room for researchers who develop better systems to demonstrate their improvements. The recent trend of abandoning IID benchmarks in favor of adversarially-constructed, out-of-distribution test sets ensures that current models will perform poorly, but ultimately only obscures the abilities that we want our benchmarks to measure. In this position paper, we lay out four criteria that we argue NLU benchmarks should meet. We argue that most current benchmarks fail to meet these criteria, and that adversarial data collection does not meaningfully address the causes of these failures. Instead, restoring a healthy evaluation ecosystem will require significant progress in the design of benchmark datasets, the reliability with which they are annotated, their size, and the ways they handle social bias.
computer science
Which matrix norms induce proper measures for quantifying quantum coherence? We study this problem for two important classes of norms and show that (i) coherence measures cannot be induced by any unitary similarity invariant norm, and (ii) the $\ell_{q,p}$-norm induces a coherence measure if and only if $q=1$ and $1 \leq p \leq 2$, thus giving a new class of coherence measures with simple closed forms that are easy to compute. These results extend and unify previously known facts about norm-induced coherence measures, and lead to a broader framework for understanding what functionals can be coherence measures.
quantum physics
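For orientation (with the caveat that index-ordering conventions for matrix $\ell_{q,p}$-norms vary), the norm in question is typically

$$ \|A\|_{q,p} \;=\; \Bigl( \sum_{j} \Bigl( \sum_{i} |a_{ij}|^{q} \Bigr)^{p/q} \Bigr)^{1/p}, $$

and a norm $\|\cdot\|$ induces a coherence candidate as the distance to the incoherent (diagonal) states, $C(\rho) = \min_{\delta \in \mathcal I} \|\rho - \delta\|$. For $q = p = 1$ this reduces to the familiar $\ell_1$ coherence $C_{\ell_1}(\rho) = \sum_{i \neq j} |\rho_{ij}|$.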
Measurements of the isotopic abundances in meteoritic amino acids have found enhancements of $^2$H/H, $^{15}$N/$^{14}$N, and $^{13}$C/$^{12}$C in the amino acids in the meteorites studied. We show that they are consistent with the processing of the constituents of the meteorites by electron anti-neutrinos that would be expected from a core-collapse supernova or neutron-star merger. Using theoretical electron anti-neutrino cross sections we are able to predict these isotopic ratio variations depending on the time-integrated anti-neutrino flux at the site where the amino acids were processed.
astrophysics
The absolute zeta function for a scheme $X$ of finite type over $\mathbb{Z}$ satisfying a certain condition is defined as the limit as $p\to 1$ of the congruent zeta function for $X\otimes\mathbb{F}_p$. In 2016, after calculating absolute zeta functions for a few specific schemes, Kurokawa suggested that an absolute zeta function for a general scheme of finite type over $\mathbb{Z}$ should have an infinite product structure which he called the absolute Euler product. In this article, formulating his suggestion using a torsion free Noetherian $\mathbb{F}_1$-scheme defined by Connes and Consani, we give a proof of his suggestion. Moreover, we show that each factor of the absolute Euler product is derived from the counting function of the $\mathbb{F}_1$-scheme.
mathematics
We reveal the emergence of environment-induced spontaneous synchronization between two spin-1/2 quantum objects in a collision model setting. In particular, we determine the conditions for the dynamical establishment of synchronous evolution between local spin observables of a pair of spins undergoing open-system dynamics in the absence of an external drive. Exploiting the versatility of the collision model framework, we show that the formation of quantum or classical correlations between the principal spin pair are of no significant relevance to the manifestation of spontaneous quantum synchronization between them. Furthermore, we discuss the consequences of thermal effects on the environmental spins for the emergence of quantum synchronization.
quantum physics
Given a collection of $n$ points in $\mathbb{R}^d$, the goal of the $(k,z)$-clustering problem is to find a subset of $k$ "centers" that minimizes the sum of the $z$-th powers of the Euclidean distance of each point to the closest center. Special cases of the $(k,z)$-clustering problem include the $k$-median and $k$-means problems. Our main result is a unified two-stage importance sampling framework that constructs an $\varepsilon$-coreset for the $(k,z)$-clustering problem. Compared to the results for $(k,z)$-clustering in [Feldman and Langberg, STOC 2011], our framework saves an $\varepsilon^2 d$ factor in the coreset size. Compared to the results for $(k,z)$-clustering in [Sohler and Woodruff, FOCS 2018], our framework saves a $\operatorname{poly}(k)$ factor in the coreset size and avoids the $\exp(k/\varepsilon)$ term in the construction time. Specifically, our coreset for $k$-median ($z=1$) has size $\tilde{O}(\varepsilon^{-4} k)$ which, when compared to the result in [Sohler and Woodruff, FOCS 2018], saves a $k$ factor in the coreset size. Our algorithmic results rely on a new dimensionality reduction technique that connects two well-known shape-fitting problems, subspace approximation and clustering, and may be of independent interest. We also provide a size lower bound of $\Omega\left(k\cdot \min \left\{2^{z/20},d \right\}\right)$ for a $0.01$-coreset for $(k,z)$-clustering, which has a linear dependence on $k$ and an exponential dependence on $z$ that matches our algorithmic results.
computer science
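As a hedged sketch of the generic two-stage template (rough solution, then importance sampling), here is a standard sensitivity-sampling coreset for $k$-means ($z=2$); the sensitivity bound and parameters are textbook choices, not the paper's refined construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def sensitivity_coreset(X, k, m, rng):
    """Stage 1: rough k-means solution. Stage 2: sample points with
    probability proportional to a sensitivity bound, weight by 1/(m*p)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    cost = np.sum((X - km.cluster_centers_[km.labels_]) ** 2, axis=1)
    sizes = np.bincount(km.labels_, minlength=k)
    s = cost / cost.sum() + 1.0 / sizes[km.labels_]  # common sensitivity bound
    p = s / s.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    return X[idx], 1.0 / (m * p[idx])

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 5))
coreset, weights = sensitivity_coreset(X, k=5, m=100, rng=rng)
print(coreset.shape, round(weights.sum()))  # weights sum to roughly n = 2000
```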
In a recent paper we presented evidence for the occurrence of Leray-like singularities with positive Sedov-Taylor exponent $\alpha$ in turbulent flows recorded in Modane's wind tunnel, by looking at simultaneous acceleration and velocity records. Here we use another tool which yields further information on the dynamics of turbulent bursts. We compare the structure functions for velocity and acceleration in the same turbulent flows. This points to the possible contribution of other types of self-similar solutions, because this new study shows that the statistics are seemingly dominated by singularities with small positive or even negative values of the exponent $\alpha$, corresponding to "weakly singular" solutions with singular acceleration but regular velocity. We present several reasons why the exponent $\alpha$ derived from the structure function curves may turn out to be negative.
physics
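For readers outside turbulence, the structure functions compared here are the moments of velocity (or acceleration) increments; for the velocity field,

$$ S_p(\ell) \;=\; \bigl\langle\, |u(x+\ell) - u(x)|^{p} \,\bigr\rangle, $$

which in Kolmogorov 1941 self-similar scaling would behave as $S_p(\ell) \sim C_p\,(\varepsilon \ell)^{p/3}$; the singular events discussed above are probed through the behavior of such moments.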
We offer a survey of recent results on covariance estimation for heavy-tailed distributions. By unifying ideas scattered in the literature, we propose user-friendly methods that facilitate practical implementation. Specifically, we introduce element-wise and spectrum-wise truncation operators, as well as their $M$-estimator counterparts, to robustify the sample covariance matrix. Different from the classical notion of robustness characterized by the breakdown property, we focus on tail robustness, which is evidenced by the connection between nonasymptotic deviation and confidence level. The key observation is that the estimators need to adapt to the sample size, the dimensionality of the data and the noise level to achieve an optimal tradeoff between bias and robustness. Furthermore, to facilitate practical use, we propose data-driven procedures that automatically calibrate the tuning parameters. We demonstrate their applications to a series of structured models in high dimensions, including bandable and low-rank covariance matrices and sparse precision matrices. Numerical studies lend strong support to the proposed methods.
statistics
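A minimal numpy sketch of the element-wise truncation idea, assuming the standard truncation $\psi_\tau(u) = \mathrm{sign}(u)\min(|u|,\tau)$ applied to the entry-wise products; the robust centering and the rate-inspired choice of $\tau$ below are simplifications of the data-driven calibration the survey proposes.

```python
import numpy as np

def truncated_covariance(X, tau):
    """Element-wise truncated covariance: cap large products at tau before
    averaging, trading a small bias for robustness to heavy tails."""
    n, d = X.shape
    Xc = X - np.median(X, axis=0)  # robust centering (a simplification)
    S = np.zeros((d, d))
    for j in range(d):
        P = Xc[:, j:j + 1] * Xc  # products x_ij * x_ik for all k
        S[j] = np.mean(np.sign(P) * np.minimum(np.abs(P), tau), axis=0)
    return (S + S.T) / 2.0

rng = np.random.default_rng(4)
X = rng.standard_t(df=3, size=(500, 10))        # heavy-tailed data, cov = 3*I
tau = np.sqrt(X.shape[0] / np.log(X.shape[1]))  # rate-inspired truncation level
err = np.linalg.norm(truncated_covariance(X, tau) - 3.0 * np.eye(10), ord=2)
print(err)
```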
Wave scattering from a cylinder with a tensor impedance surface is investigated based on the Lorentz-Mie theory. A practical example of such a cylinder is a subwavelength metallic rod with helical dielectric-filled corrugations. The investigation is performed with the aim of maximizing the scattering cross-section by tailoring the surface impedance of cylindrical scatterers. For normally incident TEz and TMz waves, the required surface impedance of a subwavelength cylinder can be produced by longitudinal (axial) and transverse (circumferential) corrugations, respectively. It is shown that such corrugations induce superscattering at multiple frequencies, which can be widely tuned with the size and/or permittivity of the dielectric-filled corrugations. In the microwave band, this effect is demonstrated to be robust to material losses and is validated against full-wave simulations and experiment. For the TEz waves, the enhanced scattering from the cylinder is found to have a broad frequency bandwidth, provided that the relative permittivity of the corrugations is low or equal to unity. In the latter case, the corrugated cylinder acts as an all-metal superscatterer. For such cylinders, near-field measurements are performed and provide the first experimental evidence of the superscattering phenomenon for all-metal objects. In addition to multifrequency superscattering, the dielectric-filled corrugations are shown to provide multifrequency cloaking of the cylinder under incidence of the TMz waves. Simultaneous superscattering and cloaking at multiple frequencies distinguishes corrugated cylinders from other known practicable scatterers for potential applications in antenna design, sensing, and energy harvesting.
physics
Distributed quantum sensing can provide quantum-enhanced sensitivity beyond the shot-noise limit (SNL) for sensing spatially distributed parameters. To date, distributed quantum sensing experiments have mostly been performed in laboratory environments without real spatial separation of the sensors. In addition, post-selection is normally assumed in demonstrating a sensitivity advantage over the SNL. Here, we demonstrate distributed quantum sensing in the field and show an unconditional violation (without post-selection) of the SNL by up to 0.916 dB for a field distance of 240 m. The achievement is based on a loophole-free Bell test setup with entangled photon pairs at an averaged heralding efficiency of 73.88%. Moreover, to test quantum sensing in real life, we demonstrate the experiment over long distances (with 10-km fiber) together with the sensing of a completely random and unknown parameter. The results represent an important step towards a practical quantum sensing network for widespread applications.
quantum physics
We study the extent to which wide neural networks may be approximated by Gaussian processes when initialized with random weights. It is a well-established fact that as the width of a network goes to infinity, its law converges to that of a Gaussian process. We make this quantitative by establishing explicit convergence rates for the central limit theorem in an infinite-dimensional functional space, metrized with a natural transportation distance. We identify two regimes of interest; when the activation function is polynomial, its degree determines the rate of convergence, while for non-polynomial activations, the rate is governed by the smoothness of the function.
mathematics
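A small Monte Carlo illustration of the qualitative statement (not of the quantitative rates, which are the paper's contribution): for a one-hidden-layer ReLU network with i.i.d. Gaussian weights, the output at a fixed input becomes increasingly Gaussian as the width grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(size=4)  # a fixed network input

def network_output(width):
    """One-hidden-layer ReLU network, i.i.d. N(0,1) weights, 1/sqrt(width) scaling."""
    W = rng.normal(size=(width, x.size))
    v = rng.normal(size=width)
    return v @ np.maximum(W @ x, 0.0) / np.sqrt(width)

for width in (4, 64, 1024):
    samples = np.array([network_output(width) for _ in range(4000)])
    # Excess kurtosis tends to 0 (the Gaussian value) as the width grows.
    print(width, round(stats.kurtosis(samples), 3))
```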
We calculate the spectra, electric dipole transition rates and isotope shifts of the super heavy elements Ds (Z=110), Rg (Z=111) and Cn (Z=112) and their ions. These calculations were performed using a recently developed, efficient version of the ab initio configuration interaction combined with perturbation theory to treat distant effects. The successive ionization potentials of the three elements are also calculated and compared to lighter elements.
physics
We present the MKID Exoplanet Camera (MEC), a z-through-J-band (800 - 1400 nm) integral field spectrograph located behind the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system at the Subaru Telescope on Maunakea that utilizes Microwave Kinetic Inductance Detectors (MKIDs) as the enabling technology for high contrast imaging. MEC is the first permanently deployed near-infrared MKID instrument and is designed to operate both as an IFU and as a focal plane wavefront sensor in a multi-kHz feedback loop with SCExAO. The read-noise-free, fast time-domain information attainable with MKIDs allows for the direct probing of fast speckle fluctuations that currently limit the performance of most ground-based high contrast imaging systems and will help MEC achieve its ultimate goal of reaching contrasts of $10^{-7}$ at 2$\lambda / D$. Here we outline the instrument details of MEC, including the hardware, firmware, and data reduction and analysis pipeline. We then discuss MEC's current on-sky performance and end with future upgrades and plans.
astrophysics
In this work, we study the dynamics of particles around Bennu. The goal is to understand the stability, evolution, and final outcome of the simulated particles around the asteroid. According to the results, the particle sizes can be divided into two main groups depending on their behavior. Particles smaller than a centimeter are quickly removed from the system by solar radiation pressure, while the dynamics of particles larger than a few centimeters is dominated by the gravitational field of Bennu. Because of its shape and spin period, Bennu has eight equilibrium points around it. The structure of the phase space near its equatorial surface is directly connected to these equilibrium points. Therefore, we performed numerical simulations to obtain information about the orbital evolution near the equilibrium points. The results show that most of the particles larger than a few centimeters fall in the equatorial region close to the Kingfisher area or close to the region diametrically opposite to it. In contrast, almost none of these particles fall in the equatorial region close to the Osprey area. In addition, we also performed computational experiments considering a spherical cloud of particles initially orbiting Bennu. Most of the particles in prograde orbits fall on the surface within our integration period, which was limited to 1.14 years. The particles preferentially fall near high-altitude regions at low equatorial latitudes and close to the north pole. The mid-latitudes are those more depleted of falls, as in the Nightingale and Sandpiper areas.
astrophysics