Columns: text — string (length 11 to 9.77k); label — string (length 2 to 104)
Stars are dynamic entities in constant evolution during their lives. On non-secular time scales (from seconds to years), dynamical processes such as convection, rotation, and magnetic fields can modify the stellar oscillations. Convection excites acoustic modes in solar-like stars, while rotation and magnetic fields can perturb the oscillation frequencies, lifting the degeneracy in the azimuthal component m of the eigenfrequencies. Moreover, the interaction between rotation, convection, and magnetic fields can produce magnetic dynamos, which sometimes yield regular magnetic activity cycles. In this chapter we review how stellar dynamics can be studied and explain what long-term seismic observations can bring to the understanding of this field. Thus, we show how we can study some properties of the convective time scales operating in a star like the Sun. We also compare the stratified information we can obtain on the internal (radial) differential rotation of main-sequence solar-like stars to that of the Sun and of more evolved subgiants and giants. We complement this information on the internal rotation with the determination of the surface (latitudinal differential) rotation obtained directly from the light curves. Indeed, when stars are active there can be spots on their surfaces dimming the light emitted. When the star rotates, the emitted light is modulated by the presence of these spots with a period corresponding to the rotation rate at the active latitudes (where the spots develop). We conclude this chapter by discussing the seismology of fast-rotating stars and, from a theoretical point of view, the current challenges in inferring properties of the internal structure and dynamics of intermediate- and high-mass stars.
astrophysics
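The spot-modulation technique described in the abstract above is simple enough to sketch numerically: a dark spot on a rotating surface imprints a periodic dip on the light curve, and a periodogram peak recovers the rotation period. A minimal illustration on synthetic data (all parameters are invented, not from any real target):

```python
import numpy as np

# Synthetic light curve: a star with a spot rotating every 5 days.
rng = np.random.default_rng(0)
dt, n = 0.1, 4000                      # cadence [days], number of samples
t = dt * np.arange(n)
p_rot = 5.0                            # true rotation period [days]
flux = 1.0 - 0.01 * np.cos(2 * np.pi * t / p_rot) + 0.002 * rng.standard_normal(n)

# Periodogram via FFT of the mean-subtracted flux; the peak marks the rotation rate.
power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=dt)
p_rec = 1.0 / freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency bin

print(f"recovered rotation period: {p_rec:.2f} d")
```

Real light curves need unevenly sampled periodograms (e.g. Lomb-Scargle) and care with spot evolution, but the principle is the same.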
We argue that the confined and deconfined phases in gauge theories are connected by a partially deconfined phase (i.e. SU(M) in SU(N), where M<N, is deconfined), which can be stable or unstable depending on the details of the theory. When this phase is unstable, it is the gauge theory counterpart of the small black hole phase in the dual string theory. Partial deconfinement is closely related to the Gross-Witten-Wadia transition, and is likely to be relevant to the QCD phase transition. The mechanism of partial deconfinement is related to a generic property of a class of systems. As an instructive example, we demonstrate the similarity between the Yang-Mills theory/string theory and a mathematical model of the collective behavior of ants [Beekman et al., Proceedings of the National Academy of Sciences, 2001]. By identifying the D-brane, open string and black hole with the ant, pheromone and ant trail, respectively, we find that the dynamics of the two systems closely resemble each other and that qualitatively the same phase structures are obtained.
high energy physics theory
We introduce an on-shell renormalization scheme in which the mass parameter of the minimal MS scheme is replaced with the pole mass obtained from the loop-order expansion of the pole mass in the MS scheme. As a consequence, the quartic coupling constant remains the same as that of the MS scheme, and the vacuum expectation value receives contributions from the one-particle-irreducible diagrams. We also show the renormalization group invariance of the pole mass in this scheme.
high energy physics phenomenology
Since its discovery, 1E 1048.1$-$5937 has been one of the most active magnetars, both in terms of radiative outbursts and changes to its spin properties. Here we report on a continuing monitoring campaign with the Neil Gehrels Swift Observatory X-ray Telescope in which we observe two new outbursts from this source. The first outburst occurred in 2016 July, and the second in 2017 December, reaching peak 0.5-10 keV absorbed fluxes of $3.2^{+0.2}_{-0.3}\times 10^{-11}$ erg s$^{-1}$ cm$^{-2}$ and $2.2^{+0.2}_{-0.2}\times10^{-11}$ erg s$^{-1}$ cm$^{-2}$, respectively, factors of $\sim$5 and $\sim 4$ above the quiescent flux. Both new outbursts were accompanied by spin-up glitches with amplitudes of $\Delta\nu= 4.47(6)\times10^{-7}$ Hz and $\Delta\nu= 4.32(5)\times10^{-7}$ Hz, respectively. Following the 2016 July outburst, we observe, as for past outbursts, a period of delayed torque fluctuations, which reach a peak spin-down of $1.73\pm0.01$ times the quiescent rate, and which dominates the spin evolution compared to the spin-up glitches. We also report an observation near the peak of the first of these outbursts with NuSTAR in which hard X-ray emission is detected from the source. This emission is well characterized by an absorbed blackbody plus a broken power law, with a power-law index of $0.5_{-0.2}^{+0.3}$ above the break at $13.4\pm0.6$ keV, similar to those observed in both persistent and transient magnetars. The hard X-ray results are broadly consistent with models of electron/positron cooling in twisted magnetic field bundles in the outer magnetosphere. However, the repeated outbursts and associated torque fluctuations in this source remain puzzling.
astrophysics
Efimov physics is drastically affected by the change of spatial dimensions. Efimov states occur in a three-dimensional (3D) environment, but disappear in two (2D) and one (1D) dimensions. In this paper, dedicated to the memory of Prof. Faddeev, we review some recent theoretical advances related to the effect of dimensionality on the Efimov phenomenon, considering three-boson systems interacting by a zero-range potential. We start with a very ideal case with no physical scales, pass to a system with finite energies in the Born-Oppenheimer (BO) approximation, and finish with a general system. The appearance of the Efimov effect essentially rests on two conditions, which can be revealed by the BO approximation: the effective potential is proportional to $1/R^2$ ($R$ is the relative distance between the heavy particles), and its strength is smaller than the critical value given by $-(D-2)^2/4$, where $D$ is the effective dimension.
physics
The intensity buildup of light inside a lossy microring resonator can be used to enhance the generation of squeezed states via spontaneous parametric downconversion (SPDC). In this work, we model the generation of squeezed light in a microring resonator that is pumped with a Gaussian pulse via a side-coupled channel waveguide. We theoretically determine the optimum pump pulse duration and ring-to-channel coupling constant to minimize the quadrature noise (maximize the squeezing) in the ring for a fixed input pump energy. We derive approximate analytic expressions for the optimal coupling and pump pulse duration as a function of scattering loss in the ring. These results will enable researchers to easily determine the optimal design of microring resonator systems for the generation of quadrature-squeezed states.
physics
We study motivic Chern classes of cones. First we show examples of projective cones of smooth curves such that their various $K$-classes (sheaf theoretic, push-forward and motivic) are all different. Then we show connections between the torus equivariant motivic Chern class of a projective variety and of its affine cone, generalizing results on projective Thom polynomials.
mathematics
Differential rotation is the basis of the solar dynamo theory. Synoptic maps of He I intensity from Carrington rotations 2032 to 2135 are utilized to investigate the differential rotation of the solar chromosphere in the He I absorption line. The chromosphere is surprisingly found to rotate faster than the photosphere below it. The anomalous heating of the chromosphere and corona has long been a major problem in modern astronomy. It is speculated that the small-scale magnetic elements with magnetic flux in the range of $(2.9 - 32.0)\times 10^{18}$ Mx, which are anchored in the leptocline, heat the quiet chromosphere to produce the anomalous temperature increase, causing it to rotate at the same rate as the leptocline. The degree of differential rotation in the chromosphere is found to be strengthened by strong magnetic fields; in stark contrast, at the photosphere strong magnetic fields suppress the degree of differential rotation. A plausible explanation is given for these findings.
astrophysics
Semantic web technologies have contributed effective solutions to the problems of data integration and knowledge graph creation. However, with the rapid growth of big data in diverse domains, various interoperability issues remain to be addressed, with scalability being one of the main challenges. In this paper, we address the problem of knowledge graph creation at scale and provide MapSDI, a mapping rule-based framework for optimizing semantic data integration into knowledge graphs. MapSDI allows for the efficient semantic enrichment of large-sized, heterogeneous, and potentially low-quality data. The input of MapSDI is a set of data sources and mapping rules expressed in a mapping language such as RML. First, MapSDI pre-processes the sources based on semantic information extracted from the mapping rules, by performing basic database operators; it projects out required attributes, eliminates duplicates, and selects relevant entries. All these operators are defined based on the knowledge encoded in the mapping rules, which is then used by the semantification engine (or RDFizer) to produce a knowledge graph. We have empirically studied the impact of MapSDI on existing RDFizers, and observed that knowledge graph creation time can be reduced by one order of magnitude on average. It is also shown, theoretically, that the source and rule transformations provided by MapSDI are data-lossless.
computer science
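The pre-processing that MapSDI performs before the RDFizer runs — projection onto the attributes the mapping rules actually reference, selection of mappable entries, and duplicate elimination — can be pictured with plain relational operators. A toy sketch (the rule format, attribute names, and data are invented for illustration, not MapSDI's actual interface):

```python
# Hypothetical setup: only these attributes are referenced by the RML-style
# rules, and only rows with a non-empty subject key can be mapped.
referenced_attrs = ["id", "name", "country"]

rows = [
    {"id": "1", "name": "Ana", "country": "ES", "internal_code": "x9"},
    {"id": "1", "name": "Ana", "country": "ES", "internal_code": "y7"},  # duplicate after projection
    {"id": "",  "name": "Bob", "country": "DE", "internal_code": "z2"},  # no key -> not mappable
    {"id": "2", "name": "Eve", "country": "FR", "internal_code": "q1"},
]

# 1) Projection: keep only the attributes the mapping rules mention.
projected = [{a: r[a] for a in referenced_attrs} for r in rows]
# 2) Selection: drop entries no rule can map (empty subject key here).
selected = [r for r in projected if r["id"]]
# 3) Duplicate elimination before the expensive semantification step.
deduped = [dict(t) for t in {tuple(sorted(r.items())) for r in selected}]

print(len(rows), "->", len(deduped), "rows reach the RDFizer")
```

The saving comes from shrinking the input the RDFizer has to semantify, which is exactly where the order-of-magnitude speed-up is claimed.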
The Five-hundred-meter Aperture Spherical radio Telescope (FAST), the largest single dish radio telescope in the world, has implemented an innovative technology for its huge reflector, which changes the shape of the primary reflector from spherical to that of a paraboloid of 300 m aperture. Here we explore how the current FAST sensitivity can potentially be further improved by increasing the illuminated area (i.e., the aperture of the paraboloid embedded in the spherical surface). Alternatively, the maximum zenith angle can be increased to give greater sky coverage by decreasing the illuminated aperture. Different parabolic apertures within the FAST capability are analyzed in terms of how far the spherical surface would have to move to approximate a paraboloid. The sensitivity of FAST can be improved by approximately 10% if the aperture of the paraboloid is increased from 300 m to 315 m. The parabolic aperture lies within the main spherical surface and does not extend beyond its edge. The maximum zenith angle can be increased to approximately 35 degrees from 26.4 degrees, if we decrease the aperture of the paraboloid to 220 m. This would still give a sensitivity similar to the Arecibo 305 m radio telescope. Radial deviations between paraboloids of different apertures and the spherical surfaces of differing radii are also investigated. Maximum zenith angles corresponding to different apertures of the paraboloid are further derived. A spherical surface with a different radius can provide a reference baseline for shape-changing applied through active reflector technology to FAST-like telescopes.
astrophysics
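The kind of radial-deviation calculation the abstract refers to can be sketched directly: given a spherical cap of radius $R$ and an axis-aligned paraboloid, scan over focal lengths to minimize the worst-case sag difference across the aperture. The 300 m sphere / 300 m aperture numbers follow the quoted geometry, but this simplified model (common vertex, focal-length scan only) is ours:

```python
import numpy as np

R = 300.0                              # radius of the spherical surface [m]
r = np.linspace(0.0, 150.0, 2001)      # radial coordinate across a 300 m aperture

sphere_sag = R - np.sqrt(R**2 - r**2)  # depth of the sphere below its vertex

# Scan focal lengths; for each, record the worst-case |sphere - paraboloid| gap.
best_f, best_dev = None, np.inf
for f in np.linspace(130.0, 170.0, 4001):
    dev = np.max(np.abs(sphere_sag - r**2 / (4.0 * f)))
    if dev < best_dev:
        best_f, best_dev = f, dev

print(f"best focal length ~ {best_f:.1f} m, max radial deviation ~ {best_dev:.2f} m")
```

The sub-metre deviation this yields is what the active reflector has to absorb; allowing a vertex offset (as a real fit would) reduces it further.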
Observations of ammonia in interstellar environments have revealed high levels of deuteration, and all its D-containing variants, including ND$_3$, have been detected in cold prestellar cores and around young protostars. The observation of these deuterated isotopologues is very useful to elucidate the chemical and physical processes taking place during the very early stages of star formation, as the abundance of deuterated molecules is highly enhanced in dense and cold gas. Nitrogen hydride radicals are key species lying at the very beginning of the reaction pathway leading to the formation of NH$_3$ and organic molecules of pre-biotic interest, but relatively little is known about their D-bearing isotopologues. To date, only ND has been detected in the interstellar gas. To aid the identification of further deuterated nitrogen radicals, we have thoroughly re-investigated the rotational spectrum of NHD employing two different instruments: a frequency-modulation submillimetre spectrometer operating in the THz region and a synchrotron-based Fourier-transform infrared spectrometer operating in the 50-240 cm$^{-1}$ wavelength range. NHD was produced in a plasma of NH$_3$ and D$_2$. A wide range of rotational energy levels has been probed thanks to the observation of high $N$ (up to 15) and high $K_a$ (up to 9) transitions. A global analysis including our new data and those already available in the literature has provided a comprehensive set of very accurate spectroscopic parameters. A highly reliable line catalogue has been generated to assist archival data searches and future astronomical observations of NHD in the submillimetre and THz regimes.
astrophysics
In this paper, we develop the theory of flag manifold over a semifield for any Kac-Moody root datum. We show that the flag manifold over a semifield admits a natural action of the monoid over that semifield associated with the Kac-Moody datum and admits a cellular decomposition. This extends the previous work of Lusztig, Postnikov, Rietsch and others on the totally nonnegative flag manifolds (of finite type) and the work of Lusztig, Speyer, Williams on the tropical flag manifolds (of finite type). As a by-product, we prove a conjecture of Lusztig on the duality of totally nonnegative flag manifold of finite type.
mathematics
In this work we study the phase sensitivity of generic linear interferometric schemes using Gaussian resources and measurements. Our formalism is based on the Fisher information. This allows us to separate the contributions of the measurement scheme, the experimental imperfections, and auxiliary systems. We demonstrate the strength of this formalism using a broad class of multimode Gaussian states that includes well-known results from single- and two-mode metrology scenarios. Using this, we prove that input coherent states or squeezing beat the non-classical states proposed in preceding boson-sampling-inspired phase-estimation schemes. We also develop a novel polychromatic interferometric protocol, demonstrating an enhanced sensitivity with respect to two-mode squeezed-vacuum states, for which the ideal homodyne detection is formally shown to be optimal.
quantum physics
We compute the Picard group of the moduli stack of smooth curves of genus $g$ for $3\leq g\leq 5$, using methods of equivariant intersection theory. We base our proof on the computation of some relations in the integral Chow ring of certain moduli stacks of smooth complete intersections. As a byproduct, we compute the cycle classes of some divisors on $\mathcal{M}_g$.
mathematics
Known topological quantum matter, including topological insulators and Dirac/Weyl semimetals, often hosts robust boundary states in the gaps between bulk bands in energy-momentum space. Beyond one-gap systems, quantum crystals may also feature more than one inter-band gap. The manifestation of higher-fold topology with multiple nontrivial gaps in quantum materials remains elusive. In this work, we leverage a photoemission spectroscopy probe to discover the multi-gap topology of a chiral fermion material. We identify two sets of chiral surface states. These Fermi arcs exhibit an emergent ladder structure in energy-momentum space, unprecedented in topological materials. Furthermore, we determine the multi-gap chiral charge $\textbf{C}=(2,2)$. Our results provide a general framework to explore future complex topological materials.
condensed matter
We consider a general system with weakly broken time and translation symmetries. We assume the system also possesses a $U(1)$ symmetry which is not only weakly broken, but is anomalous. We use second-order chiral quasi-hydrodynamics to compute the magneto-conductivities in the system in the presence of a weak magnetic field. Analogous to the electrical and thermoelectric conductivities, it turns out that the thermal conductivity is identified with a coefficient which depends on the mixed gauge-gravitational anomaly. By applying our general formulas to a free system of Weyl fermions in the low-temperature limit $T\ll \mu$, we find that our system is Onsager reciprocal if the relaxation in all energy, momentum and charge channels occurs at the same rate. In the high-temperature limit $T\gg \mu$, we consider a strongly coupled $SU(N_c)$ gauge theory with $N_c\gg1$ in the hydrodynamic limit. Its gravity dual is a magnetized charged brane, to which we apply our formulas and compute the conductivities. On the way, we show that, analogous to the weak regime, an energy cut-off emerges to regulate the thermodynamic quantities. From this gravity background we also find the coefficients of the chiral magnetic effect in agreement with the well-known result of Son and Surowka.
high energy physics theory
We introduce a new boundedness condition for affine permutations, motivated by the fruitful concept of periodic boundary conditions in statistical physics. We study pattern avoidance in bounded affine permutations. In particular, we show that if $\tau$ is one of the finite increasing oscillations, then every $\tau$-avoiding affine permutation satisfies the boundedness condition. We also explore the enumeration of pattern-avoiding affine permutations that can be decomposed into blocks, using analytic methods to relate their exact and asymptotic enumeration to that of the underlying ordinary permutations. Finally, we perform exact and asymptotic enumeration of the set of all bounded affine permutations of size $n$. A companion paper will focus on avoidance of monotone decreasing patterns in bounded affine permutations.
mathematics
In this paper we present the results of the first low frequency all-sky search for continuous gravitational wave signals conducted on Virgo VSR2 and VSR4 data. The search covered the full sky, a frequency range between 20 Hz and 128 Hz with a range of spin-down between $-1.0 \times 10^{-10}$ Hz/s and $+1.5 \times 10^{-11}$ Hz/s, and was based on a hierarchical approach. The starting point was a set of short Fast Fourier Transforms (FFT), of length 8192 seconds, built from the calibrated strain data. Aggressive data cleaning, both in the time and frequency domains, has been performed in order to remove, as much as possible, the effect of disturbances of instrumental origin. On each dataset a number of candidates has been selected, using the FrequencyHough transform in an incoherent step. Only coincident candidates between VSR2 and VSR4 have been examined in order to strongly reduce the false alarm probability, and the most significant candidates have been selected. The selected candidates have been subjected to a follow-up by constructing a new set of longer FFTs followed by a further incoherent analysis, still based on the FrequencyHough transform. No evidence for continuous gravitational wave signals was found, therefore we have set a population-based joint VSR2-VSR4 90$\%$ confidence level upper limit on the dimensionless gravitational wave strain in the frequency range between 20 Hz and 128 Hz. This is the first all-sky search for continuous gravitational waves conducted, on data of ground-based interferometric detectors, at frequencies below 50 Hz. We set upper limits in the range between about $10^{-24}$ and $2\times 10^{-23}$ at most frequencies. Our upper limits on signal strain show an improvement of up to a factor of $\sim$2 with respect to the results of previous all-sky searches at frequencies below $80~\mathrm{Hz}$.
astrophysics
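The first stage of the hierarchical pipeline — chopping the strain series into short FFTs and picking the loudest frequency bin of each — can be sketched on synthetic data. The real search uses 8192 s FFTs, careful cleaning, and the FrequencyHough transform; the segment length, sampling rate, and amplitudes below are toy values:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 256.0                       # sampling rate [Hz] (toy value)
f_signal = 50.0                  # toy continuous-wave frequency [Hz]
seg_len = 1024                   # samples per short FFT (toy, not 8192 s)
n_seg = 16

t = np.arange(n_seg * seg_len) / fs
strain = 0.5 * np.sin(2 * np.pi * f_signal * t) + rng.standard_normal(t.size)

# Peakmap: loudest bin of each short FFT segment.
peaks = []
for k in range(n_seg):
    seg = strain[k * seg_len:(k + 1) * seg_len]
    spec = np.abs(np.fft.rfft(seg))
    peaks.append(np.fft.rfftfreq(seg_len, d=1 / fs)[np.argmax(spec[1:]) + 1])

# A persistent signal is coincident across segments; noise peaks are not.
print("peak frequencies:", sorted(set(peaks)))
```

The incoherent step then looks for such coincident peaks over the sky and spin-down grid, which is what keeps the false-alarm probability manageable.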
We present heyoka, a new, modern and general-purpose implementation of Taylor's integration method for the numerical solution of ordinary differential equations. Detailed numerical tests focused on difficult high-precision gravitational problems in astrodynamics and celestial mechanics show how our general-purpose integrator is competitive with and often superior to state-of-the-art specialised symplectic and non-symplectic integrators in both speed and accuracy. In particular, we show how Taylor methods are capable of satisfying Brouwer's law for the conservation of energy in long-term integrations of planetary systems over billions of dynamical timescales. We also show how close encounters are modelled accurately during simulations of the formation of the Kirkwood gaps and of Apophis' 2029 close encounter with the Earth (where heyoka surpasses the speed and accuracy of domain-specific methods). heyoka can be used from both C++ and Python, and it is publicly available as an open-source project.
astrophysics
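The core idea behind Taylor integration — advancing the solution with a truncated Taylor series whose derivatives follow from the ODE itself by recurrence — can be illustrated on the scalar problem $y' = y$, where every derivative equals the current state. This toy stepper is ours, not heyoka's implementation:

```python
from math import exp, factorial

def taylor_step(y, h, order=20):
    """One Taylor step for y' = y: here y^(i) = y for all i, so each term
    of the truncated series is y * h**i / i!."""
    return sum(y * h**i / factorial(i) for i in range(order + 1))

y, h = 1.0, 0.1
for _ in range(10):              # integrate y' = y from t = 0 to t = 1
    y = taylor_step(y, h)

print(y)                         # should be very close to e = exp(1)
```

The high order is what lets Taylor methods take large steps at tight tolerances; production integrators like heyoka generate the coefficient recurrences automatically for arbitrary right-hand sides and compile them to machine code.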
Metric learning for classification has been intensively studied over the last decade. The idea is to learn a metric space induced from a normed vector space on which data from different classes are well separated. Different measures of the separation thus lead to various designs of the objective function in the metric learning model. One classical metric is the Mahalanobis distance, where a linear transformation matrix is designed and applied to the original dataset to obtain a new subspace equipped with the Euclidean norm. The kernelized version has also been developed, followed by Multiple-Kernel learning models. In this paper, we consider metric learning to be the identification of the best kernel function with respect to a high class separability in the corresponding metric space. The contribution is twofold: 1) no pairwise computations are required, unlike in most metric learning techniques; 2) better flexibility and lower computational complexity are achieved using the CLAss-Specific (Multiple) Kernel - Metric Learning (CLAS(M)K-ML). The proposed techniques can be considered as a preprocessing step to any kernel method or kernel approximation technique. An extension to a hierarchical learning structure is also proposed to further improve the classification performance, where on each layer, the CLASMK is computed based on a selected "marginal" subset and feature vectors are constructed by concatenating the features from all previous layers.
computer science
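One way to picture "selecting the kernel with the best class separability, without pairwise constraints" is to score each candidate kernel by mean within-class minus mean between-class similarity and keep the best. This scoring rule is a simplified stand-in for the CLASMK objective, not the paper's actual criterion:

```python
import numpy as np

def separability(K, y):
    """Mean within-class minus mean between-class kernel similarity."""
    same = y[:, None] == y[None, :]
    return K[same].mean() - K[~same].mean()

rng = np.random.default_rng(2)
# Two well-separated Gaussian blobs in 2-D (toy data).
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
best = max((separability(np.exp(-g * d2), y), g) for g in [0.01, 0.1, 1.0, 10.0])
print(f"best RBF gamma: {best[1]}, separability: {best[0]:.3f}")
```

Note the score is computed once per kernel from the full Gram matrix, with no per-pair optimization constraints, which is the flavour of efficiency the abstract claims.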
Loss minimization in distribution networks (DNs) is of great significance, since the trend toward distributed generation (DG) requires the most efficient operating scenario possible for economic viability. Moreover, voltage instability in DNs is a critical phenomenon and can lead to a major blackout in the system. A decreasing voltage stability level restricts the increase of load served by distribution companies. DG can be used to improve DN capabilities and brings new opportunities to traditional DNs. However, installation of DG in non-optimal places can result in an increase in system losses, voltage problems, etc. In this paper, the genetic algorithm (GA), the harmony search algorithm (HSA) and an improved HSA have been applied to determine the optimal location of DGs. Simulation results for an IEEE 33-bus network are compared for the different algorithms, and the algorithm giving minimum losses is identified.
electrical engineering and systems science
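The placement search itself can be sketched with a tiny genetic algorithm over candidate buses. The `losses` function below is a made-up stand-in for a proper load-flow evaluation of an IEEE 33-bus feeder (in reality each candidate would be scored by running a power flow):

```python
import random

random.seed(3)
N_BUS = 33

def losses(bus):
    """Hypothetical stand-in for a load-flow loss evaluation [kW]."""
    return 210.0 - 120.0 * (1.0 - abs(bus - 18) / 18.0)   # toy optimum near bus 18

pop = [random.randrange(1, N_BUS + 1) for _ in range(20)]  # random DG buses
for _ in range(40):                               # generations
    pop.sort(key=losses)                          # fitness = low losses
    parents = pop[:10]                            # elitist selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a + b) // 2                      # crossover: midpoint bus
        if random.random() < 0.2:                 # mutation: shift one bus
            child = min(N_BUS, max(1, child + random.choice((-1, 1))))
        children.append(child)
    pop = parents + children

best = min(pop, key=losses)
print(f"best DG bus: {best}, losses: {losses(best):.1f} kW")
```

HSA and improved HSA replace the crossover/mutation loop with harmony-memory improvisation, but the evaluate-and-select skeleton is the same.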
We investigate a generic discrete quantum system prepared in state $|\psi_\text{in}\rangle$, under repeated detection attempts aimed at finding the particle in state $|d\rangle$, for example a quantum walker on a finite graph searching for a node. For the corresponding classical random walk, the total detection probability $P_\text{det}$ is unity. Due to destructive interference, one may find initial states $|\psi_\text{in}\rangle$ with $P_\text{det}<1$. We first obtain an uncertainty relation which yields insight on this deviation from classical behavior, showing the relation between $P_\text{det}$ and energy fluctuations: $ \Delta P \,\mathrm{Var}[\hat{H}]_d \ge | \langle d| [\hat{H}, \hat{D}] | \psi_\text{in} \rangle |^2$ where $\Delta P = P_\text{det} - |\langle\psi_\text{in}|d\rangle |^2$, and $\hat{D} = |d\rangle\langle d|$ is the measurement projector. Secondly, exploiting symmetry we show that $P_\text{det}\le 1/\nu$ where the integer $\nu$ is the number of states equivalent to the initial state. These bounds are compared with the exact solution for small systems, obtained from an analysis of the dark and bright subspaces, showing the usefulness of the approach. The upper bound works well even in large systems, and we show how to tighten the lower bound in that case.
quantum physics
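Both bounds can be checked numerically on a small example of our own devising: a four-site tight-binding ring with the detector at site 0 and the walker launched from site 1. Sites 1 and 3 are reflection-equivalent, so $\nu=2$ and $P_\text{det}\le 1/2$; meanwhile $\mathrm{Var}[\hat H]_d=2$ and $|\langle d|[\hat H,\hat D]|\psi_\text{in}\rangle|^2=1$ give $P_\text{det}\ge 1/2$, so the two bounds together pin $P_\text{det}=1/2$:

```python
import numpy as np

# 4-site tight-binding ring; detector at site 0, walker starts at site 1.
H = np.zeros((4, 4))
for i in range(4):
    H[i, (i + 1) % 4] = H[(i + 1) % 4, i] = 1.0

tau = 1.0                                   # generic stroboscopic period
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T

d = np.zeros(4, complex); d[0] = 1.0        # detection state |d>
psi = np.zeros(4, complex); psi[1] = 1.0    # initial state |psi_in>
M = np.eye(4) - np.outer(d, d.conj())       # survival projector 1 - |d><d|

# Repeated detection attempts: p_n = |<d| U (M U)^(n-1) |psi>|^2, summed over n.
p_det, state = 0.0, psi.copy()
for _ in range(20000):
    state = U @ state
    p_det += abs(d.conj() @ state) ** 2
    state = M @ state                       # failed attempt: project out |d>

print(f"P_det ~ {p_det:.4f} (both bounds give 1/2)")
```

Half of the initial state sits in the dark subspace $(|1\rangle-|3\rangle)/\sqrt2$, which is exactly the interference effect the abstract describes.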
A series of oxytetrahalides WO$X_4$ ($X$: a halogen element) that form quasi-one-dimensional chains is investigated using first-principles calculations. The crystal structures, electronic structures, as well as ferroelectric and piezoelectric properties are discussed in detail. Group theory analysis shows that the ferroelectricity in this family originates from an unstable polar phonon mode $\Gamma_1^-$ induced by the W's $d^0$ orbital configuration. Their polarization magnitudes are found to be comparable to widely used ferroelectric perovskites. Because of its quasi-one-dimensional characteristics, the inter-chain domain wall energy density is low, leading to loosely-coupled ferroelectric chains. This is potentially beneficial for high density ferroelectric memories: we estimate that the upper-limit of memory density in these compounds could reach hundreds of terabytes per square inch.
condensed matter
The presence of AGB stars in clusters provides key constraints for stellar models, as has been demonstrated with historical data from the Magellanic Clouds. In this work, we look for candidate AGB stars in M31 star clusters from the Panchromatic Hubble Andromeda Treasury (PHAT) survey. Our photometric criteria select stars brighter than the tip of the red giant branch, which includes the bulk of the thermally-pulsing AGB stars as well as early-AGB stars and other luminous cool giants expected in young stellar populations (e.g. massive red supergiants, and intermediate-mass red helium-burning stars). The AGB stars can be differentiated, a posteriori, using the ages already estimated for our cluster sample. In total, 937 candidates are found within the cluster aperture radii, half of which (450) are very likely cluster members. Cross-matching with additional databases reveals two carbon stars and ten secure variables among them. The field-corrected age distribution reveals the presence of young supergiants peaking at ages smaller than 100 Myr, followed by a long tail of AGB stars extending up to the oldest possible ages. This long tail reveals the general decrease in the numbers of AGB stars from initial values of 50e-6/Msun at 100 Myr down to 5e-6/Msun at 10 Gyr. Theoretical models of near-solar metallicity reproduce this general trend, although with localized discrepancies over some age intervals, whose origin is not yet identified. The entire catalogue is released together with finding charts to facilitate follow-up studies.
astrophysics
Climate change has been identified as one of the greatest challenges facing nations, governments, businesses and citizens of the globe. The threats of climate change demand an increase in the share of renewable energy in total energy generation. Meanwhile, there are tremendous efforts to decrease the reliance on fossil fuel energies, which opens the way for increasing the usage of alternative resources such as nuclear energy. Many countries (e.g. Egypt) are planning to meet increasing electricity demands by increasing both renewable (especially wind energy) and nuclear energy contributions to electricity generation. In the planning phase of siting both new Wind Farms (WFs) and Nuclear Power Plants (NPPs), many benefits and challenges exist. An important aspect taken into consideration during NPP siting is the existence of an ultimate heat sink, which is sea water in most cases. That is why most NPPs are sited on sea coasts. On the other hand, during WF siting, the main influential aspect is the existence of good wind resources. Many coastal areas around the world fulfill this requirement for WF siting. Coupling NPPs and WFs in one site or nearby has many benefits and obstacles as well. In this thesis, based on international experience and literature reviews, the benefits and obstacles of this coupling/adjacency are studied and evaluated. Various case studies are carried out to verify the coupling/adjacency concept.
electrical engineering and systems science
Shape- and scale-selective digital filters, with steerable finite/infinite impulse responses (FIR/IIR) and non-recursive/recursive realizations, that are separable in both spatial dimensions and adequately isotropic, are derived. The filters are conveniently designed in the frequency domain via derivative constraints at dc, which guarantees orthogonality and monomial selectivity in the pixel domain (i.e. vanishing moments), unlike more commonly used FIR filters derived from Gaussian functions. A two-stage low-pass/high-pass architecture, for blur/derivative operations, is recommended. Expressions for the coefficients of a low-order IIR blur filter with repeated poles are provided, as a function of scale; discrete Butterworth (IIR), and colored Savitzky-Golay (FIR), blurs are also examined. Parallel software implementations on central processing units (CPUs) and graphics processing units (GPUs), for scale-selective blob-detection in aerial surveillance imagery, are analyzed. It is shown that recursive IIR filters are significantly faster than non-recursive FIR filters when detecting large objects at coarse scales, i.e. using filters with long impulse responses; however, the margin of outperformance decreases as the degree of parallelization increases.
electrical engineering and systems science
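The FIR/IIR cost asymmetry in the last sentence is easy to see with a box blur: the non-recursive form costs O(k) multiply-adds per output sample, while the recursive running-sum form costs O(1) per sample regardless of the window length, yet the outputs are identical. A 1-D toy version of the separable 2-D case:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(10_000)
k = 251                                   # long impulse response (coarse scale)

# Non-recursive FIR box blur: O(k) operations per output sample.
fir = np.convolve(x, np.ones(k) / k, mode="valid")

# Recursive realization: running sum updated in O(1) per sample.
out = np.empty(x.size - k + 1)
s = x[:k].sum()
out[0] = s / k
for n in range(1, out.size):
    s += x[n + k - 1] - x[n - 1]          # add the new sample, drop the oldest
    out[n] = s / k

print("max |FIR - recursive| =", np.max(np.abs(fir - out)))
```

This also hints at the GPU caveat: the recursive update carries a serial dependency from sample to sample, which is exactly what erodes its advantage as parallelization increases.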
While statistical analysis of a single network has received a lot of attention in recent years, with a focus on social networks, analysis of a sample of networks presents its own challenges which require a different set of analytic tools. Here we study the problem of classification of networks with labeled nodes, motivated by applications in neuroimaging. Brain networks are constructed from imaging data to represent functional connectivity between regions of the brain, and previous work has shown the potential of such networks to distinguish between various brain disorders, giving rise to a network classification problem. Existing approaches tend to either treat all edge weights as a long vector, ignoring the network structure, or focus on graph topology as represented by summary measures while ignoring the edge weights. Our goal is to design a classification method that uses both the individual edge information and the network structure of the data in a computationally efficient way, and that can produce a parsimonious and interpretable representation of differences in brain connectivity patterns between classes. We propose a graph classification method that uses edge weights as predictors but incorporates the network nature of the data via penalties that promote sparsity in the number of nodes, in addition to the usual sparsity penalties that encourage selection of edges. We implement the method via efficient convex optimization and provide a detailed analysis of data from two fMRI studies of schizophrenia.
statistics
Most SUSY searches at the LHC are optimised for the MSSM, where gauginos are Majorana particles. By introducing Dirac gauginos, we obtain an enriched phenomenology, from which considerable differences in the LHC signatures and limits are expected as compared to the MSSM. Concretely, in the minimal Dirac gaugino model (MDGSSM) we have six neutralino and three chargino states. Moreover, production cross sections are enhanced for gluinos, while for squarks they are suppressed. In this contribution, we explore the consequences of the current LHC limits on gluinos and squarks in this model.
high energy physics phenomenology
Using the Cayley transform, we show how to construct rotation matrices \emph{infinitely near} the identity matrix over a non-archimedean pythagorean field. As an application, we provide an alternative way to construct non-central proper normal subgroups of the rotation group over such fields.
mathematics
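Over the reals the same construction is easy to demonstrate numerically (the non-archimedean setting is of course not captured here): the Cayley transform of a skew-symmetric matrix is a proper rotation, and small skew entries give a rotation near the identity.

```python
import numpy as np

def cayley_rotation(S):
    """Cayley transform of a skew-symmetric matrix S:
    R = (I - S) @ inv(I + S), which is orthogonal with det(R) = 1."""
    n = S.shape[0]
    I = np.eye(n)
    return (I - S) @ np.linalg.inv(I + S)

# A skew-symmetric matrix with small entries yields a rotation near the identity.
eps = 1e-6
S = np.array([[0.0,  eps, 0.0],
              [-eps, 0.0, eps],
              [0.0, -eps, 0.0]])
R = cayley_rotation(S)
assert np.allclose(R.T @ R, np.eye(3))       # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)     # proper rotation
assert np.allclose(R, np.eye(3), atol=1e-5)  # near the identity
```

The same algebraic identity holds over any field where `I + S` is invertible, which is what makes the transform useful in the non-archimedean context of the paper.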
One main obstacle for the wide use of deep learning in medical and engineering sciences is its interpretability. While neural network models are strong tools for making predictions, they often provide little information about which features play significant roles in influencing the prediction accuracy. To overcome this issue, many regularization procedures for learning with neural networks have been proposed for dropping non-significant features. Unfortunately, the lack of theoretical results casts doubt on the applicability of such pipelines. In this work, we propose and establish a theoretical guarantee for the use of the adaptive group lasso for selecting important features of neural networks. Specifically, we show that our feature selection method is consistent for single-output feed-forward neural networks with one hidden layer and hyperbolic tangent activation function. We demonstrate its applicability using both simulation and data analysis.
statistics
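The penalty structure described above can be sketched as follows, assuming groups are the first-layer weight columns (one group per input feature) and the adaptive weights come from an initial unpenalized fit. This is an illustrative form; the paper's exact estimator may differ.

```python
import numpy as np

def adaptive_group_lasso_penalty(W1, W1_init, gamma=1.0, lam=0.1):
    """Adaptive group lasso over the first-layer weights of a one-hidden-layer
    network: each input feature j forms a group (column W1[:, j]), weighted by
    the inverse norm of an initial unpenalized estimate W1_init, so features
    that looked unimportant initially are penalized more heavily."""
    norms = np.linalg.norm(W1, axis=0)
    init_norms = np.linalg.norm(W1_init, axis=0)
    adaptive_w = 1.0 / np.maximum(init_norms, 1e-12) ** gamma
    return lam * np.sum(adaptive_w * norms)

def selected_features(W1, tol=1e-8):
    """A feature is selected iff its group of first-layer weights is nonzero."""
    return np.where(np.linalg.norm(W1, axis=0) > tol)[0]
```

Feature selection consistency then means that, with enough data, `selected_features` of the penalized fit recovers exactly the truly relevant inputs.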
We describe, realize, and experimentally investigate a method to perform physical rotations of ion chains, trapped in a segmented surface Paul trap, as a building block for large scale quantum computational sequences. Control of trapping potentials is achieved by parametrizing electrode voltages in terms of spherical harmonic potentials. Voltage sequences that enable crystal rotations are numerically obtained by optimizing time-dependent ion positions and motional frequencies, taking into account the effect of electrical filters in our set-up. We minimize rotation-induced heating by expanding the sequences into Fourier components, and optimizing the resulting parameters with a machine-learning approach. Optimized sequences rotate $^{40}$Ca$^+$ - $^{40}$Ca$^+$ crystals with axial heating rates of $\Delta\bar{n}_{com}=0.6^{(+3)}_{(-2)}$ and $\Delta\bar{n}_{str}=3.9(5)$ phonons per rotation for the common and stretch modes, at mode frequencies of 1.24 and 2.15 MHz. Qubit coherence loss is 0.2(2)$\%$ per rotation. We also investigate rotations of mixed species crystals ($^{40}$Ca$^+$ - $^{88}$Sr$^+$) and achieve unity success rate.
quantum physics
This article presents the motivation for developing a comprehensive modeling framework in which different models and parameter inputs can be compared and evaluated for a large range of jet-quenching observables measured in relativistic heavy-ion collisions at RHIC and the LHC. The concept of a framework is discussed within the context of recent efforts by the JET Collaboration, the authors of JEWEL, and the JETSCAPE Collaboration. The framework ingredients for each of these approaches are presented with a sample of important results from each. The role of advanced statistical tools in comparing models to data is also discussed, along with the need for a more detailed accounting of correlated errors in experimental results.
physics
High-order quantum nonlinearity is an important prerequisite for advanced quantum technology, leading to universal quantum processing with the large information capacity of continuous variables. Levitated optomechanics, a field where the motion of dielectric particles is driven by precisely controlled tweezer beams, is capable of attaining the required nonlinearity via engineered potential landscapes of mechanical motion. Importantly, to achieve nonlinear quantum effects, the evolution caused by the free motion of the mechanics and thermal decoherence have to be suppressed. For this purpose, we devise a method of stroboscopic application of a highly nonlinear potential to a mechanical oscillator that leads to motional quantum non-Gaussian states exhibiting a nonclassical negative Wigner function and squeezing of a nonlinear combination of mechanical quadratures. We test the method numerically by analysing a highly unstable cubic potential with relevant experimental parameters of levitated optomechanics, prove its feasibility within experimental reach, and propose an experimental test. The method paves the way for unique experiments that instantaneously transform the ground state of mechanical oscillators into applicable nonclassical states by a nonlinear optical force.
quantum physics
We compare bipartite (Euclidean) matching problems in classical and quantum mechanics. The quantum case is treated in terms of a quantum version of the Wasserstein distance introduced in [F. Golse, C. Mouhot, T. Paul, Commun. Math. Phys. 343 (2016), 165-205]. We show that the optimal quantum cost can be cheaper than the classical one. We treat in detail the case of two particles: the equal mass case leads to equal quantum and classical costs. Moreover, we show examples with different masses for which the quantum cost is strictly cheaper than the classical cost.
mathematics
Undersampling the k-space in MRI allows saving precious acquisition time, yet results in an ill-posed inversion problem. Recently, many deep learning techniques have been developed to address this issue of recovering the fully sampled MR image from the undersampled data. However, these learning-based schemes are susceptible to differences between the training data and the image to be reconstructed at test time. One such difference can be attributed to the bias field present in MR images, caused by field inhomogeneities and coil sensitivities. In this work, we address the sensitivity of the reconstruction problem to the bias field and propose to model it explicitly in the reconstruction, in order to decrease this sensitivity. To this end, we use an unsupervised learning based reconstruction algorithm as our basis and combine it with an N4-based bias field estimation method in a joint optimization scheme. We use the HCP dataset as well as in-house measured images for the evaluations. We show that the proposed method improves the reconstruction quality, both visually and in terms of RMSE.
electrical engineering and systems science
Solving physical problems by deep learning is accurate and efficient, largely owing to the use of an elaborate neural network. We propose a novel hybrid network which integrates two different kinds of neural networks, LSTM and ResNet, in order to overcome the difficulty met in solving the strongly oscillating dynamics of a system's time evolution. Taking the double-well model as an example, we show that our new method benefits from pre-learning and verification of the periodicity of the frequency with the LSTM network, while simultaneously making a high-fidelity prediction of the whole dynamics of the system with ResNet, which cannot be achieved with a single network alone. Such a hybrid network can be applied to solving cooperative dynamics in a system with fast spatial or temporal modulations, and is promising for realistic oscillation calculations under experimental conditions.
physics
Strain engineering of perovskite oxide thin films has proven to be an extremely powerful method for enhancing and inducing ferroelectric behavior. In ferroelectric thin films and superlattices, the polarization is intricately linked to crystal structure, but we show here that it can also play an important role in the growth process, influencing growth rates, relaxation mechanisms, electrical properties and domain structures. We have studied this effect in detail by focusing on the properties of BaTiO$_{3}$ thin films grown on very thin layers of PbTiO$_{3}$ using a combination of x-ray diffraction, piezoforce microscopy, electrical characterization and rapid in-situ x-ray diffraction reciprocal space maps during the growth using synchrotron radiation. Using a simple model we show that the changes in growth are driven by the energy cost for the top material to sustain the polarization imposed upon it by the underlying layer, and these effects may be expected to occur in other multilayer systems where polarization is present during growth. Our research motivates the concept of polarization engineering during the growth process as a new and complementary approach to strain engineering.
condensed matter
Modern approaches to causal modeling give a central role to interventions, which require the active input of an observer and introduce an explicit `causal arrow of time'. Causal models typically adopt a mechanistic interpretation, according to which the direction of the causal arrow is intrinsic to the process being studied. Here we investigate whether the direction of the causal arrow might instead be a contribution from the observer, rather than an intrinsic property of the process. Working within a counterfactual and non-mechanistic interpretation of causal modeling developed in arXiv:1806.00895, we propose a definition of a `quantum observational scheme' that we argue characterizes the observer-invariant properties of a causal model. By restricting to quantum processes that preserve the maximally mixed state (unbiasedness), we find that the statistics are symmetric under reversal of the time-ordering. The resulting model can therefore accommodate the idea that the causal arrow is observer-dependent, indicating a route towards reconciling the causal arrow with time-symmetric laws of physics.
quantum physics
We investigate the nonlinear dynamics of cold atom systems that can in principle serve as quantum simulators of false vacuum decay. The analog false vacuum manifests as a metastable vacuum state for the relative phase in a two-species Bose-Einstein condensate (BEC), induced by a driven periodic coupling between the two species. In the appropriate low energy limit, the evolution of the relative phase is approximately governed by a relativistic wave equation exhibiting true and false vacuum configurations. In previous work, a linear stability analysis identified exponentially growing short-wavelength modes driven by the time-dependent coupling. These modes threaten to destabilize the analog false vacuum. Here, we employ numerical simulations of the coupled Gross-Pitaevskii equations (GPEs) to determine the non-linear evolution of these linearly unstable modes. We find that unless a physical mechanism modifies the GPE on short length scales, the analog false vacuum is indeed destabilized. We briefly discuss various physically expected corrections to the GPEs that may act to remove the exponentially unstable modes. To investigate the resulting dynamics in cases where such a removal mechanism exists, we implement a hard UV cutoff that excludes the unstable modes as a simple model for these corrections. We use this to study the range of phenomena arising from such a system. In particular, we show that by modulating the strength of the time-dependent coupling, it is possible to observe the crossover between a second and first order phase transition out of the false vacuum.
high energy physics theory
In this work, we introduce iviz, a mobile application for visualizing ROS data. In the last few years, the popularity of ROS has grown enormously, making it the standard platform for open source robotic programming. A key reason for this success is the availability of polished, general-purpose modules for many tasks, such as localization, mapping, path planning, and quite importantly, data visualization. However, the availability of the latter is generally restricted to PCs with the Linux operating system. Thus, users that want to see what is happening in the system with a smartphone or a tablet are stuck with solutions such as screen mirroring or using web browser versions of rviz, which are difficult to interact with from a mobile interface. More importantly, this makes newer visualization modalities such as Augmented Reality impossible. Our application iviz, based on the Unity engine, addresses these issues by providing a visualization platform designed from scratch to be usable in mobile platforms, such as iOS, Android, and UWP, and including native support for Augmented Reality for all three platforms. If desired, it can also be used in a PC with Linux, Windows, or macOS without any changes.
computer science
Perturbative fermion anomalies in spacetime dimension $d$ have a well-known relation to Chern-Simons functions in dimension $D=d+1$. This relationship is manifested in a beautiful way in "anomaly inflow" from the bulk of a system to its boundary. Along with perturbative anomalies, fermions also have global or nonperturbative anomalies, which can be incorporated by using the $\eta$-invariant of Atiyah, Patodi, and Singer instead of the Chern-Simons function. Here we give a nonperturbative description of anomaly inflow, involving the $\eta$-invariant. This formula has been expected in the past based on the Dai-Freed theorem, but has not been fully justified. It leads to a general description of perturbative and nonperturbative fermion anomalies in $d$ dimensions in terms of an $\eta$-invariant in $D$ dimensions. This $\eta$-invariant is a cobordism invariant whenever perturbative anomalies cancel.
high energy physics theory
One of the molecular properties most intuitive to human perception is geometrical shape. However, when exploring a large chemical space, the determination of shape needs to be automated. We present a fast and simple approach to classify a molecule as linear, planar, cube, cuboid, disk, elliptical disk, spheroid, or sphere, which is more fine-grained than existing approaches. The method is applied to more than one billion molecules ranging from small organic molecules to whole proteins. The results show that current chemistry research is biased towards planar geometries. Moreover, we demonstrate that our molecular shape classification correlates with sought-after properties like the band gap, dipole moment, and heat capacity. This makes it possible to increase the efficiency of molecular design studies by driving high-throughput screening efforts towards desired values of molecular properties.
physics
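One plausible way to automate such a classification is via the eigenvalues of the gyration tensor of the atomic coordinates. The sketch below uses a coarser three-way split than the paper's eight categories, with a hypothetical threshold `tol`, purely to illustrate the mechanism.

```python
import numpy as np

def shape_class(coords, tol=0.1):
    """Coarse shape label from the sorted eigenvalues l1 >= l2 >= l3 of the
    gyration tensor of atomic coordinates (illustrative; the paper's
    eight-category scheme is more fine-grained)."""
    c = coords - coords.mean(axis=0)
    gyr = c.T @ c / len(c)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(gyr), reverse=True)
    if l2 / l1 < tol:
        return "linear"   # one dominant axis
    if l3 / l2 < tol:
        return "planar"   # two comparable axes, flat third
    return "3d"           # all three axes comparable

pts_line = np.column_stack([np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)])
print(shape_class(pts_line))  # linear
```

Refining the `3d` branch by the ratios l2/l1 and l3/l2 is one way the finer categories (cuboid vs. spheroid vs. sphere, etc.) could be distinguished.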
The squeezed state is important in quantum metrology and quantum information. The most effective generation tool known is the optical parametric oscillator (OPO). Currently, only the squeezed states of lower-order spatial modes can be generated by an OPO. However, the squeezed states of higher-order complex spatial modes are more useful for applications such as quantum metrology, quantum imaging and quantum information. A major challenge for future applications is efficient generation. Here, we use cascaded phase-only spatial light modulators to modulate the amplitude and phase of the incident fundamental mode squeezed state. This efficiently generates a series of squeezed higher-order Hermite-Gauss modes and a squeezed arbitrary complex amplitude distributed mode. The method may yield new applications in biophotonics, quantum metrology and quantum information processing.
quantum physics
The concept of Granger causality is increasingly being applied for the characterization of directional interactions in different applications. A multivariate framework for estimating Granger causality is essential in order to account for all the available information from multivariate time series. However, the inclusion of non-informative or non-significant variables creates estimation problems related to the 'curse of dimensionality'. To deal with this issue, direct causality measures using variable selection and dimension reduction techniques have been introduced. In this comparative work, the performance of an ensemble of bivariate and multivariate causality measures in the time domain is assessed, focusing on dimension reduction causality measures. In particular, different types of high-dimensional coupled discrete systems are used (involving up to 100 variables) and the robustness of the causality measures to time series length and different noise types is examined. The results of the simulation study highlight the superiority of the dimension reduction measures, especially for high-dimensional systems.
statistics
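For reference, the bivariate time-domain Granger causality index discussed above can be computed as the log ratio of residual variances between a restricted AR model (own lags only) and a full model (own lags plus lags of the candidate driver); a minimal sketch:

```python
import numpy as np

def granger_index(x, y, p=1):
    """Bivariate time-domain Granger causality index from x to y:
    log ratio of residual variances of the restricted AR model of y
    (own lags only) and the full model (own lags plus lags of x)."""
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])
    def rss(X):
        D = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
        return np.mean((Y - D @ beta) ** 2)
    return np.log(rss(own) / rss(full))

# x drives y with one lag; the index from x to y should clearly dominate.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_index(x, y) > granger_index(y, x))  # expect True
```

In the multivariate setting the full model conditions on all other observed series, which is where the curse of dimensionality and the dimension-reduction measures enter.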
We discuss exact plaquette-ordered ground states of the generalized Hubbard model based on the projection operator method for several corner sharing lattices: Kagome, checkerboard, and pyrochlore lattices. The obtained exact ground states are interpreted as N\'eel ordered states on the plaquette-located electrons. We demonstrate that these models also have exact edge states. We also calculate the entanglement entropy exactly in these systems.
condensed matter
We propose a novel solution for semi-supervised video object segmentation. By the nature of the problem, available cues (e.g. video frame(s) with object masks) become richer with the intermediate predictions. However, existing methods are unable to fully exploit this rich source of information. We resolve the issue by leveraging memory networks and learning to read relevant information from all available sources. In our framework, the past frames with object masks form an external memory, and the current frame, as the query, is segmented using the mask information in the memory. Specifically, the query and the memory are densely matched in the feature space, covering all space-time pixel locations in a feed-forward fashion. In contrast to previous approaches, the abundant use of guidance information allows us to better handle challenges such as appearance changes and occlusions. We validate our method on the latest benchmark sets and achieve state-of-the-art performance (overall score of 79.4 on the Youtube-VOS val set, J of 88.7 and 79.2 on the DAVIS 2016/2017 val sets, respectively) while maintaining a fast runtime (0.16 second/frame on the DAVIS 2016 val set).
computer science
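The dense query-memory matching step can be sketched as softmax attention over dot-product similarities between query and memory feature vectors, retrieving a weighted sum of memory values. Feature extraction and the network's exact normalization are omitted; this is only the readout mechanism.

```python
import numpy as np

def memory_read(query_keys, mem_keys, mem_values):
    """Dense space-time memory read: every query location attends to every
    memory location via a softmax over scaled dot-product similarities,
    then retrieves a weighted sum of memory values.
    Shapes: query_keys (Nq, C), mem_keys (Nm, C), mem_values (Nm, D)."""
    sim = query_keys @ mem_keys.T / np.sqrt(query_keys.shape[1])
    sim -= sim.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)       # rows sum to 1
    return w @ mem_values
```

Because every past frame contributes key/value pairs, adding frames to the memory enriches the readout without changing the computation, which is the property the method exploits.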
We extend the key notion of Martin-L\"of randomness for infinite bit sequences to the quantum setting, where the sequences become states of an infinite dimensional system. We work towards showing an analogy with the Levin-Schnorr theorem to characterise quantum ML-randomness of states by incompressibility (in the sense of quantum Turing machines) of all initial segments.
quantum physics
Cosmological CPT violation will rotate the polarized direction of CMB photons, convert partial CMB E mode into B mode and vice versa. It will generate non-zero EB, TB spectra and change the EE, BB, TE spectra. This phenomenon gives us a way to detect the CPT-violation signature from CMB observations, and also provides a new mechanism to produce B mode polarization. In this paper, we perform a global analysis on the tensor-to-scalar ratio $r$ and polarization rotation angles based on current CMB datasets with both low $\ell$ (Planck, BICEP2/Keck Array) and high $\ell$ (POLARBEAR, SPTpol, ACTPol). Benefiting from the high precision of CMB data, we obtain the isotropic rotation angle $\bar{\alpha} = -0.01^\circ \pm 0.37^\circ $ at 68% C.L., the variance of the anisotropic rotation angles $C^{\alpha}(0)<0.0032\,\mathrm{rad}^2$, the scale invariant power spectrum $D^{\alpha\alpha}_{\ell \in [2, 350]}<4.71\times 10^{-5} \,\mathrm{rad}^2$ and $r<0.057$ at 95% C.L. Our result shows that with the polarization rotation effect, the 95% upper limit on $r$ gets tightened by 17%.
astrophysics
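For an isotropic rotation angle $\bar\alpha$ (and assuming no primordial EB or TB correlations), the observed spectra mix according to the standard relations:

```latex
\begin{aligned}
C_\ell^{\prime EE} &= C_\ell^{EE}\cos^2 2\bar\alpha + C_\ell^{BB}\sin^2 2\bar\alpha,\\
C_\ell^{\prime BB} &= C_\ell^{EE}\sin^2 2\bar\alpha + C_\ell^{BB}\cos^2 2\bar\alpha,\\
C_\ell^{\prime EB} &= \tfrac{1}{2}\left(C_\ell^{EE} - C_\ell^{BB}\right)\sin 4\bar\alpha,\\
C_\ell^{\prime TE} &= C_\ell^{TE}\cos 2\bar\alpha, \qquad
C_\ell^{\prime TB} = C_\ell^{TE}\sin 2\bar\alpha.
\end{aligned}
```

The non-zero EB and TB spectra in the first and last lines are the CPT-violation signatures the global fit constrains.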
We argue that the necessity of picking an imaginary time direction for the analytic continuation to Minkowski signature gives a new point of view on the problem of Euclidean spinor fields, allowing an interpretation in Minkowski space-time of one of the chiral factors of Spin(4)=SU(2)xSU(2) as an internal symmetry. The imaginary time direction spontaneously breaks this SU(2), playing the role of the Higgs Field. Twistor geometry provides a compelling framework for formulating spinor fields in complexified four-dimensional space-time and implementing the above suggestion. Projective twistor space PT naturally includes an internal SU(3) symmetry as well as the above SU(2), and spinors on this space behave like a generation of leptons. Since only one chirality of the Euclidean Spin(4) is a space-time symmetry after analytic continuation and the Higgs field defines the imaginary time direction, the space-time geometry degrees of freedom are only a chiral SU(2) connection and a spatial frame. These may allow a consistent quantization of gravity in a chiral formulation, unified in the twistor framework with the degrees of freedom of the Standard Model. This unification proposal is incomplete, still requiring implementation as a theory with gauge symmetry on PT, perhaps related to known correspondences between super Yang-Mills theories and supersymmetric holomorphic Chern-Simons theories on PT.
high energy physics theory
The Indian Scintillator Matrix for Reactor Anti-Neutrino detection - ISMRAN experiment aims to detect electron anti-neutrinos ($\bar\nu_e$) emitted from a reactor via the inverse beta decay (IBD) reaction. The setup, consisting of a 1-ton segmented array of gadolinium-foil-wrapped plastic scintillators, is planned for remote reactor monitoring and sterile neutrino searches. The detection of the prompt positron and the delayed neutron from IBD provides the signature of a $\bar\nu_e$ event in ISMRAN. The number of segments with energy deposits ($\mathrm{N_{bars}}$) and the sum total of these deposited energies are used as discriminants for identifying prompt positron events and delayed neutron capture events. However, a simple cut-based selection on the above variables leads to a low $\bar\nu_e$ signal detection efficiency due to the overlapping regions of $\mathrm{N_{bars}}$ and sum energy for the prompt and delayed events. Multivariate analysis (MVA) tools, employing variables suitably tuned for discrimination, can be useful in such scenarios. In this work we report the results from the application of an artificial neural network -- the multilayer perceptron (MLP), particularly its Bayesian extension, MLPBNN -- to the simulated signal and background events in ISMRAN. The results from applying the MLP to classify prompt positron events from delayed neutron capture events on hydrogen and gadolinium nuclei, as well as from typical reactor $\gamma$-ray and fast neutron backgrounds, are reported. An enhanced efficiency of $\sim$91$\%$ with a background rejection of $\sim$73$\%$ for the prompt selection, and an efficiency of $\sim$89$\%$ with a background rejection of $\sim$71$\%$ for the delayed capture event, is achieved using the MLPBNN classifier for the ISMRAN experiment.
physics
Federated learning has emerged as an innovative paradigm of collaborative machine learning. Unlike conventional machine learning, a global model is collaboratively learned while data remains distributed over a tremendous number of client devices, thus not compromising user privacy. However, several challenges remain despite its growing popularity; above all, global aggregation in federated learning involves the challenges of biased model averaging and a lack of prior knowledge in client sampling, which lead to high generalization error and a slow convergence rate, respectively. In this work, we propose a novel algorithm called FedCM that addresses the two challenges by utilizing prior knowledge with multi-armed bandit based client sampling and filtering biased models with combinatorial model averaging. Based on extensive evaluations using various algorithms and representative heterogeneous datasets, we show that FedCM significantly outperforms state-of-the-art algorithms, by up to 37.25% in generalization accuracy and 4.17 times in convergence rate.
computer science
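FedCM's exact bandit rule is not spelled out in the abstract; a generic UCB-style client sampler illustrates the idea of using accumulated prior knowledge in client selection. The reward signal (e.g. observed loss reduction when a client participated) and all names here are illustrative assumptions, not FedCM's specification.

```python
import numpy as np

def ucb_select(rewards, counts, t, k, c=1.0):
    """Pick k clients by upper-confidence-bound scores: mean observed
    utility per client plus an exploration bonus for rarely sampled
    clients. rewards[i] is the cumulative utility of client i, counts[i]
    the number of rounds it has participated in, t the current round."""
    means = rewards / np.maximum(counts, 1)
    bonus = c * np.sqrt(np.log(max(t, 2)) / np.maximum(counts, 1))
    scores = means + bonus
    return np.argsort(scores)[-k:]   # indices of the k highest scores
```

Such a sampler trades off exploiting clients whose updates helped in the past against exploring clients that have rarely been sampled, addressing the "lack of prior knowledge" issue the abstract raises.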
These notes are intended to introduce the study of the nonlinear aspects of operator space theory. We investigate some results on the nonlinear theory of Banach spaces which remain valid in the noncommutative case. In particular, we show that Ribe's theorem has a complete analog for preduals of von Neumann algebras, and we introduce the concept of the Lipschitz free operator space of an operator space. We use these to prove results about injective von Neumann algebras, Pisier's operator space OH, and the existence of complete linear isometric embeddings between operator spaces.
mathematics
Based on the Kubo formulas, we employ the (3+1)-d parton cascade Boltzmann approach of multiparton scatterings (BAMPS) to calculate the anisotropic transport coefficients (shear viscosity and electric conductivity) of an ultrarelativistic Boltzmann gas in the presence of a magnetic field. The results are compared with those recently obtained using Grad's approximation. We find good agreement between the two, which confirms the general applicability of the derived Kubo formulas for calculating the anisotropic transport coefficients of the quark-gluon plasma in a magnetic field.
high energy physics phenomenology
We construct a $d=11$ supergravity analogue of the open-closed string map in the context of SL(5) Exceptional Field Theory (ExFT). The deformation parameter tri-vector $\Omega$ generalizes the non-commutativity bi-vector parameter $\Theta$ of the open string. When applied to solutions in $d=11$, this map provides an economical way of performing TsT deformations, and may be used to recover $d=10$ Yang-Baxter deformations after dimensional reduction. We present a generalization of the Classical Yang-Baxter Equation (CYBE) for rank 3 objects, which emerges from $d=11$ supergravity and the SL(5) ExFT. This equation is shown to reduce to the $d=10$ CYBE upon dimensional reduction.
high energy physics theory
A plenoptic light field (LF) camera places an array of microlenses in front of an image sensor in order to separately capture different directional rays arriving at an image pixel. Using a conventional Bayer pattern, data captured at each pixel is a single color component (R, G or B). The sensed data then undergoes demosaicking (interpolation of RGB components per pixel) and conversion to an array of sub-aperture images (SAIs). In this paper, we propose a new LF image coding scheme based on graph lifting transform (GLT), where the acquired sensor data are coded in the original captured form without pre-processing. Specifically, we directly map raw sensed color data to the SAIs, resulting in sparsely distributed color pixels on 2D grids, and perform demosaicking at the receiver after decoding. To exploit spatial correlation among the sparse pixels, we propose a novel intra-prediction scheme, where the prediction kernel is determined according to the local gradient estimated from already coded neighboring pixel blocks. We then connect the pixels by forming a graph, modeling the prediction residuals statistically as a Gaussian Markov Random Field (GMRF). The optimal edge weights are computed via a graph learning method using a set of training SAIs. The residual data is encoded via low-complexity GLT. Experiments show that at high PSNRs -- important for archiving and instant storage scenarios -- our method significantly outperformed a conventional light field image coding scheme with demosaicking followed by High Efficiency Video Coding (HEVC).
electrical engineering and systems science
We study a first-order primal-dual subgradient method to optimize risk-constrained risk-penalized optimization problems, where risk is modeled via the popular conditional value at risk (CVaR) measure. The algorithm processes independent and identically distributed samples from the underlying uncertainty in an online fashion, and produces an $\eta/\sqrt{K}$-approximately feasible and $\eta/\sqrt{K}$-approximately optimal point within $K$ iterations with constant step-size, where $\eta$ increases with tunable risk-parameters of CVaR. We find optimized step sizes using our bounds and precisely characterize the computational cost of risk aversion as revealed by the growth in $\eta$. Our proposed algorithm makes a simple modification to a typical primal-dual stochastic subgradient algorithm. With this mild change, our analysis surprisingly obviates the need for a priori bounds or complex adaptive bounding schemes for dual variables assumed in many prior works. We also draw interesting parallels in sample complexity with that for chance-constrained programs derived in the literature with a very different solution architecture.
mathematics
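The empirical CVaR that such risk constraints are built on is simply the average of the worst tail of sampled losses; a minimal sketch (here `alpha` denotes the tail fraction, one of the two common conventions):

```python
import numpy as np

def cvar(samples, alpha):
    """Empirical conditional value at risk: the mean of the worst
    (largest-loss) alpha-fraction of the samples."""
    losses = np.sort(np.asarray(samples, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[:k].mean()

print(cvar([1, 2, 3, 4], 0.5))  # mean of the two largest losses -> 3.5
```

The primal-dual method in the paper never forms this tail average explicitly; it processes one sample per iteration, but this is the quantity its constraint bounds in expectation.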
We predict that triangle singularities of hadron spectroscopy are strongly affected in heavy ion collisions. To this end, we examine various effects of finite temperature on the triangle loop yielding the singularity within the hadron phase. Pion-containing triangles can be enhanced by particle exchange with the medium, but in other cases, especially with heavy-quark hadrons, known thermal effects on the particle masses and widths can quickly suppress the singularity: at temperatures of about 150 MeV, below the transition to a quark-gluon plasma, even by two orders of magnitude. It appears that peaks seen in central heavy ion collisions are more likely to be hadrons than rescattering effects, unless perhaps a pion is involved in the triangle. The medium then acts as a spectroscopic filter.
high energy physics phenomenology
Conventional multiuser detection techniques either require a large number of antennas at the receiver for a desired performance, or they are too complex for practical implementation. Moreover, many of these techniques, such as successive interference cancellation (SIC), suffer from errors in parameter estimation (user channels, covariance matrix, noise variance, etc.) that is performed before detection of user data symbols. As an alternative to conventional methods, this paper proposes and demonstrates a low-complexity practical Machine Learning (ML) based receiver that achieves similar (and at times better) performance to the SIC receiver. The proposed receiver does not require parameter estimation; instead it uses supervised learning to detect the user modulation symbols directly. We perform comparisons with minimum mean square error (MMSE) and SIC receivers in terms of symbol error rate (SER) and complexity.
electrical engineering and systems science
We give a strong direct sum theorem for computing $\mathrm{XOR} \circ g$. Specifically, we show that for every function $g$ and every $k\geq 2$, the randomized query complexity of computing the XOR of $k$ instances of $g$ satisfies $\overline{R}_{\epsilon}(\mathrm{XOR}\circ g) = \Theta(k \overline{R}_{\epsilon/k}(g))$. This matches the naive success amplification upper bound and answers a conjecture of Blais and Brody (CCC 2019). As a consequence of our strong direct sum theorem, we give a total function $g$ for which $R(\mathrm{XOR} \circ g) = \Theta(k \log(k)\cdot R(g))$, answering an open question from Ben-David et al. (arXiv:2006.10957).
computer science
In this second paper of a series, we discuss the dynamics of a plasma entering the precursor of an unmagnetized, relativistic collisionless pair shock. We discuss how this background plasma is decelerated and heated through its interaction with a microturbulence that results from the growth of a current filamentation instability (CFI) in the shock precursor. We make use, in particular, of the reference frame $\mathcal R_{\rm w}$ in which the turbulence is mostly magnetic. This frame moves at relativistic velocities towards the shock front at rest, decelerating gradually from the far to the near precursor. In a first part, we construct a fluid model to derive the deceleration law of the background plasma expected from the scattering of suprathermal particles off the microturbulence. This law leads to the relationship $\gamma_{\rm p}\,\sim\,\xi_{\rm b}^{-1/2}$ between the background plasma Lorentz factor $\gamma_{\rm p}$ and the normalized pressure of the beam $\xi_{\rm b}$; it is found to match nicely the spatial profiles observed in large-scale 2D3V particle-in-cell simulations. In a second part, we model the dynamics of the background plasma at the kinetic level, incorporating the inertial effects associated with the deceleration of $\mathcal R_{\rm w}$ into a Vlasov-Fokker-Planck equation for pitch-angle diffusion. We show how the effective gravity in $\mathcal R_{\rm w}$ drives the background plasma particles through friction on the microturbulence, leading to efficient plasma heating. Finally, we compare a Monte Carlo simulation of our model with dedicated PIC simulations and conclude that it can satisfactorily reproduce both the heating and the deceleration of the background plasma in the shock precursor, thereby providing a successful 1D description of the shock transition at the microscopic level.
astrophysics
We aim to demonstrate the scientific potential of the Gaia Early Data Release 3 (EDR3) for the study of the Milky Way structure and evolution. We used astrometric positions, proper motions, parallaxes, and photometry from EDR3 to select different populations and components and to calculate the distances and velocities in the direction of the anticentre. We explore the disturbances of the current disc, the spatial and kinematical distributions of early accreted versus in-situ stars, the structures in the outer parts of the disc, and the orbits of the open clusters Berkeley 29 and Saurer 1. We find that: i) the dynamics of the Galactic disc are very complex, with vertical asymmetries and new correlations, including a bimodality in which disc stars with large angular momentum move vertically upwards from below the plane, while disc stars with slightly lower angular momentum move preferentially downwards; ii) we resolve the kinematic substructure (diagonal ridges) in the outer parts of the disc for the first time; iii) the red sequence that has been associated with the proto-Galactic disc present at the time of the merger with Gaia-Enceladus-Sausage is currently radially concentrated up to around 14 kpc, while the blue sequence that has been associated with debris of the satellite extends beyond that; iv) there are density structures in the outer disc, both above and below the plane, most probably related to Monoceros, the Anticentre Stream, and TriAnd, for which the Gaia data allow an exhaustive selection of candidate member stars and a dynamical study; and v) the open clusters Berkeley 29 and Saurer 1, despite being located at large distances from the Galactic centre, are on nearly circular disc-like orbits. We demonstrate how, once again, the Gaia data are crucial for our understanding of the different pieces of our Galaxy and their connection to its global structure and history.
astrophysics
The interplay of shaped signaling and fiber nonlinearities is reviewed in the asymptotic and finite-length regime. We present explanations and discuss implications of an optimum shaping length of just a few hundred symbols.
electrical engineering and systems science
We consider estimating average treatment effects (ATE) of a binary treatment in observational data when data-driven variable selection is needed to select relevant covariates from a moderately large number of available covariates $\mathbf{X}$. To leverage covariates among $\mathbf{X}$ that are predictive of the outcome for efficiency gain, while using regularization to fit a parametric propensity score (PS) model, we consider a dimension reduction of $\mathbf{X}$ based on fitting both working PS and outcome models using adaptive LASSO. A novel PS estimator, the Double-index Propensity Score (DiPS), is proposed, in which the treatment status is smoothed over the linear predictors for $\mathbf{X}$ from both of the initial working models. The ATE is estimated by using the DiPS in a normalized inverse probability weighting (IPW) estimator, which is found to maintain double-robustness and also local semiparametric efficiency with a fixed number of covariates $p$. Under misspecification of the working models, the smoothing step leads to gains in efficiency and robustness over traditional doubly-robust estimators. These results are extended to the case where $p$ diverges with sample size and the working models are sparse. Simulations show the benefits of the approach in finite samples. We illustrate the method by estimating the ATE of statins on colorectal cancer risk in an electronic medical record (EMR) study and the effect of smoking on C-reactive protein (CRP) in the Framingham Offspring Study.
statistics
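The normalized IPW estimator that the DiPS propensity estimates are plugged into has a simple generic (Hájek) form. The sketch below uses a known constant propensity score in a randomized toy simulation rather than the paper's DiPS fit, so the function name and simulation setup are illustrative only:

```python
import numpy as np

def normalized_ipw_ate(y, d, ps):
    # Hajek (normalized) inverse-probability-weighted ATE estimate:
    # weighted mean of treated outcomes minus weighted mean of controls,
    # with weights d/ps and (1-d)/(1-ps), each normalized to sum to one.
    w1 = d / ps
    w0 = (1 - d) / (1 - ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

# Toy check: randomized treatment (true ps = 0.5), true ATE = 2.
rng = np.random.default_rng(0)
n = 20000
d = rng.binomial(1, 0.5, n)
y = 2.0 * d + rng.standard_normal(n)
ate_hat = normalized_ipw_ate(y, d, np.full(n, 0.5))
```

Any fitted propensity score, including a smoothed double-index one, can be passed in place of the constant `ps` array; the normalization makes the estimator invariant to rescaling the weights.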
This work studies Semantic Scene Completion, which aims to predict a 3D semantic segmentation of our surroundings even though some areas are occluded. For this we construct a Bayesian Convolutional Neural Network (BCNN), which is not only able to perform the segmentation but can also predict model uncertainty, an important feature not present in standard CNNs. We show on the MNIST dataset that, in terms of accuracy, precision, and recall, the Bayesian approach performs as well as or better than the standard CNN when processing digits unseen in the training phase, with the added benefits of better-calibrated scores and the ability to express model uncertainty. We then show results for the Semantic Scene Completion task where a category is introduced at test time on the SUNCG dataset. In this more complex task the Bayesian approach outperforms the standard CNN, showing a better Intersection over Union score and excelling in Average Precision and separation scores.
computer science
This paper provides estimation and inference methods for an identified set where the selection among a very large number of covariates is based on modern machine learning tools. I characterize the boundary of the identified set (i.e., its support function) using a semiparametric moment condition. Combining Neyman-orthogonality and sample-splitting ideas, I construct a root-N consistent, uniformly asymptotically Gaussian estimator of the support function and propose a weighted bootstrap procedure to conduct inference about the identified set. I provide a general method to construct a Neyman-orthogonal moment condition for the support function. Applying my method to the endogenous selection model of Lee (2008), I provide the asymptotic theory for the sharp (i.e., the tightest possible) bounds on the Average Treatment Effect in the presence of high-dimensional covariates. Furthermore, I relax the conventional monotonicity assumption and allow the sign of the treatment effect on selection (e.g., employment) to be determined by covariates. Using the JobCorps data set with very rich baseline characteristics, I substantially tighten the bounds on the JobCorps effect on wages under the weakened monotonicity assumption.
statistics
Model-based algorithms, which learn a dynamics model from logged experience and perform some sort of pessimistic planning under the learned model, have emerged as a promising paradigm for offline reinforcement learning (offline RL). However, practical variants of such model-based algorithms rely on explicit uncertainty quantification for incorporating pessimism, and uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable. We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-action tuples generated via rollouts under the learned model. This results in a conservative estimate of the value function for out-of-support state-action tuples, without requiring explicit uncertainty estimation. We theoretically show that our method optimizes a lower bound on the true policy value, that this bound is tighter than that of prior methods, and that our approach satisfies a policy improvement guarantee in the offline setting. Through experiments, we find that COMBO consistently performs as well as or better than prior offline model-free and model-based methods on widely studied offline RL benchmarks, including image-based tasks.
computer science
Continuous speech separation plays a vital role in complicated speech-related tasks such as conversation transcription. The separation model extracts a single-speaker signal from mixed speech. In this paper, we use the transformer and conformer in lieu of recurrent neural networks in the separation system, as we believe that capturing global information with self-attention-based methods is crucial for speech separation. Evaluated on the LibriCSS dataset, the conformer separation model achieves state-of-the-art results, with a relative 23.5% word error rate (WER) reduction from bi-directional LSTM (BLSTM) in the utterance-wise evaluation and a 15.4% WER reduction in the continuous evaluation.
electrical engineering and systems science
The emergence of various exciton-related effects in transition metal dichalcogenides (TMDC) and their heterostructures has inspired a significant number of studies and brought forth several possible applications. Often, standard photoluminescence (PL) with microscale lateral resolution is utilized to identify and characterize these excitonic phenomena, including interlayer excitons (IEXs). We studied the local PL signatures of van der Waals heterobilayers composed of exfoliated monolayers of the (Mo,W)(S,Se)$_2$ TMDC family with high spatial resolution (down to 30 nm) using tip-enhanced photoluminescence (TEPL), with different stacking orders (top/bottom) and on different substrates. We show that other PL signals may appear near the reported energy of the IEX transitions, potentially interfering with the interpretation of the results. While we can distinguish and confirm the presence of IEX-related PL in MoS$_2$-WS$_2$ and MoSe$_2$-WSe$_2$, we find no such feature in the MoS$_2$-WSe$_2$ heterobilayer in the spectral region of 1.7-1.4 eV, where the IEXs of this heterobilayer are often reported. We assign the extra signals to the PL of the individual monolayers, in which the exciton energy is altered by the local strains caused by the formation of blisters and nanobubbles, and the PL is strongly enhanced due to the decoupling of the layers. We show that even a single nanobubble as small as 60 nm (hence not optically visible) can induce such a spurious PL feature in the micro-PL spectrum of an otherwise flat heterobilayer.
condensed matter
Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain when the two domains have distinctive data distributions. The essence of domain adaptation is thus to mitigate the distribution divergence between the two domains. State-of-the-art methods put this idea into practice by either conducting adversarial training or minimizing a metric that quantifies the distribution gap. In this paper, we propose a new domain adaptation method named Adversarial Tight Match (ATM), which enjoys the benefits of both adversarial training and metric learning. Specifically, we first propose a novel distance loss, named Maximum Density Divergence (MDD), to quantify the distribution divergence. MDD minimizes the inter-domain divergence ("match" in ATM) and maximizes the intra-class density ("tight" in ATM). Then, to address the equilibrium challenge in adversarial domain adaptation, we leverage the proposed MDD within an adversarial domain adaptation framework. Finally, we tailor the proposed MDD into a practical learning loss and obtain our ATM method. Both empirical evaluation and theoretical analysis are reported to verify the effectiveness of the proposed method. The experimental results on four benchmarks, both classical and large-scale, show that our method is able to achieve new state-of-the-art performance on most evaluations. Codes and datasets used in this paper are available at {\it github.com/lijin118/ATM}.
computer science
Stars form from large clouds of gas and dust that contract under their own gravity. A star is born when the fusion of hydrogen into helium ignites in its core. The key variable that determines the formation of a star is mass. If the mass of the contracting cloud is below a certain minimum value, instead of a star, a substellar object -- known as a brown dwarf -- will form. How much mass is required for a star to form? This article aims to answer this question by means of a simple heuristic argument. The value found is 0.016 solar masses, which is of the same order of magnitude as the accepted value of 0.08 solar masses. This article may be useful as pedagogical material in an introductory undergraduate astronomy course.
physics
Suitable continuity and boundedness assumptions on the function f defining the dynamics of a time-varying nonimpulsive system with inputs are known to make the system inherit stability properties from the zero-input system. Whether this type of robustness holds for impulsive systems had remained an open question. By means of suitable (counter)examples, we show that such stability robustness with respect to the inclusion of inputs cannot hold in general, not even for impulsive systems with time-invariant flow and jump maps. In particular, we show that zero-input global uniform asymptotic stability (0-GUAS) does not imply converging input converging state (CICS), and that 0-GUAS and uniform bounded-energy input bounded state (UBEBS) do not imply integral input-to-state stability (iISS). We also comment on existing results showing that suitable constraints on the allowed impulse-time sequences do make some of these robustness properties possible.
electrical engineering and systems science
Deep reinforcement learning offers a model-free alternative to supervised deep learning and classical optimization for solving the transmit power control problem in wireless networks. The multi-agent deep reinforcement learning approach considers each transmitter as an individual learning agent that determines its transmit power level by observing the local wireless environment. Following a certain policy, these agents learn to collaboratively maximize a global objective, e.g., a sum-rate utility function. This multi-agent scheme is easily scalable and practically applicable to large-scale cellular networks. In this work, we present a distributively executed continuous power control algorithm with the help of deep actor-critic learning, and more specifically, by adapting deep deterministic policy gradient. Furthermore, we integrate the proposed power control algorithm to a time-slotted system where devices are mobile and channel conditions change rapidly. We demonstrate the functionality of the proposed algorithm using simulation results.
electrical engineering and systems science
We present a new theorem describing stable solutions for a driven quantum system. The theorem, coined the `inertial theorem', is applicable to fast driving, provided the acceleration rate is small. The theorem states that in the inertial limit, eigenoperators of the propagator remain invariant throughout the dynamics, accumulating dynamical and geometric phases. The proof of the theorem utilizes the structure of Liouville space and a closed Lie algebra of operators. We demonstrate applications of the theorem by studying three explicit solutions: a harmonic oscillator, a two-level system, and a three-level system. These examples demonstrate that the inertial solution is superior to that obtained with the adiabatic approximation. Inertial protocols can be combined to generate a new family of solutions. The inertial theorem is then employed to extend the validity of the Markovian master equation to strongly driven open quantum systems. In addition, we explore the consequences of new geometric phases associated with the driving parameters.
quantum physics
Starting from an engineered periodic optical structure formed by waveguide arrays composed of two interleaved lattices, we simulate a deformed Dirac equation. We show that the system also simulates graphene nanoribbons under strain. This optical analogue allows us to study the phenomenon of Zitterbewegung for the modified Dirac equation. Our results show that the amplitude of the Zitterbewegung oscillations changes as the deformation parameter is varied.
condensed matter
In this article, we present some new general forms of numerical radius inequalities for Hilbert space operators. The significance of these inequalities follows from the way they extend and refine some known results in this field. Among other inequalities, it is shown that if $A$ is a bounded linear operator on a complex Hilbert space, then \[{{w}^{2}}\left( A \right)\le \left\| \int_{0}^{1}{{{\left( t\left| A \right|+\left( 1-t \right)\left| {{A}^{*}} \right| \right)}^{2}}dt} \right\|\le \frac{1}{2}\left\| \;{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|\] where $w\left( A \right)$ and $\left\| A \right\|$ denote the numerical radius and the usual operator norm of $A$, respectively.
mathematics
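The displayed inequality can be checked numerically for random matrices. In the sketch below the middle integral is evaluated in closed form, $\int_0^1 (t|A|+(1-t)|A^*|)^2 dt = \tfrac{1}{3}|A|^2 + \tfrac{1}{6}(|A||A^*|+|A^*||A|) + \tfrac{1}{3}|A^*|^2$, and $w(A)$ is approximated from below by maximizing $\lambda_{\max}(\mathrm{Re}(e^{i\theta}A))$ over a grid of angles (our own verification sketch, not from the article):

```python
import numpy as np

def absval(M):
    # |M| = (M^* M)^{1/2} via the eigendecomposition of the Hermitian M^* M.
    w, V = np.linalg.eigh(M.conj().T @ M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def numerical_radius(A, n_theta=720):
    # w(A) = max_theta lambda_max((e^{i theta} A + e^{-i theta} A^*) / 2);
    # a finite grid of angles gives a lower bound on w(A).
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2)[-1]
        for t in thetas
    )

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
P, Q = absval(A), absval(A.conj().T)          # |A| and |A^*|
middle = np.linalg.norm(P @ P / 3 + (P @ Q + Q @ P) / 6 + Q @ Q / 3, 2)
right = 0.5 * np.linalg.norm(P @ P + Q @ Q, 2)
w2 = numerical_radius(A) ** 2
```

Since the grid approximation underestimates $w(A)$, the chain $w(A)^2 \le$ middle $\le$ right should hold to numerical precision for any sample matrix.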
We comparatively studied the long-term variation (1992-2017) in polar brightening observed with the Nobeyama Radioheliograph, the polar solar wind velocity with interplanetary scintillation observations at the Institute for Space-Earth Environmental Research, and the coronal hole distribution computed by potential field calculations of the solar corona using synoptic magnetogram data obtained at Kitt Peak National Solar Observatory. First, by comparing the solar wind velocity (V) and the brightness temperature (T_b) in the polar region, we found good correlation coefficients (CCs) between V and T_b in the polar regions, CC = 0.91 (0.83) for the northern (southern) polar region, and we obtained the V-T_b relationship as V = 12.6 (T_b - 10,667)^{1/2} + 432. We also confirmed that the CC of V-T_b is higher than those of V-B and V-B/f, where B and f are the polar magnetic field strength and magnetic flux expansion rate, respectively. These results indicate that T_b is a more direct parameter than B or B/f for expressing solar wind velocity. Next, we analyzed the long-term variation of the polar brightening and its relation to the area of the polar coronal hole (A). As a result, we found that the polar brightening matches the probability distribution of the predicted coronal hole and that the CC between T_b and A is remarkably high, CC = 0.97. This result indicates that the polar brightening is strongly coupled to the size of the polar coronal hole. Therefore, the reasonable correlation of V-T_b is explained by V-A. In addition, by considering the anti-correlation between A and f found in a previous study, we suggest that the V-T_b relationship is another expression of the Wang-Sheeley relationship (V-1/f) in the polar regions.
astrophysics
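The quoted V-T_b fit is easy to evaluate directly; below is a small helper implementing V = 12.6 (T_b - 10,667)^{1/2} + 432, with T_b in K and V in km/s as in the abstract (the function name is ours):

```python
import math

def solar_wind_velocity(t_b):
    # Polar solar wind velocity (km/s) from the polar brightness
    # temperature T_b (K), using the empirical fit quoted above:
    # V = 12.6 * (T_b - 10667)**0.5 + 432, valid for T_b >= 10667 K.
    return 12.6 * math.sqrt(t_b - 10667.0) + 432.0
```

At the baseline T_b = 10,667 K the fit returns 432 km/s, while a 1000 K excess raises it to roughly 830 km/s, in the range of the fast polar wind.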
Network filtering is an important form of dimension reduction to isolate the core constituents of large and interconnected complex systems. We introduce a new technique to filter large dimensional networks arising out of dynamical behavior of the constituent nodes, exploiting their spectral properties. As opposed to the well known network filters that rely on preserving key topological properties of the realized network, our method treats the spectrum as the fundamental object and preserves spectral properties. Applying asymptotic theory for high dimensional data for the filter, we show that it can be tuned to interpolate between zero filtering to maximal filtering that induces sparsity and consistency while having the least spectral distance from a linear shrinkage estimator. We apply our proposed filter to covariance networks constructed from financial data, to extract the key subnetwork embedded in the full sample network.
statistics
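The linear shrinkage estimator used above as the benchmark has a standard generic form: blend the sample covariance with a scaled-identity target. The sketch below fixes the shrinkage intensity by hand rather than tuning it as the paper's filter does, so the fixed `lam` and names are illustrative:

```python
import numpy as np

def linear_shrinkage(X, lam=0.5):
    # Shrink the sample covariance toward a scaled-identity target:
    # S_shrunk = (1 - lam) * S + lam * mu * I, with mu = tr(S) / p,
    # which preserves the total variance tr(S) while pulling the
    # eigenvalue spectrum toward its mean.
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1.0 - lam) * S + lam * mu * np.eye(p)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
S = np.cov(X, rowvar=False)
S_shrunk = linear_shrinkage(X, lam=0.5)
```

Setting `lam = 0` recovers the raw sample covariance and `lam = 1` the fully filtered scaled identity, which is the interpolation the paper's spectral filter is tuned against.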
In this work, we revisit the thermodynamical self-consistency of the quasiparticle model with the finite baryon chemical potential adjusted to lattice QCD calculations. Here, we investigate the possibility that the effective quasiparticle mass is also a function of its momentum, $k$, in addition to temperature $T$ and chemical potential $\mu$. It is found that the thermodynamic consistency can be expressed in terms of an integro-differential equation involving $k$, $T$, and $\mu$. We further discuss two special solutions, both of which can be viewed as sufficient conditions for thermodynamical consistency, expressed in terms of a partial differential equation. The first case is shown to be equivalent to that previously discussed by Peshier et al. The second one, obtained through an ad hoc assumption, is an intrinsically different solution in which the particle mass is momentum dependent. These equations can be solved using boundary conditions determined by the lattice QCD data at vanishing baryon chemical potential. By numerical calculations, we show that both solutions can reasonably reproduce the recent lattice QCD results of the Wuppertal-Budapest and HotQCD Collaborations, in particular those concerning finite baryon density. Possible implications are discussed.
high energy physics phenomenology
The theory of angular momentum connects physical rotations and quantum spins together at a fundamental level. Physical rotation of a quantum system will therefore affect fundamental quantum operations, such as spin rotations in projective Hilbert space, but these effects are subtle and experimentally challenging to observe due to the fragility of quantum coherence. Here we report a measurement of a single-electron-spin phase shift arising directly from physical rotation, without transduction through magnetic fields or ancillary spins. This phase shift is observed by measuring the phase difference between a microwave driving field and a rotating two-level electron spin system, and can accumulate nonlinearly in time. We detect the nonlinear phase using spin-echo interferometry of a single nitrogen-vacancy qubit in a diamond rotating at 200,000 rpm. Our measurements demonstrate the fundamental connections between spin, physical rotation, and quantum phase, and will be applicable in schemes where the rotational degree of freedom of a quantum system is not fixed, such as spin-based rotation sensors and trapped nanoparticles containing spins.
quantum physics
To date, research on sensor-equipped mobile devices has primarily focused on the purely supervised task of human activity recognition (walking, running, etc), demonstrating limited success in inferring high-level health outcomes from low-level signals, such as acceleration. Here, we present a novel self-supervised representation learning method using activity and heart rate (HR) signals without semantic labels. With a deep neural network, we set HR responses as the supervisory signal for the activity data, leveraging their underlying physiological relationship. We evaluate our model in the largest free-living combined-sensing dataset (comprising more than 280,000 hours of wrist accelerometer & wearable ECG data) and show that the resulting embeddings can generalize in various downstream tasks through transfer learning with linear classifiers, capturing physiologically meaningful, personalized information. For instance, they can be used to predict (higher than 70 AUC) variables associated with individuals' health, fitness and demographic characteristics, outperforming unsupervised autoencoders and common bio-markers. Overall, we propose the first multimodal self-supervised method for behavioral and physiological data with implications for large-scale health and lifestyle monitoring.
computer science
Many quantum measurements, such as photodetection, can be destructive. In photodetection, when the detector clicks a photon has been absorbed and destroyed. Yet the lack of a click also gives information about the presence or absence of a photon. In monitoring the emission of photons from a source, one decomposes the strong measurement into a series of weak measurements, which describe the evolution of the state during the measurement process. Motivated by this example of destructive photon detection, a simple model of destructive weak measurements using qubits was studied in [1]. That work showed that the model can achieve any positive operator-valued measure (POVM) with commuting POVM elements, including projective measurements. In this paper, we use a different approach for decomposing any POVM into a series of weak measurements. The process involves three steps: randomly choose, with certain probabilities, a set of linearly independent POVM elements; perform that POVM by a series of destructive weak measurements; and output the result of the series of weak measurements with certain probabilities. The probabilities of the outcomes from this process agree with those from the original POVM, and hence this model of destructive weak measurements can perform any qubit POVM.
quantum physics
We discuss the effects of exponential fragmentation of the Hilbert space on phase transitions in the context of coupled ferromagnetic Ising models in arbitrary dimension with special emphasis on the one dimensional case. We show that the dynamics generated by quantum fluctuations is bounded within spatial partitions of the system and weak mixing of these partitions caused by global transverse fields leads to a zero temperature phase with ordering in the local product of both Ising copies but no long range order in either species. This leads to a natural connection with the Ashkin-Teller universality class for general lattices. We confirm this for the periodic chain using quantum Monte Carlo simulations. We also point out that our treatment provides an explanation for pseudo-first order behavior seen in the Binder cumulants of the classical frustrated $J_1-J_2$ Ising model and the $q=4$ Potts model in 2D.
condensed matter
We update the constraints on the location of the nearest UHECR source. By analyzing recent data from the Pierre Auger Observatory using state-of-the-art CR propagation models, we reaffirm the need for local sources at a distance of less than 25-100 Mpc, depending on mass composition. A new fast semi-analytical method for the propagation of UHECRs in environments with turbulent magnetic fields is developed. The onset of an enhancement and of a low-energy magnetic horizon of cosmic rays from sources located within a particular distance range is demonstrated. We investigate the distance to the nearest source, taking into account these magnetic field effects. The results obtained highlight the robustness of the constraints we place on the distance to the nearest source.
astrophysics
We investigate the dynamics brought on by an impulse perturbation in two infinite-range quantum Ising models coupled to each other and to a dissipative bath. We show that, if dissipation is faster at higher excitation energies, the pulse perturbation cools down the low-energy sector of the system at the expense of the high-energy one, eventually stabilising a transient symmetry-broken state at temperatures higher than the equilibrium critical one. Such a non-thermal quasi-steady state may survive for quite a long time after the pulse, if the latter is properly tailored.
condensed matter
We study the quantum complexity of time evolution in large-$N$ chaotic systems, with the SYK model as our main example. This complexity is expected to increase linearly for exponential time prior to saturating at its maximum value, and is related to the length of minimal geodesics on the manifold of unitary operators that act on Hilbert space. Using the Euler-Arnold formalism, we demonstrate that there is always a geodesic between the identity and the time evolution operator $e^{-iHt}$ whose length grows linearly with time. This geodesic is minimal until there is an obstruction to its minimality, after which it can fail to be a minimum either locally or globally. We identify a criterion - the Eigenstate Complexity Hypothesis (ECH) - which bounds the overlap between off-diagonal energy eigenstate projectors and the $k$-local operators of the theory, and use it to show that the linear geodesic will at least be a local minimum for exponential time. We show numerically that the large-$N$ SYK model (which is chaotic) satisfies ECH and thus has no local obstructions to linear growth of complexity for exponential time, as expected from holographic duality. In contrast, we also study the case with $N=2$ fermions (which is integrable) and find short-time linear complexity growth followed by oscillations. Our analysis relates complexity to familiar properties of physical theories like their spectra and the structure of energy eigenstates and has implications for the hypothesized computational complexity class separations PSPACE $\nsubseteq$ BQP/poly and PSPACE $\nsubseteq$ BQSUBEXP/subexp, and the "fast-forwarding" of quantum Hamiltonians.
high energy physics theory
We study the $m$-graded quiver theories associated to CY $(m+2)$-folds and their order $(m+1)$ dualities. We investigate how monodromies give rise to mutation invariants, which in turn can be formulated as Diophantine equations characterizing the space of dual theories associated to a given geometry. We discuss these ideas in general and illustrate them in the case of orbifold theories. Interestingly, we observe that even in this simple context the corresponding Diophantine equations may admit an infinite number of seeds for $m\geq 2$, which translates into an infinite number of disconnected duality webs. Finally, we comment on the possible generalization of duality cascades beyond $m=1$.
high energy physics theory
Learning representations that clearly distinguish between normal and abnormal data is key to the success of anomaly detection. Most existing anomaly detection algorithms use activation representations from the forward pass but do not exploit gradients from backpropagation to characterize data. Gradients capture the model updates required to represent data, and anomalies require more drastic updates to be fully represented than normal data. Hence, we propose utilizing backpropagated gradients as representations to characterize model behavior on anomalies and, consequently, to detect such anomalies. We show that the proposed method using gradient-based representations achieves state-of-the-art anomaly detection performance on benchmark image recognition datasets. We also highlight the computational efficiency and simplicity of the proposed method in comparison with other state-of-the-art methods relying on adversarial networks or autoregressive models, which require at least 27 times more model parameters than the proposed method.
computer science
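The core idea, scoring a sample by the magnitude of the backpropagated gradient it induces, can be illustrated with a tiny linear autoencoder trained on "normal" data confined to a one-dimensional subspace. The architecture, dimensions, and training schedule below are toy choices of ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" data lies on a 1-D subspace of R^3; anomalies are generic points.
direction = np.array([1.0, 2.0, -1.0]) / np.sqrt(6.0)
normal = rng.standard_normal(500)[:, None] * direction
anomalies = rng.standard_normal((100, 3))

# Tiny linear autoencoder x -> W2 @ (W1 @ x) with a 1-D bottleneck.
W1 = 0.1 * rng.standard_normal((1, 3))
W2 = 0.1 * rng.standard_normal((3, 1))

def grads(x, W1, W2):
    # Gradients of L = 0.5 * ||W2 @ W1 @ x - x||^2 w.r.t. W1 and W2.
    z = W1 @ x
    r = W2 @ z - x                      # reconstruction residual
    return np.outer(W2.T @ r, x), np.outer(r, z)

lr = 0.05
for _ in range(200):                    # SGD on the first 100 normal points
    for x in normal[:100]:
        gW1, gW2 = grads(x, W1, W2)
        W1 -= lr * gW1
        W2 -= lr * gW2

def grad_score(x):
    # Anomaly score = norm of the backpropagated gradient the sample induces:
    # anomalies demand larger model updates than well-fit normal data.
    gW1, gW2 = grads(x, W1, W2)
    return np.sqrt((gW1 ** 2).sum() + (gW2 ** 2).sum())

normal_score = np.mean([grad_score(x) for x in normal[100:200]])
anomaly_score = np.mean([grad_score(x) for x in anomalies])
```

Held-out normal points produce near-zero gradients because the trained model already represents them, while off-subspace points leave large residuals and hence large backpropagated gradients.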
Classical tests of fit typically reject a model for large enough real data samples. In contrast, often in statistical practice a model offers a good description of the data even though it is not the "true" random generator. We consider a more flexible approach based on contamination neighbourhoods around a model. Using trimming methods and the Kolmogorov metric we introduce a functional statistic measuring departures from a contaminated model and the associated estimator corresponding to its sample version. We show how this estimator allows testing of fit for the (slightly) contaminated model vs sensible deviations from it, with uniformly exponentially small type I and type II error probabilities. We also address the asymptotic behavior of the estimator showing that, under suitable regularity conditions, it asymptotically behaves as the supremum of a Gaussian process. As an application we explore methods of comparison between descriptive models based on the paradigm of model falseness. We also include some connections of our approach with the False-Discovery-Rate setting, showing competitive behavior when estimating the contamination level, although applicable in a wider framework.
mathematics
We discuss the driven harmonic chain with fixed boundary conditions subject to weak coupling-strength disorder. We evaluate the Liapunov exponent in some detail, expanding on the dynamical-systems-theory approach of Levi et al. We show that, when mass disorder is included, the mass and coupling-strength disorder can be combined into a renormalised mass disorder. We review the method of Dhar for the disorder-averaged heat current, apply the approach to the disorder-averaged large deviation function, and finally comment on the validity of the Gallavotti-Cohen fluctuation theorem. The paper is also intended as an introduction to the field and includes detailed calculations.
condensed matter
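A standard way to evaluate the Liapunov exponent of such a chain is to multiply random transfer matrices and track the logarithmic growth rate of a vector norm. The sketch below uses the textbook transfer matrix for a chain with unit masses and random couplings k_j drawn uniformly around 1; the parameter values are generic illustrations, not taken from the paper:

```python
import numpy as np

def lyapunov_exponent(omega2=0.5, disorder=0.5, n=100000, seed=0):
    # Largest Liapunov exponent of the product of random transfer matrices
    # for a harmonic chain with unit masses and couplings k_j drawn
    # uniformly from [1 - disorder, 1 + disorder]. The recursion is
    #   u_{j+1} = ((k_j + k_{j-1} - omega^2) u_j - k_{j-1} u_{j-1}) / k_j,
    # and the exponent is the mean log growth of a renormalized vector.
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    k_prev = 1.0
    total = 0.0
    for _ in range(n):
        k = 1.0 + disorder * (2.0 * rng.random() - 1.0)
        T = np.array([[(k + k_prev - omega2) / k, -k_prev / k],
                      [1.0, 0.0]])
        v = T @ v
        nrm = np.linalg.norm(v)
        total += np.log(nrm)
        v /= nrm
        k_prev = k
    return total / n

lam = lyapunov_exponent()
```

For any nonzero disorder the exponent comes out positive, reflecting the localization of the phonon modes; it vanishes in the clean limit and as the frequency goes to zero.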
We further study the cancellation of the one-loop corrections to the scalar mass in a six-dimensional SU(2) gauge theory with higher-dimensional operators, compactified on a torus with magnetic flux. Higher-dimensional operators also contribute nontrivially to the corrections to the scalar mass. We show explicitly, by diagrammatic calculations, that the corrections are exactly cancelled even when the leading terms of the higher-dimensional operators are included.
high energy physics theory
Atmospheric muons are one of the main backgrounds for current water- and ice-Cherenkov neutrino telescopes designed to detect astrophysical neutrinos. The inclusive fluxes of atmospheric muons and neutrinos from hadronic interactions of cosmic rays have been extensively studied with Monte Carlo and cascade-equation methods, for example CORSIKA and MCEq. However, the muons pair-produced in electromagnetic interactions of high-energy photons are not quantitatively well understood. We present new simulation results and assess the model dependencies of the high-energy atmospheric muon flux, including the contributions from electromagnetic interactions, using a new numerical electromagnetic cascade-equation solver, EmCa, that can be easily coupled with the hadronic solver MCEq. Both codes are in active development, with the particular aim of becoming part of the next-generation CORSIKA 8 air-shower simulation package. The combination of EmCa and MCEq accounts for material effects that have not previously been included in most of the available codes; the influence of these effects on air showers is also briefly discussed.
astrophysics
A self-contained autonomous navigation system is desired to complement the Global Navigation Satellite System (GNSS) for land vehicles, for which an odometer-aided inertial navigation system (ODO/INS) is a classical solution. In this study, we use a wheel-mounted MEMS IMU (Wheel-IMU) to replace the conventional odometer, and investigate three types of measurement models in the Wheel-IMU based dead reckoning (DR) system: the velocity measurement, the displacement increment measurement, and the contact-point zero-velocity measurement. The three measurements, along with the non-holonomic constraints (NHCs), are fused with the INS by an extended Kalman filter (EKF). Theoretical discussion and field tests illustrate their feasibility and equivalence in overall positioning performance, with maximum horizontal position drifts of less than 2% of the total travelled distance. However, the displacement increment measurement model is less sensitive to the installation lever arm between the Wheel-IMU and the wheel center.
computer science
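The fusion step described in the abstract above is a standard EKF measurement update. The paper's specific state vector, Wheel-IMU measurement models, and NHC formulation are not reproduced here; the sketch below is only the generic update that any of the three measurement types would plug into, with `h` and `H` as assumed placeholders for a measurement model and its Jacobian:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update.
    x: state estimate, P: state covariance,
    z: measurement, h: measurement function, H: its Jacobian,
    R: measurement noise covariance."""
    y = z - h(x)                     # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

For a zero-velocity or NHC pseudo-measurement, `z` would simply be a zero vector and `h(x)` the predicted velocity component that the constraint forces to vanish.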
We consider planar tiling and packing problems with polyomino pieces and a polyomino container $P$. A polyomino is a polygonal region with axis parallel edges and corners of integral coordinates, which may have holes. We give two polynomial time algorithms, one for deciding if $P$ can be tiled with $2\times 2$ squares (that is, deciding if $P$ is the union of a set of non-overlapping copies of the $2\times 2$ square) and one for packing $P$ with a maximum number of non-overlapping and axis-parallel $2\times 1$ dominos, allowing rotations of $90^\circ$. As packing is more general than tiling, the latter algorithm can also be used to decide if $P$ can be tiled by $2\times 1$ dominos. These are classical problems with important applications in VLSI design, and the related problem of finding a maximum packing of $2\times 2$ squares is known to be NP-Hard [J.~Algorithms 1990]. For our three problems there are known pseudo-polynomial time algorithms, that is, algorithms with running times polynomial in the \emph{area} of $P$. However, the standard, compact way to represent a polygon is by listing the coordinates of the corners in binary. We use this representation, and thus present the first polynomial time algorithms for the problems. Concretely, we give a simple $O(n\log n)$ algorithm for tiling with squares, and a more involved $O(n^4)$ algorithm for packing and tiling with dominos.
computer science
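For the domino problem in the abstract above, the classical (area-polynomial, i.e. pseudo-polynomial) approach is maximum bipartite matching on the checkerboard colouring of the cells; the paper's contribution is an algorithm polynomial in the corner representation instead, which is not reproduced here. A minimal sketch of the classical approach:

```python
def max_domino_packing(cells):
    """Maximum number of non-overlapping 2x1/1x2 dominos placed on the
    given set of unit cells, via Kuhn's augmenting-path matching on the
    checkerboard colouring. `cells` is a set of (row, col) pairs.
    Runs in time polynomial in the AREA, unlike the paper's algorithm."""
    cells = set(cells)
    black = [c for c in cells if (c[0] + c[1]) % 2 == 0]
    match = {}  # white cell -> black cell it is matched to

    def try_augment(b, seen):
        r, c = b
        for w in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if w in cells and w not in seen:
                seen.add(w)
                if w not in match or try_augment(match[w], seen):
                    match[w] = b
                    return True
        return False

    return sum(1 for b in black if try_augment(b, set()))
```

A 2x3 rectangle admits three dominos, an L-tromino exactly one; a maximum matching equals a maximum packing because every domino covers one black and one white cell.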
In order to fuse measurements from multiple sensors mounted on a mobile robot, they must be expressed in a common reference system through their relative spatial transformations. In this paper, we present a method to estimate the full 6DoF extrinsic calibration parameters of multiple heterogeneous sensors (Lidars, depth and RGB cameras) suitable for automatic execution on a mobile robot. Our method computes the 2D calibration parameters (x, y, yaw) through a motion-based approach, while for the remaining three parameters (z, pitch, roll) it requires the observation of the ground plane for a short period of time. What sets this proposal apart from others is that: i) all calibration parameters are initialized in closed form, and ii) the scale ambiguity inherent to motion estimation from a monocular camera is explicitly handled, enabling the combination of these sensors and metric ones (Lidars, stereo rigs, etc.) within the same optimization framework. We provide a formal definition of the problem, as well as of the contributed method, for which a C++ implementation has been made publicly available. The suitability of the method has been assessed in simulation and with real data from indoor and outdoor scenarios. Finally, improvements over state-of-the-art motion-based calibration proposals are shown through experimental evaluation.
computer science
We construct two viable extensions of the SM with a heavy vector in the fundamental $SU\left( 2\right) _{L}$ representation and nine SM-singlet scalar fields, consistent with the current SM fermion mass spectrum and fermionic mixing parameters. The small masses of the active neutrinos are generated through a radiative seesaw mechanism at one-loop level, mediated by the neutral components of the heavy vector as well as by left-handed Majorana neutrinos. The proposed models predict rates for charged-lepton flavor violating processes within the reach of forthcoming experiments.
high energy physics phenomenology
Canonical correlation analysis (CCA) is a popular technique for learning representations that are maximally correlated across multiple views in data. In this paper, we extend the CCA based framework for learning a multiview mixture model. We show that the proposed model and a set of simple heuristics yield improvements over standard CCA, as measured in terms of performance on downstream tasks. Our experimental results show that our correlation-based objective meaningfully generalizes the CCA objective to a mixture of CCA models.
computer science
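The CCA objective that the abstract above generalizes can be sketched compactly: whiten each view, then take the SVD of the cross-covariance. This is only the standard two-view CCA baseline (the paper's multiview mixture extension and heuristics are not reproduced), with a small ridge term added as an assumed regularizer for numerical stability:

```python
import numpy as np

def cca(X, Y, k, reg=1e-6):
    """Top-k canonical directions and correlations for views
    X (n x dx) and Y (n x dy). Classical CCA via whitening + SVD."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whitening transforms: Wx.T @ Cxx @ Wx = I (and likewise for Y).
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt.T[:, :k], s[:k]
```

When one view is (nearly) a linear function of the other, the leading canonical correlations approach 1; a mixture of CCA models replaces this single pair of linear maps with several, one per mixture component.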
This paper addresses Sliding Mode Learning Control (SMLC) of uncertain nonlinear systems with Lyapunov stability analysis. In the control scheme, a conventional control term provides system stability in a compact space while a Type-2 Neuro-Fuzzy Controller (T2NFC) learns the system behavior, so that the T2NFC takes over the overall control of the system completely within a very short time. The stability of the sliding mode learning algorithm has been proven in the literature; however, the existing result is restrictive because it does not guarantee overall system stability. To address this shortcoming, a novel control structure with a novel sliding surface is proposed in this paper, and the stability of the overall system is proven for nth-order uncertain nonlinear systems. To investigate the capability and effectiveness of the proposed learning and control algorithms, simulation studies have been carried out under noisy conditions. The simulation results confirm that the developed SMLC algorithm can learn the system behavior in the absence of any mathematical model knowledge and exhibits robust control performance against external disturbances.
electrical engineering and systems science
Biomolecules can be synthesized in interstellar ice grains subject to UV radiation and cosmic rays. I show that on time scales of $\gtrsim 10^{6}$ years, these processes lead to the formation of large percolation clusters of organic molecules. Some of these clusters would have ended up on proto-planets where large, loosely bound aggregates of clusters (superclusters) would have formed. The interior regions of such superclusters provided for chemical micro-environments that are filtered versions of the outside environment. I argue that models for abiogenesis are more likely to work when considered inside such micro-environments. As the supercluster breaks up, biochemical systems in such micro-environments gradually become subject to a less filtered environment, allowing them to get adapted to the more complex outside environment. A particular system originating from a particular location on some supercluster would have been the first to get adapted to the raw outside environment and survive there, thereby becoming the first microbe. A collision of a microbe-containing proto-planet with the Moon could have led to fragments veering off back into space, microbes in small fragments would have been able to survive a subsequent impact with the Earth.
astrophysics
Laser-assisted atom probe tomography (APT) was used to measure the indium mole fraction x of c-plane, MOCVD-grown, GaN/In(x)Ga(1-x)N/GaN test structures and the results were compared with Rutherford backscattering analysis (RBS). Four sample types were examined with (RBS-determined) x = 0.030, 0.034, 0.056, and 0.112. The respective In(x)Ga(1-x)N layer thicknesses were 330 nm, 327 nm, 360 nm, and 55 nm. APT data were collected at (fixed) laser pulse energy (PE) selected within the range of (2-1000) fJ. Sample temperatures were ≈ 54 K. PE within (2-50) fJ yielded x values that agreed with RBS (within uncertainty) and were comparatively insensitive to region-of-interest (ROI) geometry and orientation. By contrast, approximate stoichiometry was only found in the GaN portions of the samples provided PE was within (5-20) fJ and the analyses were confined to cylindrical ROIs (of diameters ≈ 20 nm) that were coaxial with the specimen tips. m-plane oriented tips were derived from c-axis grown, core-shell, GaN/In(x)Ga(1-x)N nanorod heterostructures. Compositional analysis along [0 0 0 1] (transverse to the long axis of the tip) of these m-plane samples revealed a spatial asymmetry in the charge-state ratio (CSR) and a corresponding asymmetry in the resultant tip shape along this direction; no asymmetry in CSR or tip shape was observed for analysis along [-1 2 -1 0]. Simulations revealed that the electric field strength at the tip apex was dominated by the presence of a p-type inversion layer, which developed under typical tip-electrode bias conditions for the n-type doping levels considered. Finally, both c-plane and m-plane sample types showed depth-dependent variations in absolute ion counts that depended upon ROI placement.
condensed matter
We introduce a Gaussian process-based model for handling non-stationarity through input warping. The warping is achieved non-parametrically, by imposing a prior on the relative change of distance between subsequent observation inputs. The model allows the use of general gradient-based optimization algorithms for training and incurs only a small computational overhead on training and prediction. The model finds its applications in forecasting non-stationary time series with gradually varying volatility, change points, or a combination thereof. We evaluate the model on synthetic and real-world time series data, comparing against both baseline and known state-of-the-art approaches, and show that the model exhibits state-of-the-art forecasting performance at a lower implementation and computation cost.
statistics
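The stationary baseline that the warping model above extends is plain GP regression with an RBF kernel. A minimal sketch (the non-parametric input warping and its prior are not implemented here; kernel hyperparameters `ell`, `sigma_f`, `sigma_n` are fixed by hand rather than learned):

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, ell=1.0, sigma_f=1.0, sigma_n=0.1):
    """Posterior mean and variance of a GP with an RBF kernel
    at the 1-D test inputs x_test."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2)

    K = k(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = sigma_f**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

The warped model would replace the raw inputs by learned warped inputs before evaluating the kernel, so that regions of rapid change are stretched and quiet regions compressed, while the prediction equations above stay unchanged.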