text | label |
---|---|
The growth of magneto-hydrodynamic fluctuations relevant to cosmic ray confinement in and near their sources, together with the effects of local plasma conditions, is revisited. We consider cases where cosmic rays penetrate a medium which may contain a fraction of neutral particles, and explore the possible effects of high-order cosmic-ray anisotropies. An algorithm for calculating the dispersion relation for arbitrary distributions and anisotropies is presented, and a general solution for power-law cosmic-ray distributions is provided. Implications for the resulting instabilities near strong Galactic cosmic-ray sources are discussed. We argue that cosmic-ray streaming in weakly ionised plasmas eliminates the need for an evanescent band in the dispersion relation, a conclusion which may be confirmed by gamma-ray observations. The necessity for additional multi-scale numerical simulations is highlighted, as understanding the non-linear behaviour is crucial. | astrophysics |
We calculate the production and polarization of direct $J/\psi$ in the improved color evaporation model at $\mathcal{O}(\alpha_s^3)$ in the collinear factorization approach. We present the first calculation of polarization parameters $\lambda_\vartheta$, $\lambda_{\phi}$, $\lambda_{\vartheta \phi}$ in the helicity and the Collins-Soper frames, as well as the frame-invariant polarization parameter $\tilde{\lambda}$ as a function of transverse momentum. We find agreement with both $J/\psi$ cross sections and polarization measurements. | high energy physics phenomenology |
Pedestrian trajectory prediction is essential for collision avoidance in autonomous driving and robot navigation. However, predicting a pedestrian's trajectory in crowded environments is non-trivial as it is influenced by other pedestrians' motion and static structures that are present in the scene. Such human-human and human-space interactions lead to non-linearities in the trajectories. In this paper, we present a new spatio-temporal graph based Long Short-Term Memory (LSTM) network for predicting pedestrian trajectories in crowded environments, which takes into account the interactions with static (physical objects) and dynamic (other pedestrians) elements in the scene. We evaluate our method on two widely-used datasets and demonstrate that it outperforms the state-of-the-art approaches in human trajectory prediction. In particular, our method reduces the Average Displacement Error (ADE) and Final Displacement Error (FDE) by up to 55% and 61%, respectively. | computer science |
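The ADE and FDE metrics above have a standard definition: ADE averages the Euclidean error over all predicted time steps, while FDE takes the error at the final step only. A minimal sketch (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error for one trajectory.

    pred, gt: arrays of shape (timesteps, 2) holding (x, y) positions.
    Returns (ADE, FDE) in the same distance units as the input.
    """
    d = np.linalg.norm(pred - gt, axis=-1)  # per-step Euclidean error
    return d.mean(), d[-1]

# toy example: prediction offset from ground truth by 0.5 m in x
gt = np.stack([np.arange(5.0), np.zeros(5)], axis=1)
pred = gt + np.array([0.5, 0.0])
ade, fde = ade_fde(pred, gt)
print(ade, fde)  # 0.5 0.5 (constant offset: ADE equals FDE)
```

For a batch of pedestrians, these values are typically averaged over all trajectories in the test set.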
A great number of metal and semiconductor photocathodes, which are of high practical importance for photoinjector applications, closely follow the 1/3 gradient Dowell-Schmerge (DS) law describing the spectral dependence of the mean transverse energy ($MTE$), viz. $MTE$ as a function of the incident laser photon energy. However, some (rare) semiconductor photocathodes show $MTE$ trends that are significantly different. For example, spectral $MTE$ measurements on PbTe, BaLaSnO or Hf/HfO$_2$ have clearly demonstrated trends that differ from the DS law, being non-monotonic, slower growing, or displaying constant $MTE$ versus laser photon energy. We have discovered that $n$-type ultra-nano-crystalline diamond (UNCD) and single crystal diamond are anti-DS photocathodes in that their $MTE$ decreases with the incident photon energy. It was previously established that UNCD is a highly emissive material in the near UV such that quantum efficiency ($QE$) grows with the laser photon energy. The unique and novel combination of high, increasing $QE$ and low, decreasing $MTE$ of UNCD may pave the way to desired high brightness electron beams, through operation well above the work function, which fundamentally differs from 'Boltzmann tail' operation near the photoemission threshold. One other remarkable result followed: as UNCD is an $sp^3$ diamond matrix diluted by $sp^2$ grain boundaries, control over grain boundary/grain engineering in the material's synthesis allowed for the production of different kinds of UNCD. The resultant tuning of the $sp^3$-to-$sp^2$ ratio in different UNCD photocathodes allowed for switching between canonical +1/3 DS and approximately $-$1/3 gradient 'anti-DS' behavior. | physics |
In many scientific problems such as video surveillance, modern genomic analysis, and clinical studies, data are often collected from diverse domains across time and exhibit time-dependent heterogeneous properties. It is important not only to integrate data from multiple sources (called multiview data), but also to incorporate time dependency for a deep understanding of the underlying system. Latent factor models are popular tools for exploring multiview data. However, it is frequently observed that these models do not perform well for complex systems, and they are not applicable to time-series data. Therefore, we propose a generative model based on a variational autoencoder and a recurrent neural network to infer the latent dynamic factors for multivariate time-series data. This approach allows us to identify disentangled latent embeddings across multiple modalities while accounting for the time factor. We apply our proposed model to three datasets and demonstrate the effectiveness and interpretability of the model. | statistics |
Planar nanostructures allow near-ideal extraction of emission from a quantum emitter embedded within, thereby realizing deterministic single-photon sources. Such a source can be transformed into M single-photon sources by implementing active temporal-to-spatial mode demultiplexing. We report on the realization of such a demultiplexed source based on a quantum dot embedded in a nanophotonic waveguide. Efficient outcoupling (>60%) from the waveguide into a single-mode optical fiber is obtained with high-efficiency grating couplers. As a proof-of-concept, active demultiplexing into M=4 spatial channels is demonstrated by the use of electro-optic modulators with an end-to-end efficiency of >81% into single-mode fibers. Overall, we demonstrate four-photon coincidence rates of >1 Hz even under non-resonant excitation of the quantum dot. The main limitation of the current source is the residual population of other exciton transitions that corresponds to a finite preparation efficiency of the desired transition. We quantitatively extract a preparation efficiency of 15% using second-order correlation function measurements. The experiment highlights the applicability of planar nanostructures as efficient multiphoton sources through temporal-to-spatial demultiplexing and lays out a clear pathway for scaling up towards demonstrating quantum advantages with quantum dot sources. | quantum physics |
In this paper we establish the well-posedness of the Muskat problem with surface tension and equal viscosities in the subcritical Sobolev spaces $W^s_p(\mathbb{R})$, where ${p\in(1,2]}$ and ${s\in(1+1/p,2)}$. This is achieved by showing that the mathematical model can be formulated as a quasilinear parabolic evolution problem in $W^{\overline{s}-2}_p(\mathbb{R})$, where ${\overline{s}\in(1+1/p,s)}$. Moreover, we prove that the solutions become instantly smooth and we provide a criterion for the global existence of solutions. | mathematics |
Depositing disordered Al on top of SrTiO$_3$ is a cheap and easy way to create a two-dimensional electron system in the SrTiO$_3$ surface layers. To facilitate future device applications we passivate the heterostructure by a disordered LaAlO$_3$ capping layer to study the electronic properties by complementary x-ray photoemission spectroscopy and transport measurements on the very same samples. We also tune the electronic interface properties by adjusting the oxygen pressure during film growth. | condensed matter |
Only recently has it been challenged that the precision attainable in any measurement of a physical parameter is fundamentally limited by the quantum Cram\'{e}r-Rao Bound (QCRB). Here, targeting the measurement of parameters in strongly dissipative systems, we propose an innovative measurement scheme called {\it dissipative adiabatic measurement} and theoretically show that it can beat the QCRB. Unlike projective measurements, our measurement scheme, though consuming more time, does not collapse the measured state and, more importantly, yields the expectation value of an observable as its measurement outcome, which is directly connected to the parameter of interest. Such a direct connection allows one to extract the value of the parameter from the measurement outcomes in a straightforward manner, with no fundamental limitation on precision in principle. Our findings not only provide a marked insight into quantum metrology but also are highly useful in dissipative quantum information processing. | quantum physics |
The spatial discretization of the single-cone Dirac Hamiltonian on the surface of a topological insulator or superconductor needs a special "staggered" grid, to avoid the appearance of a spurious second cone in the Brillouin zone. We adapt the Stacey discretization from lattice gauge theory to produce a generalized eigenvalue problem, of the form ${\mathcal H}\psi=E {\mathcal P}\psi$, with Hermitian tight-binding operators ${\mathcal H}$, ${\mathcal P}$, a locally conserved particle current, and preserved chiral and symplectic symmetries. This permits the study of the spectral statistics of Dirac fermions in each of the four symmetry classes A, AII, AIII, and D. | condensed matter |
Transmitter arrays play a critical role in ultra-high-field Magnetic Resonance Imaging (MRI), especially given the advantages made possible via parallel transmission (pTx) techniques. One of the challenges in the design and construction of transmit arrays has traditionally been finding effective strategies for decoupling the elements of the transmit array. Here, we present the design of the first self-decoupled, loop-based transmit array for human brain MRI at 10.5T / 447MHz. We demonstrate, using full-wave electromagnetic simulations, effective decoupling of the transmit elements without requiring conventional overlap or inductive decoupling techniques. | physics |
We report on the determination of the anomalous spin Hall angle in the ferromagnetic metal alloy cobalt-iron (Co$_{25}$Fe$_{75}$, CoFe). This is accomplished by measuring the spin injection/detection efficiency in a multiterminal device with nanowires of platinum (Pt) and CoFe deposited onto the magnetic insulator yttrium iron garnet (Y$_3$Fe$_5$O$_{12}$, YIG). Applying a spin-resistor model to our multiterminal spin transport data, we determine the magnon conductivity in YIG, the spin conductance at the YIG/CoFe interface and finally the anomalous spin Hall angle of CoFe as a function of its spin diffusion length in a single device. Our experiments clearly reveal a negative anomalous spin Hall angle of the ferromagnetic metal CoFe, but a vanishing ordinary spin Hall angle. This is in contrast to the results reported for the ferromagnetic metals Co and permalloy. | condensed matter |
In this paper we discuss the behavior of conformable derivatives in arbitrary Banach spaces and clarify the connection between two conformable derivatives of different orders. As a consequence, we obtain the important result that an abstract function has a conformable derivative at a point (which does not coincide with the lower terminal of the conformable derivative) if and only if it has a first-order derivative at the same point. | mathematics |
High-dynamic-range (HDR) photography involves fusing a bracket of images taken at different exposure settings in order to compensate for the low dynamic range of digital cameras such as the ones used in smartphones. In this paper, a method for automatically selecting the exposure settings of such images is introduced based on the camera characteristic function. In addition, a new fusion method is introduced based on an optimization formulation and weighted averaging. Both of these methods are implemented on a smartphone platform as an HDR app to demonstrate the practicality of the introduced methods. Comparison results with several existing methods are presented indicating the effectiveness as well as the computational efficiency of the introduced solution. | computer science |
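The weighted-averaging fusion step can be illustrated with a common well-exposedness heuristic: each pixel in each bracketed image gets a weight that favors mid-range values, and the fused image is the normalized weighted sum. This is a minimal sketch under assumed Gaussian weights, not the paper's optimization-based formulation:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Fuse an exposure bracket by per-pixel weighted averaging.

    images: list of float arrays in [0, 1], all the same shape.
    Weights favor well-exposed pixels (values near 0.5), a common
    heuristic; the paper's optimized weights would replace these.
    """
    stack = np.stack(images)                              # (N, H, W)
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))  # well-exposedness
    w /= w.sum(axis=0, keepdims=True) + 1e-12             # normalize per pixel
    return (w * stack).sum(axis=0)

# toy bracket: under-, mid-, and over-exposed flat images
low, mid, high = (np.full((2, 2), v) for v in (0.1, 0.5, 0.9))
fused = fuse_exposures([low, mid, high])
```

By symmetry of the weights around 0.5, the fused toy result is exactly 0.5 everywhere; on real brackets the weighting suppresses clipped shadows and highlights.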
Bayesian experimental design involves the optimal allocation of resources in an experiment, with the aim of optimising cost and performance. For implicit models, where the likelihood is intractable but sampling from the model is possible, this task is particularly difficult and therefore largely unexplored. This is mainly due to technical difficulties associated with approximating posterior distributions and utility functions. We devise a novel experimental design framework for implicit models that improves upon previous work in two ways. First, we use the mutual information between parameters and data as the utility function, which has previously not been feasible. We achieve this by utilising Likelihood-Free Inference by Ratio Estimation (LFIRE) to approximate posterior distributions, instead of the traditional approximate Bayesian computation or synthetic likelihood methods. Second, we use Bayesian optimisation in order to solve the optimal design problem, as opposed to the typically used grid search or sampling-based methods. We find that this increases efficiency and allows us to consider higher design dimensions. | statistics |
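The ratio-estimation idea behind LFIRE can be illustrated on a toy problem: a classifier trained to separate samples of a joint distribution from samples of the product of its marginals learns a log-density-ratio, and the average of that log-ratio under the joint estimates the mutual information. The sketch below uses a hand-rolled logistic classifier on a correlated Gaussian pair; all names and the quadratic feature map are illustrative assumptions, not LFIRE itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 4000, 0.8
# joint samples: correlated Gaussian pair; "product" samples: y permuted
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
y_shuf = rng.permutation(y)

def feats(a, b):
    # quadratic features suffice: Gaussian log-density ratios are quadratic
    return np.stack([a, b, a * b, a**2, b**2, np.ones_like(a)], axis=1)

F = np.vstack([feats(x, y), feats(x, y_shuf)])
t = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = joint, 0 = product

w = np.zeros(F.shape[1])
for _ in range(3000):  # plain gradient descent on the logistic loss
    p = 1 / (1 + np.exp(-F @ w))
    w -= 0.1 * F.T @ (p - t) / len(t)

# MI estimate: mean classifier logit (= log-ratio) under the joint samples
mi_hat = (feats(x, y) @ w).mean()
true_mi = -0.5 * np.log(1 - rho**2)  # analytic value, about 0.51 nats
```

With equal class sizes the classifier logit directly approximates log p(x,y) / [p(x)p(y)], so no separate normalization is needed.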
The popularity of photo sharing services has increased dramatically in recent years. Increases in users, quantity of photos, and quality/resolution of photos combined with the user expectation that photos are reliably stored indefinitely creates a growing burden on the storage backend of these services. We identify a new opportunity for storage savings with application-specific compression for photo sharing services: photo recompression. We explore new photo storage management techniques that are fast so they do not adversely affect photo download latency, are complementary to existing distributed erasure coding techniques, can efficiently be converted to the standard JPEG user devices expect, and significantly increase compression. We implement our photo recompression techniques in two novel codecs, ROMP and L-ROMP. ROMP is a lossless JPEG recompression codec that compresses typical photos 15% over standard JPEG. L-ROMP is a lossy JPEG recompression codec that distorts photos in a perceptually un-noticeable way and typically achieves 28% compression over standard JPEG. We estimate the benefits of our approach on Facebook's photo stack and find that our approaches can reduce the photo storage by 0.3-0.9x the logical size of the stored photos, and offer additional, collateral benefits to the photo caching stack, including 5-11% fewer requests to the backend storage, 15-31% reduction in wide-area bandwidth, and 16% reduction in external bandwidth. | electrical engineering and systems science |
Right-handed neutrinos in supersymmetric models can act as the source of lepton flavor violation (LFV). We present experimental implications of lepton flavor-violating processes within a supersymmetric type-I seesaw framework in the three-extra-parameter non-universal Higgs model (NUHM3) for large (PMNS-like) and small (CKM-like) Yukawa mixing scenarios. We highlight LFV predictions for the natural (low $\Delta_{\rm EW}$) portion of parameter space. Our numerical analysis includes full 2-loop renormalization group running effects for the three neutrino masses and mass matrices. We show the projected discovery reach of various LFV experiments ($\textit{i.e.}$ Mu2e, Mu3e, MEG-II, Belle-II), and specify regions that have already been excluded by the LHC searches. Our results depend strongly on whether one has a normal sneutrino hierarchy (NSH) or an inverted sneutrino hierarchy (ISH). Natural SUSY with a NSH is already excluded by MEG-2013 results while large portions of ISH have been or will soon be tested. However, LFV processes from natural SUSY with small Yukawa mixing and an ISH seem below any projected sensitivities. A substantial amount of the remaining parameter space of models with large PMNS-like sneutrino mixing will be probed by Mu2e and MEG-II experiments whereas small, CKM-like Yukawa mixing predicts LFV decays which can hide from LFV experiments. | high energy physics phenomenology |
Models with two-Higgs-doublets and natural flavour conservation contain $\tan \beta = v_2 / v_1$ as a physical parameter. We offer here a generalization of a recently proposed idea where only the Cabibbo angle, $\theta_\text{c} \simeq 0.22$, was related to $\tan \beta$ by virtue of the $\mathbb{D}_{4}$ dihedral symmetry group. The original proposal consisted of a massless first generation of quarks and no mixing with the third generation. In our case, through the addition of a third Higgs doublet with a small vacuum-expectation-value but very large masses, thus later decoupling, all quarks become massive and quark mixing is fully reproduced. In fact, all quark mixing angles are expressed in terms of $\tan \beta$ and one recovers trivial mixing in the limit $\beta \rightarrow 0$. We also explore the consequences in lepton mixing by adopting a type I seesaw mechanism with three heavy right-handed neutrinos. | high energy physics phenomenology |
We consider interactions of scalar particles, photons, and fermions in Schwarzschild, Reissner-Nordstr\"om, Kerr, and Kerr-Newman gravitational and electromagnetic fields with a zero and nonzero cosmological constant. We also consider interactions of scalar particles, photons, and fermions with nonextremal rotating charged black holes in a minimal five-dimensional gauge supergravity. We analyze the behavior of effective potentials in second-order relativistic Schr\"odinger-type equations. In all cases, we establish the existence of the regime of particle "falling" on event horizons. An alternative can be collapsars with fermions in stationary bound states without a regime of particles "falling". | physics |
We report on the spectroscopic analysis of the black hole binary GX 339-4 during its recent 2017-2018 outburst, observed simultaneously by the Swift and NuSTAR observatories. Although during this particular outburst the source failed to make state transitions, and despite Sun constraints during the peak luminosity, we were able to trigger four different observations sampling the evolution of the source in the hard state. We show that even for the lowest luminosity observations the NuSTAR spectra show clear signatures of X-ray reprocessing (reflection) in an accretion disk. Detailed analysis of the highest signal-to-noise spectra with our family of relativistic reflection models RELXILL indicates the presence of both broad and narrow reflection components. We find that a dual-lamppost model provides a superior fit when compared to the standard single lamppost plus distant neutral reflection. In the dual-lamppost model two sources at different heights are placed on the rotational axis of the black hole, suggesting that the narrow component of the Fe K emission is likely to originate in regions far away in the disk, but still significantly affected by its rotational motions. Regardless of the geometry assumed, we find that the inner edge of the accretion disk reaches a few gravitational radii in all our fits, consistent with previous determinations at similar luminosity levels. This confirms a very low degree of disk truncation for this source at luminosities above ~1% Eddington. Our estimates of $R_{\rm in}$ reinforce the suggested behavior of an inner disk that approaches the innermost regions as the luminosity increases in the hard state. | astrophysics |
We study the image of $\ell$-adic representations attached to subvarieties of Shimura varieties $Sh_K(G,X)$ that are not contained in a smaller Shimura subvariety and have no isotrivial components. We show that, for $\ell$ large enough (depending on the Shimura datum $(G,X)$ and the subvariety), such image contains the $\mathbb{Z}_\ell$-points coming from the simply connected cover of the derived subgroup of $G$. This can be regarded as a geometric version of the integral $\ell$-adic Mumford-Tate conjecture. | mathematics |
The halo masses $M_{halo}$ of low surface brightness (LSB) galaxies are critical measurements for understanding their formation processes. One promising method to estimate a galaxy's $M_{halo}$ is to exploit the empirical scaling relation between $M_{halo}$ and the number of associated globular clusters ($N_{\mathrm{GC}}$). We use a Bayesian mixture model approach to measure $N_{\mathrm{GC}}$ for 175 LSB ($23\leq\left\langle \mu_{e,r} \right\rangle [\mathrm{mag\ arcsec}^{-2}]\leq 28$) galaxies in the Fornax cluster using the Fornax Deep Survey (FDS) data; this is the largest sample of low-mass galaxies so far analysed for this kind of study. The proximity of the Fornax cluster means that we can measure galaxies with much smaller physical sizes ($0.3\leq r_{e,r}\ [\mathrm{kpc}]\leq 9.5$) compared to previous studies of the GC systems of LSB galaxies, probing stellar masses down to $M_{*}\sim10^{5}\mathrm{M_{\odot}}$. The sample also includes \nudg\ ultra-diffuse galaxies (UDGs), with projected $r$-band half-light radii greater than 1.5 kpc. Our results are consistent with an extrapolation of the $M_{*}-M_{halo}$ relation predicted from abundance matching. In particular, our UDG measurements are consistent with dwarf-sized halos, having typical masses between $10^{10}$ and $10^{11}\mathrm{M_{\odot}}$. Overall, our UDG sample is statistically indistinguishable from smaller LSB galaxies in the same magnitude range. We do not find any candidates likely to be as rich as some of those found in the Coma cluster. We suggest that environment might play a role in producing GC-rich LSB galaxies. | astrophysics |
Understanding the differences between the distribution of quarks bound in protons and neutrons is key for constraining the mechanisms of SU(6) spin-flavor symmetry breaking in Quantum Chromodynamics (QCD). While vast amounts of proton structure measurements have been performed, data on the structure of the neutron are much more sparse, as experiments typically extract the structure of neutrons from measurements of light atomic nuclei using model-dependent corrections for nuclear effects. Recently, the MARATHON collaboration performed such an extraction by measuring inclusive deep-inelastic electron scattering on helium-3 and tritium mirror nuclei, where nuclear effects are expected to be similar and thus be suppressed in the helium-3 to tritium ratio. Here we evaluate the model dependence of this extraction by examining a wide range of models, including the effect of using instant-form and light-cone nuclear wave functions and several different parameterizations of nucleon modification effects, including those with and without isospin dependence. We find that, while the data cannot differentiate among the different models of nuclear structure and nucleon modification, they consistently prefer a neutron-to-proton structure function ratio at $x_B \rightarrow 1$ of $\sim 0.4$ with a typical uncertainty ($1\sigma$) of $\sim0.05$ and $\sim0.10$ for isospin-independent and isospin-dependent modification models, respectively. While strongly favoring SU(6) symmetry breaking models based on perturbative QCD and the Schwinger-Dyson equation calculation, the MARATHON data do not completely rule out the scalar di-quark models if an isospin-dependent modification exists. | high energy physics phenomenology |
Anomaly detection and localization in medical images is a challenging task, especially when the anomaly exhibits a change of existing structures, e.g., brain atrophy or changes in the pleural space due to pleural effusions. In this work, we present a weakly supervised and detail-preserving method that is able to detect structural changes of existing anatomical structures. In contrast to standard anomaly detection methods, our method extracts information about the disease characteristics from two groups: a group of patients affected by the same disease and a healthy control group. Together with identity-preserving mechanisms, this enables our method to extract highly disease-specific characteristics for a more detailed detection of structural changes. We designed a specific synthetic data set to evaluate and compare our method against state-of-the-art anomaly detection methods. Finally, we show the performance of our method on chest X-ray images. Our method, called DeScarGAN, outperforms other anomaly detection methods on the synthetic data set and, by visual inspection, on the chest X-ray image data set. | electrical engineering and systems science |
We reassess the application of holographic techniques to the description of the 4D minimal composite Higgs model with $SO(5)\to SO(4)$ global symmetry breaking pattern. The particular 5D bottom-up holographic treatment is inspired by previous work in the context of QCD, and it allows us to study spin-one and spin-zero resonances. The resulting spectrum consists of the states transforming under the unbroken $SO(4)$ subgroup and those with quantum numbers in the $SO(5)/SO(4)$ coset. The spin-one states are arranged in linear radial trajectories, and the states from the broken subgroup are generally heavier. The spin-zero states from the coset space correspond to the four massless Goldstone bosons in 4D. One of them takes the role of the Higgs boson. Restrictions derived from the experimental constraints (Higgs couplings, $S$ parameter, etc.) are then implemented, and we conclude that the model is able to accommodate new vector resonances with masses in the range $2$ TeV to $3$ TeV without encountering phenomenological difficulties. The couplings governing the production of these new states in the processes of SM gauge boson scattering are also estimated. The method can be extended to other breaking patterns. | high energy physics phenomenology |
Coherent grazing-incidence small-angle X-ray scattering is used to investigate the average kinetics and the fluctuation dynamics during self-organized nanopatterning of silicon by Ar$^+$ bombardment at 65$^{\circ}$ polar angle. At early times, the surface behavior can be understood within the framework of linear theory. The transition away from the linear theory behavior is observed in the dynamics through the intensity correlation function. It quickly evolves to exhibit stretched exponential decay on short length scales and compressed exponential decay on length scales corresponding to the dominant structural length scale, the ripple wavelength. The correlation times also peak strongly at the ripple length scale. This behavior has notable similarities but also significant differences with the phenomenon of de Gennes narrowing. Overall, this dynamical behavior is found to be consistent with simulations of a nonlinear growth model. | condensed matter |
Tropical cyclones (TCs) rank among the most costly natural disasters in the United States, and accurate forecasts of track and intensity are critical for emergency response. Intensity guidance has improved steadily but slowly, as processes which drive intensity change are not fully understood. Because most TCs develop far from land-based observing networks, geostationary satellite imagery is critical to monitor these storms. However, these complex data can be challenging to analyze in real time, and off-the-shelf machine learning algorithms have limited applicability on this front due to their ``black box'' structure. This study presents analytic tools that quantify convective structure patterns in infrared satellite imagery for over-ocean TCs, yielding lower-dimensional but rich representations that support analysis and visualization of how these patterns evolve during rapid intensity change. The proposed ORB feature suite targets the global Organization, Radial structure, and Bulk morphology of TCs. By combining ORB and empirical orthogonal functions, we arrive at an interpretable and rich representation of convective structure patterns that serve as inputs to machine learning methods. This study uses the logistic lasso, a penalized generalized linear model, to relate predictors to rapid intensity change. Using ORB alone, binary classifiers identifying the presence (versus absence) of such intensity change events can achieve accuracy comparable to classifiers using environmental predictors alone, with a combined predictor set improving classification accuracy in some settings. More complex nonlinear machine learning methods did not perform better than the linear logistic lasso model for current data. | statistics |
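The logistic lasso referenced above can be sketched with proximal gradient descent (ISTA): a gradient step on the logistic loss followed by soft-thresholding, which is the proximal step for the L1 penalty and is what drives weak coefficients to exactly zero. The data and names below are synthetic stand-ins, not the study's ORB predictors:

```python
import numpy as np

def logistic_lasso(X, y, lam=0.05, lr=0.1, iters=2000):
    """L1-penalized logistic regression fit by proximal gradient descent.

    Each iteration takes a gradient step on the logistic loss, then
    applies soft-thresholding, the proximal operator of lam * ||w||_1.
    Illustrative only; the study's actual fitting procedure may differ.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (1 / (1 + np.exp(-X @ w)) - y) / n
        w -= lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# synthetic data: only the first of five predictors drives the label
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(float)
w = logistic_lasso(X, y)
# the L1 penalty keeps the informative coefficient and shrinks the rest
```

Sparsity is what makes the fitted classifier interpretable: the surviving nonzero coefficients identify which structure features are associated with rapid intensity change.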
Inspired by ER=EPR conjecture we present a mathematical tool providing a link between quantum entanglement and the geometry of spacetime. We start with the idea of operators in extended Hilbert space which, by definition, has no positive definite scalar product. Adopting several simple postulates we show that a quantum harmonic oscillator can be constructed as a positive definite sector in that space. We discuss the two-dimensional oscillator constructed in such a way that the ground state is maximally entangled. Being a vector in the Hilbert space, it has also a non-trivial expansion in a bigger extended space. On one hand, the space is not free of negative norm states. On the other hand, it allows one to interpret the ground state geometrically in terms of $AdS_3$. The interpretation is based solely on the form of the expansion, revealing certain structures at the boundary and in the bulk of $AdS_3$. The former correspond to world lines of massless particles at the boundary. The latter resemble interacting closed strings. | high energy physics theory |
In a six-day footrace, competitors accumulate as much distance as possible on foot over 144 consecutive hours by circumambulating a loop course. Now an obscure event on the fringe of ultra running and contested by amateurs, six-day races and the associated sport of pedestrianism used to be a lucrative professional athletic endeavor. Indeed, pedestrianism was the most popular spectator sport in America c. 1874-c. 1881. We analyzed data from 277 six-day races spanning 37 years in the post-pedestrianism era (1981-2018). Men outnumber women 3:1 in six-day race participation. The men's (women's) six-day world record is 644.2 (549.1) miles and the top 4% achieve 500 (450) miles. Adopting the forecasting model of Godsey (2012), we predict a 53% (21%) probability that the men's (women's) world record will be broken within the next decade. | statistics |
Mechanotransduction, the biological response to mechanical stress, is often initiated by the activation of mechanosensitive (MS) proteins upon mechanically induced deformations of the cell membrane. A current challenge in fully understanding this process is to predict how lipid bilayers deform upon application of mechanical stress. In this context, it is now well established that anionic lipids influence the function of many proteins. Here, we test the hypothesis that anionic lipids could indirectly modulate MS proteins by altering the lipid bilayer's mechanical properties. Using all-atom molecular dynamics simulations, we computed the bilayer bending rigidity ($K_C$), the area compressibility ($K_A$), and the surface shear viscosity ($\eta_m$) of phosphocholine (PC) lipid bilayers containing or not phosphatidylserine (PS) or phosphatidylinositol bisphosphate (PIP2) at physiological concentrations in the lower leaflet. Tensionless leaflets were first checked for each asymmetric bilayer model, and a formula for embedding an asymmetric channel in an asymmetric bilayer is proposed. Results from two different-sized bilayers show consistently that the addition of 20% surface charge in the lower leaflet of the PC bilayer by PIP2 has minimal impact on its mechanical properties, while PS reduced the bilayer bending rigidity by 22%. As a comparison, supplementing the PIP2-enriched PC membrane with 30% cholesterol, a known rigidifying steroid lipid, produces a significant increase in all three mechanical constants. Analysis of pairwise splay moduli suggests that the effect of anionic lipids on bilayer bending rigidity largely depends on the number of anionic lipid pairs formed during simulations. The potential implication of bilayer bending rigidity is discussed in the framework of mechanosensitive Piezo channels. | physics |
Generalized Brewster angle (GBA) is the incidence angle at which polarization by reflection for p- and s-polarized light takes place. Realizing the s-polarization Brewster effect requires a material with a magnetic response, which is challenging at optical frequencies since the magnetic response of materials at these frequencies is extremely weak. Here, we experimentally realize the GBA effect in the visible using a thin-film absorber system consisting of a dielectric film on an absorbing substrate. Polarization by reflection is realized for both p- and s-polarized light at different angles of incidence and multiple wavelengths. We provide a theoretical framework for the generalized Brewster effect in thin-film light absorbers. We demonstrate hydrogen gas sensing using a single-layer graphene film transferred on a thin-film absorber at the GBA with ~1 fg/mm2 areal mass sensitivity. The ultrahigh sensitivity stems from the strong phase sensitivity near the point of darkness, particularly at the GBA, and the strong light-matter interaction in planar nanocavities. These findings depart from the traditional domain of thin films as mere interference optical coatings and highlight their many potential applications, including gas sensing and biosensing. | physics |
This paper examines the transverse momentum spectra of baryons in multi-particle production at modern colliders using the Quark-Gluon String Model (QGSM). It discusses 1) the difference in \Lambda^0 hyperon spectra in antiproton-proton vs. proton-proton reactions; 2) the growth of the average transverse momentum of \Lambda hyperons with proton-proton collision energy and 3) the dependence of average p_t on the masses of mesons and baryons at the LHC energy of 7 TeV. This analysis of baryon spectra led to the following conclusions. First, the fragmentation of the antidiquark-diquark side of the pomeron diagram makes the major contribution to baryon production spectra in the asymmetric antiproton-proton reaction. Second, the average p_t's of hyperons steadily grow with energy over the range from 53 GeV to 7 TeV. Since no dramatic changes were seen in the characteristics of baryon production, hadroproduction processes do not cause the "knee" in the cosmic ray proton spectra at the energies between Tevatron and LHC. | high energy physics phenomenology |
Radar observations show that (16) Psyche is one of the largest and most massive M-class asteroids located in the main belt, with a diameter of approximately 230 km. This makes Psyche a unique object, since observations indicate an iron-nickel composition. It is believed that this body may be what is left of the metal core of an early planet that was fragmented over millions of years by violent collisions. In this work we study a variety of dynamical aspects related to the surface, as well as the environment around this asteroid. We use computational tools to explore the gravitational field generated by this body, assuming constant values for its density and rotation period. We then determine a set of physical and dynamical characteristics over its entire surface. The results include the geometric altitude, geopotential altitude, tilt, and slope, among others. We also explore the neighborhood around the asteroid (16) Psyche, and the location and linear stability of the equilibrium points were determined. We found four external equilibrium points, two of them linearly stable. We confirmed the stability of these points by performing numerical simulations of massless particles around the asteroid, which also showed an asymmetry in the size of the stable regions. In addition, we integrate a cloud of particles in the vicinity of (16) Psyche in order to verify in which regions of its surface the particles are most likely to collide. | astrophysics |
Spintronics provides an efficient platform for realizing non-volatile memory and logic devices. In these systems, data is stored in the magnetization of magnetic materials, and the magnetization is switched in the writing process. Conventional spintronic devices use ferromagnetic materials, whose magnetization dynamics occur on a timescale of around a nanosecond, setting a limit on the switching speed. Increasing the magnetization switching speed has been one of the challenges in spintronics research. In this work we take advantage of the ultrafast magnetization dynamics of ferrimagnetic materials instead of ferromagnets, and we use femtosecond laser pulses and a photoconductive Auston switch to create picosecond current pulses for switching the ferrimagnet. By anomalous Hall and magneto-optic Kerr effect (MOKE) measurements, we demonstrate robust picosecond spin-orbit torque (SOT) driven magnetization switching of ferrimagnetic GdFeCo. Time-resolved MOKE shows a magnetic resonance frequency of GdFeCo above 50 GHz, indicating faster than 20 ps spin dynamics and a SOT switching speed of tens of picoseconds. Our work provides a promising route to realizing picosecond operation speeds for non-volatile magnetic memory and logic applications. | physics |
Measurements of the Cosmic Microwave Background (CMB) temperature anisotropies on large angular scales have uncovered a number of anomalous features of marginal statistical significance, such as a hemispherical power asymmetry, lack of power on large angular scales, and features in the power spectrum. Because the primary CMB temperature has been measured at the cosmic variance limit, determining if these anomalies are hints of new physics as opposed to foregrounds, systematics, or simply statistical flukes, requires new observables. In this paper, we highlight the potential contribution that future measurements of the kinetic Sunyaev-Zel'dovich effect (kSZ) and the polarized Sunyaev Zel'dovich effect (pSZ) could make in determining the physical nature of several CMB anomalies. The kSZ and pSZ effects, temperature and polarization anisotropies induced by scattering from free electrons in the reionized Universe, are the dominant blackbody contribution to the CMB on small angular scales. Using the technique of SZ tomography, measurements of kSZ and pSZ effects can be combined with galaxy surveys to reconstruct the remote CMB dipole and quadrupole fields, providing a 3-dimensional probe of large scale modes inside our Hubble volume. Building on previous work, we forecast the additional constraining power that these observables might offer for a representative set of anomaly models. We find that the remote CMB dipole and quadrupole contain a similar amount of information on anomaly models as the primary CMB polarization. The information from CMB temperature, polarization, and the remote dipole and quadrupole fields is complementary, and the full set of observables can improve constraints on anomaly models by a factor of $\sim 2-4$ using next-generation CMB experiments and galaxy surveys. This could be sufficient to definitively establish the physical origin of several CMB anomalies. | astrophysics |
The XENON1T experiment at the Laboratori Nazionali del Gran Sasso is the most sensitive direct detection experiment for dark matter in the form of weakly interacting particles (WIMPs) with masses above $6\,$GeV/$c^2$ scattering off nuclei. The detector employs a dual-phase time projection chamber with 2.0 metric tons of liquid xenon in the target. A one metric $\mathrm{ton}\times\mathrm{year}$ exposure of science data was collected between October 2016 and February 2018. This article reports on the performance of the detector during this period and describes details of the data analysis that led to the most stringent exclusion limits on various WIMP-nucleon interaction models to date. In particular, signal reconstruction, event selection and calibration of the detector response to nuclear and electronic recoils in XENON1T are discussed. | physics |
The radioheliograph image is essential for the study of solar short-term activities and long-term variations, while the continuity and granularity of radioheliograph data are not ideal, due to the short time the Sun is visible and the complex electromagnetic environment near a ground-based radio telescope. In this work, we develop a multi-channel input, single-channel output neural network, which can generate radioheliograph images in the microwave band from the Extreme Ultra-violet (EUV) observations of the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO). The neural network is trained with nearly 8 years of data from the Nobeyama Radioheliograph (NoRH) at 17 GHz and SDO/AIA from January 2011 to September 2018. The generated radioheliograph images are in good consistency with the well-calibrated NoRH observations. SDO/AIA provides solar atmosphere images in multiple EUV wavelengths every 12 seconds from space, so the present model can fill the gaps caused by the limited observation time of microwave radioheliographs, and support further study of the relationship between the microwave and EUV emission. | astrophysics |
We present spatially-resolved echelle spectroscopy of an intervening MgII-FeII-MgI absorption-line system detected at $z_{\rm abs}=0.73379$ toward the giant gravitational arc PSZ1 G311.65-18.48. The absorbing gas is associated with an inclined disk-like star-forming galaxy, whose major axis is aligned with the two arc-segments reported here. We probe in absorption the galaxy's extended disk continuously, at $\approx 3$ kpc sampling, from its inner region out to $15\times$ the optical radius. We detect strong ($W_0^{2796}>0.3$ \r{A}) coherent absorption along $13$ independent positions at impact parameters $D=0$--$29$ kpc on one side of the galaxy, and no absorption at $D=28$--$57$ kpc on the opposite side (all de-lensed distances at $z_{\rm abs}$). We show that: (1) the gas distribution is anisotropic; (2) $W_0^{2796}$, $W_0^{2600}$, $W_0^{2852}$, and the ratio $W_0^{2600}\!/W_0^{2796}$, all anti-correlate with $D$; (3) the $W_0^{2796}$-$D$ relation is not cuspy and exhibits significantly less scatter than the quasar-absorber statistics; (4) the absorbing gas is co-rotating with the galaxy out to $D \lesssim 20$ kpc, resembling a `flat' rotation curve, but at $D\gtrsim 20$ kpc velocities decline below the expectations from a 3D disk-model extrapolated from the nebular [OII] emission. These signatures constitute unambiguous evidence for rotating extra-planar diffuse gas, possibly also undergoing enriched accretion at its edge. Arguably, we are witnessing some of the long-sought processes of the baryon cycle in a single distant galaxy expected to be representative of such phenomena. | astrophysics |
This manuscript discusses a scalable controller synthesis method for networked systems with a large number of identical subsystems based on the H-infinity control framework. The dynamics of the individual subsystems are described by identical linear time-invariant delay differential equations and the effect of transport and communication delay is explicitly taken into account. The presented method is based on the result that, under a particular assumption on the graph describing the interconnections between the subsystems, the H-infinity norm of the overall system is upper bounded by the robust H-infinity norm of a single subsystem with an additional uncertainty. This work will therefore briefly discuss a recently developed method to compute this last quantity. The resulting controller is then obtained by directly minimizing this upper bound in the controller parameters. | mathematics |
The optic nerve head (ONH) typically experiences complex neural- and connective-tissue structural changes with the development and progression of glaucoma, and monitoring these changes could be critical for improved diagnosis and prognosis in the glaucoma clinic. The gold-standard technique to assess structural changes of the ONH clinically is optical coherence tomography (OCT). However, OCT is limited to the measurement of a few hand-engineered parameters, such as the thickness of the retinal nerve fiber layer (RNFL), and has not yet been qualified as a stand-alone device for glaucoma diagnosis and prognosis applications. We argue this is because the vast amount of information available in a 3D OCT scan of the ONH has not been fully exploited. In this study we propose a deep learning approach that can: \textbf{(1)} fully exploit information from an OCT scan of the ONH; \textbf{(2)} describe the structural phenotype of the glaucomatous ONH; and that can \textbf{(3)} be used as a robust glaucoma diagnosis tool. Specifically, the structural features identified by our algorithm were found to be related to clinical observations of glaucoma. The diagnostic accuracy from these structural features was $92.0 \pm 2.3 \%$ with a sensitivity of $90.0 \pm 2.4 \% $ (at $95 \%$ specificity). By changing their magnitudes in steps, we were able to reveal how the morphology of the ONH changes as one transitions from a `non-glaucoma' to a `glaucoma' condition. We believe our work may have strong clinical implications for our understanding of glaucoma pathogenesis, and could be improved in the future to also predict loss of vision. | electrical engineering and systems science |
An exploratory study of the time-like pion electromagnetic form factor in a Poincar\'e-covariant bound state formalism in the isospin symmetric limit is presented. Starting from a quark interaction kernel representing gluon-intermediated interactions for valence-type quarks, non-valence effects are included by introducing pions as explicit degrees of freedom. The two most important qualitative aspects are, in view of the presented study, the opening of the dominant $\rho$-meson decay channel and the presence of a multi-particle branch cut setting in when the two-pion threshold is crossed. Based on a recent respective computation of the quark-photon vertex, the pion electromagnetic form factor for space-like and time-like kinematics is calculated. The obtained results for its absolute value and its phase compare favorably to the available experimental data, and they are analyzed in detail by confronting them to the expectations based on an isospin-symmetric version of a vector-meson dominance model. | high energy physics phenomenology |
We study the low temperature out of equilibrium Monte Carlo dynamics of the disordered Ising $p$-spin Model with $p=3$ and a small number of spin variables. We focus on sequences of configurations that are stable against single spin flips obtained by instantaneous gradient descent from persistent ones. We analyze the statistics of energy gaps, energy barriers and trapping times on sub-sequences such that the overlap between consecutive configurations does not overcome a threshold. We compare our results to the predictions of various trap models finding the best agreement with the step model when the $p$-spin configurations are constrained to be uncorrelated. | condensed matter |
Low-power sensing technologies, such as wearables, have emerged in the healthcare domain since they enable continuous and non-invasive monitoring of physiological signals. In order to endow such devices with clinical value, classical signal processing has encountered numerous challenges. However, data-driven methods, such as machine learning, offer attractive accuracies at the expense of being resource and memory demanding. In this paper, we focus on the inference of neural networks running on the microcontrollers and low-power processors with which wearable sensors and devices are generally equipped. In particular, we adapted an existing convolutional-recurrent neural network, designed to detect and classify cardiac arrhythmias from a single-lead electrocardiogram, to the low-power embedded System-on-Chip nRF52 from Nordic Semiconductor with an ARM Cortex-M4 processing core. We show our implementation in fixed-point precision, using the CMSIS-NN libraries, yields a drop in $F_1$ score from 0.8 to 0.784 relative to the original implementation, with a memory footprint of 195.6KB and a throughput of 33.98MOps/s. | electrical engineering and systems science |
Randomized controlled trials are the gold standard for measuring the causal effects of treatments on clinical outcomes. However, randomized trials are not always feasible, and causal treatment effects must, therefore, often be inferred from observational data. Observational study designs do not allow conclusions about causal relationships to be drawn unless statistical techniques are used to account for the imbalance of confounders across groups while key assumptions hold. Propensity score (PS) and balance weighting are two useful techniques that aim to reduce the imbalances between treatment groups by weighting the groups to look alike on the observed confounders. There are many methods available to estimate PS and balancing weights. However, it is unclear a priori which will achieve the best trade-off between covariate balance and effective sample size. Weighted analyses are further complicated by small studies with limited sample sizes, which is common when studying rare diseases. To address these issues, we present a step-by-step guide to covariate balancing strategies, including how to evaluate overlap, obtain estimates of PS and balancing weights, check for covariate balance, and assess sensitivity to unobserved confounding. We compare the performance of a number of commonly used estimation methods on a synthetic data set based on the Physical Activity and Exercise Outcomes in Huntington Disease (PACE-HD) study, which explored whether enhanced physical activity affects the progression and severity of the disease. We provide general guidelines for the choice of method for estimation of PS and balancing weights, interpretation, and sensitivity analysis of results. We also present R code for implementing the different methods and assessing balance. | statistics |
At present, automatic speaker recognition is a very important issue due to its diverse applications. Hence, it becomes necessary to obtain models that take into consideration a person's speaking style, vocal tract information, the timbral qualities of the voice, and other innate characteristics of the voice. Studies of Bengali speech recognition and speaker identification are scarce in the literature, hence the need to involve Bengali subjects in modelling our speaker identification engine. In this work, we have extracted acoustic features of speech using nonlinear multifractal analysis. Multifractal detrended fluctuation analysis (MFDFA) reveals the complexity associated with the speech signals. The source characteristics have been quantified with the help of techniques such as the correlation matrix and the skewness of the MFDFA spectrum. The results obtained from this study give a good recognition rate for Bengali speakers. | computer science |
An adaptive quantum image encryption method based on the wavelet transform is designed. Since most of the information is concentrated in the low-frequency part after the wavelet transform, only the low-frequency information of the image is retained, so as to reduce the encryption workload. The low-frequency information is then encrypted with a random key stream generated by a logistic map; the encryption is realized by an XOR operation. In the decryption process, zero-filling operations are carried out on the high-frequency coefficients to recover decrypted images equal to the plain images. At the same time, the relevant quantum logic circuit is designed. Statistical simulation and theoretical analysis demonstrate that the proposed quantum image encryption algorithm has higher security and lower computational complexity. It is therefore well suited to scenarios that require encrypting a large number of images during network transmission. | quantum physics |
Mechanical properties of a nanomechanical resonator have a significant impact on the performance of a resonant nano-electromechanical system (NEMS) device. Here we study the mechanical properties of suspended membranes fabricated out of low-pressure chemical vapor deposited silicon nitride thin films. Doubly-clamped membranes of silicon nitride with thickness less than 50 nm and length varying from 5 um to 60 um were fabricated. The elastic modulus and stress in the suspended membranes were measured using Atomic Force Microscope (AFM)-based nanomechanical spectroscopy. The elastic modulus of the suspended membranes was found to be significantly higher than that of the corresponding thin films on the substrate. A reduction in the net stress after the fabrication of the suspended membranes was observed and is explained by estimating the contributions of thermal stress and intrinsic stress. We establish a mathematical model to calculate the normalized elastic modulus of a suspended membrane. Lastly, we study the capillary force gradient between the suspended SiNx membrane and the Si substrate that could collapse the suspended membrane. | physics |
We discuss ensemble averages of two-dimensional conformal field theories associated with an arbitrary indefinite lattice with integral quadratic form $Q$. We provide evidence that the holographic dual after the ensemble average is the three-dimensional Abelian Chern-Simons theory with kinetic term determined by $Q$. The resulting partition function can be written as a modular form, expressed as a sum over the partition functions of Chern-Simons theories on lens spaces. For odd lattices, the dual bulk theory is a spin Chern-Simons theory, and we identify several novel phenomena in this case. We also discuss the holographic duality prior to averaging in terms of Maxwell-Chern-Simons theories. | high energy physics theory |
Infrared and visible image fusion, a hot topic in the field of image processing, aims at obtaining fused images that retain the advantages of the source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps with low- and high-frequency information, respectively, and that the decoder recovers the original image. To this end, the loss function makes the background/detail feature maps of source images similar/dissimilar. In the test phase, background and detail feature maps are respectively merged via a fusion module, and the fused image is recovered by the decoder. Qualitative and quantitative results illustrate that our method can generate fusion images containing highlighted targets and abundant detail texture information with strong robustness, meanwhile surpassing state-of-the-art (SOTA) approaches. | electrical engineering and systems science |
We extend our BDI (birth-death-immigration) process based stochastic model of an infectious disease to time-nonhomogeneous cases. First, we discuss the deterministic model, and derive the expected value of the infection process. Then as an application we consider that a government issues a decree to its citizens to curtail their activities that may incur further infections and show how the public's tardy response may further increase infections and prolong the epidemic much longer than one might think. We seek to solve a partial differential equation for the probability generating function. We find, however, that an exact solution is obtainable only for the BD process, i.e., no arrivals of the infected from outside. The coefficient of variation for the nonhomogeneous BD process is found to be well over unity. This result implies that the variations among different sample paths will be as large as in the negative binomial distribution with r<1, which was found in Part I for the homogeneous BDI model. In the final section, we illustrate, using our running example, how much information we can derive from the time dependent PMF (probability mass function) P_k(t)=Pr[I(t)=k]. We present graphical plots of the PMF at various t's, and cross-sections of this function at various k's. A mesh plot of the function over the (k, t) plane summarizes the above numerous plots. The results of this paper reinforce our earlier claim (see Abstract of Part II) that it would be a futile effort to attempt to identify all possible reasons why environments of similar situations differ so much in their epidemic patterns. Mere "luck" plays a more significant role than most of us may believe. We should be prepared for a worse possible scenario, which only a stochastic model can provide with probabilistic qualification. An empirical validation of the above results will be given in Part III-B. | statistics |
Humans tackle reading comprehension not only based on the given context itself but often also rely on commonsense beyond it. To empower machines with commonsense reasoning, in this paper we propose a Commonsense Evidence Generation and Injection framework for reading comprehension, named CEGI. The framework injects two kinds of auxiliary commonsense evidence into comprehensive reading to equip the machine with the ability of rational thinking. Specifically, we build two evidence generators: the first aims to generate textual evidence via a language model; the other aims to extract factual evidence (automatically aligned text-triples) from a commonsense knowledge graph after graph completion. This evidence incorporates contextual commonsense and serves as additional input to the model. Thereafter, we propose a deep contextual encoder to extract semantic relationships among the paragraph, question, option, and evidence. Finally, we employ a capsule network to extract different linguistic units (words and phrases) from the relations, and dynamically predict the optimal option based on the extracted units. Experiments on the CosmosQA dataset demonstrate that the proposed CEGI model outperforms the current state-of-the-art approaches and achieves an accuracy of 83.6% on the leaderboard. | computer science |
The expectations placed on unmanned air vehicles (UAVs) push their operating environments into narrow spaces, where the systems may fly very close to an object and perform an interaction. This regime brings variations in the UAV dynamics: the thrust and drag coefficients of the propellers might change with proximity. At the same time, UAVs may need to operate under external disturbances to follow time-based trajectories. Under these challenging conditions, a standard controller with a fixed structure may not handle all missions, and there may be a need to adjust its parameters for each different case. With these motivations, the practical implementation and evaluation of an autonomous controller applied to a quadrotor UAV are proposed in this work. A self-adaptive controller is used, based on a composite control scheme combining sliding mode control (SMC) and evolving neuro-fuzzy control. The parameter vector of the neuro-fuzzy controller is updated adaptively based on the sliding surface of the SMC. The autonomous controller possesses a new elastic structure, in which the number of fuzzy rules grows or is pruned based on the bias-variance balance. The interaction of the UAV is experimentally evaluated in real time considering the ground effect, the ceiling effect, and flight through a strong fan-generated wind while following time-based trajectories. | electrical engineering and systems science |
Coherent-state-based phase estimation is a fruitful testbed for the field of precision measurements since coherent states are robust to decoherence compared with exotic quantum states. The seminal work by Caves (https://doi.org/10.1103/PhysRevD.23.1693 , Phys. Rev. D 23, 1693 (1981)) stated that the phase sensitivity of a U(2) interferometer fed with a coherent state is limited by the shot-noise limit (SNL). In this Letter, we demonstrate that this bound is not a conclusive sensitivity limit and can be broken when the measurement includes an external phase reference. The SNL can be surpassed by a factor of $\sqrt{2}$, and this claim is supported by a calculation of the quantum Fisher information. Additionally, we discuss other single-mode Gaussian inputs whose sensitivities are beyond the SNL. Our work shows potential applications for many metrological scenarios, particularly when the measured samples are immersed in highly lossy environments or can withstand bright illumination. | quantum physics |
Self-testing refers to a device-independent way to uniquely identify the state and the measurements of uncharacterized quantum devices. The only information required comprises the number of measurements, the number of outputs of each measurement, and the statistics of each measurement. Earlier results on self-testing of multipartite states were restricted either to Dicke states or graph states. In this paper, we propose self-testing schemes for a large family of symmetric three-qubit states, namely superpositions of the W state and the GHZ state. We first propose and analytically prove a self-testing criterion for the special symmetric state with equal coefficients in the canonical basis, by designing subsystem self-testing of partially and maximally entangled states simultaneously. Then we demonstrate that, in the general case, the states can be self-tested numerically in high precision by the swap method combined with semi-definite programming (SDP). | quantum physics |
The linear stability of buoyant parallel flow in a vertical porous layer with an annular cross-section is investigated. The vertical cylindrical boundaries are kept at different uniform temperatures and they are assumed to be impermeable. The emergence of linear instability by convection cells is excluded on the basis of a numerical solution of the linearised governing equations. This result extends to the annular geometry the well-known Gill's theorem regarding the impossibility of convective instability in a vertical porous plane slab whose boundaries are impermeable and isothermal with different temperatures. The extension of Gill's theorem to the annular domain is approached numerically by evaluating the growth rate of normal mode perturbations and showing that its sign is negative, which means asymptotic stability of the basic flow. A concurring argument supporting the absence of linear instability arises from the investigation of cases where the impermeability condition at the vertical boundaries is relaxed and a partial permeability is modelled through Robin boundary conditions for the pressure. With partially permeable boundaries, an instability emerges which takes the form of axisymmetric normal modes. | physics |
The degeneracies of single-centered dyonic $\frac14$-BPS black holes (BH) in Type II string theory on K3$\times T^2$ are known to be coefficients of certain mock Jacobi forms arising from the Igusa cusp form $\Phi_{10}$. In this paper we present an exact analytic formula for these BH degeneracies purely in terms of the degeneracies of the perturbative $\frac12$-BPS states of the theory. We use the fact that the degeneracies are completely controlled by the polar coefficients of the mock Jacobi forms, using the Hardy-Ramanujan-Rademacher circle method. Here we present a simple formula for these polar coefficients as a quadratic function of the $\frac12$-BPS degeneracies. We arrive at the formula by using the physical interpretation of polar coefficients as negative discriminant states, and then making use of previous results in the literature to track the decay of such states into pairs of $\frac12$-BPS states in the moduli space. Although there are an infinite number of such decays, we show that only a finite number of them contribute to the formula. The phenomenon of BH bound state metamorphosis (BSM) plays a crucial role in our analysis. We show that the dyonic BSM orbits with $U$-duality invariant $\Delta<0$ are in exact correspondence with the solution sets of the Brahmagupta-Pell equation, which implies that they are isomorphic to the group of units in the order $\mathbb{Z}[\sqrt{|\Delta|}]$ in the real quadratic field $\mathbb{Q}(\sqrt{|\Delta|})$. We check our formula against the known numerical data arising from the Igusa cusp form, for the first 1650 polar coefficients, and find perfect agreement. | high energy physics theory |
Bayesian multinomial logistic-normal (MLN) models are popular for the analysis of sequence count data (e.g., microbiome or gene expression data) due to their ability to model multivariate count data with complex covariance structure. However, existing implementations of MLN models are limited to handling small data sets due to the non-conjugacy of the multinomial and logistic-normal distributions. We introduce MLN models which can be written as marginally latent matrix-t process (LTP) models. Marginally LTP models describe a flexible class of generalized linear regression, non-linear regression, and time series models. We develop inference schemes for Marginally LTP models and, through application to MLN models, demonstrate that our inference schemes are both highly accurate and often 4-5 orders of magnitude faster than MCMC. | statistics |
For Riemannian submersions with fibers of basic mean curvature, we compare the spectrum of the total space with the spectrum of a Schr\"{o}dinger operator on the base manifold. Exploiting this concept, we study submersions arising from actions of Lie groups. In this context, we extend the state of the art results on the bottom of the spectrum under Riemannian coverings. As an application, we compute the bottom of the spectrum and the Cheeger constant of connected, amenable Lie groups. | mathematics |
This paper addresses the problem of safe and efficient navigation for remotely controlled robots operating in hazardous and unstructured environments or conducting other remote robotic tasks. A shared control method is presented which blends the commands from a VFH+ obstacle avoidance navigation module with the teleoperation commands provided by an operator via a joypad. The presented approach offers several advantages, such as flexibility allowing for a straightforward adaptation of the controller's behaviour, easy integration with variable autonomy systems, and the ability to cope with dynamic environments. The advantages of the presented controller are demonstrated by an experimental evaluation in a disaster response scenario. More specifically, the presented evidence shows a clear performance increase in terms of safety and task completion time compared to a pure teleoperation approach, as well as an ability to cope with previously unobserved obstacles. | computer science
Celestial amplitudes are flat-space amplitudes which are Mellin-transformed to correlators living on the celestial sphere. In this note we present a recursion relation, based on a tree-level BCFW recursion, for gravitational celestial amplitudes and use it to explore the notion of conformal softness. As the BCFW formula exponentiates in the soft energy, it leads directly to conformal soft theorems in an exponential form. These appear from a soft piece of the amplitude characterized by a discrete family of singularities with weights $\Delta=1-\mathbb{Z}_+$. As a byproduct, in the case of the MHV sector we provide a direct celestial analogue of Hodges' recursion formula at all multiplicities. | high energy physics theory |
Heavy neutral leptons (HNLs) appear in many extensions of the Standard Model of particle physics. In this study, we investigate to what extent the NA62 experiment at CERN could improve the existing bounds on the HNL mixing angle $|U_e|^2$ by performing a missing mass search in $K^+ \to \pi^0 e^+ N$ decays in flight. We show that the limit $|U_e|^2 \simeq 2\times 10^{-6}$ can be reached with the currently available data in the mass range 125 -- 144 MeV, a range that is not well covered by direct searches. Future data, together with a dedicated trigger and/or improvements in the rejection of out-of-acceptance photons, can improve this limit by another order of magnitude. | high energy physics phenomenology
In this paper we attempt to understand Lorentzian tensor networks, as a preparation for constructing tensor networks that can describe more exotic backgrounds such as black holes. To define notions of reference frames and switching of reference frames on a tensor network, we will borrow ideas from the algebraic quantum field theory literature. With these definitions, we construct simple examples of Lorentzian tensor networks and solve the spectrum for a choice of ``inertial frame'' based on Gaussian models of fermions and integrable models. In particular, the tensor network can be viewed as a periodically driven Floquet system that by-passes the ``doubling problem'' and gives rise to fermions with exactly linear dispersion relations. We will find that a boost operator connecting different inertial frames and notions of ``Rindler observers'' can be defined, and that important physics in Lorentz invariant QFT, such as the Unruh effect, can be captured by such a skeleton of spacetime. We find interesting subtleties when the same approach is directly applied to bosons -- the operator algebra contains commutators that take the wrong sign -- resembling bosons behind horizons. | high energy physics theory
Let $\phi:S^2 \to S^2$ be an orientation-preserving branched covering whose post-critical set has finite cardinality $n$. If $\phi$ has a fully ramified periodic point $p_{\infty}$ and satisfies certain additional conditions, then, by work of Koch, $\phi$ induces a meromorphic self-map $R_{\phi}$ on the moduli space $\mathcal{M}_{0,n}$; $R_{\phi}$ descends from Thurston's pullback map on Teichm\"uller space. Here, we relate the dynamics of $R_{\phi}$ on $\mathcal{M}_{0,n}$ to the dynamics of $\phi$ on $S^2$. Let $\ell$ be the length of the periodic cycle in which the fully ramified point $p_{\infty}$ lies; we show that $R_{\phi}$ is algebraically stable on the heavy-light Hassett space corresponding to $\ell$ heavy marked points and $(n-\ell)$ light points. | mathematics |
Color centers in diamond micro and nano structures are under investigation for a plethora of applications. However, obtaining high quality color centers in small structures is challenging, and little is known about how properties such as spin population lifetimes change during the transition from bulk to micro and nano structures. In this manuscript, we studied various ways to prepare diamond samples containing silicon vacancy centers and measured how population lifetimes of orbital states change in pillars as we varied their dimensions from approximately 1 $\mu$m to 120 nm. We also researched the influence of the properties of the diamond substrate and the implantation and annealing methods on the silicon vacancy inhomogeneous linewidth and orbital lifetime. Our measurements show that nominally identical diamond samples can display significantly distinct inhomogeneous broadening. We observed weak indications that restricted vibrational modes in small structures may extend population lifetimes. However, imperfections in the crystal lattice or surface damage caused by etching reduce population lifetimes, especially in the smallest structures. | quantum physics |
The classification of large-scale high-resolution SAR land cover images acquired by satellites is a challenging task, facing several difficulties such as semantic annotation requiring expertise, changing data characteristics due to varying imaging parameters or regional target area differences, and complex scattering mechanisms different from optical imaging. Given a large-scale SAR land cover dataset collected from TerraSAR-X images with a hierarchical three-level annotation of 150 categories and comprising more than 100,000 patches, three main challenges in automatically interpreting SAR images -- highly imbalanced classes, geographic diversity, and label noise -- are addressed. In this letter, a deep transfer learning method is proposed based on a similarly annotated optical land cover dataset (NWPU-RESISC45). In addition, a top-2 smooth loss function with cost-sensitive parameters is introduced to tackle the problems of label noise and class imbalance. The proposed method shows high efficiency in transferring information from a similarly annotated remote sensing dataset, performs robustly on highly imbalanced classes, and alleviates the over-fitting problem caused by label noise. Moreover, the learned deep model generalizes well to other SAR-specific tasks, such as MSTAR target recognition, with a state-of-the-art classification accuracy of 99.46%. | electrical engineering and systems science
For a real nonlinear Klein-Gordon Lagrangian density with a special solitary wave solution (SSWS), which is essentially unstable, it is shown how adding a proper additional massless term could guarantee the energetic stability of the SSWS, without changing its dominant dynamical equation and other properties. In other words, it is a stability catalyzer. The additional term contains a parameter $B$, which brings about more stability for the SSWS at larger values. Hence, if one considers $B$ to be an extremely large value, then any other solution which is not very close to the free far apart SSWSs and the trivial vacuum state requires an infinite amount of energy to be created. In other words, the possible non-trivial stable configurations of the fields with finite total energies are any number of far apart SSWSs, similar to any number of identical particles. | physics
In the CHY-frame for the amplitudes, there are two kinds of singularities we need to deal with. The first one is the pole singularities when the kinematics is not general, such that some of $S_A\to 0$. The second one is the collapse of locations of points after solving scattering equations (i.e., the singular solutions). These two types of singularities are tightly related to each other, but the exact mapping is not well understood. In this paper, we have initiated the systematic study of the mapping. We have demonstrated the different mapping patterns using three typical situations, i.e., the factorization limit, the soft limit and the forward limit. | high energy physics theory |
We provide an analytical approximation to the dynamics in each of the three most important low order secondary resonances (1:1, 2:1, and 3:1) bifurcating from the synchronous primary resonance in the gravitational spin-orbit problem. To this end we extend the perturbative approach introduced in Gkolias et al. (2016), based on normal form series computations. This allows us to recover analytically all non-trivial features of the phase space topology and bifurcations associated with these resonances. Applications include the characterization of spin states of irregular planetary satellites or double systems of minor bodies with irregular shapes. The key ingredients of our method are: i) the use of a detuning parameter measuring the distance from the exact resonance, and ii) an efficient scheme to `book-keep' the series terms, which allows us to simultaneously treat all small parameters entering the problem. Explicit formulas are provided for each secondary resonance, yielding i) the time evolution of the spin state, ii) the form of phase portraits, iii) initial conditions and stability for periodic solutions, and iv) bifurcation diagrams associated with the periodic orbits. We also give error estimates of the method, based on analyzing the asymptotic behavior of the remainder of the normal form series. | astrophysics
A well-known topic within the philosophy of physics is the problem of fine-tuning: the fact that the universal constants seem to take non-arbitrary values in order for life to thrive in our Universe. In this paper we will discuss this problem in general, giving some examples from physics. We will review some solutions like the design argument, logical probability, cosmological natural selection, etc. Moreover, we will also discuss why it is dangerous to uphold the Principle of Naturalness as a scientific principle. After going through this paper, the reader should have a general idea of what this problem entails whenever it is mentioned in other sources, and we recommend the reader to think critically about these concepts. | physics
The distribution of block maxima of sequences of independent and identically-distributed random variables is used to model extreme values in many disciplines. The traditional extreme value (EV) theory derives a closed-form expression for the distribution of block maxima under asymptotic assumptions, and is generally fitted using annual maxima or excesses over a high threshold, thereby discarding a large fraction of the available observations. The recently-introduced Metastatistical Extreme Value Distribution (MEVD), a non-asymptotic formulation based on doubly stochastic distributions, has been shown to offer several advantages compared to the traditional EV theory. In particular, MEVD explicitly accounts for the variability of the process generating the extreme values, and uses all the available information to perform high-quantile inferences. Here we review the derivation of the MEVD, analyzing its assumptions in detail, and show that its general formulation includes other doubly stochastic approaches to extreme value analysis that have been recently proposed. | statistics |
This paper presents a first-principle and global perspective of electromagnetic chirality. It follows for this purpose a bottom-up construction, from the description of chiral particles or metaparticles (microscopic scale), through the electromagnetic theory of chiral media (macroscopic scale), to the establishment of advanced properties and design principles of chiral materials and metamaterials. It preliminarily highlights the three fundamental concepts related to chirality -- mirror asymmetry, polarization rotation and magnetodielectric coupling -- and points out the nontrivial interdependencies existing between them. The first part (chiral particles) presents metamaterials as the most promising technology for chirality, compares two representative particles involving magnetoelectric coupling, namely the planar Omega particle and the twisted Omega or helix particle, shows that only the latter is chiral, and finally links the response of microscopic particles to that of the medium formed by arranging them according to a subwavelength lattice structure. The second part (electromagnetic theory) infers from the previous microscopic study the chiral constitutive relations as a subset of the most general bianisotropic relations, derives parity conditions for the chiral parameters, computes the chiral eigenstates as circularly polarized waves, and finally shows that the circular birefringence of these states leads to polarization rotation. The third part (properties and design) introduces an explicit formulation of chirality based on spatial frequency dispersion or nonlocality, analyzes the temporal frequency dispersion or nonlocality of chiral media, and finally provides guidelines to design a practical chiral metamaterial. | physics
This work develops a novel framework for energy-efficient power control in wireless networks. The proposed method is a new branch-and-bound procedure based on problem-specific bounds for energy-efficiency maximization that allow for faster convergence. This enables finding the global solution of all of the most common energy-efficient power control problems with a complexity that, although still exponential in the number of variables, is much lower than that of other available global optimization frameworks. Moreover, the reduced complexity of the proposed framework allows its practical implementation through the use of deep neural networks. Specifically, thanks to its reduced complexity, the proposed method can be used to train an artificial neural network to predict the optimal resource allocation. This is in contrast with other power control methods based on deep learning, which train the neural network on suboptimal power allocations, because generating large training sets of optimal power allocations would be prohibitively complex with available global optimization methods. As a benchmark, we also develop a novel first-order optimal power allocation algorithm. Numerical results show that a neural network can be trained to predict the optimal power allocation policy. | computer science
We consider the problem of defining the effect of an intervention on a time-varying risk factor or treatment for a disease or a physiological marker; we develop here the latter case. The system considered is $(Y,A,C)$, where $Y=(Y_t)$ is the marker process of interest and $A=(A_t)$ the treatment. A realistic case is that the treatment can be changed only at discrete times. In an observational study the treatment attribution law is unknown; however, the physical law can be estimated without knowing the treatment attribution law, provided a well-specified model is available. An intervention is specified by the treatment attribution law, which is thus known. Simple interventions will simply randomize the attribution of the treatment; interventions that take into account the past history will be called "strategies". The effect of interventions can be defined by a risk function $R^{\mathrm{int}}=\mathrm{E}_{\mathrm{int}}[L(\bar Y_{t_J}, \bar A_{t_J},C)]$, where $L(\bar Y_{t_J}, \bar A_{t_J},C)$ is a loss function, and contrasts between risk functions for different strategies can be formed. Once we can compute effects for any strategy, we can search for optimal or sub-optimal strategies; in particular we can find optimal parametric strategies. We present several ways for designing strategies. As an illustration, we consider the choice of a strategy for containing the HIV load below a certain level while limiting the treatment burden. A simulation study demonstrates the possibility of finding optimal parametric strategies. | statistics
In the present study, the structural and hitherto uninvestigated mechanical (elastic stiffness constants, machinability index, Cauchy pressure, anisotropy indices, brittleness/ductility, Poisson's ratio), electronic, optical, and thermodynamic properties of the novel boron-rich compounds B6X (X = S, Se) have been explored using density functional theory. The estimated structural lattice parameters are consistent with the prior report. The mechanical and dynamical stability of these compounds has been established theoretically. The materials are brittle in nature and elastically anisotropic. The fracture toughness $K_{IC}$ of both B6S and B6Se is ~2.07 MPa m$^{0.5}$, quantifying their resistance to crack propagation. Both B6S and B6Se possess high hardness values in the range 31-35 GPa and have the potential to be prominent members of the class of hard compounds. Strong covalent bonding and a sharp peak at low energy below the Fermi level, confirmed by the partial density of states (PDOS), account for the high hardness. The profiles of the band structure and the DOS establish the indirect semiconducting nature of the title compounds. The comparatively high Debye temperature ($\Theta_D$), minimum thermal conductivity ($K_{min}$), and lattice thermal conductivity ($k_{ph}$), together with the low thermal expansion coefficient and low density, suggest that both boron-rich chalcogenides might be used as thermal management materials. Large absorption capacity in the mid-ultraviolet region (3.2-15 eV) and low reflectivity (~16%) of the studied materials are also notable. Such favorable features make the compounds under investigation promising for use in UV surface-disinfection devices as well as medical sterilizer equipment applications. Excellent correlations are found among all the studied physical properties of these compounds. | condensed matter
A comparison of results from principal component analysis and support vector machine calculations is made for a variety of phase transitions in two-dimensional classical spin models. | condensed matter |
Nighttime monitoring of the aerosol content of the lower atmosphere is a challenging task, because appropriate reference natural light sources are lacking. Here we show that the anthropogenic night sky brightness due to city lights can be successfully used for estimating the aerosol optical depth of arbitrarily thick atmospheric layers. This method requires measuring the zenith night sky brightness with two detectors located at the limiting layer altitudes. Combined with an estimate of the overall atmospheric optical depth (available from ground-based measurements or specific satellite products), the ratio of these radiances provides a direct estimate of the differential aerosol optical depth of the air column between these two altitudes. These measurements can be made with single-channel low-cost radiance detectors widely used by the light pollution research community. | astrophysics |
Using the recently published Gaia second data release, which includes measurements of the mean radial velocity of about 7.2 million stars, we performed a systematic comparison with other existing radial velocity catalogues in order to search for variations in the radial velocity measurements, since detected differences may indicate that these stars are spectroscopic binary stars with only one visible component (SB1). We present a catalogue of 35,246 spectroscopic binary candidate stars, compiled to encourage follow-up observations obtaining spectra at different epochs of these stars' orbits, in order to verify their binarity and to study these systems using radial velocity curves. Comparing the Gaia DR2 database with the K-M dwarf catalogue, we found 16 stars that show radial velocity variations. In a comparison with the Pulkovo radial velocity catalogue of Hipparcos stars, we identified a total of 539 SB1 candidate stars. In the largest radial velocity catalogue available, the RAVE catalogue, we found a total of 34,691 stars that show radial velocity variations when compared to the Gaia DR2 data. | astrophysics
A wide variety of complex systems exhibit large fluctuations both in space and time that often can be attributed to the presence of some kind of critical phenomenon. Under such a critical scenario it is well known that the properties of the correlation functions in space and time are two sides of the same coin. Here we test whether systems exhibiting a phase transition could self-tune to their critical points by taking advantage of such correlation properties. We describe results in three models: the 2D Ising ferromagnetic model, the 3D Vicsek flocking model and a small-world neuronal network model. We illustrate how the feedback of the autocorrelation function of the order parameter fluctuations is able to shift the system towards its critical point. Since the results rely on universal properties they are expected to be relevant in a variety of other settings. | condensed matter
In this work we introduce the generic conditions for the existence of a non-equilibrium attractor that is an invariant manifold determined by the long-wavelength modes of the physical system. We investigate the topological properties of the global flow structure of the Gubser flow for the Israel-Stewart theory and a kinetic model for the Boltzmann equation by employing Morse-Smale theory. We present a complete classification of the invariant submanifolds of the flow and determine all the possible flow lines connecting any pair of UV/IR fixed points. The formal transseries solutions to the Gubser dynamical system around the early-time (UV) and late-time (IR) fixed points are constructed and analyzed. It is proven that these solutions are purely perturbative (or power-law asymptotic) series with a finite radius of convergence. Based on these analyses, we find that Gubser-like expanding kinetic systems do not hydrodynamize owing to the failure of the hydrodynamization process which heavily relies on the classification of (non)hydrodynamic modes in the IR regime. This is in contrast to longitudinal boost-invariant plasmas where the asymptotic dynamics is described by a few terms of the hydrodynamic gradient expansion. We finally compare our results for both Bjorken and Gubser conformal kinetic models. | high energy physics theory |
The initial detection and identification of suspicious lesions and the precise delineation of tumour margins are essential for successful tumour resection, with progression-free survival linked to rates of complete resection. However, post-surgical positive margin rates remain high for many cancers, and despite numerous advances in intraoperative imaging and diagnostic technologies, there exists no single modality that can adequately perform both tumoural detection and delineation. Here, we demonstrate a multimodal computer vision-based diagnostic system capable of both the gross detection and identification of suspicious lesions and the precise delineation of disease margins. We first show that through visual tracking of a spectroscopic probe, we enable real-time tumour margin delineation both for ex vivo human tumour biopsies and for an in vivo tumour xenograft mouse model. We then demonstrate that the combination of Raman spectroscopic diagnoses with protoporphyrin IX (PPIX) fluorescence imaging enables fluorescence-guided Raman spectroscopic margin delineation. Our fluorescence-guided Raman spectroscopic system achieves superior margin delineation accuracy to fluorescence imaging alone, demonstrating the potential for our system to achieve improved clinical outcomes for tumour resection surgeries. | physics
The time-dependent $CP$ asymmetry in $B^0 \to K_{\rm res} \gamma \to \pi^+ \pi^- K^0_{\scriptscriptstyle S} \gamma$ is sensitive to the photon polarisation in the quark level process $b \to s \gamma$. While this polarisation is predominantly left-handed in the standard model, it could be modified by the existence of new physics contributions that may possess different $CP$ properties. In this paper, we derive the $CP$ violation formulae for $B^0 \to K_{\rm res} \gamma \to \pi^+ \pi^- K^0_{\scriptscriptstyle S} \gamma$ including the most dominant intermediate states. We propose a new observable that could be measured in a time-dependent amplitude analysis of $B^0 \to \pi^+ \pi^- K^0_{\scriptscriptstyle S} \gamma$ decays, providing a stringent constraint on the photon polarisation. We discuss the future prospects for obtaining such constraints from measurements at Belle II and LHCb. | high energy physics phenomenology
Biomedical imaging is a driver of scientific discovery and a core component of medical care, currently stimulated by the field of deep learning. While semantic segmentation algorithms enable 3D image analysis and quantification in many applications, the design of respective specialised solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We propose nnU-Net, a deep learning framework that condenses the current domain knowledge and autonomously takes the key decisions required to transfer a basic architecture to different datasets and segmentation tasks. Without manual tuning, nnU-Net surpasses most specialised deep learning pipelines in 19 public international competitions and sets a new state of the art in the majority of the 49 tasks. The results demonstrate a vast hidden potential in the systematic adaptation of deep learning methods to different datasets. We make nnU-Net publicly available as an open-source tool that can effectively be used out-of-the-box, rendering state-of-the-art segmentation accessible to non-experts and catalyzing scientific progress as a framework for automated method design. | computer science
Deformed sine-Gordon (DSG) models $\partial_\xi \partial_\eta \, w + \frac{d}{dw}V(w) = 0$, with $V(w)$ being the deformed potential, are considered in the context of the Riccati-type pseudopotential approach. A compatibility condition of the deformed system of Riccati-type equations reproduces the equation of motion of the DSG models. We then provide a pair of linear systems of equations for the DSG model and obtain an infinite tower of non-local conservation laws. Through a direct construction, supported by numerical simulations of soliton scatterings, we show that the DSG models, which have recently been defined as quasi-integrable in the anomalous zero-curvature approach [Ferreira-Zakrzewski, JHEP05(2011)130], possess new infinite towers of quasi-conservation laws. We compute numerically the first sets of non-trivial and independent charges (beyond energy and momentum) of the DSG model: the two third order conserved charges and the two fifth order asymptotically conserved charges in the pseudopotential approach, and the first four anomalies of the new towers of charges, respectively. We consider kink-kink, kink-antikink and breather configurations for the Bazeia {\sl et al.} potential $V_{q}(w) = \frac{64}{q^2} \tan^2{\frac{w}{2}} (1-|\sin{\frac{w}{2}}|^q)^2 \, (q \in \mathbb{R})$, which contains the usual SG potential $V_2(w) = 2[1- \cos{(2 w)}]$. The numerical simulations are performed using the 4th order Runge-Kutta method supplied with non-reflecting boundary conditions. | high energy physics theory
This paper contributes to the design of a fractional order (FO) internal model controller (IMC) for a first order plus time delay (FOPTD) process model to satisfy a given set of desired robustness specifications in terms of gain margin (Am) and phase margin (Pm). The highlight of the design is the choice of a fractional order (FO) filter in the IMC structure which has two parameters (lambda and beta) to tune as compared to only one tuning parameter (lambda) for traditionally used integer order (IO) filter. These parameters are evaluated for the controller, so that Am and Pm can be chosen independently. A new methodology is proposed to find a complete solution for controller parameters, the methodology also gives the system gain cross-over frequency (wg) and phase cross-over frequency (wp). Moreover, the solution is found without any approximation of the delay term appearing in the controller. | electrical engineering and systems science |
We provide novel sufficient conditions for stability of nonlinear and time-varying impulsive systems. These conditions generalize, extend, and strengthen many existing results. Different types of input-to-state stability (ISS), as well as zero-input global uniform asymptotic stability (0-GUAS), are covered by employing a two-measure framework and considering stability of both weak (decay depends only on elapsed time) and strong (decay depends on elapsed time and the number of impulses) flavors. By contrast to many existing results, the stability state bounds imposed are uniform with respect to initial time and also with respect to classes of impulse-time sequences where the impulse frequency is eventually uniformly bounded. We show that the considered classes of impulse-time sequences are substantially broader than other previously considered classes, such as those having fixed or (reverse) average dwell times, or impulse frequency achieving uniform convergence to a limit (superior or inferior). Moreover, our sufficient conditions are not more restrictive than existing ones when particularized to some of the cases covered in the literature, and hence in these cases our results allow to strengthen the existing conclusions. | electrical engineering and systems science |
In this paper we obtain the complete classification of the inequivalent classes of M2-brane symplectic torus bundles with monodromy in $SL(2,Z)$, together with the precise U-duality relations among them. There are eight inequivalent classes of bundles whose monodromy groups, at low energies, are in correspondence with the gauging groups of the eight type II gauged supergravities in nine dimensions. Four of those have been previously found and they correspond to the 'type IIB side'. In this paper we provide the explicit realization of the remaining four classes associated to the 'type IIA side'. The precise M2-brane U-duality relations between the eight inequivalent classes of bundles have allowed us to identify the remaining four. We conjecture that the classes of gaugings -- classifying the eight type II gauged supergravities in nine dimensions -- are determined by the inequivalent coinvariant classes associated to the base and the fiber of the supermembrane bundles and their duals. | high energy physics theory
In the setting of metric measure spaces satisfying the doubling condition and the $(1,p)$-Poincar\'e inequality, we prove a metric analogue of the Bourgain-Brezis-Mironescu formula for functions in the Sobolev space $W^{1,p}(X,d,\nu)$, under the assumption that for $\nu$-a.e. point the tangent space in the Gromov-Hausdorff sense is Euclidean with fixed dimension $N$. | mathematics |
Several forms of iterable belief change exist, differing in the kind of change and its strength: some operators introduce formulae, others remove them; some add formulae unconditionally, others only as additions to the previous beliefs; some only relative to the current situation, others in all possible cases. A sequence of changes may involve several of them: for example, the first step is a revision, the second a contraction and the third a refinement of the previous beliefs. The ten operators considered in this article are shown to be all reducible to three: lexicographic revision, refinement and severe withdrawal. In turn, these three can be expressed in terms of lexicographic revision at the cost of restructuring the sequence. This restructuring need not be done explicitly: an algorithm that works on the original sequence is shown. The complexity of mixed sequences of belief change operators is also analyzed. Most of them require only a polynomial number of calls to a satisfiability checker; some are even easier. | computer science |
In this work we analytically evaluate the ultraviolet divergences of Lorentz-violating massive O($N$) $\lambda\phi^{4}$ scalar field theories, exactly in the Lorentz-violating mechanism, first explicitly at next-to-leading order and later at any loop level through an induction procedure based on a theorem following from the exact approach, in order to compute the corresponding critical exponents. For attaining that goal, we employ three different and independent field-theoretic renormalization group methods. The results found for the critical exponents show that they are identical in the three distinct methods and equal to their Lorentz invariant counterparts. Furthermore, we show that the results obtained here, based on the single concept of loop order of the referred terms of the corresponding $\beta$-function and anomalous dimensions, reduce to the ones obtained through the earlier non-exact approach based on a joint redefinition of the field and coupling constant of the theory, in the appropriate limit. | high energy physics theory |
Insider threats, among the most challenging threats in cyberspace, usually cause significant loss to organizations. While the problem of insider threat detection has been studied for a long time in both the security and data mining communities, traditional machine learning based detection approaches, which heavily rely on feature engineering, find it hard to accurately capture the behavior difference between insiders and normal users due to various challenges related to the characteristics of the underlying data, such as high dimensionality, complexity, heterogeneity, sparsity, lack of labeled insider threats, and the subtle and adaptive nature of insider threats. Advanced deep learning techniques provide a new paradigm to learn end-to-end models from complex data. In this brief survey, we first introduce one commonly-used dataset for insider threat detection and review the recent literature on deep learning for this task. The existing studies show that, compared with traditional machine learning algorithms, deep learning models can improve the performance of insider threat detection. However, applying deep learning to further advance the insider threat detection task still faces several limitations, such as lack of labeled data and adaptive attacks. We then discuss such challenges and suggest future research directions that have the potential to address them and further boost the performance of deep learning for insider threat detection. | computer science |
For black hole evaporation to be unitary, the naive density matrix of Hawking radiation needs to be corrected with a sprinkling of pseudorandom "noise." Using wormholes, semiclassical gravity appears to describe an averaged "true random" theory of this noise. We discuss the wormholes in dilaton gravity theories with matter. They are classical solutions that depend on a small amount of backreaction from matter fields, and they are closely related to the wormholes that give the Page curve. | high energy physics theory |
Recent $\mu$SR measurements revealed that spontaneous magnetism exists in the superconducting state of rhenium and that it also appears in other rhenium based materials like Re$_6$Zr, Re$_6$Hf, Re$_6$Ti. The superconducting state of these materials shows $s$-wave-like properties and the pairing mechanism is most likely driven by electron-phonon coupling. In this paper we take elemental rhenium as a testbed and investigate its ground state. By developing an LCAO formalism for the solution of the spin-generalized Bogoliubov-de Gennes equation we make use of the full details of the first-principles band structure together with spin-orbit coupling. We provide a possible explanation of the spontaneous time-reversal symmetry breaking in the superconducting ground state of rhenium by arguing that, once the orbital degrees of freedom are taken into account, spin-orbit coupling induces even-parity odd-orbital spin triplet Cooper pairs, and the Cooper pairs' migration between the equal-spin triplet states may lower the total energy. We show how magnetism emerges and how the structure of the gap changes as a function of the triplet component of the interaction strength. | condensed matter |
We have constructed a perturbation theory to treat interactions that can include the Coulomb interaction, describing a physical problem that is often encountered in nuclear physics. The Coulomb part is not treated perturbatively; the exact solutions are employed. The method is an extension of the results presented in Hoffmann (2021 J. Math. Phys. 62 032105). It is designed to calculate phase shifts directly rather than the full form of the wavefunctions in position space. We present formulas that allow calculation of the phase shifts to second order in the perturbation. The phase shift results to second order, for a short-range potential, were compared with the exact solution, where we found an error of third order in the coupling strength. A different model, meant as a simple approximation of nuclear scattering of a proton on Helium-4 and including a Coulomb potential and a spherical well, was constructed to test the theory. The wavepacket scattering formalism of Hoffmann (2017 J. Phys. B: At. Mol. Opt. Phys. 50 215302), known to give everywhere finite results, was employed. We found physically acceptable results and a cross section of the correct order of magnitude. | quantum physics |
Superconductivity and the quantum Hall effect are considered to be two cornerstones of condensed matter physics. The realization of hybrid structures where these two effects coexist has recently become an active field of research. In this work, we study a Josephson junction where a central region in the quantum Hall regime is proximitized with superconductors that can be driven to a topological phase with an external Zeeman field. In this regime, the Majorana modes that emerge at the ends of each superconducting lead couple to the chiral quantum Hall edge states. This produces distinguishable features in the Andreev levels and Fraunhofer patterns that could help in detecting not only the topological phase transition but also the spin degree of freedom of these exotic quasiparticles. The current phase relation and the spectral properties of the junction throughout the topological transition are fully described by a numerical tight-binding calculation. In pursuance of the understanding of these results, we develop a low-energy spinful model that captures the main features of the numerical transport simulations in the topological phase. | condensed matter |
Primordial black holes have been considered as an attractive dark matter candidate, although some of the predictions rely heavily on the near-horizon physics that remains to be tested experimentally. As a concrete alternative, thermal 2-2-holes closely resemble black holes without event horizons. Being a probable endpoint of gravitational collapse, they not only provide a resolution to the information loss problem, but also naturally give rise to stable remnants. Previously, we have considered primordial 2-2-hole remnants as dark matter. Due to the strong constraints from a novel phenomenon associated with remnant mergers, only small remnants with mass close to the Planck mass can constitute all of dark matter. In this paper, we examine the scenario in which the majority of dark matter consists of particles produced by the evaporation of primordial 2-2-holes, with the remnant contribution being secondary. The products with light enough mass may contribute to the number of relativistic degrees of freedom in the early universe, which we also calculate. Moreover, 2-2-hole evaporation can produce particles that are responsible for the baryon asymmetry. We find that baryogenesis through direct B-violating decays or through leptogenesis can both be realized. Overall, the viable parameter space for the Planck remnant case is similar to that of primordial black holes with Planck remnants. Heavier remnants, on the other hand, lead to different predictions, and the viable parameter space remains large even when the remnant abundance is constrained to be small. | high energy physics phenomenology |
We show that a domain wall separating single layer graphene (SLG) and AA-stacked bilayer graphene (AA-BLG) can be used to generate highly collimated electron beams which can be steered by a magnetic field. Such a system exists in two distinct configurations, namely, locally delaminated AA-BLG and terminated AA-BLG, whose terminal edge-type can be either zigzag or armchair. We investigate the electron scattering using semi-classical dynamics and verify the results independently with wave-packet dynamics simulations. We find that the proposed system supports two distinct types of collimated beams that correspond to the lower and upper cones in AA-BLG. Our computational results also reveal that collimation is robust against the number of layers connected to AA-BLG and the terminal edges. | condensed matter |
We assess the impact of searches at flavor factories for new neutral resonances that couple to both photons and gluons. These are well motivated by "heavy axion" solutions of the strong CP problem and by frameworks addressing both Dark Matter and the Higgs hierarchy problem. We use LHCb public diphoton data around the Bs mass to derive the current best limit on these resonances for masses between 4.9 and 6.3 GeV. We estimate that a future LHCb dedicated search would test an axion decay constant of O(TeV) for axion masses in the few-to-tens of GeV, being fully complementary to the low mass ATLAS and CMS searches. We also derive the impact of BABAR searches based on Upsilon decays and the future Belle-II reach. | high energy physics phenomenology |
In many applications, it is necessary to retrieve pairs of vertices whose connecting path satisfies certain constraints, and regular expressions are a powerful tool for describing such path patterns. To meet these requirements, in this paper we define the regular expression (RE) query on graphs, using regular expressions to represent the constraints between vertices. To process RE queries on large graphs such as social networks, we propose an RE query processing method with an index size sublinear in the graph size. Considering that large graphs may be randomly distributed over multiple machines, parallel RE processing algorithms are presented without any assumption on the graph distribution. To achieve high efficiency for complex RE query processing, we develop cost-based query optimization strategies that require only a small amount of statistical information, which is suitable for querying large graphs. Comprehensive experimental results show that this approach scales well to large graphs. | computer science |
We discuss aspects of magnetically charged black holes in the Standard Model. For a range of charges, we argue that the electroweak symmetry is restored in the near horizon region. The extent of this phase can be macroscopic. If $Q$ is the integer magnetic charge, the fermions lead to order $Q$ massless two dimensional fermions moving along the magnetic field lines. These greatly enhance Hawking radiation effects. | high energy physics theory |
Microgravity eases several constraints limiting experiments with ultracold and condensed atoms on ground. It enables extended times of flight without suspension and eliminates the gravitational sag for trapped atoms. These advantages motivated numerous initiatives to adapt and operate experimental setups on microgravity platforms. We describe the design of the payload, motivations for design choices, and capabilities of the Bose-Einstein Condensate and Cold Atom Laboratory (BECCAL), a NASA-DLR collaboration. BECCAL builds on the heritage of previous devices operated in microgravity, features rubidium and potassium, multiple options for magnetic and optical trapping, different methods for coherent manipulation, and will offer new perspectives for experiments on quantum optics, atom optics, and atom interferometry in the unique microgravity environment on board the International Space Station. | physics |