text (stringlengths 11 to 9.77k) | label (stringlengths 2 to 104) |
---|---|
In supersymmetric models, the coannihilation of the neutralino DM with a lighter supersymmetric particle provides a feasible way to accommodate the observed cosmological DM relic density. Such a mechanism predicts a compressed spectrum of the neutralino DM and its coannihilating partner, which results in soft final states and makes searches for sparticles challenging at colliders. On the other hand, the abundance of the freeze-out neutralino DM usually increases as the DM mass becomes heavier. This implies an upper bound on the mass of the neutralino DM. Given these observations, we explore the HE-LHC coverage of the neutralino DM in these coannihilation scenarios. By analyzing events with multiple jets plus missing transverse energy ($E^{miss}_T$), a monojet, a soft lepton pair plus $E^{miss}_T$, and a monojet plus a hadronic tau, we find that the neutralino DM mass can be excluded up to 2.6, 1.7 and 0.8 TeV in the gluino, stop and wino coannihilations at the $2\sigma$ level, respectively. However, there is still no sensitivity to the neutralino DM in the stau coannihilation scenario at the HE-LHC, due to the small cross section of direct stau pair production and the low tagging efficiency for the soft tau from the stau decay. | high energy physics phenomenology |
We examine how the mass assembly of central galaxies depends on their location in the cosmic web. The HORIZON-AGN simulation is analysed at z~2 using the DISPERSE code to extract multi-scale cosmic filaments. We find that the dependency of galaxy properties on large-scale environment is mostly inherited from the (large-scale) environmental dependency of their host halo mass. When adopting a residual analysis that removes the host halo mass effect, we detect a direct and non-negligible influence of cosmic filaments. Proximity to filaments enhances the build-up of stellar mass, a result in agreement with previous studies. However, our multi-scale analysis also reveals that, at the edge of filaments, star formation is suppressed. In addition, we find clues for compaction of the stellar distribution at close proximity to filaments. We suggest that gas transfer from the outside to the inside of the haloes (where galaxies reside) becomes less efficient closer to filaments, due to high angular momentum supply at the vorticity-rich edge of filaments. This quenching mechanism may partly explain the larger fraction of passive galaxies in filaments, as inferred from observations at lower redshifts. | astrophysics |
Several tens of massive binary systems display indirect, or even strong, evidence for non-thermal radio emission, hence their particle accelerator status. These objects are referred to as particle-accelerating colliding-wind binaries (PACWBs). WR 133 is one of the shortest-period Wolf-Rayet + O systems in this category, and is therefore critical to characterize the boundaries of the parameter space adequate for particle acceleration in massive binaries. Our methodology consists of analyzing JVLA observations of WR 133 at different epochs to search for compelling evidence of a phase-locked variation attributable to synchrotron emission produced in the colliding-wind region. New data obtained during two orbits reveal a steady and thermal emission spectrum, in apparent contradiction with the previous detection of non-thermal emission. The thermal nature of the radio spectrum along the 112.4-d orbit is supported by the strong free-free absorption by the dense stellar winds, and shows that the simple binary scenario cannot explain the non-thermal emission reported previously. Alternatively, a triple-system scenario with a wide, outer orbit would fit the observational facts reported previously and in this paper, although no hint of the existence of a third component exists to date. The epoch-dependent nature of the identification of synchrotron radio emission in WR 133 emphasizes the issue of observational biases in the identification of PACWBs, which undoubtedly affect the present census of PACWBs among colliding-wind binaries. | astrophysics |
Let $O$ be a nilpotent orbit of a complex semisimple Lie algebra $\mathfrak{g}$ and let $\pi: X \to \bar{O}$ be the finite covering associated with the universal covering of $O$. In the previous article we have explicitly constructed a $\mathbf{Q}$-factorial terminalization $\tilde{X}$ of $X$ when $\mathfrak{g}$ is classical. In the present article, we count how many different $\mathbf{Q}$-factorial terminalizations $X$ has. We construct the universal Poisson deformation of $\tilde{X}$ over $H^2(\tilde{X}, \mathbf{C})$ and look at the action of the Weyl group $W(X)$ on $H^2(\tilde{X}, \mathbf{C})$. The main result is an explicit geometric description of $W(X)$. | mathematics |
The past decade began with the first light of ALMA and will end at the start of the new era of hyperdimensional astrophysics. Our community-wide movement toward highly multiwavelength and multidimensional datasets has enabled immense progress in each science frontier identified by the 2010 Decadal Survey, particularly with regard to black hole feedback and the cycle of baryons in galaxies. Facilities like ALMA and the next generation of integral field unit (IFU) spectrographs together enable mapping the physical conditions and kinematics of warm ionized and cold molecular gas in galaxies in unprecedented detail (Fig. 1). JWST's launch at the start of the coming decade will push this capability to the rest-frame UV at redshifts z > 6, mapping the birth of stars in the first galaxies at cosmic dawn. Understanding of their subsequent evolution, however, now awaits an ability to map the processes that transform galaxies directly, rather than the consequences of those processes in isolation. In this paper, we argue that doing so requires an equivalent revolution in spatially resolved spectroscopy for the hot plasma that pervades galaxies, the atmospheres in which they reside, and the winds that are the engines of their evolution. | astrophysics |
The Industrial Internet of Things (IoT) enables distributed intelligent services that vary with dynamic, real-time industrial devices to achieve the benefits of Industry 4.0. In this paper, we consider a new architecture of digital twin empowered Industrial IoT, where digital twins capture the characteristics of industrial devices to assist federated learning. Noticing that digital twins may bring estimation deviations from the actual value of the device state, a trust-based aggregation is proposed in federated learning to alleviate the effects of such deviations. We adaptively adjust the aggregation frequency of federated learning based on a Lyapunov dynamic deficit queue and deep reinforcement learning, to improve the learning performance under the resource constraints. To further adapt to the heterogeneity of Industrial IoT, a clustering-based asynchronous federated learning framework is proposed. Numerical results show that the proposed framework is superior to the benchmark in terms of learning accuracy, convergence, and energy saving. | computer science |
This paper develops a method for robots to integrate stability into actively seeking out informative measurements through coverage. We derive a controller using hybrid systems theory that allows us to consider safe equilibrium policies during active data collection. We show that our method is able to maintain Lyapunov attractiveness while still actively seeking out data. Using incremental sparse Gaussian processes, we define distributions which allow a robot to actively seek out informative measurements. We illustrate our methods for shape estimation using a cart double pendulum, dynamic model learning of a hovering quadrotor, and generating galloping gaits starting from stationary equilibrium by learning a dynamics model for the half-cheetah system from the Roboschool environment. | computer science |
The turbulence of capillary waves on the surface of a ferrofluid with a high permeability in a horizontal magnetic field is considered in the framework of a one-dimensional weakly nonlinear model. In the limit of a strong magnetic field, the surface waves under study can propagate without distortions along or against the direction of the external field, i.e., similar to Alfv\'en waves in a perfectly conducting fluid. The interaction of counter-propagating nonlinear waves leads to the development of wave turbulence on the surface of the liquid. The computational data show that the spectrum of turbulence is divided into two parts: a low-frequency dispersionless region, where the magnetic forces dominate, and a high-frequency dispersive one, in which the influence of capillary forces becomes significant. In the first region, the spectrum of the surface elevation has the same exponent in the $k$ and $\omega$ domains and its value is close to $-3.5$, which is in good agreement with the estimate obtained from dimensional analysis of the weak turbulence spectra. At high frequencies, the computed spatial spectrum of the surface waves is close to $k^{-5/2}$, which corresponds to $\omega^{-5/3}$ in terms of the frequency. This spectrum does not coincide with the Zakharov-Filonenko spectrum obtained for pure capillary waves. A possible explanation for this fact is the influence of coherent structures (like shock waves) usually arising in media with weak dispersion. | physics |
The ability of linear stochastic response analysis to estimate coherent motions is investigated in turbulent channel flow at friction Reynolds number Re$_\tau$ = 1007. The analysis is performed for spatial scales characteristic of buffer-layer and large-scale motions by separating the contributions of different temporal frequencies. Good agreement between the measured spatio-temporal power spectral densities and those estimated by means of the resolvent is found when the effect of turbulent Reynolds stresses, modelled with an eddy viscosity associated with the turbulent mean flow, is included in the resolvent operator. The agreement is further improved when the flat forcing power spectrum (white noise) is replaced with a power spectrum matching the measurements. Such good agreement is not observed when the eddy-viscosity terms are not included in the resolvent operator. In this case, the estimation based on the resolvent is unable to select the right peak frequency and wall-normal location of buffer-layer motions. Similar results are found when comparing truncated expansions of measured streamwise velocity power spectral densities based on a spectral proper orthogonal decomposition to those obtained with optimal resolvent modes. | physics |
In this note, we study the holographic CFT at finite temperature $T$ in the de Sitter static patch and find that, for a certain range of the Hubble parameter $H$ and temperature $T$, the butterfly velocity $v_B$ degenerates. We interpret this as a chaos disruption caused by the interplay between the expansion of chaotic correlations constrained by $v_B$ and effects caused by the de Sitter curvature. We also provide some analogy with the Schwinger effect in de Sitter space and black hole formation from shock wave collisions. | high energy physics theory |
Real-time magnetic resonance imaging (MRI) poses unique challenges related to the speed of data acquisition and to the degree of undersampling necessary to achieve this speed. This Master's thesis introduces and evaluates two pre-processing approaches for these problems: Coil compression to reduce the data volume and a channel selection algorithm to reduce streak artifacts which arise as a consequence of undersampling. Both approaches are tested on real data covering anatomical imaging of the head and of the heart. | physics |
An order $2m$ complex tensor $\mathcal{H}$ is said to be Hermitian if \[\mathcal{H}_{i_1\ldots i_m j_1\ldots j_m}=\mathcal{H}_{j_1\ldots j_m i_1\ldots i_m}^{*}\ \mathrm{for\ all\ } i_1,\ldots,i_m,j_1,\ldots,j_m.\] It can be regarded as an extension of the Hermitian matrix to higher order. A Hermitian tensor is also seen as a representation of a quantum mixed state. Motivated by the separability discrimination of quantum states, we investigate properties of Hermitian tensors including: unitary similarity relation, partial traces, nonnegative Hermitian tensors, Hermitian eigenvalues, rank-one Hermitian decomposition and positive Hermitian decomposition, and their applications to quantum states. | quantum physics |
We establish a relationship between the normalized damping coefficients and the time it takes a nonlinear pendulum to complete one oscillation starting from an initial position with vanishing velocity. We establish some conditions on the nonlinear restitution force so that this oscillation time does not depend monotonically on the viscous damping coefficient. | mathematics |
In the context of control of smart structures, we present an approach for state estimation of adaptive buildings with active load-bearing elements. For obtaining information on structural deformation, a system composed of a digital camera and optical emitters affixed to selected nodal points is introduced as a complement to conventional strain gauge sensors. Sensor fusion for this novel combination of sensors is carried out using a Kalman filter that operates on a reduced-order structure model obtained by modal analysis. Signal delay caused by image processing is compensated for by an out-of-sequence measurement update, which provides for a flexible and modular estimation algorithm. Since the camera system is very precise, a self-tuning algorithm that adjusts model parameters along with observer parameters is introduced to reduce the discrepancy between the system dynamics model and the actual structural behavior. We further employ optimal sensor placement to limit the number of sensors to be placed on a given structure and examine the impact on estimation accuracy. A laboratory-scale model of an adaptive high-rise with actuated columns and diagonal bracings is used for experimental demonstration of the proposed estimation scheme. | electrical engineering and systems science |
The book is a unique phenomenon in Russian geometry education. It was first published in 1892; there have been more than 40 revised editions, and dozens of millions of copies printed (by these measures it trails only Euclid's Elements). Our edition is based on the 41st edition (the stable edition of Nil Aleksandrovich Glagolev; it has been in the public domain since 2015). In a few places we reverted to the earlier editions; we also made more accurate historical remarks. | mathematics |
We propose an algorithm, deployable on a highly-parallelized graph computing architecture, to perform rapid reconstruction of charged-particle trajectories in the high energy collisions at the Large Hadron Collider and future colliders. We use software emulation to show that the algorithm can achieve an efficiency in excess of 99.95% for reconstruction with good accuracy. The algorithm can be implemented on silicon-based integrated circuits using field-programmable gate array technology. Our approach can enable a fast trigger for massive charged particles that decay invisibly in the tracking volume, as in some new-physics scenarios related to particulate dark matter. If production of dark matter or other new neutral particles is mediated by metastable charged particles and is not associated with other triggerable energy deposition in the detectors, our method would be useful for triggering on the charged mediators using the small-radius silicon detectors. | physics |
Ultrasound scattered by a dense shoal of fish undergoes mesoscopic interference, as is typical of low-temperature electrical transport in metals or light scattering in colloidal suspensions. Through large-scale measurements in open sea, we show a set of striking deviations from classical wave diffusion making fish shoals good candidates to study mesoscopic wave phenomena. The very good agreement with theories enlightens the role of fish structure on such a strong scattering regime that features slow energy transport and brings acoustic waves close to the Anderson localization transition. | condensed matter |
We study the anisotropic spin-boson model (SBM) with a subohmic bath by a numerically exact method based on variational matrix product states. A rich phase diagram is found in the anisotropy-coupling strength plane by calculating several observables. There are three distinct quantum phases: a delocalized phase with even parity (phase I), a delocalized phase with odd parity (phase II), and a localized phase with broken $Z_2$ symmetry (phase III), which intersect at a quantum tricritical point. The competition between these phases gives the overall picture of the phase diagram. For a small power of the spectral function of the bosonic bath, the quantum phase transition (QPT) from phase I to III with mean-field critical behavior is present, similar to the isotropic SBM. A novel phase diagram containing all three phases is found at large power of the spectral function: for the highly anisotropic case, the system undergoes a first-order QPT from phase I to II and then a second-order QPT to phase III as the coupling strength increases. For the weakly anisotropic case, the system only undergoes a continuous QPT from phase I to phase III with non-mean-field critical exponents. Very interestingly, at moderate anisotropy the system displays continuous QPTs several times, but with the same critical exponents. This unusual reentrance into the same localized phase is discovered in light-matter interacting systems. The present study of the anisotropic SBM could open an avenue to rich quantum criticality. | condensed matter |
Achieving structural superlubricity in graphitic samples of macro-scale size is particularly challenging due to difficulties in sliding large contact areas of commensurate stacking domains. Here, we show the presence of macro-scale structural superlubricity between two randomly stacked graphene layers produced by both mechanical exfoliation and CVD. By measuring the shifts of Raman peaks under strain, we estimate the values of the frictional interlayer shear stress (ILSS) in the superlubricity regime (mm scale) under ambient conditions. The random incommensurate stacking, the presence of wrinkles and the mismatch in the lattice constant between the two graphene layers induced by the tensile strain differential are considered responsible for the facile shearing at the macroscale. Furthermore, molecular dynamics simulations show that the stick-slip behaviour does not hold for achiral shearing directions, for which the ILSS decreases substantially, supporting the experimental observations. Our results pave the way for overcoming several limitations in achieving macroscale superlubricity in graphene. | condensed matter |
In most galaxies like the Milky Way, stars form in clouds of molecular gas. Unlike the CO emission that traces the bulk of molecular gas, the rotational transitions of HCN and CS molecules mainly probe the dense phase of molecular gas, which has a tight and almost linear relation with the far-infrared luminosity and star formation rate. However, it is unclear if dense molecular gas exists at very low metallicity, and if it exists, how it is related to star formation. In this work, we report ALMA observations of the CS $J$=5$\rightarrow$4 emission line of DDO~70, a nearby gas-rich dwarf galaxy with $\sim7\%$ solar metallicity. We did not detect CS emission from any of the regions with strong CO emission. After stacking all CS spectra from CO-bright clumps, we find no more than a marginal detection of the CS $J$=5$\rightarrow$4 transition, at a signal-to-noise ratio of $\sim 3.3$. This 3-$\sigma$ upper limit deviates from the $L^\prime_{\rm CS}$-$L_{\rm IR}$ and $L^\prime_{\rm CS}$-SFR relationships found in local star-forming galaxies and dense clumps in the Milky Way, implying weaker CS emission at a given IR luminosity and SFR. We discuss the possible mechanisms that suppress CS emission at low metallicity. | astrophysics |
Speech enhancement and speech separation are two related tasks, whose purpose is to extract either one or more target speech signals, respectively, from a mixture of sounds generated by several sources. Traditionally, these tasks have been tackled using signal processing and machine learning techniques applied to the available acoustic signals. Since the visual aspect of speech is essentially unaffected by the acoustic environment, visual information from the target speakers, such as lip movements and facial expressions, has also been used for speech enhancement and speech separation systems. In order to efficiently fuse acoustic and visual information, researchers have exploited the flexibility of data-driven approaches, specifically deep learning, achieving strong performance. The ceaseless proposal of a large number of techniques to extract features and fuse multimodal information has highlighted the need for an overview that comprehensively describes and discusses audio-visual speech enhancement and separation based on deep learning. In this paper, we provide a systematic survey of this research topic, focusing on the main elements that characterise the systems in the literature: acoustic features; visual features; deep learning methods; fusion techniques; training targets and objective functions. In addition, we review deep-learning-based methods for speech reconstruction from silent videos and audio-visual sound source separation for non-speech signals, since these methods can be more or less directly applied to audio-visual speech enhancement and separation. Finally, we survey commonly employed audio-visual speech datasets, given their central role in the development of data-driven approaches, and evaluation methods, because they are generally used to compare different systems and determine their performance. | electrical engineering and systems science |
Solar radio emission, especially at metre-wavelengths, is well known to vary over small spectral ($\lesssim$100\,kHz) and temporal ($<1$\,s) spans. It is comparatively recently, with the advent of a new generation of instruments, that it has become possible to capture data with sufficient resolution (temporal, spectral and angular) that one can begin to characterize the solar morphology simultaneously along the axes of time and frequency. This ability is naturally accompanied by an enormous increase in data volumes and computational burden, a problem which will only become more acute with the next generation of instruments such as the Square Kilometre Array (SKA). The usual approach, which requires manual guidance of the calibration process, is impractical. Here we present the "Automated Imaging Routine for Compact Arrays for the Radio Sun (AIRCARS)", an end-to-end imaging pipeline optimized for solar imaging with arrays with a compact core. We have used AIRCARS so far on data from the Murchison Widefield Array (MWA) Phase-I. The dynamic range of the images is routinely from a few hundred to a few thousand. In the few cases, where we have pushed AIRCARS to its limits, the dynamic range can go as high as $\sim$75000. The images made represent a substantial improvement in the state-of-the-art in terms of imaging fidelity and dynamic range. This has the potential to transform the multi-petabyte MWA solar archive from raw visibilities into science-ready images. AIRCARS can also be tuned to upcoming telescopes like the SKA, making it a very useful tool for the heliophysics community. | astrophysics |
We employ the determinant quantum Monte Carlo method to study the thermodynamic properties of the attractive SU(3) Hubbard model on a honeycomb lattice. The thermal charge density wave (CDW) transition, the trion formation, the entropy-temperature relation and the density compressibility are simulated at half filling. The CDW phase only breaks the lattice inversion symmetry on the honeycomb lattice and thus can survive at finite temperatures. The disordered phase is the thermal Dirac semi-metal state in the weak coupling regime, while in the strong coupling regime it is mainly a Mott-insulating state in which the CDW order is thermally melted. The formation of trions is greatly affected by thermal fluctuations. The calculated entropy-temperature relations exhibit prominent features of the Pomeranchuk effect, which is related to the distribution of trions. Our simulations show that the density compressibility is still nonzero immediately after the CDW phase appears, but vanishes when the long-range order of trions forms. | condensed matter |
Shelter-in-place orders and lockdowns have been some of the main non-pharmaceutical interventions that governments around the globe have implemented to contain the Covid-19 pandemic. In this paper we study the impact of such interventions in the capital of a developing country, Santiago, Chile, which exhibits large socioeconomic inequality. A distinctive feature of our study is that we use granular geo-located cell-phone data to measure shelter-at-home behavior as well as trips within the city, thereby allowing us to capture the adherence to lockdowns. Using panel data linear regression models we first show that a 10\% reduction in mobility implies a 13-26\% reduction in infections. However, the impact of social distancing measures and lockdowns on mobility is highly heterogeneous and dependent on socioeconomic level. While high-income zones can exhibit reductions in mobility of around 60-80\% (significantly driven by voluntary lockdowns), lower-income zones only reduce mobility by 20-40\%. Our results show that failing to acknowledge the heterogeneous effect of shelter-in-place behavior even within a city can have dramatic consequences for the containment of the pandemic. It also confirms the challenges of implementing mandatory lockdowns in lower-income communities, where people generate their income from their daily work. To be effective, lockdowns in counties of low socioeconomic levels may need to be complemented with other measures that support their inhabitants, providing aid to increase compliance. | physics |
The magnetic stray field is an unavoidable consequence of ferromagnetic devices and sensors, leading to a natural asymmetry in magnetic properties. Such asymmetry is particularly undesirable for magnetic random access memory applications, where the free layer can exhibit bias. Using atomistic dipole-dipole calculations, we numerically simulate the stray magnetic field emanating from the magnetic layers of a magnetic memory device with different geometries. We find that edge effects dominate the overall stray magnetic field in patterned devices and that a conventional synthetic antiferromagnet structure is only partially able to compensate the field at the free layer position. A granular reference layer is seen to provide near-field flux closure, while additional patterning defects add significant complexity to the stray field in nanoscale devices. Finally, we find that the stray field from a nanoscale antiferromagnet is surprisingly non-zero, arising from the imperfect cancellation of magnetic sublattices due to edge defects. Our findings provide an outline of the role of different layer structures and defects in the effective stray magnetic field in nanoscale magnetic random access memory devices, and atomistic calculations provide a useful tool for studying the stray field effects arising from a wide range of defects. | condensed matter |
A recent article by Li and Lv considered fully nonlinear contraction of convex hypersurfaces by certain nonhomogeneous functions of curvature, showing convergence to points in finite time in cases where the speed is a function of a degree-one homogeneous, concave and inverse concave function of the principal curvatures. In this article we consider self-similar solutions to these and related curvature flows that are not homogeneous in the principal curvatures, finding various situations where closed, convex, curvature-pinched hypersurfaces contracting self-similarly are necessarily spheres. | mathematics |
We investigate the chiral quantum walk (CQW) for entanglement transfer on a triangular chain. We specifically consider two and three site Bell and W type entanglement cases, respectively. Using concurrence as quantum entanglement measure, together with the Bures distance and trace distance as the measures of the fidelity of the state transfer, we evaluate the success of the entanglement transfer. We compare the entangled state transfer time and quality in CQW against continuous-time quantum random walk. Furthermore, how the chain length and back-scattering at the end of the chain influence the entanglement transfer are pointed out. | quantum physics |
Successful application of machine learning models to real-world prediction problems, e.g. financial forecasting and personalized medicine, has proved to be challenging, because such settings require limiting and quantifying the uncertainty in the model predictions, i.e. providing valid and accurate prediction intervals. Conformal Prediction is a distribution-free approach to construct valid prediction intervals in finite samples. However, the prediction intervals constructed by Conformal Prediction are often (because of over-fitting, inappropriate measures of nonconformity, or other issues) overly conservative and hence inadequate for the application(s) at hand. This paper proposes an AutoML framework called Automatic Machine Learning for Conformal Prediction (AutoCP). Unlike the familiar AutoML frameworks that attempt to select the best prediction model, AutoCP constructs prediction intervals that achieve the user-specified target coverage rate while optimizing the interval length to be accurate and less conservative. We tested AutoCP on a variety of datasets and found that it significantly outperforms benchmark algorithms. | computer science |
Global Positioning System (GPS) derived precipitable water vapor (PWV) is extensively being used in atmospheric remote sensing for applications like rainfall prediction. Many applications require PWV values with good resolution and without any missing values. In this paper, we implement an exponential smoothing method to accurately predict the missing PWV values. The method shows good performance in terms of capturing the seasonal variability of PWV values. We report a root mean square error of 0.1~mm for a lead time of 15 minutes, using past data of 30 hours measured at 5-minute intervals. | physics |
We report the electronic and magnetic properties of stoichiometric CeAuBi$_{2}$ single crystals. At ambient pressure, CeAuBi$_{2}$ orders antiferromagnetically below a N\'{e}el temperature ($T_{N}$) of 19 K. Neutron diffraction experiments revealed an antiferromagnetic propagation vector $\hat{\tau} = [0, 0, 1/2]$, which doubles the paramagnetic unit cell along the $c$-axis. At low temperatures several metamagnetic transitions are induced by the application of fields parallel to the $c$-axis, suggesting that the magnetic structure of CeAuBi$_{2}$ changes as a function of field. At low temperatures, a linear positive magnetoresistance may indicate the presence of band crossings near the Fermi level. Finally, the application of external pressure favors the antiferromagnetic state, indicating that the 4$f$ electrons become more localized. | condensed matter |
Non-Bayesian social learning theory provides a framework that models distributed inference for a group of agents interacting over a social network. In this framework, each agent iteratively forms and communicates beliefs about an unknown state of the world with their neighbors using a learning rule. Existing approaches assume agents have access to precise statistical models (in the form of likelihoods) for the state of the world. However in many situations, such models must be learned from finite data. We propose a social learning rule that takes into account uncertainty in the statistical models using second-order probabilities. Therefore, beliefs derived from uncertain models are sensitive to the amount of past evidence collected for each hypothesis. We characterize how well the hypotheses can be tested on a social network, as consistent or not with the state of the world. We explicitly show the dependency of the generated beliefs with respect to the amount of prior evidence. Moreover, as the amount of prior evidence goes to infinity, learning occurs and is consistent with traditional social learning theory. | computer science |
We present the cationic impurity assisted band offset phenomena in Ni$_x$Cd$_{1-x}$O ($x$ = 0, 0.02, 0.05, 0.1, 0.2, 0.4, 0.8, 1) thin films, which are further discussed in light of orbital hybridization modification. Compositional and structural studies revealed that cationic substitution of Cd$^{2+}$ by Ni$^{2+}$ ions leads to a monotonic shift of the (220) diffraction peak, indicating the suppression of lattice distortion, while the evolution of local strain with increasing Ni concentration is mainly associated with the mismatch in electronegativity of the Cd$^{2+}$ and Ni$^{2+}$ ions. In fact, Fermi level pinning towards the conduction band minimum takes place with increasing Ni concentration at the cost of electronically compensated oxygen vacancies, resulting in a modification of the carrier concentration distribution, which eventually affects the band edge effective mass of the conduction band electrons and further promotes band gap renormalization. Besides that, the appearance of a longitudinal optical (LO) mode at 477 cm$^{-1}$, as manifested by Raman spectroscopy, also indicates the active involvement of electron-phonon scattering, whereas modification of the local coordination environment, particularly the anti-crossing interaction, in conjunction with the presence of satellite features and shake-up states upon Ni doping, is confirmed by X-ray absorption near-edge and X-ray photoelectron spectroscopy studies. These results manifest a gradual reduction of orbital hybridization with Ni incorporation, leading to a decrease in the band edge effective mass of the electrons. Finally, molecular dynamics simulations reflect a 13% reduction in the lattice parameter for the NiO thin film compared to the undoped one, while projected density of states calculations further support the experimental observation of reduced orbital hybridization with increasing Ni concentration. | condensed matter |
Grand gauge-Higgs unification of a five-dimensional SU(6) gauge theory on an orbifold S^1/Z_2 with localized gauge kinetic terms is discussed. The Standard Model (SM) fermions on one of the boundaries and some massive bulk fermions coupling to the SM fermions on the boundary are introduced so that they respect an SU(5) symmetry structure. The SM fermion masses, including that of the top quark, are reproduced by mildly tuning the bulk masses and the parameters of the localized gauge kinetic terms. Gauge coupling universality is not guaranteed in the presence of the localized gauge kinetic terms, and this severely constrains the Higgs vacuum expectation value. Higgs potential analysis shows that electroweak symmetry breaking occurs by introducing additional bulk fermions in simplified representations. The localized gauge kinetic terms enhance the magnitude of the compactification scale, which helps make the Higgs boson mass large. Indeed, the observed Higgs boson mass of 125 GeV is obtained. | high energy physics phenomenology |
Joint misclassification of exposure and outcome variables can lead to considerable bias in epidemiological studies of causal exposure-outcome effects. In this paper, we present a new maximum likelihood based estimator for the marginal causal odds ratio that simultaneously adjusts for confounding and several forms of joint misclassification of the exposure and outcome variables. The proposed method relies on validation data for the construction of weights that account for both sources of bias. The weighting estimator, which is an extension of the exposure misclassification weighting estimator proposed by Gravel and Platt (Statistics in Medicine, 2018), is applied to reinfarction data. Simulation studies were carried out to study its finite sample properties and compare it with methods that do not account for confounding or misclassification. The new estimator showed favourable large sample properties in the simulations. Further research is needed to study the sensitivity of the proposed method and that of alternatives to violations of their assumptions. The implementation of the estimator is facilitated by a new R function in an existing R package. | statistics |
The objective of active learning (AL) is to train classification models with fewer labeled instances by selecting only the most informative instances for labeling. The AL algorithms designed for other data types such as images and text do not perform well on graph-structured data. Although a few heuristics-based AL algorithms have been proposed for graphs, a principled approach is lacking. In this paper, we propose MetAL, an AL approach that selects unlabeled instances that directly improve the future performance of a classification model. For a semi-supervised learning problem, we formulate the AL task as a bilevel optimization problem. Based on recent work in meta-learning, we use the meta-gradients to approximate the impact of retraining the model with any unlabeled instance on the model performance. Using multiple graph datasets belonging to different domains, we demonstrate that MetAL efficiently outperforms existing state-of-the-art AL algorithms. | statistics |
We explore the effective field theory for single and multiple interacting pseudo-linear spin-2 fields. By applying forward limit positivity bounds, we show that among the parameters contributing to elastic tree level scattering amplitude, there is no region of compatibility of the leading interactions with a standard local UV completion. Our result generalizes to any number of interacting pseudo-linear spin-2 fields. These results have significant implications for the organization of the effective field theory expansion for pseudo-linear fields. | high energy physics theory |
Memory security and reliability are two of the major design concerns in cloud computing systems. State-of-the-art memory security-reliability co-designs (e.g. Synergy) have achieved a good balance of performance, confidentiality, integrity, and reliability. However, these works merely rely on encryption to ensure data confidentiality, which has been proven unable to prevent information leakage from memory access patterns. Ring ORAM is an attractive confidentiality protection protocol that hides memory access patterns from the untrusted storage system. Unfortunately, it is not compatible with the security-reliability co-designs. A forced combination would result in more severe performance loss. In this paper, we propose IRO, an Integrity and Reliability enhanced Ring ORAM design. To reduce the overhead of integrity verification, we propose a low-overhead integrity tree, RIT, and use a Minimum Update Subtree Tree (MUST) to reduce metadata update overhead. To improve memory reliability, we present Secure Replication to provide channel-level error resilience for the ORAM tree and use the mirrored channel technique to guarantee the reliability of the MUST. Last, we use the error correction pointer (ECP) to repair permanent memory cell faults to further improve device reliability and lifetime. A compact metadata design is used to reduce the storage and consulting overhead of the ECP. IRO provides strong security and reliability guarantees, while the resulting storage and performance overhead is very small. Our evaluation shows that IRO increases execution time by only 7.54% on average over the baseline under a two-channel, four AES-GCM unit setting. With enough AES-GCM units to perform concurrent MAC computation, IRO can reduce the execution time by 2.14% relative to the baseline. | computer science |
The ability to walk in new scenarios is a key milestone on the path toward real-world applications of legged robots. In this work, we introduce Meta Strategy Optimization (MSO), a meta-learning algorithm for training policies with latent variable inputs that can quickly adapt to new scenarios with a handful of trials in the target environment. The key idea behind MSO is to expose the same adaptation process, Strategy Optimization (SO), to both the training and testing phases. This allows MSO to effectively learn locomotion skills as well as a latent space that is suitable for fast adaptation. We evaluate our method on a real quadruped robot and demonstrate successful adaptation in various scenarios, including sim-to-real transfer, walking with a weakened motor, or climbing up a slope. Furthermore, we quantitatively analyze the generalization capability of the trained policy in simulated environments. Both real and simulated experiments show that our method outperforms previous methods in adaptation to novel tasks. | computer science |
The discovery and characterization of Algol eclipsing binaries (EAs) provide an opportunity to contribute for a better picture of the structure and evolution of low-mass stars. However, the cadence of most current photometric surveys hinders the detection of EAs since the separation between observations is usually larger than the eclipse(s) duration and hence few measurements are found at the eclipses. Even when those objects are detected as variable, their periods can be missed if an appropriate oversampling factor is not used in the search tools. In this paper, we apply this approach to find the periods of stars cataloged in the Catalina Real-Time Transient Survey (CRTS) as EAs having unknown period (EA$_{\rm up}$). As a result, the periods of $\sim 56\%$ of them were determined. Eight objects were identified as low-mass binary systems and modeled with the Wilson \& Devinney synthesis code combined with a Monte-Carlo Markov Chain optimization procedure. The computed masses and radii are in agreement with theoretical models and show no evidence of inflated radii. This paper is the first of a series aiming to identify suspected binary systems in large surveys. | astrophysics |
Realizing the promise of quantum information processing remains a daunting task, given the omnipresence of noise and error. Adapting noise-resilient classical computing modalities to quantum mechanics may be a viable path towards near-term applications in the noisy intermediate-scale quantum era. Here, we propose continuous variable quantum reservoir computing in a single nonlinear oscillator. Through numerical simulation of our model we demonstrate quantum-classical performance improvement, and identify its likely source: the nonlinearity of quantum measurement. Beyond quantum reservoir computing, this result may impact the interpretation of results across quantum machine learning. We study how the performance of our quantum reservoir depends on Hilbert space dimension, how it is impacted by injected noise, and briefly comment on its experimental implementation. Our results show that quantum reservoir computing in a single nonlinear oscillator is an attractive modality for quantum computing on near-term hardware. | quantum physics |
It is investigated whether the "X17 puzzle" might be explained by a nuclear decay chain and a conversion of the two resulting highly energetic $\gamma$s into an electron-positron pair. It is found that the corresponding kinematics fits perfectly to the experimental result. Also the conversion rates of this process are reasonable. However, the assumed nuclear chain reaction is not favored in the established nuclear models and no explanation for the isospin structure of the signal can be given. Thus, it has to be concluded that the process studied in this paper does not give a completely satisfying explanation of the "X17 puzzle". | high energy physics phenomenology |
Cryptocurrencies have become very popular in recent years. Thousands of new cryptocurrencies have emerged, proposing novel techniques that improve on Bitcoin's core innovation of the blockchain data structure and consensus mechanism. However, cryptocurrencies are a major target for cyber-attacks, as they can be sold on exchanges anonymously and most cryptocurrencies have their codebases publicly available. One particular issue is the prevalence of code clones in cryptocurrencies, which may amplify security threats. If a vulnerability is found in one cryptocurrency, it might be propagated into other cloned cryptocurrencies. In this work, we propose a systematic remedy to this problem, CoinWatch (CW). Given a reported vulnerability at the input, CW uses code evolution analysis and a clone detection technique to indicate cryptocurrencies that might be vulnerable. We applied CW to 1094 cryptocurrencies using 4 CVEs and obtained 786 true vulnerabilities present in 384 projects, which were confirmed with developers and successfully reported as CVE extensions. | computer science |
We investigate the stellar atmospheres of the two components of the Herbig Ae SB2 system AK Sco to determine the elements present in the stars and their abundances. Equal stellar parameters T_eff = 6500 K and log g = 4.5 were used for both stars. We studied HARPSpol spectra (resolution 110,000) that were previously used to report the presence of a weak magnetic field in the secondary. A composite synthetic spectrum was compared with the observed spectrum over the whole observed region, lambda 3900-6912 A. The abundances were derived mostly from unblended profiles, in spite of their sparsity, owing to the complexity of the system and to the non-negligible v sin i of 18 km/s and 21 km/s adopted for the two components, respectively. The identified elements are those typical of stars with spectral type F5IV-V, except for Li I at 6707 A and He I at 5875.61 A, whose presence is related to the Herbig nature of the two stars. Furthermore, overabundances were determined in both stars for Y, Ba, and La. Zirconium is overabundant only in the primary, while sulfur is overabundant outside the adopted error limits only in the secondary. In contrast to previous results showing a high occurrence rate of lambda Boo peculiarities or normal chemical composition among the Herbig Ae/Be stars, the abundance pattern of AK Sco is similar to that of only a few other Herbig stars displaying weak Ap/Bp peculiarities. A few accretion diagnostic lines are discussed. | astrophysics |
This work shows that bulk ionic liquids (ILs) and their water solutions can be conveniently investigated by synchrotron-based UV resonance Raman (UVRR) spectroscopy. The main advantages of this technique for the investigation of the local structure and intermolecular interactions in imidazolium-based ILs are presented and discussed. The unique tunability of the synchrotron source allows one to selectively enhance in the Raman spectra the vibrational signals arising from the imidazolium ring. Such signals showed good sensitivity to the modifications induced in the local structure of ILs by i) the change of anion and ii) progressively longer alkyl chain substitution on the imidazolium ring. Moreover, some UVRR signals are specifically informative about the effect induced by the addition of water on the strength of cation-anion H-bonds in IL-water solutions. All of these results corroborate the potential of UVRR for retrieving information on the intermolecular interactions in IL-water solutions, beyond the counterpart obtained by employing the spontaneous Raman scattering technique on these systems. | condensed matter |
SINDy is a method for learning systems of differential equations from data by solving a sparse linear regression optimization problem [Brunton et al., 2016]. In this article, we propose an extension of the SINDy method that learns systems of differential equations in cases where some of the variables are not observed. Our extension is based on regressing a higher order time derivative of a target variable onto a dictionary of functions that includes lower order time derivatives of the target variable. We evaluate our method by measuring the prediction accuracy of the learned dynamical systems on synthetic data and on a real data-set of temperature time series provided by the R\'eseau de Transport d'\'Electricit\'e (RTE). Our method provides high quality short-term forecasts and it is orders of magnitude faster than competing methods for learning differential equations with latent variables. | statistics |
We consider geodesics of infinite length and constant 4d dilaton in the (classical) hypermultiplet moduli space of type II Calabi-Yau compactifications. When approaching such infinite distance points, a large number of D-instantons develop an exponentially suppressed action, substantially modifying the moduli space metric. We consider a particular large volume/strong coupling trajectory for which, in the corrected metric, the path length becomes finite. The instanton effects also modify the classical 4d dilaton such that, in order to keep the 4d Planck mass finite, the string scale has to be lowered. Our results can be related, via the c-map, to the physics around points of infinite distance in the vector multiplet moduli space where the Swampland Distance Conjecture and the Emergence Proposal have been discussed, and provide further evidence for them. | high energy physics theory |
Motivated by recent applications of the open quantum system formalism to understand quarkonium transport in the quark-gluon plasma, we develop a set of coupled Boltzmann equations for open heavy quark-antiquark pairs and quarkonia. Our approach keeps track of the correlation between the heavy quark-antiquark pair from quarkonium dissociation and thus is able to account for both uncorrelated and correlated recombination. By solving the coupled Boltzmann equations for current heavy ion collision experiments, we find correlated recombination is crucial to describe the data of bottomonia nuclear modification factors. To further test the importance of correlated recombination in experiments, we propose a new observable: $\frac{R_{AA}[\chi_b(1P)]}{R_{AA}[\Upsilon(2S)]}$. Future measurements of this ratio will help distinguish calculations with and without correlated recombination. | high energy physics phenomenology |
Over the past few years, we have observed different media outlets' attempts to shift public opinion by framing information to support a narrative that facilitates their goals. Malicious users referred to as "pathogenic social media" (PSM) accounts are more likely to amplify this phenomenon by spreading misinformation to viral proportions. Understanding the spread of misinformation from an account-level perspective is thus a pressing problem. In this work, we aim to present a feature-driven approach to detect PSM accounts in social media. Inspired by the literature, we set out to assess PSMs from three broad perspectives: (1) user-related information (e.g., user activity, profile characteristics), (2) source-related information (i.e., information linked via URLs shared by users) and (3) content-related information (e.g., tweet characteristics). For the user-related information, we investigate malicious signals using causality analysis (i.e., whether a user is frequently a cause of viral cascades) and profile characteristics (e.g., number of followers, etc.). For the source-related information, we explore various malicious properties linked to URLs (e.g., URL address, content of the associated website, etc.). Finally, for the content-related information, we examine attributes (e.g., number of hashtags, suspicious hashtags, etc.) from tweets posted by users. Experiments on real-world Twitter data from different countries demonstrate the effectiveness of the proposed approach in identifying PSM users. | computer science |
Water managers in the western United States (U.S.) rely on long-term forecasts of temperature and precipitation to prepare for droughts and other wet weather extremes. To improve the accuracy of these long-term forecasts, the U.S. Bureau of Reclamation and the National Oceanic and Atmospheric Administration (NOAA) launched the Subseasonal Climate Forecast Rodeo, a year-long real-time forecasting challenge in which participants aimed to skillfully predict temperature and precipitation in the western U.S. two to four weeks and four to six weeks in advance. Here we present and evaluate our machine learning approach to the Rodeo and release our SubseasonalRodeo dataset, collected to train and evaluate our forecasting system. Our system is an ensemble of two regression models. The first integrates the diverse collection of meteorological measurements and dynamic model forecasts in the SubseasonalRodeo dataset and prunes irrelevant predictors using a customized multitask model selection procedure. The second uses only historical measurements of the target variable (temperature or precipitation) and introduces multitask nearest neighbor features into a weighted local linear regression. Each model alone is significantly more accurate than the debiased operational U.S. Climate Forecasting System (CFSv2), and our ensemble skill exceeds that of the top Rodeo competitor for each target variable and forecast horizon. Moreover, over 2011-2018, an ensemble of our regression models and debiased CFSv2 improves debiased CFSv2 skill by 40-50% for temperature and 129-169% for precipitation. We hope that both our dataset and our methods will help to advance the state of the art in subseasonal forecasting. | statistics |
The existing cross-validated risk scores (CVRS) design has been proposed for developing and testing the efficacy of a treatment in a high-efficacy patient group (the sensitive group) using high-dimensional data (such as genetic data). The design is based on computing a risk score for each patient and dividing them into clusters using a non-parametric clustering procedure. In some settings it is desirable to consider the trade-off between two outcomes, such as efficacy and toxicity, or cost and effectiveness. With this motivation, we extend the CVRS design (CVRS2) to consider two outcomes. The design employs bivariate risk scores that are divided into clusters. We assess the properties of the CVRS2 using simulated data and illustrate its application on a randomised psychiatry trial. We show that CVRS2 is able to reliably identify the sensitive group (the group for which the new treatment provides benefit on both outcomes) in the simulated data. We apply the CVRS2 design to a psychology clinical trial that had offender status and substance use status as two outcomes and collected a large number of baseline covariates. The CVRS2 design yields a significant treatment effect for both outcomes, while the CVRS approach identified a significant effect for the offender status only after pre-filtering the covariates. | statistics |
We study a two-dimensional phase of chromium bismuthate (CrBi), a previously unknown material with exceptional magnetic and magnetooptical characteristics. Monolayer CrBi is a ferromagnetic metal with strong spin-orbit coupling induced by the heavy bismuth atoms, resulting in a strongly anisotropic Ising-type magnetic ordering with the Curie temperature estimated to be higher than 300 K. The electronic structure of the system is topologically nontrivial, giving rise to a nonzero Berry curvature in the ground magnetic state, leading to the anomalous Hall effect with a conductivity plateau of $\sim$1.5 $e^2/h$ at the Fermi level. In addition, monolayer CrBi demonstrates the magnetooptical Kerr effect in the visible and near-ultraviolet spectral ranges with maximum rotation angles of up to 10 mrad. Remarkably, the magnetooptical response is strongly dependent on the direction of magnetization, which makes monolayer CrBi a promising system for practical applications in magnetooptical and spintronic devices. | condensed matter |
Purpose: The use of cumulative measures of exposure to raised HIV viral load (viremia copy-years) is increasingly common in HIV prevention and treatment epidemiology due to its high biological plausibility. We sought to estimate the magnitude and direction of bias in a cumulative measure of viremia caused by differing frequency of sampling and duration of follow-up. Methods: We simulated longitudinal viral load measures and reanalysed cohort study datasets with longitudinal viral load measurements under different sampling strategies to estimate cumulative viremia. Results: In both simulated and observed data, estimates of cumulative viremia by the trapezoidal rule show systematic upward bias when there are fewer sampling time points and/or increased duration between sampling time points, compared to estimation from the full time series. Absolute values of cumulative viremia vary appreciably by the patterns of viral load over time, even after adjustment for total duration of follow-up. Conclusions: Sampling bias due to differential frequency of sampling appears extensive and of meaningful magnitude in measures of cumulative viremia. Cumulative measures of viremia should be used only in studies with sufficient frequency of viral load measures and always as relative measures. | statistics |
We report that metamaterial-inspired one-dimensional gratings (or metagratings) can be used to control nonpropagating diffraction orders as well as propagating ones. By accurate engineering of the near field, it becomes possible to satisfy power-conservation conditions and achieve perfect control over all propagating diffraction orders with passive and lossless metagratings. We show that each propagating diffraction order requires 2 degrees of freedom represented by passive and lossless loaded thin "wires." This provides a solution to the old problem of power management between diffraction orders created by a grating. The theory developed is verified by both three-dimensional full-wave numerical simulations and experimental measurements, and can be readily applied to the design of wavefront-manipulation devices over the entire electromagnetic spectrum as well as in different fields of physics. | physics |
A unified clustering approach that can estimate the number of clusters and produce a clustering for that number simultaneously is proposed. Average silhouette width (ASW) is a widely used standard cluster quality index. A distance-based objective function that optimizes ASW for clustering is defined. The proposed algorithm, named OSil, needs only the data observations as input, without any prior knowledge of the number of clusters. This work is a thorough investigation of the proposed methodology, its usefulness and its limitations. A vast spectrum of clustering structures was generated, and several well-known clustering methods, including partitioning, hierarchical, density-based, and spatial methods, were considered as competitors of the proposed methodology. Simulations reveal that the OSil algorithm achieves superior clustering quality compared to all clustering methods included in the study. OSil can find well-separated, compact clusters and shows better performance for the estimation of the number of clusters than several competing methods. Apart from the proposal and investigation of the new methodology, the paper offers a systematic analysis of the estimation of cluster indices, some of which had never appeared together in a comparative simulation setup before. The study offers many insightful findings useful for the selection of clustering methods and of indices for clustering quality assessment. | statistics |
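The sketch below is not the OSil algorithm itself; it only illustrates the ASW criterion that OSil optimizes, by scoring off-the-shelf k-means clusterings over a range of candidate cluster numbers on synthetic data (the data and all settings are assumptions made for illustration).

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data with four well-separated clusters.
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)

# Score candidate numbers of clusters by their average silhouette width (ASW).
asw = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    asw[k] = silhouette_score(X, labels)

best_k = max(asw, key=asw.get)
print({k: round(v, 3) for k, v in asw.items()})
print("ASW-selected number of clusters:", best_k)
```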
We show that any smooth projective cubic hypersurface of dimension at least $29$ over the rationals contains a rational line. A variation of our methods provides a similar result over p-adic fields. In both cases, we improve on previous results due to the second author and Wooley. We include an appendix in which we highlight some slight modifications to a recent result of Papanikolopoulos and Siksek. It follows that the set of rational points on smooth projective cubic hypersurfaces of dimension at least 29 is generated via secant and tangent constructions from just a single point. | mathematics |
The individual data collected throughout patient follow-up constitute crucial information for assessing the risk of a clinical event, and eventually for adapting a therapeutic strategy. Joint models and landmark models have been proposed to compute individual dynamic predictions from repeated measures of one or two markers. However, they hardly extend to the case where the complete patient history includes a possibly large number of repeated markers. Our objective was thus to propose a solution for the dynamic prediction of a health event that may exploit repeated measures of a possibly large number of markers. We combined a landmark approach extended to endogenous marker history with machine learning methods adapted to survival data. Each marker trajectory is modeled using the information collected up to the landmark time, and summary variables that best capture the individual trajectories are derived. These summaries and additional covariates are then included in different prediction methods. To handle a possibly large dimensional history, we rely on machine learning methods adapted to survival data, namely regularized regressions and random survival forests, to predict the event from the landmark time, and we show how they can be combined into a superlearner. Then, the performances are evaluated by cross-validation using estimators of the Brier score and of the area under the receiver operating characteristic curve adapted to censored data. We demonstrate in a simulation study the benefits of machine learning survival methods over standard survival models, especially in the case of numerous and/or nonlinear relationships between the predictors and the event. We then applied the methodology in two prediction contexts: a clinical context with the prediction of death for patients with primary biliary cholangitis, and a public health context with the prediction of death in the general elderly population at different ages. Our methodology, implemented in R, enables the prediction of an event using the entire longitudinal patient history, even when the number of repeated markers is large. Although introduced with mixed models for the repeated markers and methods for a single right-censored time-to-event, our method can be used with any other appropriate modeling technique for the markers and can be easily extended to the competing risks setting. | statistics |
We present the Hamiltonian formulation of the recently constructed integrable theories of arXiv:2006.12525. These theories turn out to be canonically equivalent to the sum of an asymmetrically gauged CFT and of the most general $\lambda$-deformed model of arXiv:1812.04033. Using the Hamiltonian formalism, we prove that the full set of conserved charges of the models of arXiv:2006.12525 are in involution, ensuring their Hamiltonian integrability. Finally, we show that the equations of motion of these theories can be put in the form of zero curvature Lax connections. | high energy physics theory |
Recent studies of inflation with multiple scalar fields have highlighted the importance of non-canonical kinetic terms in novel types of inflationary solutions. This motivates a thorough analysis of non-Gaussianities in this context, which we revisit here by studying the primordial bispectrum in a general two-field model. Our main result is the complete cubic action for inflationary fluctuations written in comoving gauge, i.e. in terms of the curvature perturbation and the entropic mode. Although full expressions for the cubic action have already been derived in terms of field fluctuations in the flat gauge, their applicability is mostly restricted to numerical evaluations. Our form of the action is instead amenable to several analytical approximations, as our calculation in terms of the directly observable quantity makes manifest the scaling of every operator in terms of the slow-roll parameters, which is essentially a generalization of Maldacena's single-field result to non-canonical two-field models. As an important application we derive the single-field effective field theory that is valid when the entropic mode is heavy and may be integrated out, underlining the observable effects that derive from a curved field space. | high energy physics theory |
We report on the calculation of the symmetry resolved entanglement entropies in two-dimensional many-body systems of free bosons and fermions by \emph{dimensional reduction}. When the subsystem is translationally invariant in a transverse direction, this strategy allows us to reduce the initial two-dimensional problem into decoupled one-dimensional ones in a mixed space-momentum representation. While the idea straightforwardly applies to any dimension $d$, here we focus on the case $d=2$ and derive explicit expressions for two lattice models possessing a $U(1)$ symmetry, i.e., free non-relativistic massless fermions and free complex (massive and massless) bosons. Although our focus is on symmetry resolved entropies, some results for the total entanglement are also new. Our derivation gives a transparent understanding of the well-known different behaviours of massless bosons and fermions in $d\geq2$: massless fermions present logarithmic violations of the area law, which instead strictly holds for bosons, even massless ones. This is true both for the total and the symmetry resolved entropies. Interestingly, we find that the equipartition of entanglement into different symmetry sectors holds also in two dimensions at leading order in subsystem size; we identify for both systems the first term breaking it. All our findings are quantitatively tested against exact numerical calculations in lattice models for both bosons and fermions. | condensed matter |
Motivated by a clinical trial conducted by Janssen Pharmaceuticals in which a flexible dosing regimen is compared to placebo, we evaluate how switchers in the treatment arm (i.e., patients who were switched to the higher dose) would have fared had they been kept on the low dose, in order to understand whether flexible dosing is potentially beneficial for them. Simply comparing these patients' responses with those of patients who stayed on the low dose is unsatisfactory because the latter patients are usually in a better health condition. Because the available information in the considered trial is too scarce to enable a reliable adjustment, we will instead transport data from a fixed dosing trial that has been conducted concurrently on the same target, albeit not in an identical patient population. In particular, we will propose an estimator which relies on an outcome model and a propensity score model for the association between study and patient characteristics. The proposed estimator is asymptotically unbiased if at least one of the two models is correctly specified, and efficient (under the model defined by the restrictions on the propensity score) when both models are correctly specified. We show that the proposed method for using results from an external study is generically applicable in studies where a classical confounding adjustment is not possible due to positivity violation (e.g., studies where switching takes place in a deterministic manner). Monte Carlo simulations and application to the motivating study demonstrate adequate performance. | statistics |
In this paper we consider two different nonlinear $\sigma$-models minimally coupled to Eddington-inspired Born-Infeld gravity. We show that the resultant geometries represent minimal modifications with respect to those found in GR, though with important physical consequences. In particular, wormhole structures always arise, though this does not guarantee by itself the geodesic completeness of those space-times. In one of the models, quadratic in the canonical kinetic term, we identify a subset of solutions which are regular everywhere and are geodesically complete. We discuss characteristic features of these solutions and their dependence on the relationship between mass and global charge. | high energy physics theory |
Recently it was shown that a certain class of phylogenetic networks, called level-$2$ networks, cannot be reconstructed from their associated distance matrices. In this paper, we show that they can be reconstructed from their induced shortest and longest distance matrices. That is, if two level-$2$ networks induce the same shortest and longest distance matrices, then they must be isomorphic. We further show that level-$2$ networks are reconstructible from their shortest distance matrices if and only if they do not contain a subgraph from a certain family of graphs. A generator of a network is the graph obtained by deleting all pendant subtrees and suppressing degree-$2$ vertices. We also show that networks with a leaf on every generator side are reconstructible from their induced shortest distance matrix, regardless of level. | mathematics |
The aim of quantum metrology is to estimate target parameters as precisely as possible. In this paper, we propose a protocol for quantum metrology based on symmetry-protected adiabatic transformation. We introduce a ferromagnetic Ising model with a transverse field as a probe and consider the estimation of a longitudinal field. Without the transverse field, the ground state of the probe is given by the Greenberger-Horne-Zeilinger state, and thus the Heisenberg limit estimation of the longitudinal field can be achieved through parity measurement. We find that full information of the longitudinal field encoded on parity can exactly be mapped to global magnetization by symmetry-protected adiabatic transformation, i.e., parity measurement can be replaced with global magnetization measurement. Our scheme requires neither accurate control of individual qubits nor that of interaction strength. | quantum physics |
Projectile impact into a light granular material composed of expanded polypropylene (EPP) particles is investigated systematically with various impact velocities. Experimentally, the trajectory of an intruder moving inside the granular material is monitored with a recently developed non-invasive microwave radar system. Numerically, discrete element simulations together with coarse-graining techniques are employed to address both the dynamics of the intruder and the response of the granular bed. Our experimental and numerical results for the intruder dynamics agree with each other quantitatively and are congruent with existing phenomenological models of granular drag. Stepping further, we explore the `microscopic' origin of granular drag through characterizing the response of the granular bed, including density, velocity and kinetic stress fields at the mean-field level. In addition, we find that the dynamics of cavity collapse behind the intruder behaves qualitatively differently for different impact velocities. Moreover, the kinetic pressure ahead of the intruder decays exponentially in the co-moving system of the intruder. Its scaling gives rise to a characteristic length scale, which is of the order of the intruder size. This finding is in perfect agreement with the long-scale inertial dissipation type that we find in all cases. | condensed matter |
The high-redshift quasar PMN J0909+0354 ($z=3.288$) is known to have a pc-scale compact jet structure, based on global 5-GHz very long baseline interferometry (VLBI) observations performed in 1992. Its kpc-scale structure was studied with the Karl G. Jansky Very Large Array (VLA) in the radio and the Chandra space telescope in X-rays. Apart from the north-northwestern jet component seen in both the VLA and Chandra images at $2.3''$ separation from the core, there is another X-ray feature at $6.48''$ in the northeastern (NE) direction. To uncover more details and possibly structural changes in the inner jet, we conducted new observations at 5 GHz using the European VLBI Network (EVN) in 2019. These data confirm the northward direction of the one-sided inner jet already suspected from the 1992 observations. A compact core and multiple jet components were identified that can be traced up to $\sim0.25$ kpc projected distance towards the north, while the structure becomes more and more diffuse. A comparison with arcsec-resolution imaging with the VLA shows that the radio jet bends by $\sim30^\circ$ between the two scales. The direction of the pc-scale jet as well as the faint optical counterpart found for the newly-detected X-ray point source (NE) favors the nature of the latter as a background or foreground object in the field of view. However, the extended ($\sim160$ kpc) emission around the positions of the quasar core and NE detected by the Wide-field Infrared Survey Explorer (WISE) in the mid-infrared might suggest physical interaction of the two objects. | astrophysics |
We present a Monte-Carlo approach to soft-gluon resummation at sub-leading color which can be used to improve existing parton shower algorithms. At the single emission level, soft-collinear enhancements of the splitting functions are explicitly linked to quadratic Casimir operators, while wide angle single-soft enhancements are connected to non-trivial color correlators. We focus on a numerically stable implementation of color matrix element corrections to all orders and approximate the virtual corrections by requiring unitarity at the single-emission level. We provide a proof-of-concept implementation to compute non-global event shapes at lepton colliders. | high energy physics phenomenology |
With the emerging COVID-19 crisis, a critical task for public health officials and policy makers is to decide how to prioritize, locate, and allocate scarce resources. To answer these questions, decision makers need to be able to determine the location of the required resources over time based on emerging hot spot locations. Hot spots are defined as concentrated areas with sharp increases in COVID-19 cases. Hot spots place stress on existing healthcare resources, resulting in demand for resources potentially exceeding current capacity. This research describes a value-based resource allocation approach that seeks to coordinate demand, as defined by uncertain epidemiological forecasts, with the value of adding additional resources such as hospital beds. Value is framed as a function of the expected usage of a marginal resource (bed, ventilator, etc.). Subject to certain constraints, allocation decisions are operationalized using a nonlinear programming model, allocating new hospital beds over time and across a number of geographical locations. The results of the research show a need for a value-based approach to assist decision makers at all levels in making the best possible decisions in the current highly uncertain and dynamic COVID-19 environment. | physics |
We investigate the baryon asymmetry in the supersymmetry Dine-Fischler-Srednicki-Zhitnitsky axion model without R-parity. It turns out that the R-parity violating terms economically explain the atmospheric mass-squared difference of neutrinos and the appropriate amount of baryon asymmetry through the Affleck-Dine mechanism. In this model, the axion is a promising candidate for the dark matter and the axion isocurvature perturbation is suppressed due to the large field values of Peccei-Quinn fields. Remarkably, in some parameter regions explaining the baryon asymmetry and the axion dark matter abundance, the proton decay will be explored in future experiments. | high energy physics phenomenology |
We propose that the nearly massless dark photons produced from the annihilation of keV dark fermions in the Galaxy can induce the excess of electron recoil events recently observed in the XENON1T experiment. The minimal model for this is the extension of a $U(1)_{X}$ gauge symmetry, under which the dark photon couples to both dark and visible matter currents. We find that the best-fit parameters of the dark sector are compatible with the most stringent constraints from stellar cooling. We also show that in the freeze-out scenario, the dark fermions can explain the anomaly while contributing $\gtrsim 1\%$ of the DM relic density. | high energy physics phenomenology |
The work assessed seven classical classifiers and two beamforming algorithms for detecting surveillance sound events. The tests included the use of AWGN with -10 dB to 30 dB SNR. Data augmentation was also employed to improve the algorithms' performance. The results showed that the combination of SVM and Delay-and-Sum (DaS) scored the best accuracy (up to 86.0\%), but had a high computational cost ($\approx$ 402 ms), mainly due to DaS. The use of SGD also seems to be a good alternative, since it also achieved good accuracy (up to 85.3\%), but with a quicker processing time ($\approx$ 165 ms). | electrical engineering and systems science |
Using the AdS/CFT correspondence we consider the retarded Green's function in the background of rotating near-extremal AdS$_4$ black holes. Following the canonical AdS/CFT dictionary into the asymptotic boundary we get a CFT$_3$ result. We also take a new route and zoom in on the near-horizon region, blow up this region and show that it yields a CFT$_2$ result. We argue that the decoupling of the near-horizon region is akin to the decoupling of the near-throat region of a D3-brane, which led to the original formulation of the AdS/CFT correspondence, thus implying that the Kerr/CFT correspondence follows as a decoupling of the standard AdS/CFT correspondence applied to rotating black holes. As a byproduct, we compute the shear viscosity to entropy density ratio for the strongly coupled boundary CFT$_3$, and find that it violates the $1 / (4 \pi)$ bound. | high energy physics theory |
Module Analysis for Multiple-Choice Responses (MAMCR) was applied to a large sample of Force Concept Inventory (FCI) pretest and post-test responses ($N_{pre}=4509$ and $N_{post}=4716$) to replicate the results of the original MAMCR study and to understand the origins of the gender differences reported in a previous study of this data set. When the results of MAMCR could not be replicated, a modification of the method was introduced, Modified Module Analysis (MMA). MMA was productive in understanding the structure of the incorrect answers in the FCI, identifying 9 groups of incorrect answers on the pretest and 11 groups on the post-test. These groups, in most cases, could be mapped onto common misconceptions used by the authors of the FCI to create distractors for the instrument. Of these incorrect answer groups, 6 of the pretest groups and 8 of the post-test groups were the same for men and women. Two of the male-only pretest groups disappeared with instruction while the third male-only pretest group was identified for both men and women post-instruction. Three of the groups identified for both men and women on the post-test were not present for either on the pretest. The rest of the identified incorrect answer groups did not represent misconceptions, but were rather related to the blocked structure of some FCI items where multiple items are related to a common stem. The groups identified had little relation to the gender-unfair items previously identified for this data set, and therefore, differences in the structure of student misconceptions between men and women cannot explain the gender differences reported for the FCI. | physics |
We demonstrate that linear combinations of subregion entropies with canceling boundary terms, commonly used to calculate the topological entanglement entropy, may suffer from spurious nontopological contributions even in models with zero correlation length. These spurious contributions are due to a specific kind of long-range string order, and persist throughout certain subsystem symmetry-protected phases. We introduce an entropic quantity that measures the presence of such order, and hence should serve as an order parameter for the aforementioned phases. | quantum physics |
We study loop corrections to scattering amplitudes in the world-volume theory of a probe D3-brane, which is described by the supersymmetric Dirac-Born-Infeld theory. We show that the D3-brane loop superamplitudes can be obtained from the tree-level superamplitudes in the world-volume theory of a probe M5-brane (or D5-brane). The M5-brane theory describes self-interactions of an abelian tensor supermultiplet with $(2,0)$ supersymmetry, and the tree-level superamplitudes are given by a twistor formula. We apply the construction to the maximally-helicity-violating (MHV) amplitudes in the D3-brane theory at one-loop order, which are purely rational terms (except for the four-point amplitude). The results are further confirmed by generalised unitarity methods. Through a supersymmetry reduction on the M5-brane tree-level superamplitudes, we also construct one-loop corrections to the non-supersymmetric D3-brane amplitudes, which agree with the known results in the literature. | high energy physics theory |
Computer-aided breast cancer diagnosis in mammography is limited by inadequate data and the similarity between benign and cancerous masses. To address this, we propose a signed graph regularized deep neural network with adversarial augmentation, named \textsc{DiagNet}. Firstly, we use adversarial learning to generate positive and negative mass-contained mammograms for each mass class. After that, a signed similarity graph is built upon the expanded data to further highlight the discrimination. Finally, a deep convolutional neural network is trained by jointly optimizing the signed graph regularization and classification loss. Experiments show that the \textsc{DiagNet} framework outperforms the state-of-the-art in breast mass diagnosis in mammography. | electrical engineering and systems science |
We construct the simplest inflationary $\alpha$-attractor models in supergravity: they have only one scalar, the inflaton. There is no sinflaton since the inflaton belongs to an orthogonal nilpotent superfield where the sinflaton depends on fermion bilinears. When the local supersymmetry is gauge-fixed, these models have only a single real scalar (the inflaton), a graviton and a massive gravitino. The sinflaton, sgoldstino and inflatino are all absent from the physical spectrum in the unitary gauge. The orthogonality condition leads to the simplest K\"ahler potential for the inflaton, while preserving the Poincar\'e disk geometry of $\alpha$-attractors. The models are particularly simple in the framework of the $\overline {D3}$ induced geometric inflation. | high energy physics theory |
We use the energy-momentum tensor (EMT) current to compute the EMT form factors of the nucleon in the framework of the light cone QCD sum rule formalism. In the calculations, we employ the most general form of the nucleon's interpolating field and use the distribution amplitudes (DAs) of the nucleon with two sets of the numerical values of the main input parameters entering the expressions of the DAs. The results obtained directly from the sum rules for the form factors are reliable at $Q^2\geq 1~GeV^2$. To extrapolate the results to zero momentum transfer squared, with the aim of estimating the related static physical quantities, we use fit functions for the form factors. The numerical computations show that the energy-momentum tensor form factors of the nucleon can be well fitted by a multipole form. We compare the results obtained for the form factors at $Q^2=0$ with the existing theoretical predictions as well as experimental data on the gravitational form factor d$_1^q(0)$. For the form factors M$_2^q (0)$ and J$^q(0)$, a consistency among the theoretical predictions is seen within the errors: our results are nicely consistent with the Lattice QCD and chiral perturbation theory predictions. However, there are large discrepancies among the theoretical predictions on d$_1^q(0)$. Nevertheless, our prediction is in accord with the JLab data as well as with the results of Lattice QCD, chiral perturbation theory and the KM15 fit. Our fit functions describe well most of the JLab data in the interval $Q^2\in[0,0.4]~GeV^2$, while the Lattice results suffer from large uncertainties in this region. As a by-product, some mechanical properties of the nucleon, like the pressure and energy density at the center of the nucleon as well as its mechanical radius, are also calculated and their results are compared with other existing theoretical predictions. | high energy physics phenomenology |
We introduce a characterization of topological order based on bulk oscillations of the entanglement entropy and the definition of an `entanglement gap', showing that it is generally applicable to pure and disordered quantum systems. Using exact diagonalization and the strong disorder renormalization group method, we demonstrate that this approach gives results in agreement with the use of traditional topological invariants, especially in cases where topological order is known to persist in the presence of off-diagonal bond disorder. The entanglement gap is then used to analyze classes of quantum systems with alternating bond types, allowing us to construct their topological phase diagrams. The validity of these phase diagrams is verified in limiting cases of dominant bond types, where the solution is known. | condensed matter |
Although the thermal and radiative effects associated with a two-level quantum system undergoing acceleration are now widely understood and accepted, a surprising amount of controversy still surrounds the simpler and older problem of an accelerated classical charge. We argue that the analogy between these systems is more than superficial: There is a sense in which a "UD detector" in a quantized scalar field effectively acts as a classical source for that field if the splitting of its energy levels is so small as to be ignored. After showing explicitly that a detector with unresolved inner structure does behave as a structureless scalar source, we use that analysis to rederive the scalar version of a previous analysis of the accelerated electromagnetic charge, without appealing to the troublesome concept of "zero-energy particles." Then we recover these results when the detector energy gap is taken to be zero from the beginning. This vindicates the informal terminology "zero-frequency Rindler modes" as a shorthand for "Rindler modes with arbitrarily small energy." In an appendix, the mathematical behavior of the normal modes in the limit of small frequency is examined in more detail than before. The vexed (and somewhat ambiguous) question of whether coaccelerating observers detect the acceleration radiation can then be studied on a sound basis. | high energy physics theory |
Hexagonal boron nitride (hBN) has been grown on sapphire substrates by ultra-high temperature molecular beam epitaxy (MBE). A wide range of substrate temperatures and boron fluxes have been explored, revealing that high crystalline quality hBN layers are grown at high substrate temperatures, $>$1600$^\circ$C, and low boron fluxes, $\sim1\times10^{-8}$ Torr beam equivalent pressure. \emph{In-situ} reflection high energy electron diffraction (RHEED) revealed the growth of hBN layers with $60^\circ$ rotational symmetry and the $[11\bar20]$ axis of hBN parallel to the $[1\bar100]$ axis of the sapphire substrate. Unlike the rough, polycrystalline films previously reported, atomic force microscopy (AFM) and transmission electron microscopy (TEM) characterization of these films demonstrate smooth, layered, few-nanometer hBN films on a nitridated sapphire substrate. This demonstration of high-quality hBN growth by MBE is a step towards its integration into existing epitaxial growth platforms, applications, and technologies. | condensed matter |
We demonstrate that the exact solution for the spectrum of synchrotron radiation from an isotropic population of mono-energetic electrons in a turbulent magnetic field with a Gaussian distribution of local field strengths can be expressed in the simple analytic form: $\left( \frac{{\rm d} \dot{N}}{{\rm d} \omega} \right)_t = \frac{\alpha}{3} \frac{1}{\gamma^2} \left( 1 + \frac{1}{x^{2/3}} \right) \exp \left( - 2 x^{2/3} \right)$, where $x = \frac{\omega}{\omega_0}\, ; \omega_0 = \frac{4}{3} \gamma^2 \frac{eB_0}{m_e c}\, .$ We use this expression to find approximate synchrotron spectra for power-law electron distributions with a high-energy cut-off of the form $\propto \exp\left( -\left[ \gamma/\gamma_0 \right]^\beta\right)$; the resulting synchrotron spectrum has an exponential cut-off factor with the frequency raised to the power $2\beta/(3\beta+4)$ in the exponent. For the power-law electron distribution without a high-energy cut-off, we find the coefficient $a_m$ as a function of the power-law index, which results in an exact expression for the synchrotron spectrum when using the monochromatic approximation (i.e., each electron radiates at frequency $\omega_m = a_m \gamma^2 \, \frac{e B_0}{m_e c}$). | astrophysics |
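As a quick numerical sketch of the closed-form spectrum quoted above, the snippet below evaluates $(\mathrm{d}\dot{N}/\mathrm{d}\omega)_t$ as a function of $x=\omega/\omega_0$; the Lorentz factor and the choice of units are arbitrary assumptions made only for illustration.

```python
import numpy as np

ALPHA = 1.0 / 137.036   # fine-structure constant

def spectrum(omega, omega0, gamma):
    """(dN-dot/d-omega)_t for mono-energetic electrons in a turbulent field
    with a Gaussian distribution of local field strengths (formula above)."""
    x = omega / omega0
    return (ALPHA / 3.0) / gamma**2 * (1.0 + x**(-2.0 / 3.0)) * np.exp(-2.0 * x**(2.0 / 3.0))

# Evaluate the spectral shape on a logarithmic grid, with omega0 = 1 in arbitrary units.
for x in np.logspace(-3, 1, 9):
    print(f"omega/omega0 = {x:8.3g}   spectrum = {spectrum(x, 1.0, 1e4):.3e}")
```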
The organic electrochemical transistor (OECT) with a conjugated polymer as the active material is the elementary unit of organic bioelectronic devices. Increased functionalities, such as low power consumption, can be achieved by building complementary circuits featuring two or more OECTs. Complementary circuits commonly combine both p- and n-type transistors to reduce power draw. While p-type OECTs are readily available, n-type OECTs are less common mainly due to poor stability of the n-type active channel material in aqueous electrolyte. Additionally, an OECT based complementary circuit requires well matched transport properties in the p- and n-type materials. Here, a complementary circuit is made using a pair of OECTs having polyaniline (PANI) as the channel material in both transistors. PANI is chosen due to its unique behaviour exhibiting a peak in current versus gate voltage when used as an active channel in an OECT. The PANI based circuit is shown to have excellent performance with gain of ~ 7 and could be transferred on a flexible biocompatible chitosan substrate with demonstrated operation in aqueous electrolyte. This study extends the capabilities of conjugated polymer based OECTs. | physics |
Coherent homodyne detection requires a precise matching of emission wavelengths between transmitter and local oscillator at the receiver. Injection-locking can provide all-optical synchronization of the emission frequencies, even under wavelength-swept emission. By adapting the sweep parameters to the conditions in the optical fiber plant, transmission impairments can be mitigated. In this regard I experimentally demonstrate that a wavelength-hopping yet locked transceiver pair, which builds on conceptually simple externally modulated laser technology, features a much higher robustness to reflection crosstalk. The reception penalty due to distortions arising at a Fresnel reflection in the transmission path can be reduced by >90% at a low optical signal-to-reflection ratio of ~0 dB. | electrical engineering and systems science |
We adapt the block-Lanczos density-matrix renormalization-group technique to study the spin transport in a spin chain coupled to two non-interacting fermionic leads. As an example, we consider leads described by two-dimensional tight-binding models on a square lattice. Although the simulations are carried out using a chain representation of the leads, observables in the original two-dimensional lattice can be calculated by reversing the block-Lanczos transformation. This is demonstrated for leads with Rashba spin-orbit coupling. | condensed matter |
Pulsars can act as an excellent probe of the Milky Way magnetic field. The average strength of the Galactic magnetic field component parallel to the line of sight can be estimated as $\langle B_\parallel \rangle = 1.232 \, \text{RM}/\text{DM}$, where $\text{RM}$ and $\text{DM}$ are the rotation and dispersion measure of the pulsar. However, this assumes that the thermal electron density and magnetic field of the interstellar medium are uncorrelated. Using numerical simulations and observations, we test the validity of this assumption. Based on magnetohydrodynamical simulations of driven turbulence, we show that the correlation between the thermal electron density and the small-scale magnetic field increases with increasing Mach number of the turbulence. We find that the assumption of uncorrelated thermal electron density and magnetic fields is valid only for subsonic and transonic flows, but for supersonic turbulence, the field strength can be severely overestimated by using $1.232 \, \text{RM}/\text{DM}$. We then correlate existing pulsar observations from the Australia Telescope National Facility with regions of enhanced thermal electron density and magnetic fields probed by $^{12}\mathrm{CO}$ data of molecular clouds, magnetic fields from the Zeeman splitting of the 21 cm line, neutral hydrogen column density, and H$\alpha$ observations. Using these observational data, we show that the thermal electron density and magnetic fields are largely uncorrelated over kpc scales. Thus, we conclude that the relation $\langle B_\parallel \rangle = 1.232 \, \text{RM}/\text{DM}$ provides a good estimate of the magnetic field on Galactic scales but might break down on sub-kpc scales. | astrophysics |
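The headline relation is straightforward to evaluate: with RM in rad m$^{-2}$ and DM in pc cm$^{-3}$, the factor 1.232 yields $\langle B_\parallel \rangle$ in microgauss. The pulsar values in the sketch below are made-up inputs chosen only to show the arithmetic.

```python
def mean_parallel_field(rm, dm):
    """<B_parallel> in microgauss from a rotation measure RM (rad m^-2) and a
    dispersion measure DM (pc cm^-3), assuming n_e and B are uncorrelated."""
    return 1.232 * rm / dm

# Hypothetical pulsar: RM = 45 rad m^-2, DM = 120 pc cm^-3.
print(f"<B_parallel> ~ {mean_parallel_field(45.0, 120.0):.2f} microgauss")
```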
In the field of face recognition, a model learns to distinguish millions of face images with low-dimensional embedding features, and such vast information may not be properly encoded in the conventional model with a single branch. We propose a novel face-recognition-specialized architecture called GroupFace that utilizes multiple group-aware representations simultaneously to improve the quality of the embedding feature. The proposed method provides self-distributed labels that balance the number of samples belonging to each group without additional human annotations, and learns the group-aware representations that can narrow down the search space of the target identity. We prove the effectiveness of the proposed method by showing extensive ablation studies and visualizations. All the components of the proposed method can be trained in an end-to-end manner with a marginal increase of computational complexity. Finally, the proposed method achieves state-of-the-art results with significant improvements in 1:1 face verification and 1:N face identification tasks on the following public datasets: LFW, YTF, CALFW, CPLFW, CFP, AgeDB-30, MegaFace, IJB-B and IJB-C. | computer science |
Coupled tensor approximation has recently emerged as a promising approach for the fusion of hyperspectral and multispectral images, reconciling state of the art performance with strong theoretical guarantees. However, tensor-based approaches previously proposed assume that the different observed images are acquired under exactly the same conditions. A recent work proposed to accommodate inter-image spectral variability in the image fusion problem using a matrix factorization-based formulation, but did not account for spatially-localized variations. Moreover, it lacks theoretical guarantees and has a high associated computational complexity. In this paper, we consider the image fusion problem while accounting for both spatially and spectrally localized changes in an additive model. We first study how the general identifiability of the model is impacted by the presence of such changes. Then, assuming that the high-resolution image and the variation factors admit a Tucker decomposition, two new algorithms are proposed -- one purely algebraic, and another based on an optimization procedure. Theoretical guarantees for the exact recovery of the high-resolution image are provided for both algorithms. Experimental results show that the proposed method outperforms state-of-the-art methods in the presence of spectral and spatial variations between the images, at a smaller computational cost. | electrical engineering and systems science |
We review our recent work on ellipsoidal M2-brane solutions in the large-N limit of the BMN matrix model. These bosonic finite-energy membranes live inside SO(3)xSO(6) symmetric plane-wave spacetimes and correspond to local extrema of the energy functional. They are static in SO(3) and stationary in SO(6). Chaos appears at the level of radial stability analysis through the explicitly derived spectrum of eigenvalues. The angular perturbation analysis is suggestive of the presence of weak turbulence instabilities that propagate from low to high orders in perturbation theory. | high energy physics theory |
Modern microprocessors are equipped with Single Instruction Multiple Data (SIMD) or vector instructions which expose data level parallelism at a fine granularity. Programmers exploit this parallelism by using low-level vector intrinsics in their code. However, once programs are written using vector intrinsics of a specific instruction set, the code becomes non-portable. Modern compilers are unable to analyze and retarget the code to newer vector instruction sets. Hence, programmers have to manually rewrite the same code using vector intrinsics of a newer generation to exploit higher data widths and capabilities of new instruction sets. This process is tedious, error-prone and requires maintaining multiple code bases. We propose Revec, a compiler optimization pass which revectorizes already vectorized code, by retargeting it to use vector instructions of newer generations. The transformation is transparent, happening at the compiler intermediate representation level, and enables performance portability of hand-vectorized code. Revec can achieve performance improvements in real-world performance critical kernels. In particular, Revec achieves geometric mean speedups of 1.160$\times$ and 1.430$\times$ on fast integer unpacking kernels, and speedups of 1.145$\times$ and 1.195$\times$ on hand-vectorized x265 media codec kernels when retargeting their SSE-series implementations to use AVX2 and AVX-512 vector instructions respectively. We also extensively test Revec's impact on 216 intrinsic-rich implementations of image processing and stencil kernels relative to hand-retargeting. | computer science |
We propose an analytical method for solving the problem of electronic relaxation in solution in the time domain, modelled by a particle undergoing diffusion under the influence of two coupled potentials. The coupling between the two potentials is assumed to be represented by a Dirac delta function of arbitrary position and strength. The Smoluchowski equation is used to model the diffusive motion on both potentials. We report an analytical expression for the survival probability in the time domain. This is the first time an analytical solution has been derived in the time domain, and the method can be used to solve problems involving other potentials. | condensed matter |
Purpose: 3D Time-of-flight (TOF) MR Angiography (MRA) can accurately visualize the intracranial vasculature, but is limited by long acquisition times. Compressed sensing (CS) reconstruction can be used to substantially accelerate acquisitions. The quality of those reconstructions depends on the undersampling patterns used in the acquisitions. In this work, optimized sets of undersampling parameters using various acceleration factors for Cartesian 3D TOF-MRA are established. Methods: Fully-sampled datasets acquired at 7T were retrospectively undersampled using variable-density Poisson-disk sampling with various autocalibration region sizes, polynomial orders, and acceleration factors. The accuracy of reconstructions from the different undersampled datasets was assessed using the vessel-masked structural similarity index. Results were compared for four imaging volumes, acquired from two different subjects. Optimized undersampling parameters were validated using additional prospectively undersampled datasets. Results: For all acceleration factors, using a fully-sampled calibration area of 12x12 k-space lines and a polynomial order of around 2-2.4 resulted in the highest image quality. The importance of sampling parameter optimization was found to increase for higher acceleration factors. The results were consistent across resolutions and regions of interest with vessels of varying sizes and tortuosity. In prospectively undersampled acquisitions, using optimized undersampling parameters resulted in a 7.2% increase in the number of visible small vessels at R = 7.2. Conclusion: The image quality of CS TOF-MRA can be improved by appropriate choice of undersampling parameters. The optimized sets of parameters are independent of the acceleration factor. | physics |
We study a stationary Gibbs particle process with deterministically bounded particles on Euclidean space defined in terms of an activity parameter and non-negative interaction potentials of finite range. Using disagreement percolation we prove exponential decay of the correlation functions, provided a dominating Boolean model is subcritical. We also prove this property for the weighted moments of a U-statistic of the process. Under the assumption of a suitable lower bound on the variance, this implies a central limit theorem for such U-statistics of the Gibbs particle process. A byproduct of our approach is a new uniqueness result for Gibbs particle processes. | mathematics |
The Stochastic Series Expansion (SSE) technique is a quantum Monte Carlo method that is especially efficient for many quantum spin systems and boson models. It was the first generic method free from the discretization errors affecting previous path integral based approaches. These lecture notes give a brief overview of the SSE method and its applications. In the introductory section, the representation of quantum statistical mechanics by the power series expansion of ${\rm e}^{-\beta H}$ will be compared with path integrals in discrete and continuous imaginary time. Extensions of the SSE approach to ground state projection and quantum annealing in imaginary time will also be briefly discussed. The later sections introduce efficient sampling schemes (loop and cluster updates) that have been developed for many classes of models. A summary of generic forms of estimators for important observables are also given. Applications are discussed in the last section. | condensed matter |
We study the problem of testing the existence of a heterogeneous dense subhypergraph. The null hypothesis corresponds to a heterogeneous Erd\"{o}s-R\'{e}nyi uniform random hypergraph and the alternative hypothesis corresponds to a heterogeneous uniform random hypergraph that contains a dense subhypergraph. We establish detection boundaries when the edge probabilities are known and construct an asymptotically powerful test for distinguishing the hypotheses. We also construct an adaptive test which does not involve edge probabilities, and hence, is more practically useful. | statistics |
Stochastic models of chemical reaction networks are an important tool to describe and analyze noise effects in cell biology. When chemical species and reaction rates in a reaction system have different orders of magnitude, the associated stochastic system is often modeled in a multiscale regime. It is known that multiscale models can be approximated with a reduced system such as mean field dynamics or hybrid systems, but the accuracy of the approximation remains unknown. In this paper, we estimate the probability distribution of low copy species in multiscale stochastic reaction systems on short time scales. We also establish an error bound for this approximation. Throughout the manuscript, typical mass action systems are mainly handled, but we also show that the main theorem can be extended to general kinetics, which generalizes existing results in the literature. Our approach is based on a direct analysis of the Kolmogorov equation, in contrast to classical approaches in the existing literature. | mathematics |
The storage of hydrogen (H$_2$) is of economic and ecological relevance, because it could potentially replace petroleum-based fuels. However, H$_2$ storage at mild conditions remains one of the bottlenecks for its widespread usage. In order to devise successful H$_2$ storage strategies, there is a need for a fundamental understanding of the weak and elusive hydrogen interactions at the quantum mechanical level. One of the most promising strategies for storage at mild pressure and temperature is physisorption. Porous materials are especially effective at physisorption; however, the process at the quantum level has been under-studied. Here, we present quantum calculations to study the interaction of H$_2$ with building units of porous materials. We report 240 H$_2$ complexes made of different transition metal (Tm) atoms, chelating ligands, spins, oxidation states, and geometrical configurations. We found that both the dispersion and electrostatic interactions are the major contributors to the interaction energy between H$_2$ and the transition metal complexes. The binding energy for some of these complexes is in the range of at least 10 kJ/mol for many interaction sites, which is one of the main requirements for practical H$_2$ storage. Thus, these results are of fundamental importance for practical H$_2$ storage in porous materials. | condensed matter |
We present results from the NIRVANDELS survey investigating the gas-phase metallicity ($\mathrm{Z}_{\mathrm{gas}}$, tracing O/H) and stellar metallicity ($Z_{\star}$, tracing Fe/H) of 33 star-forming galaxies at redshifts $2.95 < z < 3.80$. Based on a combined analysis of deep optical and near-IR spectra, tracing the rest-frame far ultraviolet and rest-frame optical respectively, we present the first simultaneous determination of the stellar and gas-phase mass-metallicity relationships (MZRs) at $z\simeq3.4$. In both cases, we find that metallicity increases with increasing stellar mass ($M_{\star}$), and that the power-law slope at $M_{\star} \lesssim 10^{10} \mathrm{M}_{\odot}$ of both MZRs scales as $Z \propto M_{\star}^{0.3}$. Comparing the stellar and gas-phase MZRs, we present direct evidence for super-solar O/Fe ratios (i.e., $\alpha$-enhancement) at $z>3$, finding $\mathrm{(O/Fe)}\simeq (2.54 \pm 0.38) \times \mathrm{(O/Fe)}_{\odot}$, with no clear dependence on $M_{\star}$. | astrophysics |
A circular ribbon flare SOL2014-12-17T04:51 is studied using the 17/34 GHz maps from the Nobeyama Radioheliograph (NoRH) along with (E)UV and magnetic data from the Solar Dynamics Observatory (SDO). We report the following three findings as important features of the microwave CRF. (1) The first preflare activation comes in the form of a gradual increase of the 17 GHz flux without a counterpart at 34 GHz, which indicates thermal preheating. The first sign of nonthermal activity occurs in the form of stepwise flux increases at both 17 and 34 GHz about 4 min before the impulsive phase. (2) Until the impulsive phase, the microwave emission over the entire active region is in a single polarization state matching the magnetic polarity of the surrounding fields. During and after the impulsive phase, the sign of the 17 GHz polarization state reverses in the core region, which implies a magnetic breakout--type eruption in a fan-spine magnetic structure. (3) The 17 GHz flux around the time of the eruption shows quasi-periodic variations with periods of 1--2 min. The pre-eruption oscillation is more obvious in total intensity at one end of the flare loop, and the post-eruption oscillation, more obvious in the polarized intensity at a region near the inner spine. We interpret this transition as the transfer of oscillatory power from kink mode oscillation to torsional Alfv\'en waves propagating along the spine field after the eruption. We argue that these three processes are inter-related and indicate a breakout process in a fan-spine structure. | astrophysics |
We review the theoretical underpinning of the Higgs mechanism of electroweak symmetry breaking and the experimental status of Higgs measurements from a pedagogical perspective. The possibilities and motivations for new physics in the symmetry breaking sector are discussed along with current measurements. A focus is on the implications of measurements in the Higgs sector for theoretical insights into extensions of the Standard Model. We also discuss future prospects for Higgs physics and new analysis techniques. | high energy physics phenomenology |
We derive general relationships between the number of complex poles of a propagator and the sign of the spectral function originating from the branch cut in the Minkowski region under some assumptions on the asymptotic behaviors of the propagator. We apply this relation to the mass-deformed Yang-Mills model with one-loop quantum corrections, which is identified with a low-energy effective theory of the Yang-Mills theory, to show that the gluon propagator in this model has a pair of complex conjugate poles or "tachyonic" poles of multiplicity two, in accordance with the fact that the gluon field has a negative spectral function, while the ghost propagator has at most one "unphysical" pole. Finally, we discuss implications of these results for gluon confinement and other non-perturbative aspects of the Yang-Mills theory. | high energy physics theory |