text | label |
---|---|
Off-policy evaluation (OPE) in reinforcement learning is notoriously difficult in long- and infinite-horizon settings due to diminishing overlap between behavior and target policies. In this paper, we study the role of Markovian and time-invariant structure in efficient OPE. We first derive the efficiency bounds for OPE when one assumes each of these structures. This precisely characterizes the curse of horizon: in time-variant processes, OPE is only feasible in the near-on-policy setting, where behavior and target policies are sufficiently similar. But in time-invariant Markov decision processes, our bounds show that truly off-policy evaluation is feasible, even with just a single dependent trajectory, and they provide the limits of how well we could hope to do. We develop a new estimator based on Double Reinforcement Learning (DRL) that leverages this structure for OPE using the efficient influence function we derive. Our DRL estimator simultaneously uses estimated stationary density ratios and $q$-functions; it remains efficient when both are estimated at slow, nonparametric rates and remains consistent when either is estimated consistently. We investigate these properties and the performance benefits of leveraging the problem structure for more efficient OPE. | statistics |
The breakup pathway of Rayleigh fission of a charged drop is unequivocally demonstrated by first-of-its-kind, continuous, high-speed imaging of a drop levitated in an AC quadrupole trap. The experimental observations consistently exhibited asymmetric, sub-critical Rayleigh breakup with an upward (i.e. opposite to the direction of gravity) ejection of a jet from the levitated drop. These experiments, supported by numerical calculations, show that the gravity-induced downward shift of the equilibrium position of the drop in the trap causes significant, large-amplitude shape oscillations superimposed over the center-of-mass oscillations. The shape oscillations result in deformations sufficient to act as triggers for the onset of instability well below the Rayleigh limit (a subcritical instability). At the same time, the center-of-mass oscillations, which are out of phase with the applied voltage, lead to an asymmetric breakup such that the Rayleigh fission occurs upwards via the ejection of a jet at the pole of the deformed drop. As an important application, it follows as a corollary that nanodrop generation in electrospray devices will occur, as a rule rather than as an exception, via asymmetric, subcritical Rayleigh fission events of microdrops due to the inherent directionality provided by the external electric fields. | physics |
This is a comment on the paper "Quantum Interference between Light Sources Separated by 150 Million Kilometers" by Deng et al., Physical Review Letters 123, 080401 (2019). | quantum physics |
Although it may seem that delayed-choice experiments contradict causality and that one could construct an experiment which could possibly affect the past, using the Many-Worlds interpretation we prove this is not possible. We also find a mathematical background for which-path information and show why its obtainability prevents a system from interfering. We find a system which exhibits both interference and correlation, and show why one-particle interference and correlations are complementary: a better visible interference pattern leads to worse correlations and vice versa. Then, using knowledge gained from quantum-eraser and delayed-choice experiments, we prove there is no objective reality in the sense of Einstein, Podolsky and Rosen. Furthermore, we discuss the difference between an ``outer'' (non-interacting) and an ``inner'' (interacting) observer. We find the mathematical relationship between the ``universal'' wave function used by the ``outer'' observer and the processes the ``inner'' observer sees, which is our small contribution to the measurement problem. | quantum physics |
Ultra-wide triple black holes (TBHs; with an outer orbit $>10^3$ AU) in the field can be considerably perturbed by flyby encounters with field stars through the excitation of their outer-orbit eccentricities. We study the cumulative effect of such flybys, and show them to be conducive to the production of gravitational-wave (GW) sources. Flyby encounters can render TBHs unstable, after which they follow chaotic evolution. This leads to a binary-single resonant encounter between the outer BH and the inner binary. These encounters can result in either a prompt GW-merger of two of the TBH components during the resonant phase, or the disruption of the TBH. In the latter case a more compact binary is left behind, while the third BH escapes and is ejected. The compact remnant binary may still inspiral through GW emission, although on longer timescales. A significant number of these would lead to a delayed GW-merger in less than a Hubble time. We find a volumetric merger rate of $\sim3-10{\rm \,Gpc^{-3}yr^{-1}}$ contributed by the (former) prompt-merger TBH channel and $\sim100-250{\rm \,Gpc^{-3}yr^{-1}}$ contributed by the (latter) delayed-merger TBH channel. The prompt channel gives rise to eccentric mergers in the aLIGO band, while the majority of the delayed-GW mergers are circularized when they enter the aLIGO band. We find the total {\rm eccentric} volumetric merger rate to be $\sim1-10{\rm \,Gpc^{-3}yr^{-1}}$ from both channels. We expect these mergers to show no significant spin-orbit alignment, and a uniform delay-time distribution. | astrophysics |
We explicitly rewrite the path integral for the free or critical $O(N)$ (or $U(N)$) bosonic vector models in $d$ space-time dimensions as a path integral over fields (including massless high-spin fields) living on ($d+1$)-dimensional anti-de Sitter space. Inspired by de Mello Koch, Jevicki, Suzuki and Yoon and earlier work, we first rewrite the vector models in terms of bi-local fields, then expand these fields in eigenmodes of the conformal group, and finally map these eigenmodes to those of fields on anti-de Sitter space. Our results provide an explicit (non-local) action for a high-spin theory on anti-de Sitter space, which is presumably equivalent in the large $N$ limit to Vasiliev's classical high-spin gravity theory (with some specific gauge-fixing to a fixed background), but which can be used also for loop computations. Our mapping is explicit within the $1/N$ expansion, but in principle can be extended also to finite $N$ theories, where extra constraints on products of bulk fields need to be taken into account. | high energy physics theory |
By using the recently generalized version of Newton's shell theorem, analytical equations are derived to calculate the electric interaction energy between two separated charged spheres surrounded outside and inside by electrolyte. This electric interaction energy is calculated as a function of the electrolyte ion concentration, the temperature, the distance between the spheres, and the size of the spheres. At a fixed distance between the spheres, the absolute value of the interaction energy decreases with increasing electrolyte ion concentration and increases with increasing temperature. At zero electrolyte ion concentration the derived analytical equation reduces to the Coulomb equation. Finally, the analytical equation is generalized to calculate the electric interaction energy of $N$ separated charged spheres surrounded by electrolyte. | condensed matter |
In this work, we present the effect of a probe string on the complexity of a black hole according to the CA (Complexity equals Action) conjecture in Horndeski gravity. In our system, we consider a particle moving on the boundary of the black hole spacetime in ($2+1$) dimensions. To obtain a dual description, we need to insert a fundamental string in the bulk spacetime. The effect of this string is given by the Nambu-Goto term. Through the Nambu-Goto term, we can analyze the time evolution of this system, which is affected by the parameters of Horndeski gravity. In our case, we exhibit some interesting properties of the complexity for this gravity. | high energy physics theory |
Optimization of hyper-parameters in reinforcement learning (RL) algorithms is a key task, because they determine how the agent will learn its policy by interacting with its environment, and thus what data is gathered. In this work, an approach that uses Bayesian optimization to perform a two-step optimization is proposed: first, categorical RL structure hyper-parameters are taken as binary variables and optimized with an acquisition function tailored for such variables. Then, at a lower level of abstraction, solution-level hyper-parameters are optimized by resorting to the expected improvement acquisition function, while using the best categorical hyper-parameters found in the optimization at the upper level of abstraction. This two-tier approach is validated in a simulated control task. The results obtained are promising and open the way for more user-independent applications of reinforcement learning. | computer science |
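A note on the lower-level step above: under a Gaussian-process posterior, expected improvement has a closed form, which is all the machinery a minimal version needs. The sketch below is illustrative only, not the paper's implementation; the Matern kernel, candidate grid, and toy objective (standing in for an RL return) are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best_f, xi=0.01):
    """Closed-form EI for minimization under a Gaussian posterior."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (best_f - mu - xi) / sigma
    return (best_f - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.1 * x**2      # toy objective (stand-in for an RL return)
X = rng.uniform(-3, 3, size=(5, 1))           # initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                            # sequential BO loop
    gp.fit(X, y)
    cand = np.linspace(-3, 3, 400).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))
print("best x, f(x):", X[np.argmin(y)].item(), y.min())
```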
Brainwave signals are read through Electroencephalogram (EEG) devices. These signals are generated by an active brain based on brain activities and thoughts. The classification of brainwave signals is a challenging task due to their non-stationary nature. To address the issue, this paper proposes a Convolutional Neural Network (CNN) model to classify brainwave signals. In order to evaluate the performance of the proposed model, a dataset is developed by recording brainwave signals for two conditions, visible and invisible. In the visible mode, the human subjects focus on the color and shape presented. Meanwhile, in the invisible mode, the subjects think about specific colors or shapes with closed eyes. A comparison is provided between the original CNN and the proposed CNN architecture on the same dataset. The results show that the proposed CNN model achieves higher classification accuracy than the standard CNN. The best accuracy rate, 92%, is achieved when the proposed CNN is applied to the visible color mode. In the future, improvements to the proposed CNN should enable efficient classification of raw EEG signals. | electrical engineering and systems science |
Decoherence induced by laser frequency noise is one of the most important obstacles in quantum information processing. In order to suppress this decoherence, the noise power spectral density needs to be accurately characterized. In particular, noise spectrum measurement based on the coherence characteristics of qubits is a meaningful and still challenging method. Here, we theoretically analyze and experimentally obtain the spectrum of laser frequency noise based on the continuous dynamical decoupling technique. We first estimate the mixture-noise (including laser and magnetic noise) spectrum up to $(2\pi)$530 kHz by monitoring the transverse relaxation from an initial state $+X$, followed by a gradient-descent data-processing protocol. Then the contribution from the laser noise is extracted by encoding the qubits on different Zeeman sublevels. We also investigate two sufficiently strong noise components by making an analogy between these noises and driving lasers whose linewidth is assumed to be negligible. This method is verified experimentally and ultimately helps to characterize the noise. | quantum physics |
This paper deals with the problem of jointly designing the source precoder, the relaying matrices, and the destination equalizer in a multiple-relay amplify-and-forward (AF) cooperative multiple-input multiple-output (MIMO) wireless network, when partial channel-state information (CSI) is available. Specifically, the considered approaches are based on the knowledge of instantaneous CSI of the first-hop channel matrix, whereas only statistical CSI of the second-hop channels is assumed. In such a scenario, with respect to the case when instantaneous CSI of both the first- and second-hop MIMO channel matrices is exploited, existing network designs exhibit a significant performance degradation. Relying on a relaxed minimum-mean-square-error (MMSE) criterion, we show that the design based on the potential activation of all possible antennas for all available AF relays leads to a mathematically intractable optimization problem. Therefore, we develop a joint relay-and-antenna selection procedure that determines the best subset of the available antennas possibly belonging to different relays. Monte Carlo simulations show that, compared to designs based on the selection of the best relay, the proposed strategy offers a significant performance gain, by also outperforming other recently proposed relay/antenna selection schemes. | electrical engineering and systems science |
Conditionally Markov (CM) sequences are powerful mathematical tools for modeling problems. One class of CM sequences is the reciprocal sequence. In applications, we need not only CM dynamic models, but also to know how to design model parameters. Models of two important classes of nonsingular Gaussian (NG) CM sequences, called $CM_L$ and $CM_F$ models, and a model of the NG reciprocal sequence, called the reciprocal $CM_L$ model, were presented in our previous works and their applications were discussed. In this paper, these models are studied in more detail, in particular their parameter design. It is shown that every reciprocal $CM_L$ model can be induced by a Markov model. Then, parameters of each reciprocal $CM_L$ model can be obtained from those of the Markov model. Also, it is shown that an NG $CM_L$ ($CM_F$) sequence can be represented by a sum of an NG Markov sequence and an uncorrelated NG vector. This (necessary and sufficient) representation provides a basis for designing parameters of a $CM_L$ ($CM_F$) model. From the CM viewpoint, a representation is also obtained for NG reciprocal sequences. This representation is simple and reveals an important property of reciprocal sequences. As a result, the significance of studying reciprocal sequences from the CM viewpoint is demonstrated. A full spectrum of dynamic models, from a $CM_L$ model to a reciprocal $CM_L$ model, is also presented. Some examples are presented for illustration. | electrical engineering and systems science |
We introduce a Poincar\'{e} polynomial in two variables $t$ and $x$ for knots, derived from Khovanov homology, whose specialization $(t, x) = (1, -1)$ is a Vassiliev invariant of order $n$. Since for every $n$ there exist non-trivial knots with the same value of the Vassiliev invariant of order $n$ as that of the unknot, there has previously been no explicit formulation of a perturbative knot invariant which is a coefficient of $y^n$ under the replacement $q=e^y$ for the quantum parameter $q$ of a quantum knot invariant, and which distinguishes the above knots and the unknot. Our polynomial gives the first such formulation. | mathematics |
Recently, source separation performance was greatly improved by time-domain audio source separation based on the dual-path recurrent neural network (DPRNN). DPRNN is a simple but effective model for long sequential data. While DPRNN is quite efficient at modeling sequential data of the length of an utterance, i.e., about 5 to 10 seconds, it is harder to apply to longer sequences such as whole conversations consisting of multiple utterances. This is simply because, in such a case, the number of time steps consumed by its internal module, called the inter-chunk RNN, becomes extremely large. To mitigate this problem, this paper proposes the multi-path RNN (MPRNN), a generalized version of DPRNN, that models the input data in a hierarchical manner. In the MPRNN framework, the input data is represented at several (>3) time resolutions, each of which is modeled by a specific RNN sub-module. For example, the RNN sub-module that deals with the finest resolution may model temporal relationships only within a phoneme, while the RNN sub-module handling the coarsest resolution may capture only the relationship between utterances, such as speaker information. We perform experiments using simulated dialogue-like mixtures and show that MPRNN has greater model capacity and that it outperforms the current state-of-the-art DPRNN framework, especially in online processing scenarios. | electrical engineering and systems science |
We present the multi-frequency scatter broadening evolution of 29 pulsars observed with the LOw Frequency ARray (LOFAR) and the Long Wavelength Array (LWA). We conducted new observations using the LOFAR Low Band Antennae (LBA), as well as utilizing archival data from LOFAR and LWA. This study has increased the total number of multi-frequency or wide-band scattering measurements up to a dispersion measure (DM) of 150~pc\,cm$^{-3}$ by 60\%. Scatter broadening timescale ($\tau_{sc}$) measurements at different frequencies are often combined by scaling them to a common reference frequency of 1\,GHz. Using our data, we show that the $\tau_{sc}$--DM variations are best fitted for reference frequencies close to 200--300\,MHz, and that scaling to higher or lower frequencies results in significantly more scatter in the data. We suggest that this effect might indicate a frequency dependence of the scatter broadening scaling index ($\alpha$). However, a selection bias due to our chosen observing frequencies cannot be ruled out with the current data set. Our data did not favour any particular model of the DM--$\tau_{sc}$ relation, and we do not see a statistically significant break in this relation at the low-DM range. The turbulence spectral index ($\beta$) is found to be steeper than that expected from a Kolmogorov spectrum. This indicates that the local ISM turbulence may have a low wave-number cutoff, or that large-scale inhomogeneities may be present in the line of sight to some of the reported pulsars. | astrophysics |
This work proposes a novel end-to-end convolutional neural network (CNN) architecture to automatically quantify the severity of knee osteoarthritis (OA) using X-Ray images, which incorporates trainable attention modules acting as unsupervised fine-grained detectors of the region of interest (ROI). The proposed attention modules can be applied at different levels and scales across any CNN pipeline helping the network to learn relevant attention patterns over the most informative parts of the image at different resolutions. We test the proposed attention mechanism on existing state-of-the-art CNN architectures as our base models, achieving promising results on the benchmark knee OA datasets from the osteoarthritis initiative (OAI) and multicenter osteoarthritis study (MOST). All code from our experiments will be publicly available on the github repository: https://github.com/marc-gorriz/KneeOA-CNNAttention | electrical engineering and systems science |
We construct the first examples of residually finite non-exact groups. The construction is based on the author's earlier construction of groups containing isometrically embedded expanders using graphical small cancellation. | mathematics |
Multivariate time series exhibit two types of dependence: across variables and across time points. Vine copulas are graphical models for dependence and can conveniently capture both types in the same model. We derive the maximal class of graph structures that guarantees stationarity under a condition called translation invariance. Translation invariance is not only a necessary condition for stationarity, but also the only condition we can reasonably check in practice. In this sense, the new model class characterizes all practically relevant vine structures for modeling stationary time series. We propose computationally efficient methods for estimation, simulation, prediction, and uncertainty quantification, and show their validity by asymptotic results and simulations. The theoretical results allow for misspecified models and, even when specialized to the \emph{iid} case, go beyond what is available in the literature. The new model class is illustrated by an application to forecasting returns of a portfolio of 20 stocks, where it shows excellent forecast performance. The paper is accompanied by an open source software implementation. | statistics |
From the Schmidt representation we have that, up to local gates, every 2-qubit state can be written as $|\phi\rangle=\lambda_1 |00\rangle+\lambda_2 |11\rangle$ with $\lambda_1$ and $\lambda_2$ real numbers. For 3-qubit states, it is known (PRL 2000 85) that up to local gates every 3-qubit state can be written as $|\phi\rangle=\lambda_1 |000\rangle+\lambda_2 e^{i \theta }|100\rangle+\lambda_3|101\rangle+\lambda_4|110\rangle+\lambda_5|111\rangle$ with $\lambda_i\ge0$ and $0\le \theta \le \pi$. In this paper, we show that not every 3-qubit state with real amplitudes can be transformed into the form $|\phi\rangle=\lambda_1 |000\rangle+\lambda_2 |100\rangle+\lambda_3|101\rangle+\lambda_4|110\rangle+\lambda_5|111\rangle$ by using local gates in the orthogonal group (the group generated by the $R_y(\theta)$ and $X$ gates). We also show that, up to local gates in the orthogonal group, every 3-qubit state with real amplitudes can be written as $|\phi\rangle=\lambda_1 |000\rangle+\lambda_2 |011\rangle+\lambda_3|101\rangle+\lambda_4|110\rangle+\lambda_5|111\rangle$ with the $\lambda_i$ real numbers. An explanation of this result can be found in the YouTube video \url{https://youtu.be/gDN20QHzsoQ} | quantum physics |
Measurements play a crucial role in doing physics: their results provide the basis on which we adopt or reject physical theories. In this note, we examine the effect of subjecting measurements themselves to our experience. We require that our contact with the world is empirically warranted. Therefore, we study theories that satisfy the following assumption: interactions are accounted for so that they are empirically traceable, and observations necessarily go with such an interaction with the observed system. Examining, with regard to these assumptions, an abstract representation of measurements with tools from quantum logic leads us to contextual theories. Contextuality becomes a means to render interactions, and thus also measurements, empirically tangible. The measurement becomes problematic---also beyond quantum mechanics---if one tries to reconcile the assumption of tangible interactions with the notion of a spectator theory, i.e., with the idea that measurement results are read off without effect. The problem thus presents itself as the collision of different epistemological stances, with repercussions beyond quantum mechanics. | quantum physics |
We propose a novel learned deep prior of body motion for 3D hand shape synthesis and estimation in the domain of conversational gestures. Our model builds upon the insight that body motion and hand gestures are strongly correlated in non-verbal communication settings. We formulate the learning of this prior as a prediction task of 3D hand shape over time given body motion input alone. Trained with 3D pose estimations obtained from a large-scale dataset of internet videos, our hand prediction model produces convincing 3D hand gestures given only the 3D motion of the speaker's arms as input. We demonstrate the efficacy of our method on hand gesture synthesis from body motion input, and as a strong body prior for single-view image-based 3D hand pose estimation. We demonstrate that our method outperforms previous state-of-the-art approaches and can generalize beyond the monologue-based training data to multi-person conversations. Video results are available at http://people.eecs.berkeley.edu/~evonne_ng/projects/body2hands/. | computer science |
We study the system of two localized detectors (oscillators) interacting through a massless quantum field in a vacuum state via an Unruh-DeWitt coupling. This system admits an exact solution providing a good model for addressing fundamental issues in particle-field interactions, causality and locality in quantum field measurements that are relevant to proposed quantum experiments in space. Our analysis of the exact solution leads to the following results. (i) Common approximations used in the study of analogous open quantum systems fail when the distance between the detectors becomes of the order of the relaxation time. In particular, the creation of correlations between remote detectors is not well described by ordinary perturbation theory and the Markov approximation. (ii) There is a unique asymptotic state that is correlated; it is not entangled unless the detector separation is of the order of magnitude of the wavelength of the exchanged quanta. (iii) The evolution of seemingly localized observables is non-causal. The latter is a manifestation of Fermi's two-atom problem, albeit in an exactly solvable system. We argue that the problem of causality requires a reexamination of the notion of entanglement in relativistic systems, in particular, the physical relevance of its extraction from the quantum vacuum. | quantum physics |
This paper, motivated by problems in Diophantine analysis which can be formulated as problems of finding rational points on the intersection of two quadrics, presents an explicit construction of a rationally defined isomorphism (biregular mapping) between a rationally defined smooth intersection of two quadrics in projective three-space and an elliptic curve in Weierstrass form which maps a distinguished rational point to the point at infinity. The usual approach of transforming a smooth plane cubic to a curve in Weierstrass form by mapping an inflection point to the point at infinity in a particular way is not applicable in our setting, because there may be no inflection point defined over the rationals. This difficulty is overcome by a construction dating back to Nagell. The results are exemplified in two situations of number-theoretical interest: Euler's problem of concordant forms and the occurrence of four rational squares in arithmetic progressions. | mathematics |
Diffusive dynamics in the presence of deep energy minima and weak nongradient forces can be coarse-grained into a mesoscopic jump process over the various basins of attraction. Combining standard weak-noise results with a path-integral expansion around equilibrium, we show that the emerging transition rates satisfy local detailed balance (LDB). Namely, the log ratio of the transition rates between nearby basins of attraction equals the free-energy variation appearing at equilibrium, supplemented by the work done by the nonconservative forces along the typical transition path. When the mesoscopic dynamics possesses a large-size deterministic limit, it can be further reduced to a jump process over macroscopic states satisfying LDB. The persistence of LDB under coarse-graining of weakly nonequilibrium states is a generic consequence of the fact that only dissipative effects matter close to equilibrium. | condensed matter |
Dynamically relaxed galaxy clusters have long played a role in galaxy cluster studies because it is thought that their properties can be reconstructed more precisely and with fewer systematics. As relaxed clusters are desirable, a plethora of criteria exists for classifying a galaxy cluster as relaxed. In this work, we examine $9$ commonly used observational and theoretical morphological metrics extracted from $54,000$ Mock-X synthetic X-ray images of galaxy clusters taken from the IllustrisTNG, BAHAMAS and MACSIS simulation suites. We find that the simulated criteria distributions are in reasonable agreement with the observed distributions. Many criteria distributions evolve as a function of redshift, cluster mass, numerical resolution and subgrid physics, limiting the effectiveness of a single relaxation threshold value. All criteria are positively correlated with each other; however, the strength of the correlation is sensitive to redshift, mass and numerical choices. Driven by the intrinsic scatter inherent to all morphological metrics and the arbitrary nature of relaxation threshold values, we find the consistency of relaxed subsets defined by the different metrics to be relatively poor. Therefore, the use of relaxed cluster subsets introduces significant selection effects that are non-trivial to resolve. | astrophysics |
Anomaly detection in spatiotemporal data is a challenging problem encountered in a variety of applications, including hyperspectral imaging, video surveillance and urban traffic monitoring. In the case of urban traffic data, anomalies refer to unusual events such as traffic congestion and unexpected crowd gatherings. Detecting these anomalies is challenging due to the dependence of the anomaly definition on time and space. In this paper, we introduce an unsupervised tensor-based anomaly detection method for spatiotemporal urban traffic data. The proposed method assumes that the anomalies are sparse and temporally continuous, i.e., anomalies appear as spatially contiguous groups of locations that show anomalous values consistently for a short duration of time. Furthermore, a manifold embedding approach is adopted to preserve the local geometric structure of the data across each mode. The proposed framework, Graph Regularized Low-rank plus Temporally Smooth Sparse decomposition (GLOSS), is formulated as an optimization problem and solved using the alternating direction method of multipliers (ADMM). The resulting algorithm is shown to converge and to be robust against missing data and noise. The proposed framework is evaluated on both synthetic and real spatiotemporal urban traffic data and compared with baseline methods. | electrical engineering and systems science |
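For intuition about the decomposition at the heart of GLOSS: stripped of the graph regularization, temporal smoothing, and tensor structure, the core step is a low-rank-plus-sparse split solved by ADMM. Below is a generic, matrix-level sketch of that split (robust-PCA style), offered as a simplified stand-in rather than the paper's algorithm; the penalty and step-size heuristics and the toy data are assumptions.

```python
import numpy as np

def soft(X, tau):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lowrank_plus_sparse(M, lam=None, mu=None, n_iter=200):
    """ADMM for  min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))               # common heuristic
    mu = mu or 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # common heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)    # low-rank update
        S = soft(M - L + Y / mu, lam / mu)   # sparse update
        Y += mu * (M - L - S)                # dual update
    return L, S

# Toy traffic-like matrix: smooth background plus a few localized "events".
background = np.outer(np.sin(np.linspace(0, 3, 60)), np.cos(np.linspace(0, 2, 40)))
anomalies = np.zeros((60, 40)); anomalies[10:13, 5] = 4.0
L, S = lowrank_plus_sparse(background + anomalies)
print("recovered anomaly mass:", np.abs(S[10:13, 5]).sum())
```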
We consider the self-dual Chern-Simons-Schr\"odinger equation (CSS). CSS is $L^{2}$-critical, admits solitons, and has the pseudoconformal symmetry. In this work, we consider pseudoconformal blow-up solutions under $m$-equivariance, $m\geq1$. Our result is threefold. Firstly, we construct a pseudoconformal blow-up solution $u$ with given asymptotic profile $z^{\ast}$: \[ \Big[u(t,r)-\frac{1}{|t|}Q\Big(\frac{r}{|t|}\Big)e^{-i\frac{r^{2}}{4|t|}}\Big]e^{im\theta}\to z^{\ast}\qquad\text{in }H^{1} \] as $t\to0^{-}$, where $Q(r)e^{im\theta}$ is a static solution. Secondly, we show that such blow-up solutions are unique in a suitable class. Lastly, we exhibit an instability mechanism of $u$. We construct a continuous family of solutions $u^{(\eta)}$, $0\leq\eta\ll1$, such that $u^{(0)}=u$ and, for $\eta>0$, $u^{(\eta)}$ is a global scattering solution exhibiting a rotational instability as $\eta\to0^{+}$: $u^{(\eta)}$ takes an abrupt spatial rotation by the angle \[ \Big(\frac{m+1}{m}\Big)\pi \] on the time interval $|t|\lesssim\eta$. We are inspired by works on the $L^{2}$-critical NLS. In the seminal work of Bourgain and Wang (1997), they constructed such pseudoconformal blow-up solutions. Merle, Rapha\"el, and Szeftel (2013) showed an instability of Bourgain-Wang solutions. Although CSS shares many features with NLS, there are essential differences and obstacles compared to NLS. To name a few, the soliton profile of CSS shows a slow decay $r^{-(m+2)}$, CSS has nonlocal nonlinearities responsible for strong long-range interactions, and the instability mechanism of CSS is completely different from that of NLS. Here, the phase rotation is the main source of the instability. On the other hand, the self-dual structure of CSS is what allows us to overcome these obstacles. We exploit the self-duality in many places, such as the linearization, spectral properties, and construction of modified profiles. | mathematics |
Second-order nonlinear optics is the basis for a large variety of devices aimed at the active manipulation of light. However, physical principles restrict its occurrence to non-centrosymmetric, anisotropic matter. This significantly limits the number of base materials exhibiting nonlinear optics. Here, we show that embedding chromophores in an array of conical channels 13 nm across in monolithic silica results in mesoscopic anisotropic matter, and thus in a hybrid material showing second-harmonic generation (SHG). This nonlinear response is compared to that achieved in corona-poled polymer films containing the identical chromophores. It originates in confinement-induced orientational order of the elongated guest molecules in the nanochannels. This leads to a non-centrosymmetric dipolar order and hence to a nonlinear light-matter interaction on the sub-wavelength, single-pore scale. Our study demonstrates that the advent of large-scale, self-organised nanoporosity in monolithic solids, along with confinement-controllable orientational order of chromophores at the single-pore scale, provides a reliable and accessible tool for designing materials with nonlinear meta-optics. | physics |
Although the unscented Kalman filter (UKF) is applicable to nonlinear systems, it turns out that, for linear systems, UKF does not specialize to the classical Kalman filter. This situation suggests that it may be advantageous to modify UKF in such a way that, for linear systems, the Kalman filter is recovered. The ultimate goal is thus to develop modifications of UKF that specialize to the Kalman filter for linear systems and have improved accuracy for nonlinear systems. With this motivation, this paper presents two modifications of UKF that specialize to the Kalman filter for linear systems. The first modification (EUKF-A) requires the Jacobian of the dynamics map, whereas the second modification (EUKF-C) requires the Jacobian of the measurement map. For various nonlinear examples, the accuracy of EUKF-A and EUKF-C is compared to the accuracy of UKF. | electrical engineering and systems science |
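For readers who want the baseline being modified here made concrete, the following is a bare-bones sigma-point UKF predict/update cycle. It does not reproduce the EUKF-A or EUKF-C corrections; the toy dynamics, noise covariances, and kappa choice are illustrative assumptions.

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """Standard symmetric sigma points for mean x and covariance P."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 0.5 / (n + kappa)); w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(x, P, z, f, h, Q, R, kappa=0.0):
    """One predict/update cycle of the classical UKF."""
    X, w = sigma_points(x, P, kappa)
    Xp = np.array([f(xi) for xi in X])                  # propagate through dynamics
    xp = w @ Xp
    Pp = sum(wi * np.outer(d, d) for wi, d in zip(w, Xp - xp)) + Q
    X2, w2 = sigma_points(xp, Pp, kappa)                # re-draw sigma points
    Z = np.array([h(xi) for xi in X2])
    zp = w2 @ Z
    Pzz = sum(wi * np.outer(d, d) for wi, d in zip(w2, Z - zp)) + R
    Pxz = sum(wi * np.outer(dx, dz) for wi, dx, dz in zip(w2, X2 - xp, Z - zp))
    K = Pxz @ np.linalg.inv(Pzz)                        # Kalman gain
    return xp + K @ (z - zp), Pp - K @ Pzz @ K.T

# Toy example: 2-state nonlinear dynamics, linear scalar measurement.
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.99 * x[1] + 0.05 * np.sin(x[0])])
h = lambda x: x[:1]
x, P = np.zeros(2), np.eye(2)
x, P = ukf_step(x, P, np.array([0.3]), f, h, Q=1e-3 * np.eye(2), R=1e-2 * np.eye(1))
print(x, np.diag(P))
```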
In this paper, by making use of the "Complexity=Action" proposal, we study the complexity growth after shock waves in holographic field theories. We consider both double black hole-Vaidya and AdS-Vaidya geometries with multiple shocks. We find that Lloyd's bound is respected during the thermalization process in each of these geometries, and at late times the complexity growth saturates to a value proportional to the energy of the final state. We conclude that the saturation value of the complexity growth rate is independent of the initial temperature, and that in the case of a thermal initial state the rate of complexity growth is always less than the value for the vacuum initial state, becoming smaller still when multiple shocks are considered. Our results indicate that by increasing the temperature of the initial state, the corresponding rate of complexity growth starts farther from the final saturation value. | high energy physics theory |
The resolution of optical imaging devices is ultimately limited by the diffraction of light. To circumvent this limit, modern super-resolution microscopy techniques employ active interaction with the object by exploiting its optical nonlinearities, nonclassical properties of the illumination beam, or near-field probing. Thus, they are not applicable whenever such interaction is not possible, for example, in astronomy or non-invasive biological imaging. Far-field, linear-optical super-resolution techniques based on passive analysis of light coming from the object would cover these gaps. In this paper, we present the first proof-of-principle demonstration of such a technique. It works by accessing information about spatial correlations of the image optical field and, hence, about the object itself via measuring projections onto Hermite-Gaussian transverse spatial modes. With a basis of 21 spatial modes in both transverse dimensions, we perform two-dimensional imaging with twofold resolution enhancement beyond the diffraction limit. | physics |
The analogy between self-similar time series with given Hurst exponent H and Markovian, Gaussian stochastic processes with multiplicative noise and entropic index q (Borland, PRE 57, 6634-6642, 1998) allows us to explain the empirical results reported in (Pavithran et al., EPL 129 (2020) 24004) and (Pavithran et al., Sci. Reports 10 (2020) 1-8) with the help of the properties of the nonextensive entropy Sq of index q: a dominant oscillating mode arises as H goes to zero in many different systems, and its amplitude is proportional to 1/H^2. Thus, a decrease of H acts as a precursor of large oscillations of the state variable, which correspond to catastrophic events in many problems of practical interest. In contrast, if H goes to 1 then the time series is strongly intermittent, fluctuations of the state variable follow a power law whose exponent depends on H, and exceedingly large events are basically unpredictable. These predictions agree with observations in problems of aeroacoustics, aeroelasticity, electric engineering, hydrology, laser physics, meteorology, plasma physics, plasticity, polemology, seismology and thermoacoustics. | physics |
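Since the argument turns on tracking H as it drifts toward 0 or 1, a worked example of estimating it may help. Below is a minimal rescaled-range (R/S) estimator, one standard choice among several (detrended fluctuation analysis is another) and not necessarily the method of the cited works; chunk sizes and test series are illustrative.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent of series x."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        vals = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())      # cumulative deviation from the mean
            r = dev.max() - dev.min()          # range of the deviation
            s = c.std()
            if s > 0:
                vals.append(r / s)
        if vals:
            sizes.append(n); rs.append(np.mean(vals))
        n *= 2
    # log(R/S) ~ H * log(n): the slope of the log-log fit estimates H
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)      # expect H ~ 0.5 for uncorrelated noise
walk = np.cumsum(white)                # expect H ~ 1.0 for the integrated series
print(hurst_rs(white), hurst_rs(walk))
```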
Gravitational lensing is a powerful tool for quantifying the mass content and distribution in distant galaxies. By using milliarcsecond angular resolution observations of radio-loud gravitationally lensed sources, it is also possible to detect and quantify small deviations from a smooth mass density distribution, which can be due to low-mass substructures in the lensing galaxy. We present high-resolution global VLBI observations of the gravitationally lensed radio source MG J0751+2716 (at z = 3.2), which shows evidence of both compact and extended structure (core-jet morphology) across several gravitational arcs. These data provide a wealth of observational constraints that are used to determine the inner (baryonic and dark matter) mass profile of a group of galaxies and also to investigate the smoothness of the dark matter distribution on mas-scales, which is sensitive to possible structures of $10^{6-7}$ M$_{\odot}$ within the lensing halo or along the line of sight. Our lens modelling finds evidence for astrometric anomalies in this system, which suggests the presence of extra mass structure in the lens model. To date, this kind of detailed study of gravitational lensing systems like MG J0751+2716 has been limited by the currently small sample of radio-loud gravitational lenses. In this context, we also present a new pilot gravitational lens search in the VLBI survey mJIVE-20, in view of future surveys with the next generation of radio interferometers. | astrophysics |
Credit risk modelling is an integral part of the global financial system. While there has been great attention paid to neural network models for credit default prediction, such models often lack the required interpretation mechanisms and measures of the uncertainty around their predictions. This work develops and compares Bayesian Neural Networks (BNNs) for credit card default modelling. This includes BNNs trained by Gaussian approximation and the first implementation of BNNs trained by Hybrid Monte Carlo (HMC) in credit risk modelling. The results on the Taiwan Credit Dataset show that BNNs with Automatic Relevance Determination (ARD) outperform normal BNNs without ARD. The results also show that BNNs trained by Gaussian approximation display similar predictive performance to those trained by HMC. The results further show that BNNs with ARD can be used to draw inferences about the relative importance of different features, thus critically aiding decision makers in explaining model output to consumers. The robustness of this result is reinforced by high levels of congruence between the features identified as important using the two different approaches for training BNNs. | statistics |
Monitoring wildlife abundance across space and time is an essential task to study their population dynamics and inform effective management. Acoustic recording units are a promising technology for efficiently monitoring bird populations and communities. We present an integrated modeling framework that combines high-quality but temporally sparse bird point count survey data with acoustic recordings. Using simulations, we compare the accuracy and precision of abundance estimates using differing amounts of acoustic vocalizations obtained from a clustering algorithm, point count data, and a subset of manually validated acoustic vocalizations. We also use our modeling framework in a case study to estimate abundance of the Eastern Wood-Pewee (Contopus virens) in Vermont, U.S.A. The simulation study reveals that combining acoustic and point count data via an integrated model improves accuracy and precision of abundance estimates compared with models informed by either acoustic or point count data alone. Combining acoustic data with only a small number of point count surveys yields estimates of abundance without the need for validating any of the identified vocalizations from the acoustic data. Within our case study, the integrated models provided moderate support for a decline of the Eastern Wood-Pewee in this region. Our integrated modeling approach combines dense acoustic data with few point count surveys to deliver reliable estimates of species abundance without the need for manual identification of acoustic vocalizations or a prohibitively expensive large number of repeated point count surveys. Our proposed approach offers an efficient monitoring alternative for large spatio-temporal regions when point count data are difficult to obtain or when monitoring is focused on rare species with low detection probability. | statistics |
Motivated by the theoretical interest in reconstructing long 3D trajectories of individual birds in large flocks, we developed CoMo, a co-moving camera system of two synchronized high-speed cameras coupled with rotational stages, which allows us to dynamically follow the motion of a target flock. With the rotation of the cameras we overcome the limitations of standard static systems, which restrict the duration of the collected data to the short interval of time in which targets are in the cameras' common field of view; but at the same time we change the external parameters of the system in time, and these then have to be calibrated frame by frame. We address the calibration of the external parameters by measuring the position of the cameras and their three angles of yaw, pitch and roll in the system "home" configuration (rotational stage at an angle equal to 0 deg) and combining this static information with the time-dependent rotation due to the stages. We evaluate the robustness and accuracy of the system by comparing reconstructed and measured 3D distances in what we call 3D tests, which show a relative error of the order of 1%. The novelty of the work presented in this paper is not only in the system itself, but also in the approach we use in the tests, which we show to be a very powerful tool in detecting and fixing calibration inaccuracies and that, for this reason, may be relevant to a broad audience. | computer science |
Observing charged Higgs bosons $H^\pm$, lighter or heavier than the top quark, would be instant evidence of physics beyond the Standard Model. For this reason, in recent years searches for charged Higgs bosons have been at the center of attention at current colliders such as the CERN Large Hadron Collider (LHC). In spite of all efforts, no signal has yet been observed. In particular, the results of the CMS and ATLAS experiments have excluded a large region of the MSSM $m_{H^+}-\tan\beta$ parameter space for $m_{H^+}=80-160$ GeV, corresponding to the entire range of $\tan\beta$ up to 60. Therefore, it seems that one should concentrate on probing heavy charged Higgs bosons ($m_{H^\pm}>m_t$), so in this context each new probing channel is welcome. In this work, we present our proposed channel to search for heavy charged Higgses through the study of the scaled-energy distribution of bottom-flavored mesons ($B$) inclusively produced in charged Higgs decay, i.e., $H^+\to t\bar{b}\to B+X$. Our study is carried out within the framework of the generic two-Higgs-doublet model (2HDM) using the massless scheme, where the zero-mass parton approximation is adopted for the bottom quark. | high energy physics phenomenology |
Somewhat unexpectedly, the study of the family of twisted knots revealed a hidden structure behind exclusive Racah matrices $\bar S$, which control non-associativity of the representation product in a peculiar channel $R\otimes \bar R \otimes R \longrightarrow R$. These $\bar S$ are simultaneously symmetric and orthogonal, and therefore admit two decompositions: as quadratic forms, $\bar S \sim {\cal E}^{tr}{\cal E}$, and as operators: $\bar T\bar S\bar T = S T^{-1} S^{-1}$. Here $\bar T$ and $T$ consist of the eigenvalues of the quantum ${\cal R}$-matrices in channels $R\otimes \bar R$ and $R\otimes R$ respectively, $S$ is the second exclusive Racah matrix for $\bar R\otimes R\otimes R \longrightarrow R$ (still orthogonal, but no longer symmetric), and ${\cal E}$ is a {\it triangular} matrix. It can be further used to construct the KNTZ evolution matrix ${\cal B}={\cal E}\bar T^2{\cal E}^{-1}$, which is also triangular and explicitly expressible through the skew Schur and Macdonald functions -- what makes Racah matrices calculable. Moreover, ${\cal B}$ is somewhat similar to Ruijsenaars Hamiltonian, which is used to define Macdonald functions, and gets triangular in the Schur basis. Discovery of this pentad structure $(\bar T,\bar S,S,{\cal E},{\cal B})$, associated with the universal ${\cal R}$-matrix, can lead to further insights about representation theory, knot invariants and Macdonald-Kerov functions. | high energy physics theory |
The GRAVITY instrument was commissioned on the VLTI during 2016 and is now available to the astronomical community. It is the first optical interferometer capable of observing sources as faint as magnitude 19 in the K band. This is possible thanks to the fringe tracker, which compensates the differential piston based on measurements of a brighter off-axis astronomical reference source. The goal of this paper is to document the main developments made in the context of the GRAVITY fringe tracker. These could serve as a basis for future fringe-tracking systems. The paper therefore covers all aspects of the fringe tracker, from hardware to control software and on-sky observations. Special emphasis is placed on the interaction between the group delay controller and the phase delay controller. The group delay control loop is a simple but robust integrator. The phase delay controller is a state-space control loop based on an auto-regressive representation of the atmospheric and vibrational perturbations. A Kalman filter provides optimal determination of the state of the system. The fringe tracker shows good tracking performance on sources with coherent K magnitudes of 11 on the UTs and 9.5 on the ATs. It can track fringes with an SNR level of 1.5 per DIT, limited by photon and background noise. On the ATs, during good seeing conditions, the optical path delay residuals can be as low as 75 nm root mean square. On the UTs, the performance is limited to around 250 nm because of structural vibrations. | astrophysics |
The solar atmosphere is full of complicated transients manifesting the reconfiguration of the solar magnetic field and plasma. Solar jets are collimated, beam-like plasma ejections; they are ubiquitous in the solar atmosphere and important for understanding solar activity at different scales, the magnetic reconnection process, particle acceleration, coronal heating, solar wind acceleration, and other related phenomena. Recent high spatiotemporal resolution, wide temperature coverage, spectroscopic, and stereoscopic observations taken by ground-based and space-borne solar telescopes have revealed many valuable new clues that constrain the development of theoretical models. This review aims to provide the reader with the main observational characteristics of solar jets, their physical interpretations and models, as well as outstanding unsolved questions for future studies. | astrophysics |
In this paper we study the finite groups in which every element has prime power order, briefly called EPPO-groups. The classification of EPPO-groups is given, including the cases of solvable, non-solvable and simple EPPO-groups. This paper was published in the Journal of Yunnan Education College, no. 1 (1986), pp. 2-10 (in Chinese). Translating it into English is helpful for readers who wish to cite some conclusions of this paper. For example, the result on solvable EPPO-groups (see Theorem 2.4 in the text) is more detailed than G. Higman's conclusion (see reference 1 in this paper). | mathematics |
We explore the possibility of systematically studying the extended, hot gaseous halos of low-redshift galaxies with Coronal Broad Ly alpha Absorbers (CBLAs). These are weak, thermally broadened HI absorption lines arising from the tiny fraction of neutral hydrogen that resides in the collisionally ionized, million-degree halo gas in these galaxies. Using a semi-analytic approach, we model the spatial density and temperature distribution of hot coronal gas to predict the strength, spectral shape, and cross section of CBLAs as a function of galaxy-halo mass and line-of-sight impact parameter. For virial halo masses in the range log (M/M_sun)=10.6-12.6, the characteristic logarithmic CBLA HI column densities and Doppler parameters are log N(HI)=12.4-13.4 and b(HI)=70-200 km/s, indicating that CBLAs represent weak, shallow spectral features that are difficult to detect. Yet, the expected number density of CBLAs per unit redshift in the given mass range is dN/dz(CBLA)~3, implying that CBLAs have a substantial absorption cross-section. We compare the model predictions with a combined set of ultraviolet (UV) absorption-line spectra from HST/COS and HST/STIS that trace the halos of four low-redshift galaxies. We demonstrate that CBLAs might already have been detected in these spectra, but the complex multi-component structure and the limited signal-to-noise ratio (S/N) complicate the interpretation of these CBLA candidate systems. Our study suggests that CBLAs represent a very interesting absorber class that will potentially allow us to further explore the hot coronae of galaxies with UV spectral data. | astrophysics |
We investigate the possibility of the direct detection of low-mass (GeV-scale) WIMP dark matter in scintillation experiments. Such WIMPs are typically too light to leave appreciable nuclear recoils, but may be detected via their scattering off atomic electrons. In particular, the DAMA Collaboration [R. Bernabei et al., Nucl. Phys. At. Energy 19, 307 (2018)] has recently presented strong evidence of an annual modulation in the scintillation rate observed at energies as low as 1 keV. Despite a strong enhancement in the calculated event rate at low energies, we find that an interpretation in terms of electron-interacting WIMPs cannot be consistent with existing constraints. We also demonstrate the importance of the correct treatment of the atomic wavefunctions, and show that the resulting event rate is very sensitive to the low-energy performance of the detectors, meaning it is crucial that the detector uncertainties be taken into account. Finally, we demonstrate that the potential scintillation event rate can be much larger than may otherwise be expected, meaning that competitive searches can be performed for GeV-scale WIMPs using conventional prompt S1 scintillation signals. This is important given the recent and upcoming very large liquid xenon detectors. | high energy physics phenomenology |
Motivation: Radiomics refers to the high-throughput mining of quantitative features from radiographic images. It is a promising field in that it may provide a non-invasive solution for screening and classification. Standard machine learning classification and feature selection techniques, however, tend to display inferior (stability of) predictive performance. This is due to the heavy multicollinearity present in radiomic data. We set out to provide an easy-to-use approach that deals with this problem. Results: We developed a four-step approach that projects the original high-dimensional feature space onto a lower-dimensional latent-feature space, while retaining most of the covariation in the data. It consists of (i) penalized maximum likelihood estimation of a redundancy-filtered correlation matrix. The resulting matrix (ii) is the input for a maximum likelihood factor analysis procedure. This two-stage maximum-likelihood approach can be used to (iii) produce a compact set of stable features that (iv) can be directly used in any (regression-based) classifier or predictor. It outperforms other classification (and feature selection) techniques in both external and internal validation settings regarding survival in squamous cell cancers. | statistics |
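The four steps map naturally onto standard tooling. The sketch below is an approximate analogue, not the authors' pipeline: Ledoit-Wolf shrinkage stands in for the penalized-ML, redundancy-filtered correlation estimate of step (i), and scikit-learn's FactorAnalysis, which fits the raw data rather than a pre-filtered correlation matrix, stands in for step (ii); the synthetic data and all settings are assumptions.

```python
import numpy as np
from sklearn.covariance import LedoitWolf
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p, k = 120, 200, 5                       # few samples, many collinear features
Z = rng.standard_normal((n, k))             # latent factors
W = rng.standard_normal((k, p))
X = Z @ W + 0.3 * rng.standard_normal((n, p))   # heavy multicollinearity
y = (Z[:, 0] + 0.5 * rng.standard_normal(n) > 0).astype(int)

# (i) regularized covariance/correlation estimate (shrinkage as a stand-in
#     for the penalized-ML redundancy filtering described in the abstract)
cov = LedoitWolf().fit(X)
d = np.sqrt(np.diag(cov.covariance_))
corr = cov.covariance_ / np.outer(d, d)
off = corr[~np.eye(p, dtype=bool)]
print("mean |off-diagonal correlation|:", np.abs(off).mean().round(3))

# (ii)-(iii) maximum-likelihood factor analysis -> compact latent features
fa = FactorAnalysis(n_components=k, rotation="varimax").fit(X)
scores = fa.transform(X)

# (iv) any regression-based classifier on the factor scores
acc = cross_val_score(LogisticRegression(max_iter=1000), scores, y, cv=5)
print("CV accuracy:", acc.mean().round(3))
```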
Distributed quantum computing has been well known for many years as a paradigm in which a computation is carried out by a number of small-capacity quantum circuits. Limitations in the capacity of monolithic quantum computing systems can be overcome by using distributed quantum systems which communicate with each other through known communication links. In our previous study, an algorithm with exponential complexity was proposed to optimize the number of qubit teleportations required for the communications between two partitions of a distributed quantum circuit. In this work, a genetic algorithm (GA) is used to solve the optimization problem in a more efficient way. The results are compared with the previous study, and we show that our approach performs almost the same, with a remarkable speed-up. Moreover, a comparison of the proposed GA-based approach with a random search over the search space verifies the effectiveness of the GA. | quantum physics |
We show theoretically and experimentally that a quantum cascade laser (QCL) in an external Fabry-Perot cavity (EC) can produce regular self-pulsations (SPs) of the output intensity at frequencies ~75 GHz. We recognize that the propagation delay in the EC provides the QCL with a "memory" mechanism that preserves the regularity and coherence of the pulse train on intervals significantly exceeding the sub-picosecond gain coherence and gain recovery times. These results may point to novel practical approaches for producing regular time-domain SPs and pulse trains in mid-IR QCLs. Otherwise, due to a multimode Risken-Nummedal-Graham-Haken (RNGH) instability, free-running FP QCLs exhibit SPs at Rabi flopping frequencies in the THz range, which, in addition, reveal quasi-periodic chaotic behavior if the cavity round-trip time is a few picoseconds or more. | physics |
Massive device connectivity in Internet of Things (IoT) networks with sporadic traffic poses significant communication challenges. To overcome this challenge, the serving base station is required to detect the active devices and estimate the corresponding channel state information during each coherence block. The corresponding joint activity detection and channel estimation problem can be formulated as a group sparse estimation problem, also known under the name "Group Lasso". This letter presents a fast and efficient distributed algorithm to solve such Group Lasso problems, which alternates between solving small-scale subproblems in parallel and dealing with a linear equation for consensus. Numerical results demonstrate the speedup of this algorithm compared with state-of-the-art methods in terms of convergence speed and computation time. | electrical engineering and systems science |
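For orientation, the Group Lasso objective and its basic single-machine ADMM updates look as follows; the distributed, consensus-based variant the letter proposes is not reproduced here, and the toy activity-detection setup (group sizes, penalty, noise level) is an assumption.

```python
import numpy as np

def block_soft(v, t):
    """Prox of t*||.||_2: shrinks the whole block toward zero."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1.0 - t / nrm) * v

def group_lasso_admm(A, y, groups, lam=1.0, rho=1.0, n_iter=300):
    """ADMM for  min_x 0.5*||Ax - y||^2 + lam * sum_g ||x_g||_2 ."""
    n = A.shape[1]
    Aty = A.T @ y
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))      # factor once, reuse
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Aty + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # ridge-like solve
        z = np.concatenate([block_soft(x[g] + u[g], lam / rho) for g in groups])
        u = u + x - z                                      # dual update
    return z

# Toy IoT-flavored example: 10 "devices" (groups of 4), only 2 active.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
groups = [np.arange(4 * g, 4 * (g + 1)) for g in range(10)]
x_true = np.zeros(40); x_true[groups[0]] = 1.0; x_true[groups[5]] = -1.0
y = A @ x_true + 0.05 * rng.standard_normal(80)
z = group_lasso_admm(A, y, groups, lam=1.0)
print("active groups:", [i for i, g in enumerate(groups) if np.linalg.norm(z[g]) > 1e-3])
```

The blockwise soft-thresholding in the z-update is what zeros out entire groups at once, which is exactly the behavior that maps to detecting inactive devices.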
The novel coronavirus disease (COVID-19) spread rapidly across the world in a short period of time and with a heterogeneous pattern. Understanding the underlying temporal and spatial dynamics of the spread of COVID-19 can result in informed and timely public health policies. In this paper, we use a spatio-temporal stochastic model to explain the temporal and spatial variations in the daily number of new confirmed cases in Spain, Italy and Germany from late February to mid-September 2020. Using a hierarchical Bayesian framework, we found that the epidemics in the three countries rapidly reached their peaks and slowly started to decline at the beginning of April, and then increased again and reached their second maxima in August. However, the decline and increase of the temporal trend seem to be sharper in Spain and smoother in Germany. The spatial heterogeneity of the relative risk of COVID-19 in Spain is also more pronounced than in Italy and Germany. | statistics |
Inferring the correct answers to binary tasks based on multiple noisy answers in an unsupervised manner has emerged as the canonical question for micro-task crowdsourcing or, more generally, aggregating opinions. In graphon estimation, one is interested in estimating edge intensities or probabilities between nodes using a single snapshot of a graph realization. In the recent literature, there have been exciting developments in both of these topics. In the context of crowdsourcing, the key intellectual challenge is to understand whether a given task can be more accurately denoised by aggregating answers collected from other, different tasks. In the context of graphon estimation, precise information limits and estimation algorithms remain of interest. In this paper, we utilize a statistical reduction from crowdsourcing to graphon estimation to advance the state of the art for both of these challenges. We use concepts from graphon estimation to design an algorithm that achieves better performance than the {\em majority voting} scheme for a setup that goes beyond the {\em rank one} models considered in the literature. We use known explicit lower bounds for crowdsourcing to provide refined lower bounds for graphon estimation. | statistics |
This study explores the validity of chain effects of clean water, known as the "Mills-Reincke phenomenon," in early twentieth-century Japan. Recent studies have reported that water purification systems made substantial contributions to human capital. Although some studies have investigated the instantaneous effects of water-supply systems in pre-war Japan, little is known about the chain effects of these systems. By analyzing city-level cause-specific mortality data from 1922 to 1940, we find that a decline in typhoid deaths of one per 1,000 people decreased the risk of death due to non-waterborne diseases such as tuberculosis and pneumonia by 0.742-2.942 per 1,000 people. Our finding suggests that the observed Mills-Reincke phenomenon could have contributed to the relatively rapid decline in the mortality rate in early twentieth-century Japan. | statistics
Using a Tanaka representation of the local time for a class of superprocesses with dependent spatial motion, as well as sharp estimates from the theory of uniformly parabolic partial differential equations, the joint H\"older continuity in time and space of said local times is obtained in two- and three-dimensional Euclidean space. | mathematics
We generalize the recently discovered relationship between JT gravity and double-scaled random matrix theory to the case that the boundary theory may have time-reversal symmetry and may have fermions with or without supersymmetry. The matching between variants of JT gravity and matrix ensembles depends on the assumed symmetries. Time-reversal symmetry in the boundary theory means that unorientable spacetimes must be considered in the bulk. In such a case, the partition function of JT gravity is still related to the volume of the moduli space of conformal structures, but this volume has a quantum correction and has to be computed using Reidemeister-Ray-Singer "torsion." The presence of fermions in the boundary theory (and thus a symmetry $(-1)^F$) means that the bulk has a spin or pin structure. Supersymmetry in the boundary theory means that the bulk theory is associated to JT supergravity and is related to the volume of the moduli space of super Riemann surfaces rather than of ordinary Riemann surfaces. In all cases we match JT gravity or supergravity with an appropriate random matrix ensemble. All ten standard random matrix ensembles make an appearance -- the three Dyson ensembles and the seven Altland-Zirnbauer ensembles. To facilitate the analysis, we extend to the other ensembles techniques that are most familiar in the case of the original Wigner-Dyson ensemble of hermitian matrices. We also generalize Mirzakhani's recursion for the volumes of ordinary moduli space to the case of super Riemann surfaces. | high energy physics theory
Neutrinos can gain mass from coupling to an ultralight field in slow roll. When such a field is displaced from its minimum, its vev acts just like the Higgs vev in spontaneous symmetry breaking. Although these masses may eventually vanish, they do so over a very long time. The theory is technically natural, with the ultralight field-dependent part being the right-handed Majorana mass. The mass variation induced by the field correlates with the cosmological evolution. The change of the mass term changes the mixing matrix, and therefore suppresses the fraction of sterile neutrinos at earlier times and increases it at later times. Since the issue of quantum gravity corrections to field theories with large field variations remains open, this framework may give an observational handle on the Weak Gravity Conjecture. | high energy physics phenomenology
We study quantum statistical inference tasks of hypothesis testing and their canonical variations, in order to review relations between their corresponding figures of merit---measures of statistical distance---and demonstrate the crucial differences which arise in the quantum regime in contrast to the classical setting. In our analysis, we primarily focus on the geometric approach to data inference problems, within which the aforementioned measures can be neatly interpreted as particular forms of divergences that quantify distances in the space of probability distributions or, when dealing with quantum systems, of density matrices. Moreover, with the help of the standard language of Riemannian geometry, we identify both the metrics such divergences must induce and the relations such metrics must then naturally inherit. Finally, we discuss exemplary applications of such a geometric approach to problems of quantum parameter estimation, "speed limits" and thermodynamics. | quantum physics
In this note we study chaos in generic quantum systems with a global symmetry, generalizing the seminal work [arXiv:1503.01409] by Maldacena, Shenker and Stanford. We conjecture a bound on the instantaneous chaos exponent in a thermodynamic ensemble at temperature $T$ and chemical potential $\mu$ for the continuous global symmetry under consideration. For local operators which can create excitations only up to some fixed charge, the bound on the chaos (Lyapunov) exponent is independent of the chemical potential: $\lambda_L \leq \frac{2 \pi T}{ \hbar} $. On the other hand, when the operators can create excitations of arbitrarily high charge, we find that the exponent must satisfy $\lambda_L \leq \frac{2 \pi T}{(1-|\frac{\mu}{\mu_c}|) \hbar} $, where $\mu_c$ is the maximum value of the chemical potential for which the thermodynamic ensemble makes sense. As specific examples of quantum mechanical systems we consider conformal field theories. In a generic conformal field theory with internal $U(1)$ symmetry living on a cylinder the former bound applies, whereas in the more interesting examples of holographic two-dimensional conformal field theories dual to Einstein gravity, we argue that the latter bound is saturated in the presence of a non-zero chemical potential for rotation. | high energy physics theory
It is shown that quantum mechanics is a plausible statistical description of an ontology described by classical electrodynamics. The reason that no contradiction arises with various no-go theorems regarding the compatibility of QM with a classical ontology can be traced to the fact that classical electrodynamics of interacting particles has never been given a consistent definition. Once this is done, our conjecture follows rather naturally, including a purely classical explanation of photon-related phenomena. Our analysis rests entirely on the block-universe view entailed by relativity theory. | quantum physics
In a series of previous papers, we have presented a new approach, based on perturbative QCD, for the evolution of a jet in a dense quark-gluon plasma. In the original formulation, the plasma was assumed to be homogeneous and static. In this work, we extend our description and its Monte Carlo implementation to a plasma obeying Bjorken longitudinal expansion. Our key observation is that the factorisation between vacuum-like and medium-induced emissions, derived in the static case, still holds for an expanding medium, albeit with modified rates for medium-induced emissions and transverse momentum broadening, and with a modified phase-space for vacuum-like emissions. We highlight a scaling relation valid for the energy spectrum of medium-induced emissions, through which the case of an expanding medium is mapped onto an effective static medium. We find that scaling violations due to vacuum-like emissions and transverse momentum broadening are numerically small. Our new predictions for the nuclear modification factor for jets $R_{AA}$, the in-medium fragmentation functions, and substructure distributions are very similar to our previous estimates for a static medium, maintaining the overall good qualitative agreement with existing LHC measurements. In the case of $R_{AA}$, we find that the agreement with the data is significantly improved at large transverse momenta $p_T\gtrsim 500$ GeV after including the effects of the nuclear parton distribution functions. | high energy physics phenomenology |
In this article we study the tail probability of the mass of critical Gaussian multiplicative chaos (GMC) associated to a general class of log-correlated Gaussian fields in any dimension, including the Gaussian free field (GFF) in dimension two. More precisely, we derive a fully explicit formula for the leading order asymptotics for the tail probability and demonstrate a new universality phenomenon. Our analysis here shares similar philosophy with the subcritical case but requires a different approach due to complications in the analogous localisation step, and we also employ techniques from recent studies of fusion estimates in GMC theory. | mathematics |
In 1999, Katona and Kierstead conjectured that if a $k$-uniform hypergraph $\cal H$ on $n$ vertices has minimum co-degree $\lfloor \frac{n-k+3}{2}\rfloor$, i.e., each set of $k-1$ vertices is contained in at least $\lfloor \frac{n-k+3}{2}\rfloor$ edges, then it has a Hamiltonian cycle. R\"{o}dl, Ruci\'{n}ski and Szemer\'{e}di in 2011 proved that the conjecture is true when $k=3$ and $n$ is large. We show that this Katona-Kierstead conjecture holds if $k=4$, $n$ is large, and $V({\cal H})$ has a partition $A$, $B$ such that $|A|=\lceil n/2\rceil$, $|\{e\in E({\cal H}):|e \cap A|=2\}| <\epsilon n^4$. | mathematics |
We theoretically investigate a Bose-Einstein condensate confined by a rotating harmonic trap whose rotation axis is not aligned with any of its principal axes. The principal axes of the Thomas-Fermi density profiles of the resulting stationary solutions are found to be tilted with respect to those of the rotating trap, representing an extra degree of freedom that is associated with the existence of additional branches of stationary solutions for any given rotation axis alignment. By linearizing the time-dependent theory about the stationary states, we obtain a semi-analytical prediction of their dynamical instability at high rotation frequencies against collective modes arising from environmental perturbations. Comparing the stationary states to direct simulations of the Gross-Pitaevskii equation, we predict the nucleation of quantum vortices in the dynamically unstable rotational regime. These vortex lines are aligned along the rotation axis despite the tilting of the rotating trap although the background density profile is tilted with respect to the trapping and rotation axes. | condensed matter |
We study the phenomenology of a strongly-interacting top quark at future hadron and lepton colliders, showing that the characteristic four-top contact operators give rise to the most significant effects. We demonstrate the extraordinary potential of a 100 TeV proton-proton collider to directly test such non-standard interactions in four-top production, a process that we thoroughly analyze in the same-sign dilepton and trilepton channels, and explore in the fully hadronic channel. Furthermore, high-energy electron-positron colliders, such as CLIC or the ILC, are shown to exhibit an indirect yet remarkable sensitivity to four-top operators, since these constitute, via renormalization group evolution, the leading new-physics deformations in top-quark pair production. We investigate the impact of our results on the parameter space of composite Higgs models with a strongly-coupled (right-handed) top quark, finding that four-top probes provide the best sensitivity on the compositeness scale at the future energy frontier. In addition, we investigate mild yet persisting LHC excesses in multilepton plus jets final states, showing that they can be consistently described in the effective field theory of such a new-physics scenario. | high energy physics phenomenology |
Quantum light sources are characterized by their distinctive statistical distribution of photons. For example, single photons and correlated photon pairs exhibit antibunching and reduced variance in the number distribution that is impossible with classical light. Most common realizations of quantum light sources have relied on spontaneous parametric processes such as down-conversion (SPDC) and four-wave mixing (SFWM). These processes are mediated by vacuum fluctuations of the electromagnetic field. Therefore, by manipulating the electromagnetic mode structure, for example, using nanophotonic systems, one can engineer the spectrum of generated photons. However, such manipulations are susceptible to fabrication disorders which are ubiquitous in nanophotonic systems and lead to device-to-device variations in the spectrum of generated photons. Here, we demonstrate topologically robust mode engineering of the electromagnetic vacuum fluctuations and implement a nanophotonic quantum light source where the spectrum of generated photons is robust against fabrication disorders. Specifically, we use the topological edge states to achieve an enhanced and robust generation of correlated photon pairs using SFWM and show that they outperform their topologically-trivial counterparts. We demonstrate the non-classical nature of our source using conditional antibunching of photons which confirms that we have realized a robust source of heralded single photons. Such topological effects, which are unique to bosonic systems, could pave the way for the development of robust quantum photonic devices. | physics |
We collected the experimental data of transverse momentum spectra of identified particles produced in proton-proton ($p$-$p$), deuteron-gold ($d$-Au or $d$-$A$), gold-gold (Au-Au or $A$-$A$), proton-lead ($p$-Pb or $p$-$A$), and lead-lead (Pb-Pb or $A$-$A$) collisions measured by the ALICE, CMS, LHCb, NA49, NA61/SHINE, PHENIX, and STAR collaborations at different center-of-mass energies. The multisource thermal model at the quark level, or the participant quark model, is used to describe the experimental data. The free parameters, namely the effective temperature $T$, the entropy-index-related parameter $n$, and the revised index $a_{0}$, in the revised Tsallis--Pareto-type function are extracted at the quark level. In most cases, $T$ and $n$ in central collisions are larger than those in peripheral collisions, and $a_0$ does not change across centrality classes. With increasing mass of the produced particle or participant quark, $T$ and $a_0$ increase, while $n$ does not change significantly. The behaviors of the related parameters in $p$-$p$, $p(d)$-$A$, and $A$-$A$ collisions are similar. | high energy physics phenomenology
We present an algorithm to compute stabilizing minimum dwell times for discrete-time switched linear systems without the explicit knowledge of state-space models of their subsystems. Given a set of finite traces of state trajectories of the subsystems that satisfies certain properties, our algorithm involves the following tasks: first, multiple Lyapunov functions are designed from the given data; second, a set of relevant scalars is computed from these functions; and third, a stabilizing minimum dwell time is determined as a function of these scalars. A numerical example is presented to demonstrate the proposed algorithm. | electrical engineering and systems science |
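The last step of such a procedure, turning Lyapunov-based scalars into a dwell time, can be sketched as follows. This is the standard multiple-Lyapunov-function bound rather than the paper's data-driven construction; the decay factors `lams` and the cross-mode bound `mu` are assumed to have been estimated beforehand.

```python
import numpy as np

def min_dwell_time(lams, mu):
    """Smallest integer tau with mu * max(lams)**tau < 1.

    lams: per-mode one-step decay factors of the Lyapunov functions (< 1);
    mu:   cross-mode bound, V_i(x) <= mu * V_j(x) for all modes i, j.
    """
    lam = max(lams)
    assert 0.0 < lam < 1.0 and mu >= 1.0
    return int(np.floor(np.log(mu) / np.log(1.0 / lam))) + 1

# Two modes whose Lyapunov functions decay by factors 0.8 and 0.9 per step:
print(min_dwell_time([0.8, 0.9], mu=2.0))  # -> 7
```

Staying at least `tau` steps in each mode guarantees that the worst-case growth `mu` at a switch is dominated by the accumulated decay `lam**tau`, giving stability.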
Performing exact Bayesian inference for complex models is computationally intractable. Markov chain Monte Carlo (MCMC) algorithms can provide reliable approximations of the posterior distribution but are expensive for large datasets and high-dimensional models. A standard approach to mitigate this complexity consists in using subsampling techniques or distributing the data across a cluster. However, these approaches are typically unreliable in high-dimensional scenarios. We focus here on a recent alternative class of MCMC schemes exploiting a splitting strategy akin to the one used by the celebrated ADMM optimization algorithm. These methods appear to provide empirically state-of-the-art performance but their theoretical behavior in high dimension is currently unknown. In this paper, we propose a detailed theoretical study of one of these algorithms known as the split Gibbs sampler. Under regularity conditions, we establish explicit convergence rates for this scheme using Ricci curvature and coupling ideas. We support our theory with numerical illustrations. | statistics |
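To make the splitting idea concrete, here is a minimal one-dimensional split Gibbs sampler for a target $\pi(x) \propto e^{-f(x)-g(x)}$ with both potentials Gaussian, so that the two conditionals are exact Gaussians. The coupling parameter `rho` and the toy potentials are illustrative assumptions, not tied to the paper's experiments.

```python
import numpy as np

def split_gibbs(m1, s1, m2, s2, rho=0.1, n_samples=20000, seed=0):
    """Split Gibbs sampler for pi(x) ~ exp(-f(x) - g(x)) with Gaussian
    potentials f(x) = (x-m1)^2/(2*s1^2) and g(x) = (x-m2)^2/(2*s2^2).

    The splitting introduces an auxiliary z coupled to x through
    exp(-(x-z)^2/(2*rho^2)) and alternates two exact Gaussian conditionals.
    """
    rng = np.random.default_rng(seed)
    x = z = 0.0
    out = np.empty(n_samples)
    for t in range(n_samples):
        p = 1 / s1**2 + 1 / rho**2           # precision of x | z
        x = rng.normal((m1 / s1**2 + z / rho**2) / p, p**-0.5)
        q = 1 / s2**2 + 1 / rho**2           # precision of z | x
        z = rng.normal((m2 / s2**2 + x / rho**2) / q, q**-0.5)
        out[t] = x
    return out

samples = split_gibbs(m1=0.0, s1=1.0, m2=4.0, s2=2.0)
# Exact target here is N(0.8, 0.8); a small rho keeps the bias small.
print(samples.mean(), samples.var())
```

Shrinking `rho` brings the x-marginal of the augmented chain closer to the exact posterior at the cost of slower mixing, a trade-off of the kind the paper's convergence analysis addresses.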
Analysis of ac electrical systems can be performed via frame transformations in the time-domain or via harmonic transfer functions (HTFs) in the frequency-domain. The two approaches each have unique advantages but are hard to reconcile because the coupling effect in the frequency-domain leads to infinite dimensional HTF matrices that need to be truncated. This paper explores the relation between the two representations and shows that applying a similarity transformation to an HTF matrix creates a direct equivalence to a frame transformation on the input-output signals. Under certain conditions, such similarity transformations have a diagonalizing effect which, essentially, reduces the HTF matrix order from infinity to two or one, making the matrix tractable mathematically without truncation or approximation. This theory is applied to a droop-controlled voltage source inverter as an illustrative example. A stability criterion is derived in the frequency-domain which agrees with the conventional state-space model but offers greater insights into the mechanism of instability in terms of the negative damping (non-passivity) under droop control. The paper not only establishes a unified view in theory but also offers an effective practical tool for stability assessment. | electrical engineering and systems science |
We present pseudospectral direct-numerical-simulation (DNS) studies of the three-dimensional magnetohydrodynamic (MHD) equations (3DRFMHD) with a stochastic force that has zero mean and a variance $\sim k^{-3}$, where $k$ is the wavenumber, because 3DRFMHD is used in field-theoretic studies of the scaling of energy spectra in MHD turbulence. We obtain velocity and magnetic-field spectra and structure functions and, from these, the multiscaling exponent ratios $\zeta_p/\zeta_3$, by using the extended self similarity (ESS) procedure. These exponent ratios lie within error bars of their counterparts for conventional three-dimensional MHD turbulence (3DMHD). We then carry out a systematic comparison of the statistical properties of 3DMHD and 3DRFMHD turbulence by examining various probability distribution functions (PDFs), joint PDFs, and isosurfaces of, e.g., the moduli of the vorticity and the current density for three magnetic Prandtl numbers ${\rm Pr_M} = 0.1,~1$, and $10$. | physics
We compute general higher-point functions in the sector of large charge operators $\phi^n$, $\bar\phi^n$ at large charge in $O(2)$ $(\bar \phi\phi)^2$ theory. We find that there is a special class of "extremal" correlators having only one insertion of $\bar \phi^n$ that have a remarkably simple form in the double-scaling limit $n\to \infty $ at fixed $g\,n^2\equiv \lambda$, where $g\sim\epsilon $ is the coupling at the $O(2)$ Wilson-Fisher fixed point in $4-\epsilon$ dimensions. In this limit, also non-extremal correlators can be computed. As an example, we give the complete formula for $ \langle \phi(x_1)^{n}\,\phi(x_2)^{n}\,\bar{\phi}(x_3)^{n}\,\bar{\phi}(x_4)^{n}\rangle$, which reveals an interesting structure. | high energy physics theory |
The $\theta$-deformed Hopf fibration $\mathbb{S}^3_\theta\to \mathbb{S}^2$ over the commutative $2$-sphere is compared with its classical counterpart. It is shown that there exists a natural isomorphism between the corresponding associated module functors and that the affine spaces of classical and deformed connections are isomorphic. The latter isomorphism is equivariant under an appropriate notion of infinitesimal gauge transformations in these contexts. Gauge transformations and connections on associated modules are studied and are shown to be sensitive to the deformation parameter. A homotopy theoretic explanation for the existence of a close relationship between the classical and deformed Hopf fibrations is proposed. | mathematics |
We present a quantum algorithm to solve dynamic programming problems with convex value functions. For linear discrete-time systems with a $d$-dimensional state space of size $N$, the proposed algorithm outputs a quantum-mechanical representation of the value function in time $O(T \gamma^{dT}\mathrm{polylog}(N,(T/\varepsilon)^{d}))$, where $\varepsilon$ is the accuracy of the solution, $T$ is the time horizon, and $\gamma$ is a problem-specific parameter depending on the condition numbers of the cost functions. This allows us to evaluate the value function at any fixed state in time $O(T \gamma^{dT}\sqrt{N}\,\mathrm{polylog}(N,(T/\varepsilon)^{d}))$, and the corresponding optimal action can be recovered by solving a convex program. The class of optimization problems to which our algorithm can be applied includes provably hard stochastic dynamic programs. Finally, we show that the algorithm obtains a quadratic speedup (up to polylogarithmic factors) compared to the classical Bellman approach on some dynamic programs with continuous state space that have $\gamma=1$. | quantum physics |
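For contrast with the quantum routine, the classical Bellman backward recursion it is benchmarked against looks as follows on a discretized one-dimensional problem. The grid sizes, dynamics, and costs below are arbitrary demo choices, not the paper's setup.

```python
import numpy as np

# Minimal classical backward Bellman recursion on a discretized 1-d state
# space -- the O(T * N * |A|) baseline a quantum algorithm competes with.
N, T = 101, 20
xs = np.linspace(-1.0, 1.0, N)             # state grid
dx = xs[1] - xs[0]
acts = np.linspace(-0.2, 0.2, 21)          # action grid
V = xs**2                                  # convex terminal cost V_T(x) = x^2

for _ in range(T):
    Vnew = np.empty(N)
    for i, x in enumerate(xs):
        nxt = np.clip(x + acts, -1.0, 1.0)            # dynamics x' = x + a
        idx = np.rint((nxt - xs[0]) / dx).astype(int) # snap to nearest grid point
        cost = x**2 + 10.0 * acts**2 + V[idx]         # stage cost + cost-to-go
        Vnew[i] = cost.min()                          # Bellman minimization
    V = Vnew

print("optimal cost-to-go from x = 0:", V[N // 2])
```

Each backward sweep touches every state-action pair, which is exactly the cost structure the quantum representation of the value function aims to compress.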
We report a comprehensive study of low-power, octave-bandwidth, single-soliton microresonator frequency combs in both the 1550 nm and 1064 nm bands. Our experiments utilize fully integrated silicon-nitride Kerr microresonators, and we demonstrate direct soliton generation with widely available distributed-Bragg-reflector lasers that provide less than 40 mW of chip-coupled laser power. We report measurements of soliton thermal dynamics and demonstrate how rapid laser-frequency control, consistent with the thermal timescale of a microresonator, facilitates stabilization of octave-bandwidth soliton combs. Moreover, since soliton combs are completely described by fundamental linear and nonlinear dynamics of the intraresonator field, we demonstrate the close connection between modeling and generation of octave-bandwidth combs. Our experiments advance the development of self-referenced frequency combs with integrated-photonics technology, and comb-laser sources with tens of terahertz pulse bandwidth across the near-infrared. | physics |
We classify stacky curves in characteristic $p > 0$ with cyclic stabilizers of order $p$ using higher ramification data. This approach replaces the local root stack structure of a tame stacky curve, similar to the local structure of a complex orbifold curve, with a more sensitive structure called an Artin-Schreier root stack, allowing us to incorporate this ramification data directly into the stack. As an application, we compute dimensions of Riemann-Roch spaces for some examples of stacky curves in positive characteristic and suggest a program for computing spaces of modular forms in this setting. | mathematics |
Whether any OB stars form in isolation is a question central to theories of massive star formation. To address this, we search for tiny, sparse clusters around 210 field OB stars from the Runaways and Isolated O-Type Star Spectroscopic Survey of the SMC (RIOTS4), using friends-of-friends (FOF) and nearest neighbors (NN) algorithms. We also stack the target fields to evaluate the presence of an aggregate density enhancement. Using several statistical tests, we compare these observations with three random-field datasets, and we also compare the known runaways to non-runaways. We find that the local environments of non-runaways show higher aggregate central densities than for runaways, implying the presence of some "tips-of-iceberg" (TIB) clusters. We find that the frequency of these tiny clusters is low, $\sim 4-5\%$ of our sample. This fraction is much lower than some previous estimates, but is consistent with field OB stars being almost entirely runaway and walkaway stars. The lack of TIB clusters implies that such objects either evaporate on short timescales, or do not form, implying a higher cluster lower-mass limit and consistent with a relationship between maximum stellar mass ($m_{\rm max}$) and the mass of the cluster ($M_{\rm cl}$). On the other hand, we also cannot rule out that some OB stars may form in highly isolated conditions. Our results set strong constraints on the formation of massive stars in relative isolation. | astrophysics |
In this paper, the problem of minimizing energy and time consumption for task computation and transmission is studied in a mobile edge computing (MEC)-enabled balloon network. In the considered network, each user needs to process a computational task in each time instant, where high-altitude balloons (HABs), acting as flying wireless base stations, can use their powerful computational abilities to process the tasks offloaded from their associated users. Since the data size of each user's computational task varies over time, the HABs must dynamically adjust the user association, service sequence, and task partition scheme to meet the users' needs. This problem is posed as an optimization problem whose goal is to minimize the energy and time consumption for task computing and transmission by adjusting the user association, service sequence, and task allocation scheme. To solve this problem, a support vector machine (SVM)-based federated learning (FL) algorithm is proposed to determine the user association proactively. The proposed SVM-based FL method enables each HAB to cooperatively build an SVM model that can determine all user associations without any transmissions of either user historical associations or computational tasks to other HABs. Given the prediction of the optimal user association, the service sequence and task allocation of each user can be optimized so as to minimize the weighted sum of the energy and time consumption. Simulations with real data of city cellular traffic from the OMNILab at Shanghai Jiao Tong University show that the proposed algorithm can reduce the weighted sum of the energy and time consumption of all users by up to 16.1% compared to a conventional centralized method. | electrical engineering and systems science |
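A heavily simplified, one-shot version of the federated SVM idea (each HAB fits a local model and only parameters, never raw data or association histories, are aggregated) might look like the sketch below. The synthetic data generator, feature dimension, and single averaging round are assumptions for the demo; the paper's actual FL scheme is iterative and tailored to user association.

```python
import numpy as np
from sklearn.svm import LinearSVC

def federated_linear_svm(client_data):
    """One-shot parameter averaging of locally trained linear SVMs."""
    ws, bs = [], []
    for X, y in client_data:
        clf = LinearSVC(C=1.0, dual=False).fit(X, y)  # local training only
        ws.append(clf.coef_.ravel())
        bs.append(clf.intercept_[0])
    return np.mean(ws, axis=0), np.mean(bs)  # server-side aggregation

rng = np.random.default_rng(1)
def make_client(n=200):  # hypothetical per-HAB training set
    X = rng.standard_normal((n, 2))
    y = (X @ np.array([1.0, -0.5]) > 0).astype(int)
    return X, y

w, b = federated_linear_svm([make_client(), make_client()])
print(w, b)
```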
For an element $\Psi$ in the graded vector space $\Omega^*(M, TM)$ of tangent bundle valued forms on a smooth manifold $M$, a $\Psi$-submanifold is defined as a submanifold $N$ of $M$ such that $\Psi_{|N} \in \Omega^*(N, TN)$. The class of $\Psi$-submanifolds encompasses calibrated submanifolds, complex submanifolds and all Lie subgroups in compact Lie groups. The graded vector space $\Omega^*(M, TM)$ carries a natural graded Lie algebra structure, given by the Fr\"olicher-Nijenhuis bracket $[-,- ]^{FN}$. When $\Psi$ is an odd degree element with $[ \Psi, \Psi]^{FN} =0$, we associate to a $\Psi$-submanifold $N$ a strongly homotopy Lie algebra, which governs the formal and (under certain assumptions) smooth deformations of $N$ as a $\Psi$-submanifold, and we show that under certain assumptions these deformations form an analytic variety. As an application we revisit formal and smooth deformation theory of complex closed submanifolds and of $\varphi$-calibrated closed submanifolds, where $\varphi$ is a parallel form in a real analytic Riemannian manifold. | mathematics |
Symptom checkers have emerged as an important tool for collecting symptoms and diagnosing patients, minimizing the involvement of clinical personnel. We developed a machine-learning-backed system, SmartTriage, which goes beyond conventional symptom checking through a tight bi-directional integration with the electronic medical record (EMR). Conditioned on EMR-derived patient history, our system identifies the patient's chief complaint from a free-text entry and then asks a series of discrete questions to obtain relevant symptomatology. The patient-specific data are used to predict detailed ICD-10-CM codes as well as medication, laboratory, and imaging orders. Patient responses and clinical decision support (CDS) predictions are then inserted back into the EMR. To train the machine learning components of SmartTriage, we employed novel data sets of over 25 million primary care encounters and 1 million patient free-text reason-for-visit entries. These data sets were used to construct: (1) a long short-term memory (LSTM) based patient history representation, (2) a fine-tuned transformer model for chief complaint extraction, (3) a random forest model for question sequencing, and (4) a feed-forward network for CDS predictions. We also present the full production architecture for the pilot deployment of SmartTriage that covers 337 patient chief complaints. | computer science |
A manifestly covariant equation is derived to describe the second-order perturbations in topological defects and membranes on arbitrary curved background spacetimes. On one hand, this generalizes work on macroscopic strings in Minkowski spacetime and provides a framework for studying, in a precise manner, the behavior of membranes near a black hole horizon; on the other hand, it introduces a more general framework for examining the stability of topological defects in curved spacetimes. | high energy physics theory
The Toffoli gate, serving as a basic building block for reversible quantum computation, has manifested great potential for improving the error tolerance in quantum communication. The current route to the creation of a Toffoli gate, however, requires implementing sequential single- and two-qubit gates, and is limited by long operation times and low average fidelity. We develop a new theoretical protocol to construct a universal $(n+1)$-qubit Toffoli gate sphere based on the Rydberg blockade mechanism, by constraining the behavior of one central target atom with $n$ surrounding control atoms. Its merit lies in the use of only five $\pi$ pulses independent of the control atom number $n$, which leads to an overall gate time as fast as $\sim$125$n$s and an average fidelity close to 0.999. The maximal filling number of control atoms can be up to $n=46$, determined by the spherical diameter, which is equal to the blockade radius, as well as by the nearest-neighbor spacing between two trapped-atom lattices. Taking $n=2,3,4$ as examples, we comparatively show the gate performance with experimentally accessible parameters, and confirm that the gate errors mainly stem from the imperfect blockade strength, spontaneous atomic loss, and imperfect ground-state preparation. In contrast to a one-dimensional-array configuration, it is remarkable that the spherical atomic sample preserves a high-fidelity output as $n$ increases, shedding light on the study of scalable quantum simulation and entanglement with multiple neutral atoms. | physics
We show that there exists a positive arithmetical formula $\psi(x,R)$, where $x \in \omega$, $R \subseteq \omega$, with no hyperarithmetical fixed point. This answers a question of Gerhard Jaeger. As corollaries we obtain results on: (a) the proof-theoretic strength of the Kripke-Platek set theory; (b) fixed points of monotone functions in complete partial orders; (c) the uniformization of Borel sets; and (d) hyperdegrees of fixed points of positive formulae. | mathematics
This work is devoted to the discussion and characterization of the tensor $2^{-(-)}$ meson spectrum, making use of the Coulomb gauge Hamiltonian approach to QCD, with the interactions given by an improved confining potential and a transverse hyperfine interaction whose kernel is a Yukawa-type potential. Our aim is to study the basic features of $2^{-(-)}$ mesons within a unified framework across the whole range of quark masses. We concentrate our investigation on predictions of expected but yet-unobserved ground states of unflavored light mesons and on charmonium and bottomonium states. The numerical results are compared with the existing literature. | high energy physics phenomenology
Detecting traveling photons is an essential primitive for many quantum information processing tasks. We introduce a single-photon detector design operating in the microwave domain, based on a weakly nonlinear metamaterial where the nonlinearity is provided by a large number of Josephson junctions. The combination of weak nonlinearity and large spatial extent circumvents well-known obstacles limiting approaches based on a localized Kerr medium. Using numerical many-body simulations we show that the single-photon detection fidelity increases with the length of the metamaterial to approach one at experimentally realistic lengths. A remarkable feature of the detector is that the metamaterial approach allows for a large detection bandwidth. In stark contrast to conventional photon detectors operating in the optical domain, the photon is not destroyed by the detection and the photon wavepacket is minimally disturbed. The detector design we introduce offers new possibilities for quantum information processing, quantum optics and metrology in the microwave frequency domain. | quantum physics |
Given two pairs of quantum states, a fundamental question in the resource theory of asymmetric distinguishability is to determine whether there exists a quantum channel converting one pair to the other. In this work, we reframe this question in such a way that a catalyst can be used to help perform the transformation, with the only constraint on the catalyst being that its reduced state is returned unchanged, so that it can be used again to assist a future transformation. What we find here, for the special case in which the states in a given pair are commuting, and thus quasi-classical, is that this catalytic transformation can be performed if and only if the relative entropy of one pair of states is larger than that of the other pair. This result endows the relative entropy with a fundamental operational meaning that goes beyond its traditional interpretation in the setting of independent and identical resources. Our finding thus has an immediate application and interpretation in the resource theory of asymmetric distinguishability, and we expect it to find application in other domains. | quantum physics |
New methodologies are designed to reduce the time complexity of an implicit discrete-time differentiator and the simulation time to implement it. They rely on Horner's method and the Shaw-Traub algorithm. The algorithms are compared for differentiators of orders 3, 7, and 10. The Half-Horner and Full-Horner methods showed the best performance and time complexity. | electrical engineering and systems science |
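For readers unfamiliar with it, Horner's method evaluates a degree-$n$ polynomial with $n$ multiplications and $n$ additions, fewer operations than naive term-by-term evaluation, which is the source of the complexity savings discussed above. The snippet below is the generic scheme, not the paper's differentiator-specific variant.

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n; coeffs[k] multiplies x^k."""
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * x + a  # one multiply-add per coefficient
    return acc

# p(x) = 1 + 2x + 3x^2 evaluated at x = 2 -> 17
assert horner([1.0, 2.0, 3.0], 2.0) == 17.0
```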
Diabetes-related retinal conditions can be detected by examining the posterior of the eye. By contrast, examining the anterior of the eye can reveal conditions affecting the front of the eye, such as changes to the eyelids, cornea, or crystalline lens. In this work, we studied whether external photographs of the front of the eye can reveal insights into both diabetic retinal diseases and blood glucose control. We developed a deep learning system (DLS) using external eye photographs of 145,832 patients with diabetes from 301 diabetic retinopathy (DR) screening sites in one US state, and evaluated the DLS on three validation sets containing images from 198 sites in 18 other US states. In validation set A (n=27,415 patients, all undilated), the DLS detected poor blood glucose control (HbA1c > 9%) with an area under the receiver operating characteristic curve (AUC) of 70.2%; moderate-or-worse DR with an AUC of 75.3%; diabetic macular edema with an AUC of 78.0%; and vision-threatening DR with an AUC of 79.4%. For all 4 prediction tasks, the DLS's AUC was higher (p<0.001) than that obtained using available self-reported baseline characteristics (age, sex, race/ethnicity, years with diabetes). In terms of positive predictive value, the predicted top 5% of patients had a 67% chance of having HbA1c > 9%, and a 20% chance of having vision-threatening DR. The results generalized to dilated pupils (validation set B, 5,058 patients) and to a different screening service (validation set C, 10,402 patients). Our results indicate that external eye photographs contain information useful for healthcare providers managing patients with diabetes, and may help prioritize patients for in-person screening. Further work is needed to validate these findings on different devices and patient populations (those without diabetes) to evaluate its utility for remote diagnosis and management. | electrical engineering and systems science
We discuss a quantum phase transition using an exactly solvable model in the dual gravity setup. By considering the effect of the scalar condensation on the fermion spectrum near the quantum critical point (QCP), we find that there is a topologically protected fermion zero mode associated with the metal-to-insulator transition. We also show that the strange metal phase with T-linear resistivity emerges at high enough temperature as far as the gravity background has a horizon. The phase boundaries are calculated according to the density of states, giving insight into the structure of the phase diagram near the QCP. | high energy physics theory
We study the quantum circuit complexity of cosmological perturbations in different models of the early universe. A natural measure for the complexity of cosmological perturbations is based on the symplectic group, allowing us to identify complexity with geodesics in the hyperbolic plane. We investigate the complexity of both the mode functions and the physical perturbations, arguing that the latter often provides a more insightful description of the physics involved. In all models the total complexity reached is rather large. Inflationary perturbations may be represented by a comparatively simple quantum circuit, while the perturbations during a matter-dominated contracting phase present the most rapid growth in complexity. Ekpyrotic perturbations reside in the middle and are distinguished by the smallest growth of complexity before horizon exit. Our analysis serves to highlight how different cosmological models achieve the same end result for the perturbations via different routes and how all models show a pronounced sensitivity to initial conditions. | high energy physics theory |
In 2010/3, the Large Area Telescope on board Fermi revealed a transient gamma-ray source, positionally coincident with the optical nova in the symbiotic binary, V407Cyg. This event marked the first discovery of gamma-ray emission from a nova. We aimed to obtain resolved radio imaging of the material involved in the nova event; to determine the ejecta geometry and advance velocity directly in the image plane; to constrain the physical conditions of the system. We observed the source with the EVN and the VLBA over 16 epochs, between 20 days and 6 months after the optical discovery. The source is initially very dim but it later shows a substantial increase in brightness and a resolved shell-like structure 40 to 90 days after the optical event. The shell has a projected elliptical shape and is asymmetric in brightness and spectral index, being brighter and characterised by a rising spectrum at the S-E edge. We determine a projected velocity of ~3500 km/s in the initial phase, and ~2100 km/s between day 20 and 91. We also found an emitting feature about 350 mas (940 AU) to the N-W, advancing at a projected velocity of ~700 km/s along the polar axis of the binary. The total flux density in the VLBI images is significantly lower than that previously reported at similar epochs and over much wider angular scales with the VLA. Optical spectra demonstrated that in 2010 we were viewing V407Cyg along the equatorial plane and from behind the Mira. Our radio observations image the bipolar flow of the ejecta perpendicular to the orbital plane, where deceleration is much lower than through the equatorial plane probed by the truncated profile of optical emission lines. The separated polar knot at 350 mas and the bipolar flow strictly resemble the similar arrangement seen in Hen 2-104. The observed ~700 km/s expansion constrains the launch-date of the polar knot around 2004. [Abridged] | astrophysics |
RNO, the Radio Neutrino Observatory, is a mid-scale discovery instrument designed to make the first observation of neutrinos from the cosmos at extreme energies, with sensitivity well beyond current instrument capabilities. This new observatory will be the largest ground-based neutrino telescope to date, enabling the measurement of neutrinos above $10^{16}$ eV, determining the nature of the astrophysical neutrino flux that has been measured by IceCube at higher energies, similarly extending the reach of multi-messenger astrophysics to the highest energies, and enabling investigations of fundamental physics at energies unreachable by particle accelerators on Earth. | astrophysics
We study the orbital angular momentum (OAM) transfer from a weak Laguerre-Gaussian (LG) field to a weak plane-wave in two closed-loop three-level $V$-type atomic systems. In the first scheme, the atomic system has two non-degenerate upper levels which the corresponding transition is excited by a microwave plane-wave. It is analytically shown that the microwave field induces an OAM transfer from an LG field to a generated third field. In the second scheme, we consider a three-level $V$-type atomic system with two near-degenerate excited states and study the effect of the quantum interference due to the spontaneous emission on the OAM transfer. It is found that the spontaneously generated coherence (SGC) induces the OAM transfer from the LG field to the weak planar field, while the OAM transfer does not occur in the absence of the SGC. The suggested models prepare a rather simple method for the OAM transfer which can be used in quantum information processing and data storage. | physics |
The $B \to \kappa \bar \kappa$ decays are investigated for the first time in the perturbative QCD formalism based on the $k_T$ factorization theorem, where the light scalar $\kappa$ is assumed as a two-quark state. Our numerical results and phenomenological analyses on the CP-averaged branching ratios and CP-violating asymmetries show that: (a) the $B_s^0 \to \kappa^+ \kappa^-$ and $B_s^0 \to \kappa^0 \bar \kappa^0$ decays have large decay rates around ${\cal O}(10^{-5})$, which could be examined by the upgraded Large Hadron Collider beauty and/or Belle-II experiments in the near future; (b) a large decay rate about $3 \times 10^{-6}$ appears in the pure annihilation $B_d^0 \to \kappa^+ \kappa^-$ channel, which could provide more evidences to help distinguish different QCD-inspired factorization approaches, even understand the annihilation decay mechanism; (c) the pure penguin modes $B_d^0 \to \kappa^0 \bar \kappa^0$ and $B_s^0 \to \kappa^0 \bar \kappa^0$ would provide a promising ground to search for the possible new physics because of their zero direct and mixing-induced CP violations in the standard model. The examinations with good precision from the future experiments will help to further study the perturbative and/or nonperturbative QCD dynamics involved in these considered decay modes. | high energy physics phenomenology |
The purpose of this paper is to construct confidence intervals for the regression coefficients in the Fine-Gray model for competing risks data with random censoring, where the number of covariates can be larger than the sample size. Despite strong motivation from biomedical applications, a high-dimensional Fine-Gray model has attracted relatively little attention among the methodological or theoretical literature. We fill in this gap by developing confidence intervals based on a one-step bias-correction for a regularized estimation. We develop a theoretical framework for the partial likelihood, which does not have independent and identically distributed entries and therefore presents many technical challenges. We also study the approximation error from the weighting scheme under random censoring for competing risks and establish new concentration results for time-dependent processes. In addition to the theoretical results and algorithms, we present extensive numerical experiments and an application to a study of non-cancer mortality among prostate cancer patients using the linked Medicare-SEER data. | statistics |
Reaction-diffusion waves have long been used to describe the growth and spread of populations undergoing a spatial range expansion. Such waves are generally classed as either pulled, where the dynamics are driven by the very tip of the front and stochastic fluctuations are high, or pushed, where cooperation in growth or dispersal results in a bulk-driven wave in which fluctuations are suppressed. These concepts have been well studied experimentally in populations where the cooperation leads to a density-dependent growth rate. By contrast, relatively little is known about experimental populations that exhibit density-dependent dispersal. Using bacteriophage T7 as a test organism, we present novel experimental measurements that demonstrate that the diffusion of phage T7, in a lawn of host E. coli, is hindered by steric interactions with host bacteria cells. The coupling between host density, phage dispersal and cell lysis caused by viral infection results in an effective density-dependent diffusion coefficient akin to cooperative behavior. Using a system of reaction-diffusion equations, we show that this effect can result in a transition from a pulled to pushed expansion. Moreover, we find that a second, independent density-dependent effect on phage dispersal spontaneously emerges as a result of the viral incubation period, during which phage is trapped inside the host unable to disperse. Additional stochastic agent-based simulations reveal that lysis time dramatically affects the rate of diversity loss in viral expansions. Taken together, our results indicate both that bacteriophage can be used as a controllable laboratory population to investigate the impact of density-dependent dispersal on evolution, and that the genetic diversity and adaptability of expanding viral populations could be much greater than is currently assumed. | physics |
Background -- In phase I clinical trials, historical data may be available through multi-regional programs, reformulation of the same drug, or previous trials for a drug under the same class. Statistical designs that borrow information from historical data can reduce cost, speed up drug development, and maintain safety. Purpose -- Based on a hybrid design that partly uses probability models and partly uses algorithmic rules for decision making, we aim to improve the efficiency of dose-finding trials in the presence of historical data, maintain safety for patients, and achieve a level of simplicity for practical applications. Methods -- We propose the Hi3+3 design, in which the letter "H" represents "historical data". We apply the power prior idea to borrow historical data and define the effective sample size (ESS) of the prior. Dose-finding decision rules follow the idea of the i3+3 design while incorporating the historical data via the power prior and ESS. The proposed Hi3+3 design pretabulates the dosing decisions before the trial starts, a desirable feature for ease of application in practice. Results -- The Hi3+3 design is superior to the i3+3 design due to information borrowing from historical data. It is capable of maintaining a high level of safety for trial patients without sacrificing the ability to identify the correct MTD. Illustrations of this feature are found in the simulation results. Conclusion -- With the demonstrated safety, efficiency, and simplicity, the Hi3+3 design could be a desirable choice for dose-finding trials borrowing historical data. | statistics
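The borrowing mechanism is easy to illustrate in the conjugate Beta-binomial case: discounting a historical likelihood by the power parameter `a0` yields another Beta prior, whose ESS (under the common alpha+beta convention) scales linearly with `a0`. The initial Beta(1, 1) prior and the 30% discount below are assumptions for the demo, not the Hi3+3 defaults.

```python
def power_prior_beta(x0, n0, a0, alpha=1.0, beta=1.0):
    """Power prior for a toxicity probability.

    Historical data: x0 toxicities out of n0 patients, discounted by
    a0 in [0, 1]. Result: Beta(alpha + a0*x0, beta + a0*(n0 - x0)).
    """
    a = alpha + a0 * x0
    b = beta + a0 * (n0 - x0)
    return a, b, a + b  # (shape1, shape2, effective sample size)

# Borrow 30% of a historical cohort with 3/20 toxicities.
print(power_prior_beta(x0=3, n0=20, a0=0.3))  # -> (1.9, 6.1, 8.0)
```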
Although it is widely accepted that the electromagnetic spectrum of pulsar wind nebulae (PWNe), from radio to very-high-energy $\gamma$-rays, originates from leptons, it remains an open question whether protons (or, more generally, ions) exist in the pulsar wind and are further accelerated in the PWN. The broadband spectrum of the prototype PWN Crab, extended recently by the detections of the Tibet AS$\gamma$ and HAWC experiments above 100 TeV, may be helpful in constraining the acceleration efficiency of ions. Here, we model the broadest energy spectrum of the Crab and find that the broadband spectrum can be explained by a one-zone leptonic model in which the electrons/positrons produce the emission from radio to soft $\gamma$-rays via the synchrotron process, and simultaneously generate the GeV-TeV $\gamma$-rays through inverse Compton scattering, including the synchrotron self-Compton process. In the framework of this leptonic model, the fraction of energy converted into energetic protons is constrained to be below $0.5\ (n_{\rm t}/10\ {\rm cm}^{-3})^{-1}$ per cent, where $n_{\rm t}$ is the target gas density in the Crab. However, this fraction can be up to $7\ (n_{\rm t}/10\ {\rm cm}^{-3})^{-1}$ per cent if only the $\gamma$-rays are used. | astrophysics
In this paper, we consider a mobile edge offloading scenario consisting of one mobile device (MD) with multiple independent tasks and various remote edge devices. In order to save energy, the user's device can offload the tasks to available access points for edge computing. Data compression is applied to reduce the offloaded data size prior to wireless transmission so as to minimize the execution latency. We pose the problem of jointly optimizing the task-allocation decision and the data-compression ratio to minimize the total task-execution latency and the MD's energy consumption simultaneously. We show that the design problem is a non-convex optimization problem, but that it can be transformed into a convex one through a semidefinite relaxation (SDR) based approach. Numerical simulations demonstrate that the proposed scheme outperforms the benchmark one. | electrical engineering and systems science
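The lift-and-relax pattern behind SDR can be shown on a generic binary quadratic program (for instance, a $\{-1,+1\}$ offloading decision with a quadratic cost). This is the textbook relaxation, not the paper's specific formulation, and the random cost matrix `C` is a placeholder.

```python
import cvxpy as cp
import numpy as np

# SDR of min x^T C x over x in {-1, +1}^n: lift X = x x^T,
# drop the rank(X) = 1 constraint, keep X PSD with unit diagonal.
rng = np.random.default_rng(0)
n = 6
C = rng.standard_normal((n, n))
C = (C + C.T) / 2  # symmetrize the cost

X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), [cp.diag(X) == 1])
prob.solve()

# Gaussian randomized rounding recovers a feasible binary decision from X.
L = np.linalg.cholesky(X.value + 1e-9 * np.eye(n))
x = np.sign(L @ rng.standard_normal(n))
print("SDR lower bound:", prob.value, " rounded cost:", x @ C @ x)
```

The semidefinite program gives a convex lower bound, and rounding the lifted solution yields a feasible decision whose cost can be compared against that bound.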
A newly introduced stochastic data assimilation method, the Ensemble Kalman Filter Semi-Qualitative (EnKF-SQ), is applied to a realistic coupled ice-ocean model of the Arctic, the TOPAZ4 configuration, in a twin experiment framework. The method is shown to add value to range-limited thin ice thickness measurements, as obtained from passive microwave remote sensing, with respect to more trivial solutions such as neglecting the out-of-range values or assimilating climatology instead. Some known properties inherent to the EnKF-SQ are evaluated: the tendency to draw the solution closer to the thickness threshold, the skewness of the resulting analysis ensemble and the potential appearance of outliers. The experiments show that none of these properties proves deleterious in light of the other sub-optimal characteristics of the sea ice data assimilation system used here (non-linearities, non-Gaussian variables, lack of strong coupling). The EnKF-SQ has a single tuning parameter that is adjusted for best performance of the system at hand. The sensitivity tests reveal that the results do not depend critically on the choice of this tuning parameter. Overall, the EnKF-SQ is a valid approach for assimilating semi-qualitative observations into high-dimensional nonlinear systems. | statistics
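For orientation, the plain stochastic EnKF analysis step that the EnKF-SQ builds on (and modifies for out-of-range observations) is sketched below; the ensemble size, observation operator, and error covariance are arbitrary demo values.

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    Xf: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: linear observation operator; R: observation-error covariance.
    """
    m = Xf.shape[1]
    A = Xf - Xf.mean(axis=1, keepdims=True)
    Pf = A @ A.T / (m - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # One perturbed-observation draw per ensemble member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return Xf + K @ (Y - H @ Xf)

rng = np.random.default_rng(0)
Xf = rng.standard_normal((3, 50))   # 3 state variables, 50 members
H = np.eye(2, 3)                    # observe the first two components
Xa = enkf_analysis(Xf, np.array([0.5, -0.2]), H, 0.1 * np.eye(2), rng)
print(Xa.mean(axis=1))
```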
Upcoming full-sky large-scale structure surveys such as Euclid can probe the primordial Universe. Using the specifications for the Euclid survey, we estimate the constraints on the inflation potential beyond slow-roll. We use mock Euclid and Planck data from fiducial cosmological models using the Wiggly Whipped Inflation (WWI) framework, which generates features in the primordial power spectrum. We include Euclid cosmic shear and galaxy clustering, with two setups (Conservative and Realistic) for the non-linear cut-off. We find that the addition of Euclid data gives an improvement in constraints in the WWI potential, with the Realistic setup providing marginal improvement over the Conservative for most models. This shows that Euclid may allow us to identify oscillations in the primordial spectrum present at intermediate to small scales. | astrophysics |
We study the spectrum of BPS particles on the Coulomb branch of five-dimensional superconformal field theories (5d SCFTs) compactified on a circle. By engineering these theories in M-theory on ${\mathbf X} \times S^1 $, for ${\mathbf X}$ an isolated Calabi-Yau threefold singularity, we naturally identify the BPS category of the 5d theory on a circle with the derived category of coherent sheaves on a resolution of ${\mathbf X}$. It follows that the BPS spectrum can be studied in terms of 5d BPS quivers, which are the fractional-brane quivers for the singularity ${\mathbf X}$. 5d BPS quivers generalize the well-studied 4d BPS quivers for 4d $\mathcal{N}{=}2$ gauge theories that can be obtained from ${\mathbf X}$ in so-called geometric engineering limits. We study the interplay between 4d and 5d BPS quivers in detail. We particularly focus on examples when ${\mathbf X}$ is a toric singularity, in which case the 5d BPS quiver is given in terms of a brane tiling. For instance, the well-studied $Y^{p,q}$ brane tiling gives a 5d BPS quiver for the $SU(p)_q$ 5d gauge theory. We present a conjecture about the structure of the BPS spectra of a wide class of models, which we test in the simple case of the 5d $SU(2)_0$ theory (more precisely, the $E_1$ SCFT). We also argue that 5d UV dualities can be realized in terms of mutation sequences on the BPS quivers, which are in turn interpreted as autoequivalences of the BPS category. | high energy physics theory |
Generating graph structures is a challenging problem due to the diverse representations and complex dependencies among nodes. In this paper, we introduce Graph Variational Recurrent Neural Network (GraphVRNN), a probabilistic autoregressive model for graph generation. Through modeling the latent variables of graph data, GraphVRNN can capture the joint distributions of graph structures and the underlying node attributes. We conduct experiments on the proposed GraphVRNN in both graph structure learning and attribute generation tasks. The evaluation results show that the variational component allows our network to model complicated distributions, as well as generate plausible structures and node attributes. | computer science |