text: string (lengths 11 to 9.77k); label: string (lengths 2 to 104)
Existing deep learning based visual servoing approaches regress the relative camera pose between a pair of images. Therefore, they require a huge amount of training data and sometimes fine-tuning for adaptation to a novel scene. Furthermore, current approaches do not consider the underlying geometry of the scene and rely on direct estimation of the camera pose. Thus, inaccuracies in the predicted camera pose, especially for distant goals, lead to a degradation in servoing performance. In this paper, we propose a two-fold solution: (i) we consider optical flow as our visual features, which are predicted using a deep neural network; (ii) these flow features are then systematically integrated with depth estimates provided by another neural network using the interaction matrix. We further present an extensive benchmark in a photo-realistic 3D simulation across diverse scenes to study the convergence and generalisation of visual servoing approaches. We show convergence for displacements of over 3m and 40 degrees while maintaining precise positioning of under 2cm and 1 degree on our challenging benchmark, whereas existing approaches are unable to converge for the majority of scenarios beyond 1.5m and 20 degrees. Furthermore, we also evaluate our approach in a real scenario on an aerial robot. Our approach generalizes to novel scenarios, producing precise and robust servoing performance for 6 degrees of freedom positioning tasks with even large camera transformations without any retraining or fine-tuning.
computer science
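The combination of flow features, depth estimates and the interaction matrix described in the abstract above follows the standard image-based visual servoing control law. Below is a minimal numpy sketch of that law for point features; the function names, gain and toy numbers are illustrative assumptions, not the authors' implementation, which uses network-predicted flow and depth.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix for a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def servo_velocity(points, flow_to_goal, depths, gain=0.5):
    """Map the flow-based feature error to a 6-DoF camera velocity: v = -gain * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(points, depths)])
    e = np.asarray(flow_to_goal, dtype=float).reshape(-1)  # predicted flow plays the role of the feature error
    return -gain * np.linalg.pinv(L) @ e

# toy example: two tracked points, predicted flow towards the goal view, estimated depths
pts = [(0.10, -0.05), (-0.20, 0.15)]
flow = [0.02, -0.01, -0.03, 0.02]
depths = [1.5, 2.0]
print(servo_velocity(pts, flow, depths))  # 6-vector: (vx, vy, vz, wx, wy, wz)
```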
Yukawa production of a light scalar can be explored at a linear collider. A light (pseudo)scalar can exist in extended Higgs models; an interesting example is the light pseudoscalar in the Type-X two Higgs doublet model. The model can explain the anomalous magnetic moment of the muon at large $\tan \beta$. We show that the available parameter space in this model can be examined by the Yukawa process at 5$\sigma$ at the ILC.
high energy physics phenomenology
We construct closed immersions from initial degenerations of $\operatorname*{Gr}_{0}(d,n)$---the open cell in the Grassmannian $\operatorname*{Gr}(d,n)$ given by the nonvanishing of all Pl\"ucker coordinates---to limits of thin Schubert cells associated to diagrams induced by the face poset of the corresponding tropical linear space. These are isomorphisms when $(d,n)$ equals $(2,n)$, $(3,6)$ and $(3,7)$. As an application we prove $\operatorname*{Gr}_0(3,7)$ is sch\"on, and the Chow quotient of $\operatorname*{Gr}(3,7)$ by the maximal torus in $ \operatorname*{PGL}(7)$ is the log canonical compactification of the moduli space of 7 points in $\mathbb{P}^2$ in linear general position, making progress on a conjecture of Hacking, Keel, and Tevelev.
mathematics
We consider the Skyrme model modified by the addition of mass terms which explicitly break chiral symmetry and pick out a specific point on the model's target space as the unique true vacuum. However, they also allow the possibility of false vacua, local minima of the potential energy. These false vacuum configurations admit metastable skyrmions, which we call false skyrmions. False skyrmions can decay due to quantum tunnelling, consequently causing the decay of the false vacuum. We compute the rate of decay of the false vacuum due to the existence of false skyrmions.
high energy physics theory
Femtosecond laser excitation of FeRh/Pt bilayers launches an ultrafast pulse of electric photocurrent in the Pt-layer and thus results in emission of electromagnetic radiation in the THz spectral range. Analysis of the THz emission as a function of polarization of the femtosecond laser pulse, external magnetic field, sample temperature and sample orientation shows that photocurrent can emerge due to vertical spin pumping and photo-induced inverse spin-orbit torque at the FeRh/Pt interface. The vertical spin pumping from FeRh to Pt does not depend on the polarization of light and originates from ultrafast laser-induced demagnetization of the ferromagnetic phase of FeRh. The photo-induced inverse spin-orbit torque at the FeRh/Pt interface can be described in terms of a helicity-dependent effect of circularly polarized light on the magnetization of the ferromagnetic FeRh and subsequent generation of a photocurrent.
condensed matter
This paper proposes an explicit way to optimize the super-resolution network for generating visually pleasing images. Previous approaches use several loss functions that are hard to interpret and have only an implicit relationship to the perceptual score. We show how to exploit a machine learning based model that is directly trained to provide the perceptual score of generated images. We believe such models can be used to optimize the super-resolution network in a way that is easier to interpret. We further analyze the characteristics of the existing losses and of our proposed explicit perceptual loss for better interpretation. The experimental results show that the explicit approach achieves a higher perceptual score than the other approaches. Finally, we demonstrate the relation between the explicit perceptual loss and visually pleasing images using subjective evaluation.
electrical engineering and systems science
A time series is a sequence of observations taken sequentially in time. The autoregressive integrated moving average is the class of model most commonly used for time series data. However, this class of model has two critical limitations: it fits well only Gaussian data with a linear correlation structure. Here, I present a new model, named the generalized autoregressive neural network (GARNN). The GARNN is an extension of the generalized linear model in which the marginal mean depends on the lagged values via the inclusion of a neural network in the link function. A practical application of the model is shown using the well-known poliomyelitis case-count series, originally analyzed by Zeger and Qaqish (1988).
statistics
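A compact way to write the GARNN idea from the abstract above, for the Poisson case relevant to the polio counts (a schematic form assumed from the description, not the paper's exact specification):

```latex
\[
Y_t \mid \mathcal{F}_{t-1} \sim \mathrm{Poisson}(\mu_t), \qquad
\log \mu_t = f_{\theta}\!\left(y_{t-1}, \dots, y_{t-p}\right),
\]
```

where the feed-forward network $f_{\theta}$ replaces the linear predictor of the ordinary generalized linear model inside the (log) link.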
In this paper, we demonstrate how Hyperledger Fabric, one of the most popular permissioned blockchains, can benefit from network-attached acceleration. The scalability and peak performance of Fabric are primarily limited by the bottlenecks present in its block validation/commit phase. We propose Blockchain Machine, a hardware accelerator coupled with a hardware-friendly communication protocol, to act as the validator peer. It can be adapted to applications and their smart contracts, and is targeted for a server with a network-attached FPGA acceleration card. The Blockchain Machine retrieves blocks and their transactions in hardware directly from the network interface, and they are then validated through a configurable and efficient block-level and transaction-level pipeline. The validation results are then transferred to the host CPU, where non-bottleneck operations are executed. From our implementation integrated with Fabric v1.4 LTS, we observed up to 17x speedup in block validation when compared to the software-only validator peer, with commit throughput of up to 95,600 tps (~4.5x improvement over the best reported in the literature).
computer science
The aim of this work is to give an introduction to the theoretical background and computational complexity of Markov chain Monte Carlo methods. Many of the mathematical results related to convergence are not found in standard statistical references, and the computational complexity of most MCMC methods is still an open question. In this work, we provide a general overview, references, and discussion of these theoretical subjects.
statistics
We study the resurgent trans-series for the free energy of the two-dimensional O(4) sigma model in a magnetic field. Exploiting integrability, we obtain very high-order perturbative data, from which we can explore non-perturbative sectors. We are able to determine exactly the leading real-valued exponentially small terms, which we check against the direct numerical solution of the exact integral equation, and find complete agreement.
high energy physics theory
A time-reversed dynamics unwinds information scrambling, which is induced during the time-forward evolution with a complex Hamiltonian. We show that if the scrambled information is, in addition, partially damaged by a local measurement, then such damage can still be treated by application of the time-reversed protocol. This information recovery is described by the long-time saturation value of a certain out-of-time-ordered correlator of local variables. We also propose a simple test that distinguishes between quantum and reversible classical chaotic information scrambling.
quantum physics
We study the problem of controlling oscillations in closed loop by combining positive and negative feedback in a mixed configuration. We develop a complete design procedure to set the relative strength of the two feedback loops to achieve steady oscillations. The proposed design takes advantage of dominance theory and adopts classical harmonic balance and fast/slow analysis to regulate the frequency of oscillations. The design is illustrated on a simple two-mass system, a setting that reveals the potential of the approach for locomotion, mimicking approaches based on central pattern generators.
electrical engineering and systems science
The aim is to describe new geometric approaches to define the statistics of spatio-temporal and polarimetric measurements of the states of an electromagnetic wave, using the works of Maurice Fr{\'e}chet, Jean-Louis Koszul and Jean-Marie Souriau, with in particular the notion of 'average' state of this digital measurement as a Fr{\'e}chet barycentre in a metric space and a model derived from statistical mechanics to define and calculate a maximum density of entropy (extension of the notion of Gaussian) to describe the fluctuations of the electromagnetic wave. The article will illustrate these new tools with examples of radar application for Doppler, spatio-temporal and polarimetric measurement of the electromagnetic wave by introducing a distance on the covariance matrices of the electromagnetic digital signal, based on Fisher's metric from Information Geometry.
electrical engineering and systems science
Following the direction of 1712.09990 and 1712.09994, this article continues to excavate more interesting aspects of the 4-particle amplituhedron for a better understanding of the 4-particle integrand of planar N=4 SYM to all loop orders, from the perspective of positive geometry. At 3-loop order, we introduce a much more refined dissection of the amplituhedron to understand its essential structure and maximally simplify its direct calculation, by fully utilizing its symmetry as well as the efficient Mondrian way for reorganizing all contributing pieces. Although significantly improved, this approach immediately encounters its technical bottleneck at 4-loop. Still, we manage to alleviate this difficulty by imitating the traditional (generalized) unitarity cuts, which is to use the so-called positive cuts. Given a basis of dual conformally invariant (DCI) loop integrals, we can figure out the coefficient of each DCI topology using its dlog form via positivity conditions. Explicit examples include all 2+5 non-rung-rule topologies at 4- and 5-loop respectively. These results remarkably agree with previous knowledge, which confirms the validity of amplituhedron up to 5-loop and develops a new approach of determining the coefficient of each distinct DCI loop integral.
high energy physics theory
We have developed a method that maps large astronomical images onto a two-dimensional map and clusters them. A combination of various state-of-the-art machine learning (ML) algorithms is used to develop a fully unsupervised image quality assessment and clustering system. Our pipeline consists of a data pre-processing step where individual image objects are identified in a large astronomical image and converted to smaller pixel images. This data is then fed to a deep convolutional autoencoder jointly trained with a self-organizing map (SOM). This part can be used as a recommendation system. The resulting output is eventually mapped onto a two-dimensional grid using a second, deep, SOM. We use data taken from ground-based telescopes and, as a case study, compare the system's ability and performance with the results obtained by supervised methods presented by Teimoorinia et al. (2020). The availability of target labels in this data allowed a comprehensive performance comparison between our unsupervised and supervised methods. In addition to image-quality assessments performed in this project, our method can have various other applications. For example, it can help experts label images in a considerably shorter time with minimum human intervention. It can also be used as a content-based recommendation system capable of filtering images based on the desired content.
astrophysics
Despite several different measures of efficiency that are applicable to the photosynthetic systems, a precise degree of efficiency of these systems is not completely determined. Introducing an efficient model for the dynamics of light-harvesting complexes in biological environments is a major purpose in investigating such systems. Here, we investigate the effect of macroscopic quantum behavior of a system of two pigments on the transport phenomena in this system model which interacts with an oscillating environment. We use the second-order perturbation theory to calculate the time-dependent population of excitonic states of a two-dimensional Hamiltonian using a non-master equation approach. Our results demonstrate that the quantum efficiency is robust with respect to the macroscopicity parameter h solely, but the ratio of macroscopicity over the pigment-pigment interaction energy can be considered as a parameter that may control the energy transfer efficiency at a given time. So, the dynamical behavior and the quantum efficiency of the supposed photosynthetic system may be influenced by a change in the macroscopic behavior of the system.
physics
Software bugs are prevalent in modern software systems and notoriously hard to debug manually. Therefore, a large body of research effort has been dedicated to automated software debugging, including both automated fault localization and program repair. However, the existing fault localization techniques are usually ineffective on real-world software systems, while even the most advanced program repair techniques can only fix a small ratio of real-world bugs. Although fault localization and program repair are inherently connected, we observe that in the literature their only connection is that program repair techniques usually use off-the-shelf fault localization techniques (e.g., Ochiai) to determine the potential candidate statements/elements for patching. In this work, we explore their connection in the other direction, i.e., can program repair in turn help with fault localization? In this way, we not only open a new dimension for more powerful fault localization, but also extend the application scope of program repair to all possible bugs (not only the bugs that can be directly automatically fixed). We have designed ProFL, a simplistic approach using patch-execution results (from program repair) as the feedback information for fault localization. The experimental results on the widely used Defects4J benchmark show that the basic ProFL can already localize 161 of the 395 studied bugs within Top-1, while state-of-the-art spectrum- and mutation-based fault localization techniques at most localize 117 within Top-1. We also demonstrate ProFL's effectiveness under different settings. Lastly, we show that ProFL can further boost state-of-the-art fault localization via both unsupervised and supervised learning.
computer science
Population size estimation requires access to unit-level data in order to correctly apply capture-recapture methods. Unfortunately, for reasons of confidentiality access to such data may be limited. To overcome this issue we apply and extend the hierarchical Poisson-Gamma model proposed by Zhang (2008), which initially was used to estimate the number of irregular foreigners in Norway. The model is an alternative to the current capture-recapture approach as it does not require linking multiple sources and is solely based on aggregated administrative data that include (1) the number of apprehended irregular foreigners, (2) the number of foreigners who faced criminal charges and (3) the number of foreigners registered in the central population register. The model explicitly assumes a relationship between the unauthorized and registered population, which is motivated by the interconnection between these two groups. This makes the estimation conditionally dependent on the size of the regular population, provides interpretation by analogy with the registered population and makes the estimated parameter more stable over time. In this paper, we modify the original idea to allow for covariates and flexible count distributions in order to estimate the number of irregular foreigners in Poland in 2019. We also propose a parametric bootstrap for estimating standard errors of estimates. Based on the extended model we conclude that, as of 31.03.2019 and 30.09.2019, around 15,000 and 20,000 foreigners, respectively, were residing in Poland without valid permits. This means that those apprehended by the Polish Border Guard account for around 15-20% of the total.
statistics
In this work, we study $\Lambda_{b}\to\Lambda_{c}$ and $\Sigma_{b}\to\Sigma_{c}$ weak decays in the light-front quark model. As is well known, the key point for such calculations is properly evaluating the hadronic transition matrix elements, which are dominated by non-perturbative QCD effects. In our calculation, we employ the light-front quark model and, rather than the traditional diquark picture, we treat the two spectator light quarks as individual ones. Namely, during the transition they retain their color indices, momenta and spin polarizations unchanged. Definitely, the subsystem composed of the two light quarks is still a color-anti-triplet and possesses a definite spin, but we do not a priori assume the two light quarks to be in a bound diquark system. Our purpose is to probe the diquark picture: by comparing the results with the available data, we test the validity and applicability of the diquark structure, which turns a three-body problem into a two-body one and so greatly simplifies the calculation. It is indicated that the two approaches (the diquark and a subsystem within which the two light quarks are free) lead to similar numerical results, even though the model parameters in the two schemes might deviate slightly. Thus, the diquark approach seems sufficiently reasonable.
high energy physics phenomenology
Outflows and feedback are key ingredients of galaxy evolution. Evidence for an outflow arising from the Galactic center (GC) has recently been discovered at different wavelengths. We show that the X-ray, radio, and infrared emissions are deeply interconnected, affecting one another and forming coherent features on scales of hundreds of parsecs, therefore indicating a common physical link associated with the GC outflow. We debate the location of the northern chimney and suggest that it might be located on the front side of the GC because of a significant tilt of the chimneys toward us. We report the presence of strong shocks at the interface between the chimneys and the interstellar medium, which are traced by radio and warm dust emission. We observe entrained molecular gas outflowing within the chimneys, revealing the multiphase nature of the outflow. In particular, the molecular outflow produces a long, strong, and structured shock along the northwestern wall of the chimney. Because of the different dynamical times of the various components of the outflow, the chimneys appear to be shaped by directed large-scale winds launched at different epochs. The data support the idea that the chimneys are embedded in an (often dominant) vertical magnetic field, which likely diverges with increasing latitude. We observe that the thermal pressure associated with the hot plasma appears to be smaller than the ram pressure of the molecular outflow and the magnetic pressure. This leaves open the possibility that either the main driver of the outflow is more powerful than the observed hot plasma, or the chimneys represent a "relic" of past and more powerful activity. These multiwavelength observations corroborate the idea that the chimneys represent the channel connecting the quasi-continuous, but intermittent, activity at the GC with the base of the Fermi bubbles.
astrophysics
Here we report a record thermoelectric power factor of up to 160 $\mu$W m$^{-1}$ K$^{-2}$ for the conjugated polymer poly(3-hexylthiophene) (P3HT). This result is achieved through the combination of high-temperature rubbing of thin films together with the use of a large molybdenum dithiolene p-dopant with a high electron affinity. Comparison of the UV-vis-NIR spectra of the chemically doped samples to electrochemically oxidized material reveals an oxidation level of 10%, i.e. one polaron for every 10 repeat units. The high power factor arises due to an increase in the charge-carrier mobility and hence electrical conductivity along the rubbing direction. We conclude that P3HT, with its facile synthesis and outstanding processability, should not be ruled out as a potential thermoelectric material.
physics
We measure the Sun's velocity with respect to the Galactic halo using Gaia Early Data Release 3 (EDR3) observations of stellar streams. Our method relies on the fact that, in low-mass streams, the proper motion of stars should be directed along the stream structure in a non-rotating rest frame of the Galaxy, but the observed deviation arises due to the Sun's own reflex motion. This principle allows us to implement a simple geometrical procedure, which we use to analyse 17 streams over a $\sim 3-30$ kpc range. Our constraint on the Sun's motion is independent of any Galactic potential model, and it is also uncorrelated with the Sun's galactocentric distance. We infer the Sun's velocity as $V_{R,\odot}=8.88^{+1.20}_{-1.22}\,\rm{kms^{-1}}$ (radially towards the Galactic centre), $V_{\phi,\odot}=241.91^{+1.61}_{-1.73}\,\rm{kms^{-1}}$ (in the direction of Galactic rotation) and $V_{z,\odot}=3.08^{+1.06}_{-1.10}\,\rm{kms^{-1}}$ (vertically upwards), in global agreement with past measurements through other techniques; although we do note a small but significant difference in the $V_{z,\odot}$ component. Some of these parameters show significant correlation and we provide our MCMC output so it can be used by the reader as an input to future works. The comparison between our Sun's velocity inference and previous results, using other reference frames, indicates that the inner Galaxy is not moving with respect to the inertial frame defined by the halo streams.
astrophysics
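The geometric principle used in the stream analysis above can be summarized schematically (in our own notation, not the paper's likelihood): in a non-rotating Galactocentric frame the proper motion of a low-mass stream is directed along the stream track, so any observed component perpendicular to the track is attributed to the solar reflex motion,

```latex
\[
\boldsymbol{\mu}_{\mathrm{obs}} = \boldsymbol{\mu}_{\mathrm{true}} - \frac{\mathbf{V}_{\odot,\perp}}{4.74\, d},
\qquad \boldsymbol{\mu}_{\mathrm{true}} \parallel \hat{\mathbf{t}}_{\mathrm{stream}},
\]
```

with $d$ the heliocentric distance in kpc, proper motions in mas yr$^{-1}$, and $\mathbf{V}_{\odot,\perp}$ the projection of the Sun's velocity onto the plane of the sky; the misalignment of $\boldsymbol{\mu}_{\mathrm{obs}}$ with the local stream direction $\hat{\mathbf{t}}_{\mathrm{stream}}$ then constrains $\mathbf{V}_{\odot}$.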
This paper presents a kinematic definition of a serialized Stewart platform designed for autonomous in-space assembly, called an Assembler. The Assembler architecture exhibits problems inherent to the inverse kinematics of over-actuated mixed kinematic systems. This paper also presents a methodology for optimizing poses. In order to accomplish this with the Assembler system, an algorithm for finding a feasible solution to its inverse kinematics was developed, with a wrapper for a nonlinear optimization algorithm designed to minimize the magnitude of the forces incurred by each actuator. A simulated version of an Assembler was placed into a number of representative poses, and the positions were optimized. The results of these optimizations are discussed in terms of actuator forces, reachability of the platform, and applicability to high-payload structure assembly capabilities.
computer science
A Puiseux monoid is a submonoid of $(\mathbb{Q},+)$ consisting of nonnegative rational numbers. Although the operation of addition is continuous with respect to the standard topology, the set of irreducibles of a Puiseux monoid is, in general, difficult to describe. In this paper, we use topological density to understand how much a Puiseux monoid, as well as its set of irreducibles, spread through $\mathbb{R}_{\ge 0}$. First, we separate Puiseux monoids according to their density in $\mathbb{R}_{\ge 0}$, and we characterize monoids in each of these classes in terms of generating sets and sets of irreducibles. Then we study the density of the difference group, the root closure, and the conductor semigroup of a Puiseux monoid. Finally, we prove that every Puiseux monoid generated by a strictly increasing sequence of rationals is nowhere dense in $\mathbb{R}_{\ge 0}$ and has empty conductor.
mathematics
Ground-based whole sky cameras are extensively used for localized monitoring of clouds nowadays. They capture hemispherical images of the sky at regular intervals using a fisheye lens. In this paper, we propose a framework for estimating solar irradiance from pictures taken by those imagers. Unlike pyranometers, such sky images contain information about cloud coverage and can be used to derive cloud movement. An accurate estimation of solar irradiance using solely those images is thus a first step towards short-term forecasting of solar energy generation based on cloud movement. We derive and validate our model using pyranometers co-located with our whole sky imagers. We achieve a better performance in estimating solar irradiance and in particular its short-term variations as compared to other related methods using ground-based observations.
electrical engineering and systems science
Constructing confidence intervals for the coefficients of high-dimensional sparse linear models remains a challenge, mainly because of the complicated limiting distributions of the widely used estimators, such as the lasso. Several methods have been developed for constructing such intervals. Bootstrap lasso+ols is notable for its technical simplicity, good interpretability, and performance that is comparable with that of other more complicated methods. However, bootstrap lasso+ols depends on the beta-min assumption, a theoretic criterion that is often violated in practice. Thus, we introduce a new method, called bootstrap lasso+partial ridge, to relax this assumption. Lasso+partial ridge is a two-stage estimator. First, the lasso is used to select features. Then, the partial ridge is used to refit the coefficients. Simulation results show that bootstrap lasso+partial ridge outperforms bootstrap lasso+ols when there exist small, but nonzero coefficients, a common situation that violates the beta-min assumption. For such coefficients, the confidence intervals constructed using bootstrap lasso+partial ridge have, on average, $50\%$ larger coverage probabilities than those of bootstrap lasso+ols. Bootstrap lasso+partial ridge also has, on average, $35\%$ shorter confidence interval lengths than those of the de-sparsified lasso methods, regardless of whether the linear models are misspecified. Additionally, we provide theoretical guarantees for bootstrap lasso+partial ridge under appropriate conditions, and implement it in the R package "HDCI."
statistics
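A stripped-down sketch of the two-stage procedure described above, not the HDCI package implementation; the paired bootstrap, the cross-validated lasso and the fixed ridge penalty are simplifying assumptions made here for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def partial_ridge(X, y, selected, lam=1.0):
    """Refit all coefficients, penalizing only the features NOT selected by the lasso:
    beta = argmin ||y - X beta||^2 + lam * sum_{j not selected} beta_j^2."""
    n, p = X.shape
    D = np.ones(p)
    D[selected] = 0.0
    return np.linalg.solve(X.T @ X + lam * np.diag(D), X.T @ y)

def bootstrap_lasso_partial_ridge(X, y, B=200, lam=1.0, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    betas = np.empty((B, p))
    for b in range(B):
        idx = rng.integers(0, n, size=n)                                # paired bootstrap resample
        Xb, yb = X[idx], y[idx]
        sel = np.flatnonzero(LassoCV(cv=5).fit(Xb, yb).coef_ != 0)      # stage 1: feature selection
        betas[b] = partial_ridge(Xb, yb, sel, lam)                      # stage 2: partial-ridge refit
    lo, hi = np.percentile(betas, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return lo, hi                                                       # percentile CIs per coefficient

# toy usage
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
y = 2.0 * X[:, 0] + 0.2 * X[:, 1] + rng.standard_normal(100)            # one small, nonzero coefficient
lo, hi = bootstrap_lasso_partial_ridge(X, y, B=50)
print(np.round(lo[:3], 2), np.round(hi[:3], 2))
```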
This is a review of exceptional field theory: a generalisation of Kaluza-Klein theory that unifies the metric and $p$-form gauge field degrees of freedom of supergravity into a generalised or extended geometry, whose additional coordinates may be viewed as conjugate to brane winding modes. This unifies the maximal supergravities, treating their previously-hidden exceptional Lie symmetries as a fundamental geometric symmetry. Duality orbits of solutions simplify into single objects, that in many cases have simple geometric interpretations, for instance as wave or monopole-type solutions. It also provides a route to explore exotic or non-geometric aspects of M-theory, such as exotic branes, U-folds, and more novel sorts of non-Riemannian spaces.
high energy physics theory
We investigate the possibilities for a deterministic conversion between two important types of maximally entangled multiqubit states, namely, $W$ and Greenberger-Horne-Zeilinger (GHZ) states, in the Rydberg-blockade regime of a neutral-atom system where each atom is subject to four external laser pulses. Such interconversions between $W$ states and their GHZ counterparts have quite recently been addressed using the method of shortcuts to adiabaticity, more precisely techniques based on Lewis-Riesenfeld invariants [R.-H. Zheng {\em et al.}, Phys. Rev. A {\bf 101}, 012345 (2020)]. Motivated in part by this recent work, we revisit the $W$ to GHZ state-conversion problem using a fundamentally different approach, which is based on the dynamical symmetries of the system and a Lie-algebraic parametrization of its permissible evolutions. In contrast to the previously used invariant-based approach, which leads to a state-conversion protocol characterized by strongly time-dependent Rabi frequencies of external lasers, ours can also yield one with time-independent Rabi frequencies. This feature makes our protocol more easily applicable experimentally, with the added advantage that it allows the desired state conversion to be carried out in a significantly shorter time with the same total laser pulse energy used.
quantum physics
The Einstein-de Haas (EdH) effect, where the spin angular momentum of electrons is transferred to the mechanical angular momentum of atoms, was established experimentally in 1915. While a semi-classical explanation of the effect exists, modern electronic structure methods have not yet been applied to modelling the phenomenon. In this paper we investigate its microscopic origins by means of a non-collinear tight-binding model of an $\textrm{O}_2$ dimer, which includes the effects of spin-orbit coupling, coupling to an external magnetic field, and vector Stoner exchange. By varying an external magnetic field in the presence of spin-orbit coupling, a torque can be generated on the dimer, validating the presence of the EdH effect. Avoided energy level crossings and the rate of change of magnetic field determine the evolution of the spin. We find also that the torque exerted on the nuclei by the electrons in a time-varying $B$ field is not only due to the EdH effect. Other contributions arise from field-induced changes in the electronic orbital angular momentum and from the direct action of the Faraday electric field associated with the time-varying magnetic field.
condensed matter
It is shown that it is feasible to use an ultrashort time delay between two XUV femtosecond pulses in order to control two-photon resonant ionization. The proposal is demonstrated on the spectrum of helium, in terms of nonperturbative solutions of the time-dependent Schr\"odinger equation. Comparison with results from second-order time-dependent perturbation theory provides additional insight.
physics
We investigate the spectral shift in collective forward scattering for a cold dense atomic cloud. The shift, sometimes called the collective Lamb shift, results from resonant dipole-dipole interaction mediated by real and virtual photon exchange, forming many-body states displaying various super- and subradiant spectral behavior. The scattering spectrum reflects the overall radiative behavior of these states. However, it also averages out the radiative details associated with a single collective state, causing ambiguity in explaining the origin of the spectral shift and raising controversy over its scaling property. We employ a Monte-Carlo simulation to study how the collective states are occupied and contribute to emission. We thus distinguish two kinds of collective shift that follow different scaling laws. One results from dominant occupation of the near-resonant collective states. This shift is usually small and insensitive to the density or the number of participating atoms. The other comes from large spatial correlation of dipoles, associated with the states of higher degree of emission. This corresponds to a larger collective shift that is approximately linearly dependent on the optical depth. Our analysis provides not only a novel perspective on the spectral features in collective scattering, but also a possible resolution to the controversy on the scaling property that has been reported elsewhere because of different origins.
quantum physics
Augmentation of disease diagnosis and decision-making in healthcare with machine learning algorithms is gaining much impetus in recent years. In particular, in the current epidemiological situation caused by the COVID-19 pandemic, swift and accurate prediction of disease diagnosis with machine learning algorithms could facilitate identification and care of vulnerable clusters of population, such as those having multi-morbidity conditions. In order to build a useful disease diagnosis prediction system, advancement in both data representation and development of machine learning architectures is imperative. First, with respect to data collection and representation, we face severe problems due to the multitude of formats and the lack of coherency prevalent in Electronic Health Records (EHRs). This hinders the extraction of valuable information contained in EHRs. Currently, no universal global data standard has been established. As a useful solution, we develop and publish a Python package to transform public health datasets into an easy-to-access universal format. This data transformation to an international health data format facilitates researchers in easily combining EHR datasets with clinical datasets of diverse formats. Second, machine learning algorithms that predict multiple disease diagnosis categories simultaneously remain underdeveloped. We propose two novel model architectures in this regard: first, DeepObserver, which uses structured numerical data to predict the diagnosis categories, and second, ClinicalBERT_Multi, which incorporates the rich information available in clinical notes via natural language processing methods and also provides interpretable visualizations to medical practitioners. We show that both models can predict multiple diagnoses simultaneously with high accuracy.
computer science
Talk presented at the conference on representation theory and harmonic analysis at Saclay; the talk reviewed developments in conformal field theory since 1968.
high energy physics theory
In this manuscript, we derive symmetry indicator formulas for the filling anomaly on 2D square lattices with and without time reversal, inversion symmetry, or their product, in the presence of spin-orbit coupling. We go beyond previous work by considering lattices with atoms occupying multiple Wyckoff positions. We also provide an algorithm using the Smith normal form that systematizes the derivation. The formulas determine the corner charge in 2D atomic or fragile topological insulators, as well as in 3D insulators and semimetals by studying their 2D slices. We apply our results to a 3D tight-binding model on a body-centered tetragonal lattice, whose projection into the 2D plane has two atoms in the unit cell. Our symmetry indicators correctly describe the higher-order hinge states and Fermi arcs in cases where the existing indicators do not apply.
condensed matter
This paper describes SChME (Semantic Change Detection with Model Ensemble), a method used in SemEval-2020 Task 1 on unsupervised detection of lexical semantic change. SChME uses a model ensemble combining signals of distributional models (word embeddings) and word frequency models, where each model casts a vote indicating the probability that a word suffered semantic change according to that feature. More specifically, we combine cosine distance of word vectors with a neighborhood-based metric we named Mapped Neighborhood Distance (MAP), and a word frequency differential metric, as input signals to our model. Additionally, we explore alignment-based methods to investigate the importance of the landmarks used in this process. Our results show evidence that the number of landmarks used for alignment has a direct impact on the predictive performance of the model. Moreover, we show that languages that suffer less semantic change tend to benefit from using a large number of landmarks, whereas languages with more semantic change benefit from a more careful choice of landmark number for alignment.
computer science
In bi-static Ambient Backscatter Communications (AmBC) systems, the direct path from the ambient source to the receiver can be several orders of magnitude stronger than the scattered path modulated by the AmBC device. Because of the large power difference between these two signals, the receiver needs to operate over a large dynamic range. In this paper, we propose a novel analog-digital hybrid null-steering beamformer which allows the backscatter receiver to detect and decode the weak AmBC-modulated signal buried in the strong direct-path signal and the noise without requiring instantaneous channel state information. The analog cancellation of the strong signal components allows the receiver automatic gain control to adjust to the level of the weak AmBC signals. This in turn allows common analog-to-digital converters to be used for sampling the signal. After cancelling the strong components, the ambient source signal appears as zero-mean fast fading from the AmBC system point of view. We use the direct-path signal component to track the phase of the unknown ambient signal. In order to avoid channel estimation, we propose that AmBC use orthogonal channelization codes.
computer science
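The receiver described above is an analog-digital hybrid design; as a purely digital caricature of the null-steering idea (array size, angles and signal levels below are invented for illustration), one can estimate the dominant direct-path subspace from the sample covariance and project it out, which requires no instantaneous channel state information:

```python
import numpy as np

def null_steering_weights(snapshots, n_strong=1):
    """Estimate the dominant (direct-path) subspace from received snapshots and
    return a projector onto its orthogonal complement, nulling the strong component."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance (M x M)
    eigval, eigvec = np.linalg.eigh(R)                        # eigenvalues in ascending order
    U_strong = eigvec[:, -n_strong:]                          # dominant subspace
    M = R.shape[0]
    return np.eye(M) - U_strong @ U_strong.conj().T           # orthogonal projector

# toy example: 4-element array, strong direct path plus weak backscattered path
rng = np.random.default_rng(0)
M, N = 4, 2000
a_direct = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3))   # direct-path steering vector
a_tag    = np.exp(1j * np.pi * np.arange(M) * np.sin(-0.7))  # backscatter steering vector
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)     # unknown ambient signal
b = rng.integers(0, 2, N) * 2 - 1                             # tag's binary modulation
x = 10.0 * np.outer(a_direct, s) + 0.1 * np.outer(a_tag, b * s)
x += 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

P = null_steering_weights(x, n_strong=1)
y = P @ x                                                     # direct path suppressed
print(np.linalg.norm(a_direct.conj() @ y), np.linalg.norm(a_tag.conj() @ y))
```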
The decoherence interpretation of quantum measurements is applied to Wigner's friend experiments. A framework in which all the experimental outcomes arise from unitary evolutions is proposed. Within it, a measurement is not completed until an uncontrolled environment monitors the state composed of the system, the apparatus and the observer. The (apparent) wave-function collapse and the corresponding randomness result from tracing out this environment; it is thus ultimately responsible for the emergence of definite outcomes. Two main effects arise from this fact. First, external interference measurements, the trademark of Wigner's friend experiments, modify the memory records of the internal observers; this framework provides a univocal protocol to calculate all these changes. Second, it can be used to build a consistent scenario for the recently proposed extended versions of the Wigner's friend experiment. Regarding [D. Frauchiger and R. Renner, {\em Quantum theory cannot consistently describe the use of itself}, Nat. Comm. {\bf 9}, 3711 (2018)], this framework shows that the agents' claims become consistent if the changes in their memories are properly taken into account. Furthermore, the particular setup discussed in [C. Brukner, {\em A no-go theorem for observer-independent facts}, Entropy {\bf 20}, 350 (2018)] cannot be tested against the decoherence framework, because it does not give rise to well-defined outcomes according to this formalism. A variation of this setup, devised to fill this gap, makes it possible to assign joint truth values to the observations made by all the agents. This framework also narrows down the requisites for such experiments, making them virtually impossible to apply to conscious (human) beings. Notwithstanding, it also opens the door to future realizations on quantum machines.
quantum physics
Dense matter is usually described using some kind of mean field theory (MFT) model based on Boltzmann-Gibbs (BG) extensive statistics. However, in many cases the conditions justifying the use of BG statistics are not fulfilled because the systems considered are explicitly nonextensive. In such cases one either enriches the original MFT by adding some dynamical elements violating extensivity (like, for example, long range correlations or intrinsic fluctuations), or one replaces the BG statistics by its nonextensive counterpart characterized by some nonextensivity parameter $q \neq 1$ (for $q \to 1$ one returns to the extensive situation). In this work, using a simple quasi-particle description of dense matter (with interactions modelled by effective fugacities, $z$) we discuss the mutual interplay of non-extensiveness and dynamics (i.e., of $q$ and $z$).
high energy physics phenomenology
Non-intrusive load monitoring addresses the challenging task of decomposing the aggregate signal of a household's electricity consumption into appliance-level data without installing dedicated meters. By detecting load malfunction and recommending energy reduction programs, cost-effective non-intrusive load monitoring provides intelligent demand-side management for utilities and end users. In this paper, we boost the accuracy of energy disaggregation with a novel neural network structure named scale- and context-aware network, which exploits multi-scale features and contextual information. Specifically, we develop a multi-branch architecture with multiple receptive field sizes and branch-wise gates that connect the branches in the sub-networks. We build a self-attention module to facilitate the integration of global context, and we incorporate an adversarial loss and on-state augmentation to further improve the model's performance. Extensive simulation results tested on open datasets corroborate the merits of the proposed approach, which significantly outperforms state-of-the-art methods.
electrical engineering and systems science
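As a hedged illustration of the "multiple receptive field sizes with branch-wise gates" ingredient mentioned in the abstract above (only that ingredient: the self-attention module, adversarial loss and on-state augmentation of the actual model are omitted, and all layer sizes are invented), a minimal PyTorch block could look like this:

```python
import torch
import torch.nn as nn

class MultiScaleGatedBlock(nn.Module):
    """Parallel 1-D conv branches with different receptive fields, mixed by a
    learned soft gate over branches (a caricature of the scale-aware part only)."""
    def __init__(self, in_ch=1, out_ch=32, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(in_ch, len(kernel_sizes)), nn.Softmax(dim=-1)
        )

    def forward(self, x):                        # x: (batch, in_ch, time)
        g = self.gate(x)                         # (batch, n_branches)
        outs = [branch(x) for branch in self.branches]
        y = sum(g[:, i, None, None] * outs[i] for i in range(len(outs)))
        return torch.relu(y)

x = torch.randn(8, 1, 512)                       # window of aggregate power readings
print(MultiScaleGatedBlock()(x).shape)           # torch.Size([8, 32, 512])
```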
As ancient, gravitationally bound stellar populations, globular clusters are abundant, vibrant laboratories characterized by high frequencies of dynamical interactions coupled to complex stellar evolution. Using surface brightness and velocity dispersion profiles from the literature, we fit $59$ Milky Way globular clusters to dynamical models from the \texttt{CMC Cluster Catalog}. Without doing any interpolation, and without any directed effort to fit any particular cluster, $26$ globular clusters are well-matched by at least one of our models. We discuss in particular the core-collapsed clusters NGC 6293, NGC 6397, NGC 6681, and NGC 6624, and the non-core-collapsed clusters NGC 288, NGC 4372, and NGC 5897. As NGC 6624 lacks well-fitting snapshots on the main \texttt{CMC Cluster Catalog}, we run six additional models in order to refine the fit. We calculate metrics for mass segregation, explore the production of compact object sources such as millisecond pulsars, cataclysmic variables, low-mass X-ray binaries, and stellar-mass black holes, finding reasonable agreement with observations. Additionally, closely mimicking observational cuts, we extract the binary fraction from our models, finding good agreement except in the dense core regions of core-collapsed clusters. Accompanying this paper are a number of \textsf{python} methods for examining the publicly accessible \texttt{CMC Cluster Catalog}, as well as any other models generated using \texttt{CMC}.
astrophysics
Apparent liquid permeability (ALP) in ultra-confined permeable media is primarily governed by the pore confinement and fluid-rock interactions. A new ALP model is required to predict the interactive effect of the above two on the flow in mixed-wet, heterogeneous nanoporous media. This study derives an ALP model and integrates the compiled results from molecular dynamics (MD) simulations, scanning electron microscopy, atomic force microscopy, and mercury injection capillary pressure. The ALP model assumes viscous forces, capillary forces, and liquid slippage in tortuous, rough pore throats. Predictions of the slippage of water and octane are validated against MD data reported in the literature. In up-scaling the proposed liquid transport model to the representative-elementary-volume scale, we integrate the geological fractals of the shale rock samples including their pore size distribution, pore throat tortuosity, and pore-surface roughness. Sensitivity results for the ALP indicate that when the pore size is below 100 nm pore confinement allows oil to slip in both hydrophobic and hydrophilic pores, yet it also restricts the ALP due to the restricted intrinsic permeability. The ALP reduces to the well-established Carman-Kozeny equation for no-slip viscous flow in a bundle of capillaries, which reveals a distinguishable liquid flow behavior in shales versus conventional rocks. Compared to the Klinkenberg equation, the proposed ALP model reveals an important insight into the similarities and differences between liquid versus gas flow in shales.
physics
This document is meant as a pedagogical introduction to the modern language used to talk about quantum theory, especially in the field of quantum information. It assumes that the reader has taken a first traditional course on quantum mechanics, and is familiar with the concept of Hilbert space and elementary linear algebra. As in the popular textbook on quantum information by Nielsen and Chuang, we introduce the generalised concept of states (density matrices), observables (POVMs) and transformations (channels), but we also characterise these structures from an algebraic standpoint, which provides many useful technical tools, and clarity as to their generality. This approach also makes it manifest that quantum theory is a direct generalisation of probability theory, and provides a unifying formalism for both fields. The focus on finite-dimensional systems allows for a self-contained presentation which avoids many of the technicalities inherent to the more general $C^*$-algebraic approach, while being appropriate for the quantum information literature.
quantum physics
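To make the three structures named in the abstract above concrete, here is a tiny numpy illustration (the specific state, channel and POVM are arbitrary examples chosen here, not taken from the document): a density matrix evolved by a depolarizing channel written in Kraus form, then measured with a two-outcome POVM.

```python
import numpy as np

# Pauli matrices and a generic single-qubit density matrix
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
rho = 0.5 * (I + 0.3 * X + 0.4 * Z)                           # valid state: unit trace, positive semidefinite

# Depolarizing channel as Kraus operators {K_i} with sum_i K_i^dag K_i = I
p = 0.2
kraus = [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]
assert np.allclose(sum(K.conj().T @ K for K in kraus), I)     # trace-preserving condition

rho_out = sum(K @ rho @ K.conj().T for K in kraus)            # action of the channel
print(np.trace(rho_out).real, np.linalg.eigvalsh(rho_out))    # trace 1, nonnegative spectrum

# A two-outcome POVM {E, I - E}: effects are positive semidefinite and sum to the identity
E = 0.7 * np.array([[1, 0], [0, 0]]) + 0.1 * I
probs = [np.trace(M @ rho_out).real for M in (E, I - E)]
print(probs, sum(probs))                                      # outcome probabilities summing to 1
```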
The 21-cm signal from the Cosmic Dawn (CD) is likely to contain large fluctuations, with the most extreme astrophysical models on the verge of being ruled out by observations from radio interferometers. It is therefore vital that we understand not only the astrophysical processes governing this signal, but also other inherent processes impacting the signal itself, and in particular line-of-sight effects. Using our suite of fully numerical radiative transfer simulations, we investigate the impact on the redshifted 21-cm signal from the CD of one of these processes, namely the redshift-space distortions (RSDs). When RSDs are added, the resulting boost to the power spectra makes the signal more detectable for our models at all redshifts, further strengthening hopes that a power spectrum measurement of the CD will be possible. RSDs lead to anisotropy in the signal at the beginning and end of the CD, but not while X-ray heating is underway. The inclusion of RSDs, however, decreases the detectability of the non-Gaussianity of fluctuations from inhomogeneous X-ray heating, measured by the skewness and kurtosis. On the other hand, mock observations created from all our simulations, including telescope noise corresponding to a 1000 h observation with the Square Kilometre Array telescope, show that we may be able to image the CD for all heating models considered and suggest that RSDs dramatically boost fluctuations coming from the inhomogeneous Ly-$\alpha$ background.
astrophysics
We analyze the thermodynamics of massless bosonic systems in D-dimensional anti-de Sitter spacetime, considering scalar, electromagnetic, and gravitational fields. Their dynamics are described by P\"oschl-Teller effective potentials and quantized in a unified framework, with the determination of the associated energy spectra. From the microscopic description developed, a macroscopic thermodynamic treatment is proposed, where an effective volume in anti-de Sitter geometry is defined and a suitable thermodynamic limit is considered. Partition functions are constructed for the bosonic gases, allowing the determination of several thermodynamic quantities of interest. With the obtained results, general aspects of the thermodynamics are explored.
high energy physics theory
In many real world chaotic systems, the interest is typically in determining when the system will behave in an extreme manner. Flooding and drought, extreme heatwaves, large earthquakes, and large drops in the stock market are examples of the extreme behaviors of interest. For clarity, in this paper we confine ourselves to the case where the chaotic system to be predicted is stationary, so that the theory for asymptotic consistency can be easily illuminated. We will start with a simple case, where the attractor of the chaotic system is of known dimension, so the answer is clear from prior work. Some extension will be made to stationary chaotic systems of higher dimension, where a number of empirical results will be described and a theoretical framework proposed to help explain them.
statistics
Recently, Frauchiger and Renner proposed a Gedankenexperiment, which was claimed to be able to prove that quantum theory cannot consistently describe the use of itself. Here we show that the conclusions of Frauchiger and Renner actually came from their incorrect description of some quantum states. With the correct description there will be no inconsistent results, no matter which quantum interpretation theory is used. Especially, the Copenhagen interpretation can satisfy all the three assumptions (C), (Q), and (S) of Frauchiger and Renner simultaneously, thus it has no problem consistently describing the use of itself.
quantum physics
The study of quantum gravity in the form of the holographic duality has uncovered and motivated the detailed investigation of various diagnostics of quantum chaos. One such measure is the operator size distribution, which characterizes the size of the support region of an operator and its evolution under Heisenberg evolution. In this work, we examine the role of the operator size distribution in holographic duality for the Sachdev-Ye-Kitaev (SYK) model. Using an explicit construction of AdS$_2$ bulk fermion operators in a putative dual of the low temperature SYK model, we study the operator size distribution of the boundary and bulk fermions. Our result provides a direct derivation of the relationship between (effective) operator size of both the boundary and bulk fermions and bulk $\text{SL}(2; \mathbb{R})$ generators.
high energy physics theory
We present a lattice study of a 2-flavor $U(1)$ gauge-Higgs model quantum field theory with a topological term at $\theta=\pi$. Such studies are prohibitively costly in the standard lattice formulation due to the sign-problem. Using a novel discretization of the model, along with an exact lattice dualization, we overcome the sign-problem and reliably simulate such systems. Our work provides the first ab initio demonstration that the model is in the spin-chain universality class, and demonstrates the power of the new approach to $U(1)$ gauge theories.
condensed matter
Multi-Agent Reinforcement Learning (MARL) methods find optimal policies for agents that operate in the presence of other learning agents. Central to achieving this is how the agents coordinate. One way to coordinate is by learning to communicate with each other. Can the agents develop a language while learning to perform a common task? In this paper, we formulate and study a MARL problem where cooperative agents are connected to each other via a fixed underlying network. These agents can communicate along the edges of this network by exchanging discrete symbols. However, the semantics of these symbols are not predefined and, during training, the agents are required to develop a language that helps them in accomplishing their goals. We propose a method for training these agents using emergent communication. We demonstrate the applicability of the proposed framework by applying it to the problem of managing traffic controllers, where we achieve state-of-the-art performance compared to a number of strong baselines. More importantly, we perform a detailed analysis of the emergent communication to show, for instance, that the developed language is grounded and demonstrate its relationship with the underlying network topology. To the best of our knowledge, this is the only work that performs an in-depth analysis of emergent communication in a networked MARL setting while being applicable to a broad class of problems.
computer science
We analyze the asymptotic stability of the $SU(n)$ Dark Monopole solutions and we show that there are unstable modes associated with them. We obtain the explicit form of the unstable perturbations and the associated negative-squared eigenfrequencies.
high energy physics theory
Cosmic rays are the most outstanding example of accelerated particles. They account for about 1\% of the total mass of the Universe, so that cosmic rays would represent by far the most important energy transformation process of the Universe. Despite large progress in building new detectors and in the analysis techniques, the key questions concerning the origin, acceleration and propagation of the radiation are still open. One of the reasons is that there are significant discrepancies among the results obtained by experiments located at ground, probably due to unknown systematic errors affecting the measurements. In this note we focus on the detection of Galactic CRs from the ground with EAS arrays. This is not the place for a complete review of CR physics (for which we recommend, for instance, \cite{spurio,gaisser,grieder,longair,kampert,blasi,kachelriess}); we aim only to provide elements useful to understand the basic techniques used in reconstructing primary particle characteristics (energy, mass and arrival direction) from the ground, and to show why indirect measurements are difficult and results still conflicting.
astrophysics
Group-based social dominance hierarchies are of essential interest in animal behavior research. Studies often record aggressive interactions observed over time, and models that can capture such dynamic hierarchy are therefore crucial. Traditional ranking methods summarize interactions across time, using only aggregate counts. Instead, we take advantage of the interaction timestamps, proposing a series of network point process models with latent ranks. We carefully design these models to incorporate important characteristics of animal interaction data, including the winner effect, bursting and pair-flip phenomena. Through iteratively constructing and evaluating these models we arrive at the final cohort Markov-Modulated Hawkes process (C-MMHP), which best characterizes all aforementioned patterns observed in interaction data. We compare all models using simulated and real data. Using statistically developed diagnostic perspectives, we demonstrate that the C-MMHP model outperforms other methods, capturing relevant latent ranking structures that lead to meaningful predictions for real data.
statistics
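As a schematic of the final model class named above (a generic Markov-modulated Hawkes intensity in our own notation; the paper's actual parametrization with latent ranks, the winner effect and pair flips is richer), the rate at which animal $i$ attacks animal $j$ could be written as

```latex
\[
\lambda_{ij}(t) = \mu_{ij}(z_t) + \sum_{t_k < t} \alpha_{ij}(z_t)\, e^{-\beta (t - t_k)},
\]
```

where the sum over the pair's past interaction times $t_k$ produces bursting, the latent Markov state $z_t$ modulates the baseline and excitation (allowing pair-flip episodes), and the latent ranks enter through the baseline $\mu_{ij}$.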
We propose a class of single-field, slow-roll inflation models in which a typical number of $e$-folds can be extremely large. The key point is to introduce a very shallow local minimum near the top of the potential in a hilltop inflation model. In particular, a typical number of $e$-folds is enhanced if classical behavior dominates around the local minimum such that the inflaton probability distribution is drifted to the local minimum as a whole. After the inflaton escapes from the local minimum due to the stochastic dynamics, the ordinary slow-roll inflation follows and it can generate the primordial density perturbation consistent with observation. Interestingly, our scenario inherits the advantages of the old and new inflation: the typical $e$-folds can be extremely large as in the old inflation, and slow-roll inflation naturally follows after the stochastic regime as in the new inflation. In our numerical example, the typical number of $e$-folds can be as large as $10^{10^{10}}$, which is large enough for various light scalars such as the QCD axion to reach the Bunch-Davies distribution.
high energy physics phenomenology
Assuming that $X(3872)$ is a mixture between $2P$ charmonium and $\bar{D}D^{*}$ molecular states with $J^{PC}=1^{++}$, an analysis of $X(3872)$ radiative decays into $J/\psi \gamma$ and $\psi(2S)\gamma$ is presented. The modification of the radiative branching ratio due to possible constructive or destructive interferences between the meson-loop and the short-distance contact term, which is modeled by a charm quark loop, is shown. The model predictions are shown to be compatible with the experimentally determined ratio of the mentioned branching fractions for a wide range of the $X(3872)$ charmonium content. In the case of the destructive interference, a strong restriction on the charmonium admixture is found.
high energy physics phenomenology
The phenomenon of the Microwave Hearing Effect (MHE) can be explained by assuming that the inner ear, specifically the human cochlea, acts as a receiving antenna for pulsed microwaves. The spiral structure of the cochlea picks up these incoming waves and demodulates them due to its directional conductivity, and a net voltage is induced, which explains the audible clicks observed in the MHE. Further, the maximum electromagnetic absorption is observed near the side of the head where the cochlea is located.
physics
We classify effective field theory (EFT) deformations of the Standard Model (SM) according to the analyticity property of the Lagrangian as a function of the Higgs doublet H. Our distinction between analytic and non-analytic corresponds to the more familiar one between linearly and non-linearly realized electroweak symmetry, but offers deeper physical insight. From the UV perspective, non-analyticity occurs when the new states acquire mass from electroweak symmetry breaking, and thus cannot be decoupled to arbitrarily high scales. This is reflected in the IR by the anomalous growth of the interaction strength for processes involving many Higgs bosons and longitudinally polarized massive vectors, with a breakdown of the EFT description below a scale $O(4 \pi v)$. Conversely, analyticity occurs when new physics can be pushed parametrically above the electroweak scale. We illustrate the physical distinction between these two EFT families by discussing Higgs boson self-interactions. In the analytic case, at the price of some unnaturalness in the Higgs potential, there exists space for $O(1)$ deviations of the cubic coupling, compatible with single Higgs and electroweak precision measurements, and with new particles out of direct LHC reach. Larger deviations are possible, but subject to less robust assumptions about higher-dimensional operators in the Higgs potential. On the other hand, when the cubic coupling is produced by a non-analytic deformation of the SM, we show by an explicit calculation that the theory reaches strong coupling at $O(4 \pi v)$, quite independently of the magnitude of the cubic enhancement.
high energy physics phenomenology
We revisit electroweak radiative corrections to Standard Model Effective Field Theory (SMEFT) operators which are relevant for the $B$-meson semileptonic decays. The one-loop matching formulae onto the low-energy effective field theory are provided without imposing any flavor symmetry. The on-shell conditions are applied especially in dealing with quark-flavor mixings. Also, the gauge independence is shown explicitly in the $R_\xi$ gauge.
high energy physics phenomenology
A power system electromechanical wave propagates from the disturbance location to the rest of the system, influencing various types of protection. In addition, since more power-electronics-interfaced generation and energy storage devices are being integrated into power systems, electromechanical wave propagation speeds in future power systems are likely to change accordingly. In this paper, GPS-synchronized measurement data from the wide-area synchrophasor measurement system FNET/GridEye are used to analyze the characteristics of electromechanical wave propagation in the U.S. Eastern Interconnection (EI) system. Afterwards, high levels of photovoltaic (PV) penetration are modeled in the EI to investigate the influence of a typical power-electronics-interfaced resource on the electromechanical wave propagation speed. The results show a direct correlation between the local penetration level of inverter-based generation and the electromechanical wave propagation speed.
electrical engineering and systems science
This paper presents the principles of operation of Resistive AC-Coupled Silicon Detectors (RSDs) and measurements of the temporal and spatial resolutions using a combined analysis of laser and beam test data. RSDs are a new type of n-in-p silicon sensor based on the Low-Gain Avalanche Diode (LGAD) technology, where the $n^+$ implant has been designed to be resistive, and the read-out is obtained via AC-coupling. The truly innovative feature of RSD is that the signal generated by an impinging particle is shared isotropically among multiple read-out pads without the need for floating electrodes or an external magnetic field. Careful tuning of the coupling oxide thickness and the $n^+$ doping profile is at the basis of the successful functioning of this device. Several RSD matrices with different pad width-pitch geometries have been extensively tested with a laser setup in the Laboratory for Innovative Silicon Sensors in Torino, while a smaller set of devices have been tested at the Fermilab Test Beam Facility with a 120 GeV/c proton beam. The measured spatial resolution ranges between $2.5\; \mu m$ for 70-100 pad-pitch geometry and $17\; \mu m$ with 200-500 matrices, a factor of 10 better than what is achievable in binary read-out ($bin\; size/ \sqrt{12}$). Beam test data show a temporal resolution of $\sim 40\; ps$ for 200-$\mu m$ pitch devices, in line with the best performances of LGAD sensors at the same gain.
physics
We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges. We address this issue with a novel stochastic variance-reduced extragradient (SVRE) optimization algorithm, which for a large class of games improves upon the previous convergence rates proposed in the literature. We observe empirically that SVRE performs similarly to a batch method on MNIST while being computationally cheaper, and that SVRE yields more stable GAN training on standard datasets.
statistics
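As a point of reference for the abstract above, the sketch below shows the plain (full-batch) extragradient update on a bilinear min-max game, the deterministic scheme that SVRE augments with stochastic variance reduction; the game, step size and iteration count are illustrative assumptions.

```python
# Minimal sketch of the (full-batch) extragradient update that SVRE builds on;
# this is not the variance-reduced algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
x, y = rng.standard_normal(5), rng.standard_normal(5)   # min over x, max over y of x^T A y
eta = 0.1

for _ in range(1000):
    # extrapolation ("lookahead") step
    x_half = x - eta * (A @ y)
    y_half = y + eta * (A.T @ x)
    # update step, using gradients evaluated at the extrapolated point
    x = x - eta * (A @ y_half)
    y = y + eta * (A.T @ x_half)

print("distance to equilibrium (0, 0):", np.linalg.norm(x) + np.linalg.norm(y))
```

For this bilinear game, simultaneous gradient steps cycle or diverge, while the extrapolated step drives the iterates toward the saddle point, which is the property SVRE preserves under stochastic gradients.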
Correlators of unitary quantum field theories in Lorentzian signature obey certain analyticity and positivity properties. For interacting unitary CFTs in more than two dimensions, we show that these properties impose general constraints on families of minimal twist operators that appear in the OPEs of primary operators. In particular, we rederive and extend the convexity theorem which states that for the family of minimal twist operators with even spins appearing in the reflection-symmetric OPE of any scalar primary, twist must be a monotonically increasing convex function of the spin. Our argument is completely non-perturbative and it also applies to the OPE of nonidentical scalar primaries in unitary CFTs, constraining the twist of spinning operators appearing in the OPE. Finally, we argue that the same methods also impose constraints on the Regge behavior of certain CFT correlators.
high energy physics theory
The Nancy Grace Roman Space Telescope Coronagraph Instrument (CGI) will be capable of characterizing exoplanets in reflected light and will demonstrate space technologies essential for future missions to take spectra of Earthlike exoplanets. As the mission and instrument move into the final stages of design, simulation tools spanning from depth of search calculators to detailed diffraction models have been created by a variety of teams. We summarize these efforts, with a particular focus on publicly available datasets and software tools. These include speckle and point-spread-function models, signal-to-noise calculators, and science product simulations (e.g. predicted observations of debris disks and exoplanet spectra). This review is intended to serve as a reference to facilitate engagement with the technical and science capabilities of the CGI instrument.
astrophysics
The Laser Interferometer Space Antenna will be the first Gravitational Wave observatory in space. It is scheduled to fly in the early 2030s. The LISA design predicts sensitivity levels that enable the detection of a Stochastic Gravitational Wave Background signal. This stochastic type of signal is a superposition of signatures from sources that cannot be resolved individually and which are of various types, each one contributing with a different spectral shape. In this work we present a fast methodology to assess the detectability of a stationary, Gaussian, and isotropic stochastic signal in a set of frequency bins, combining information from the available data channels. We derive an analytic expression for the Bayes Factor between the instrumental noise-only and the signal plus instrumental noise models, which allows us to compute the detectability bounds of a given signal as a function of frequency and of the prior knowledge of the instrumental noise spectrum.
astrophysics
Suppose $X$ is a smooth, proper, geometrically connected curve over $\mathbb F_q$ with an $\mathbb F_q$-rational point $x_0$. For any $\mathbb F_q^{\times}$-character $\sigma$ of $\pi_1(X)$ trivial on $x_0$, we construct a functor $\mathbb L_n^{\sigma}$ from the derived category of coherent sheaves on the moduli space of deformations of $\sigma$ over the Witt ring $W_n(\mathbb F_q)$ to the derived category of constructible $W_n(\mathbb F_q)$-sheaves on the Jacobian of $X$. The functors $\mathbb L_n^{\sigma}$ categorify the Artin reciprocity map for geometric class field theory with $p$-torsion coefficients. We then give a criterion for the fully faithfulness of (an enhanced version of) $\mathbb L_n^{\sigma}$ in terms of the Hasse-Witt matrix of $X$.
mathematics
Early detection of skin cancer, particularly melanoma, is crucial to enable advanced treatment. Due to the rapid growth in the number of skin cancers, there is a growing need for computerized analysis of skin lesions. The state-of-the-art publicly available datasets for skin lesions are often accompanied by a very limited amount of segmentation ground truth labeling, as it is laborious and expensive. Lesion boundary segmentation is vital for locating the lesion accurately in dermoscopic images and for diagnosing different skin lesion types. In this work, we propose the use of fully automated deep learning ensemble methods for accurate lesion boundary segmentation in dermoscopic images. We trained the Mask-RCNN and DeepLabv3+ methods on the ISIC-2017 segmentation training set and evaluated the performance of the ensemble networks on the ISIC-2017 testing set. Our results showed that the best proposed ensemble method segmented the skin lesions with a Jaccard index of 79.58% on the ISIC-2017 testing set. The proposed ensemble method outperformed FrCN, FCN, U-Net, and SegNet in Jaccard index by 2.48%, 7.42%, 17.95%, and 9.96%, respectively. Furthermore, the proposed ensemble method achieved an accuracy of 95.6% for some representative clinically benign cases, 90.78% for the melanoma cases, and 91.29% for the seborrheic keratosis cases on the ISIC-2017 testing set, exhibiting better performance than FrCN, FCN, U-Net, and SegNet.
electrical engineering and systems science
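For readers unfamiliar with the evaluation metric quoted above, the sketch below computes the Jaccard (intersection-over-union) index for binary masks and averages two probability maps as a simple stand-in for an ensemble; the toy maps, threshold and averaging rule are illustrative assumptions rather than the paper's exact ensembling scheme.

```python
# Minimal sketch of the Jaccard (IoU) metric and of a simple probability-averaging
# ensemble of two segmentation models; all data here are synthetic stand-ins.
import numpy as np

def jaccard(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

rng = np.random.default_rng(1)
truth = rng.random((64, 64)) > 0.7                         # ground-truth lesion mask
prob_a = np.clip(truth + 0.3 * rng.standard_normal((64, 64)), 0, 1)   # "Mask-RCNN" map
prob_b = np.clip(truth + 0.3 * rng.standard_normal((64, 64)), 0, 1)   # "DeepLabv3+" map

ensemble_mask = ((prob_a + prob_b) / 2.0) > 0.5
print("ensemble Jaccard index:", round(jaccard(ensemble_mask, truth), 3))
```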
In industrial applications, the early detection of malfunctioning factory machinery is crucial. In this paper, we consider acoustic malfunction detection via transfer learning. Contrary to the majority of current approaches which are based on deep autoencoders, we propose to extract features using neural networks that were pretrained on the task of image classification. We then use these features to train a variety of anomaly detection models and show that this improves results compared to convolutional autoencoders in recordings of four different factory machines in noisy environments. Moreover, we find that features extracted from ResNet based networks yield better results than those from AlexNet and Squeezenet. In our setting, Gaussian Mixture Models and One-Class Support Vector Machines achieve the best anomaly detection performance.
electrical engineering and systems science
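The anomaly-scoring stage described above can be sketched as follows: a Gaussian Mixture Model is fit on feature vectors of normal recordings, and low log-likelihood flags potential faults. Here random vectors stand in for the pretrained-network features, and the component count and alarm threshold are illustrative assumptions.

```python
# Minimal sketch of GMM-based anomaly scoring on pretrained features;
# the feature vectors below are synthetic stand-ins for ResNet embeddings.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_feats = rng.standard_normal((500, 128))            # features of healthy machines
anomalous_feats = rng.standard_normal((20, 128)) + 3.0    # shifted cluster = faulty machines

gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(normal_feats)

scores_normal = gmm.score_samples(normal_feats)
scores_anom = gmm.score_samples(anomalous_feats)
threshold = np.percentile(scores_normal, 1)               # ~1% false-alarm rate on training data
print("flagged anomalies:", int((scores_anom < threshold).sum()), "of", len(scores_anom))
```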
Galaxy clusters are the largest gravitationally bound systems in the Universe and, as such, play an important role in cosmological studies. Homogeneous, large image datasets covering diverse environments are an important resource for studying their properties in a statistical manner. In this sense, the wide-field images (1.4 deg$^{2}$) obtained by the Southern Photometric Local Universe Survey (S-PLUS) in 12 optical bands constitute a valuable tool for this type of study. In this work, we present a photometric analysis of pixel color-magnitude diagrams for a sample of 24 galaxies of different morphological types located in the Fornax cluster.
astrophysics
The quantum approximate optimization algorithm (QAOA) is a hybrid variational quantum-classical algorithm that solves combinatorial optimization problems. While there is evidence suggesting that the fixed form of the original QAOA ansatz is not optimal, there is no systematic approach for finding better ans\"atze. We address this problem by developing an iterative version of QAOA that is problem-tailored, and which can also be adapted to specific hardware constraints. We simulate the algorithm on a class of Max-Cut graph problems and show that it converges much faster than the original QAOA, while simultaneously reducing the required number of CNOT gates and optimization parameters. We provide evidence that this speedup is connected to the concept of shortcuts to adiabaticity.
quantum physics
Distance measurements to molecular clouds are essential and important. We present directly measured distances to 169 molecular clouds in the fourth quadrant of the Milky Way. Based on the near-infrared photometry from the Two Micron All Sky Survey and the Vista Variables in the Via Lactea Survey, we select red clump stars in the overlapping directions of the individual molecular clouds and infer the bin averaged extinction values and distances to these stars. We track the extinction versus distance profiles of the sightlines toward the clouds and fit them with Gaussian dust distribution models to find the distances to the clouds. We have obtained distances to 169 molecular clouds selected from Rice et al. The clouds range in distances between 2 and 11 kpc from the Sun. The typical internal uncertainties in the distances are less than 5 per cent and the systematic uncertainty is about 7 per cent. The catalogue presented in this work is one of the largest homogeneous catalogues of distant molecular clouds with the direct measurement of distances. Based on the catalogue, we have tested different spiral arm models from the literature.
astrophysics
Let $ X $ be an $ m \times n $ matrix of distinct indeterminates over a field $ K $, where $ m \le n $. Set the polynomial ring $K[X] := K[X_{ij} : 1 \le i \le m, 1 \le j \le n] $. Let $ 1 \le k < l \le n $ be such that $ l - k + 1 \ge m $. Consider the submatrix $ Y_{kl} $ of consecutive columns of $ X $ from $ k $th column to $ l $th column. Let $ J_{kl} $ be the ideal generated by `diagonal monomials' of all $ m \times m $ submatrices of $ Y_{kl} $, where diagonal monomial of a square matrix means product of its main diagonal entries. We show that $ J_{k_1 l_1} J_{k_2 l_2} \cdots J_{k_s l_s} $ has a linear free resolution, where $ k_1 \le k_2 \le \cdots \le k_s $ and $ l_1 \le l_2 \le \cdots \le l_s $. This result is a variation of a theorem due to Bruns and Conca. Moreover, our proof is self-contained, elementary and combinatorial.
mathematics
We propose a framework for learning neural scene representations directly from images, without 3D supervision. Our key insight is that 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. Specifically, we introduce a loss which enforces equivariance of the scene representation with respect to 3D transformations. Our formulation allows us to infer and render scenes in real time while achieving comparable results to models requiring minutes for inference. In addition, we introduce two challenging new datasets for scene representation and neural rendering, including scenes with complex lighting and backgrounds. Through experiments, we show that our model achieves compelling results on these datasets as well as on standard ShapeNet benchmarks.
computer science
In this work, we explore the benefits of using multilingual bottleneck features (mBNF) in acoustic modelling for the automatic speech recognition of code-switched (CS) speech in African languages. The unavailability of annotated corpora in the languages of interest has always been a primary challenge when developing speech recognition systems for this severely under-resourced type of speech. Hence, it is worthwhile to investigate the potential of using speech corpora available for other better-resourced languages to improve speech recognition performance. To achieve this, we train a mBNF extractor using nine Southern Bantu languages that form part of the freely available multilingual NCHLT corpus. We append these mBNFs to the existing MFCCs, pitch features and i-vectors to train acoustic models for automatic speech recognition (ASR) in the target code-switched languages. Our results show that the inclusion of the mBNF features leads to clear performance improvements over a baseline trained without the mBNFs for code-switched English-isiZulu, English-isiXhosa, English-Sesotho and English-Setswana speech.
electrical engineering and systems science
We describe an extension of the most recent version of the Planck Catalogue of Compact Sources (PCCS2), produced using a new multi-band Bayesian Extraction and Estimation Package (BeeP). BeeP assumes that the compact sources present in PCCS2 at 857 GHz have a dust-like spectral energy distribution, which leads to emission at both lower and higher frequencies, and adjusts the parameters of the source and its SED to fit the emission observed in Planck's three highest frequency channels at 353, 545, and 857 GHz, as well as the IRIS map at 3000 GHz. In order to reduce confusion regarding diffuse cirrus emission, BeeP's data model includes a description of the background emission surrounding each source, and it adjusts the confidence in the source parameter extraction based on the statistical properties of the spatial distribution of the background emission. BeeP produces the following three new sets of parameters for each source: (a) fits to a modified blackbody (MBB) thermal emission model of the source; (b) SED-independent source flux densities at each frequency considered; and (c) fits to an MBB model of the background in which the source is embedded. BeeP also calculates, for each source, a reliability parameter, which takes into account confusion due to the surrounding cirrus. We define a high-reliability subset (BeeP/base), containing 26 083 sources (54.1 per cent of the total PCCS2 catalogue), the majority of which have no information on reliability in the PCCS2. The results of the BeeP extension of PCCS2, which are made publicly available via the PLA, will enable the study of the thermal properties of well-defined samples of compact Galactic and extra-galactic dusty sources.
astrophysics
For the past 150 years, the prevailing view of the local Interstellar Medium (ISM) was based on a peculiarity known as the Gould Belt, an expanding ring of young stars, gas, and dust, tilted about 20$^\circ$ to the Galactic plane. Still, the physical relation between local gas clouds has remained practically unknown because the uncertainties in distances to clouds are of the same order as, or larger than, their sizes. With the advent of large photometric surveys and the Gaia satellite astrometric survey, this situation has changed. Here we report the 3-D structure of all local cloud complexes. We find a narrow and coherent 2.7 kpc arrangement of dense gas in the Solar neighborhood that contains many of the clouds thought to be associated with the Gould Belt. This finding is inconsistent with the notion that these clouds are part of a ring, disputing the Gould Belt model. The new structure comprises the majority of nearby star-forming regions, has an aspect ratio of about 1:20, and contains about 3 million solar masses of gas. Remarkably, the new structure appears to be undulating, and its 3-D distribution is well described by a damped sinusoidal wave on the plane of the Milky Way, with an average period of about 2 kpc and a maximum amplitude of about 160 pc. Our results represent a first step in the revision of the local gas distribution and Galactic structure, and offer a new, broader context for studies of the transformation of molecular gas into stars.
astrophysics
Convolutional neural networks (CNNs) have demonstrated promise in automated cardiac magnetic resonance imaging segmentation. However, when using CNNs in a large real world dataset, it is important to quantify segmentation uncertainty in order to know which segmentations could be problematic. In this work, we performed a systematic study of Bayesian and non-Bayesian methods for estimating uncertainty in segmentation neural networks. We evaluated Bayes by Backprop (BBB), Monte Carlo (MC) Dropout, and Deep Ensembles in terms of segmentation accuracy, probability calibration, uncertainty on out-of-distribution images, and segmentation quality control. We tested these algorithms on datasets with various distortions and observed that Deep Ensembles outperformed the other methods except for images with heavy noise distortions. For segmentation quality control, we showed that segmentation uncertainty is correlated with segmentation accuracy. With the incorporation of uncertainty estimates, we were able to reduce the percentage of poor segmentation to 5% by flagging 31% to 48% of the most uncertain images for manual review, substantially lower than random review of the results without using neural network uncertainty.
electrical engineering and systems science
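Of the uncertainty estimators compared above, Monte Carlo Dropout is the simplest to illustrate: dropout is kept active at test time and the spread over repeated forward passes serves as an uncertainty signal. The sketch below uses a toy classifier as a stand-in for a segmentation network; the architecture, dropout rate and number of samples are illustrative assumptions.

```python
# Minimal sketch of Monte Carlo Dropout inference: dropout stays stochastic at test
# time and predictive entropy over repeated passes serves as an uncertainty score.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(32, 4))
model.eval()
for m in model.modules():
    if isinstance(m, nn.Dropout):
        m.train()                       # keep dropout active during inference

x = torch.randn(8, 16)                  # a batch of inputs (stand-in for image features)
with torch.no_grad():
    samples = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(50)])

mean_prob = samples.mean(dim=0)
entropy = -(mean_prob * mean_prob.clamp_min(1e-12).log()).sum(dim=-1)
print("predictive entropy per example:", entropy)
```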
Voice controlled applications can be a great aid to society, especially for physically challenged people. However this requires robustness to all kinds of variations in speech. A spoken language understanding system that learns from interaction with and demonstrations from the user, allows the use of such a system in different settings and for different types of speech, even for deviant or impaired speech, while also allowing the user to choose a phrasing. The user gives a command and enters its intent through an interface, after which the model learns to map the speech directly to the right action. Since the effort of the user should be as low as possible, capsule networks have drawn interest due to potentially needing little training data compared to deeper neural networks. In this paper, we show how capsules can incorporate multitask learning, which often can improve the performance of a model when the task is difficult. The basic capsule network will be expanded with a regularisation to create more structure in its output: it learns to identify the speaker of the utterance by forcing the required information into the capsule vectors. To this end we move from a speaker dependent to a speaker independent setting.
electrical engineering and systems science
The ability to perceive and recognize objects is fundamental for interaction with the external environment. Studies investigating these processes and their relationship with changes in brain activity have been increasing due to possible applications in an intuitive brain-machine interface (BMI). In addition, the distinctive patterns elicited by different visual stimuli, which make the data differentiable enough to be classified, have been studied. However, reported classification accuracies are still low, or the techniques employed to obtain brain signals are impractical in real environments. In this study, we aim to decode electroencephalography (EEG) signals depending on the provided visual stimulus. Subjects were presented with 72 photographs belonging to 6 different semantic categories. We classified 6 categories and 72 exemplars according to the visual stimuli using EEG signals. In order to achieve high classification accuracy, we proposed an attention-driven convolutional neural network and compared our results with conventional methods used for classifying EEG signals. We report an accuracy of 50.37% and 26.75% for the 6-class and 72-class tasks, respectively. These results statistically outperform other conventional methods. This was possible because of the application of the attention network using human visual pathways. Our findings show that EEG signals can be differentiated, with high classification accuracy, when subjects are presented with visual stimuli of different semantic categories, and even at the exemplar level; this demonstrates the viability of applying the approach in a real-world BMI.
electrical engineering and systems science
Recently, several studies have proven the global convergence and generalization abilities of the gradient descent method for two-layer ReLU networks. Most studies especially focused on the regression problems with the squared loss function, except for a few, and the importance of the positivity of the neural tangent kernel has been pointed out. On the other hand, the performance of gradient descent on classification problems using the logistic loss function has not been well studied, and further investigation of this problem structure is possible. In this work, we demonstrate that the separability assumption using a neural tangent model is more reasonable than the positivity condition of the neural tangent kernel and provide a refined convergence analysis of the gradient descent for two-layer networks with smooth activations. A remarkable point of our result is that our convergence and generalization bounds have much better dependence on the network width in comparison to related studies. Consequently, our theory provides a generalization guarantee for less over-parameterized two-layer networks, while most studies require much higher over-parameterization.
statistics
We investigate superconductor-insulator quantum phase transitions in ultrathin capacitively coupled superconducting nanowires with proliferating quantum phase slips. We derive a set of coupled Berezinskii-Kosterlitz-Thouless-like renormalization group equations demonstrating that interaction between quantum phase slips in one of the wires gets modified due to the effect of plasma modes propagating in another wire. As a result, the superconductor-insulator phase transition in each of the wires is controlled not only by its own parameters but also by those of the neighboring wire as well as by mutual capacitance. We argue that superconducting nanowires with properly chosen parameters may turn insulating once they are brought sufficiently close to each other.
condensed matter
We use $\Delta$SYM-H to capture the variation in the SYM-H index during the main phase of a geomagnetic storm. We define great geomagnetic storms as those with $\Delta$SYM-H $\le$ -200 nT. After analyzing the data that were not obscured by solar winds, we determined that 11 such storms occurred during solar cycle 23. We calculated time integrals for the southward interplanetary magnetic field component I(B$_s$), the solar wind electric field I(E$_y$), and a combination of E$_y$ and the solar wind dynamic pressure I(Q) during the main phase of a great geomagnetic storm. The strength of the correlation coefficient (CC) between $\Delta$SYM-H and each of the three integrals I(B$_s$) (CC = 0.74), I(E$_y$) (CC = 0.85), and I(Q) (CC = 0.94) suggests that Q, which encompasses both the solar wind electric field and the solar wind dynamic pressure, is the main driving factor that determines the intensity of a great geomagnetic storm. The results also suggest that the impact of B$_s$ on the great geomagnetic storm intensity is much more significant than that of the solar wind speed and the dynamic pressure during the main phase of associated great geomagnetic storm. How to estimate the intensity of an extreme geomagnetic storm based on solar wind parameters is also discussed.
physics
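The sketch below illustrates, on synthetic one-minute solar-wind data for a single main phase, how time integrals such as I(B$_s$) and I(E$_y$) can be accumulated before correlating them with $\Delta$SYM-H across storms; the waveforms and the particular combination used for Q are illustrative assumptions, since the paper's exact definition of Q is not reproduced here.

```python
# Minimal sketch of the time-integral bookkeeping for one storm main phase.
# All waveforms, and the form of Q, are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
dt_min = 1.0                                         # 1-minute cadence
t = np.arange(0, 600, dt_min)                        # a 10-hour main phase
Bs = np.clip(8 + 4 * np.sin(t / 90) + rng.normal(0, 1, t.size), 0, None)   # southward IMF [nT]
V = 550 + 30 * np.sin(t / 120)                       # solar-wind speed [km/s]
Ey = V * Bs * 1e-3                                   # solar-wind electric field [mV/m]
Pdyn = 4 + rng.normal(0, 0.5, t.size)                # dynamic pressure [nPa]
Q = Ey * np.sqrt(Pdyn)                               # one possible Ey-pressure combination

I_Bs, I_Ey, I_Q = (np.sum(x) * dt_min for x in (Bs, Ey, Q))
print("I(Bs), I(Ey), I(Q) for this storm:", round(I_Bs), round(I_Ey), round(I_Q))

# With many storms collected, the correlation coefficient would then be, e.g.:
# cc = np.corrcoef(delta_symh_all_storms, I_Q_all_storms)[0, 1]
```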
We discuss under what circumstances a signal in upcoming laboratory searches for keV-scale sterile neutrinos would be compatible with those particles being a sizable part or all of dark matter. In the parameter space that will be experimentally accessible by KATRIN/TRISTAN, strong X-ray limits need to be relaxed and dark matter overproduction needs to be avoided. We discuss postponing the dark matter production to lower temperatures, a reduced sterile neutrino contribution to dark matter, and a reduction of the branching ratio in photons and active neutrinos through cancellation with a new physics diagram. Both the Dodelson-Widrow and the Shi-Fuller mechanisms for sterile neutrino dark matter production are considered. As a final exotic example, potential consequences of CPT violation are discussed.
high energy physics phenomenology
The remaining theoretical uncertainties from unknown higher-order corrections in the prediction for the light Higgs-boson mass of the MSSM are estimated. The uncertainties associated with three different approaches that are implemented in the publicly available code FeynHiggs are compared: the fixed-order diagrammatic approach, suitable for low SUSY scales, the effective field theory (EFT) approach, suitable for high SUSY scales, and the hybrid approach which combines the fixed-order and the EFT approaches. It is demonstrated for a simple single-scale scenario that the result based on the hybrid approach yields a precise prediction for low, intermediate and high SUSY scales with a theoretical uncertainty of up to $\sim 1.5$ GeV for large stop mixing and $\sim 0.5$ GeV for small stop mixing. The uncertainty estimate of the hybrid calculation approaches the uncertainty estimate of the fixed-order result for low SUSY scales and the uncertainty estimate of the EFT approach for high SUSY scales, while for intermediate scales it is reduced compared to both of the individual results. The estimate of the theoretical uncertainty is also investigated in scenarios with more than one mass scale. A significantly enhanced uncertainty is found in scenarios where the gluino is substantially heavier than the scalar top quarks. The uncertainty estimate presented in this paper will be part of the public code FeynHiggs.
high energy physics phenomenology
The recent discovery of double charm baryon states by the LHCb Collaboration and their high-precision mass determination calls for a comprehensive analysis of the nonleptonic decays of double and single heavy baryons. Nonleptonic baryon decays play an important role in particle phenomenology since they allow one to study the interplay of long- and short-distance dynamics of the Standard Model (SM). Further, they allow one to search for New Physics effects beyond the SM. We review recent progress in experimental and theoretical studies of the nonleptonic decays of heavy baryons with a focus on double charm baryon states and their decays. In particular, we discuss new ideas proposed by the present authors to calculate the $W$-exchange matrix elements of the nonleptonic decays of double heavy baryons. An important ingredient in our approach is the compositeness condition of Salam and Weinberg, and an effective implementation of infrared confinement, both of which allow one to describe the nonperturbative structure of baryons composed of light and heavy quarks. Further, we discuss an ab initio calculational method for the treatment of the so-called $W$-exchange diagrams generated by $W^{\pm}$ boson exchange between quarks. We find that the $W^{\pm}$-exchange contributions are not suppressed in comparison with the tree-level (factorizing) diagrams and must be taken into account in the evaluation of matrix elements. Moreover, there are decay processes such as the doubly Cabibbo-suppressed decay $\Xi_c^+ \to p \phi$, recently observed by the LHCb Collaboration, to which only a single $W$-exchange diagram contributes.
high energy physics phenomenology
Merger simulations predict that tidally induced gas inflows can trigger kpc-scale dual active galactic nuclei (dAGN) in heavily obscured environments. Previously with the Very Large Array, we have confirmed four dAGN with redshifts between $0.04 < z < 0.22$ and projected separations between 4.3 and 9.2 kpc in the SDSS Stripe 82 field. Here, we present $Chandra$ X-ray observations that spatially resolve these dAGN and compare their multi-wavelength properties to those of single AGN from the literature. We detect X-ray emission from six of the individual merger components and obtain upper limits for the remaining two. Combined with previous radio and optical observations, we find that our dAGN have properties similar to nearby low-luminosity AGN, and they agree well with the black hole fundamental plane relation. There are three AGN-dominated X-ray sources, whose X-ray hardness-ratio derived column densities show that two are unobscured and one is obscured. The low obscured fraction suggests these dAGN are no more obscured than single AGN, in contrast to the predictions from simulations. These three sources show an apparent X-ray deficit compared to their mid-infrared continuum and optical [OIII] line luminosities, suggesting higher levels of obscuration, in tension with the hardness-ratio derived column densities. Enhanced mid-infrared and [OIII] luminosities from star formation may explain this deficit. There is ambiguity in the level of obscuration for the remaining five components since their hardness ratios may be affected by non-nuclear X-ray emissions, or are undetected altogether. They require further observations to be fully characterized.
astrophysics
We present a framework to design nonlinear robust output feedback model predictive control (MPC) schemes that ensure constraint satisfaction under noisy output measurements and disturbances. We provide novel estimation methods to bound the magnitude of the estimation error based on: stability properties of the observer; detectability; set-membership estimation; moving horizon estimation (MHE). Robust constraint satisfaction is guaranteed by suitably incorporating these online validated bounds on the estimation error in a homothetic tube based MPC formulation. In addition, we show how the performance can be further improved by combining MHE and MPC in a single optimization problem. The framework is applicable to a general class of detectable and (incrementally) stabilizable nonlinear systems. While standard output feedback MPC schemes use offline computed worst-case bounds on the estimation error, the proposed framework utilizes online validated bounds, thus reducing conservatism and improving performance. We demonstrate the reduced conservatism of the proposed framework using a nonlinear 10-state quadrotor example.
electrical engineering and systems science
We define a holographic dual to the Donaldson-Witten topological twist of $\mathcal{N}=2$ gauge theories on a Riemannian four-manifold. This is described by a class of asymptotically locally hyperbolic solutions to $\mathcal{N}=4$ gauged supergravity in five dimensions, with the four-manifold as conformal boundary. Under AdS/CFT, minus the logarithm of the partition function of the gauge theory is identified with the holographically renormalized supergravity action. We show that the latter is independent of the metric on the boundary four-manifold, as required for a topological theory. Supersymmetric solutions in the bulk satisfy first order differential equations for a twisted $Sp(1)$ structure, which extends the quaternionic Kahler structure that exists on any Riemannian four-manifold boundary. We comment on applications and extensions, including generalizations to other topological twists.
high energy physics theory
Mechanical metamaterials are usually designed to show desired responses to prescribed forces. In some applications, the desired force-response relationship might be hard to specify exactly, although examples of forces and corresponding desired responses are easily available. Here we propose a framework for supervised learning in a thin creased sheet that learns the desired force-response behavior from training examples of spatial force patterns and can then respond correctly to previously unseen test forces. During training, we fold the sheet using different training forces and assume a learning rule that changes stiffness of creases in response to their folding strain. We find that this learning process reshapes non-linearities inherent in folding a sheet so as to show the correct response for previously unseen test forces. We study the relationship between training error, test error and sheet size which plays the role of model complexity. Our framework shows how the complex energy landscape of disordered mechanical materials can be reshaped using an iterative local learning rule.
condensed matter
Feynman integrals are central to all calculations in perturbative Quantum Field Theory. They often give rise to iterated integrals of dlog-forms with algebraic arguments, which in many cases can be evaluated in terms of multiple polylogarithms. This has led to certain folklore beliefs in the community stating that all such integrals evaluate to polylogarithms. Here we discuss a concrete example of a double iterated integral of two dlog-forms that evaluates to a period of a cusp form. The motivic versions of these integrals are shown to be algebraically independent from all multiple polylogarithms evaluated at algebraic arguments. From a mathematical perspective, we study a mixed elliptic Hodge structure arising from a simple geometric configuration in $\mathbb{P}^2$, consisting of a modular plane elliptic curve and a set of lines which meet it at torsion points, which may provide an interesting worked example from the point of view of periods, extensions of motives, and L-functions.
high energy physics theory
Automatic pneumonia detection based on deep learning has increasing clinical value. Although the existing Feature Pyramid Network (FPN) and its variants have already achieved some great successes, their detection accuracy for pneumonia lesions in medical images is still unsatisfactory. In this paper, we propose a pneumonia detection network based on feature pyramid attention enhancement, which integrates attended high-level semantic features with low-level information. We add another information-extracting path equipped with feature enhancement modules driven by an attention mechanism. Experimental results show that our proposed method achieves much better performance than the baselines in detecting pneumonia lesions, with improvements of 4.02% and 3.19%.
electrical engineering and systems science
We study the $O(N)$ model in dimension three (3$d$) at large and infinite $N$ and show that the line of fixed points found at $N=\infty$ --the Bardeen-Moshe-Bander (BMB) line-- has an intriguing origin at finite $N$. The large $N$ limit that allows us to find the BMB line must be taken on particular trajectories in the $(d,N)$-plane: $d=3-\alpha/N$ and not at fixed dimension $d=3$. Our study also reveals that the known BMB line is only half of the true line of fixed points, the second half being made of singular fixed points. The potentials of these singular fixed points show a cusp for a finite value of the field and their finite $N$ counterparts a boundary layer.
high energy physics theory
Diagnostic testing is germane to a variety of scenarios in medicine, pandemic tracking, threat detection, and signal processing. This is an expository paper with some original results. Here we first set up a mathematical architecture for diagnostics and explore its probabilistic underpinnings. Doing so enables us to develop new metrics for assessing the efficacy of different kinds of diagnostic tests, and to solve a long-standing open problem in diagnostics, namely, comparing tests when their receiver operating characteristic curves cross. The first is done by introducing the notion of what we call a Gini Coefficient; the second by invoking the information-theoretic notion of dinegentropy. Taken together, these may be seen as a contribution to the state of the art of diagnostics. The spirit of our work could also be relevant to the much discussed topic of batch testing, where each batch is defined by the partitioning strategy used to create it. However, this possibility has not been explored here in any detail. Rather, we invite other researchers to investigate this idea as future work.
statistics
Since the start of COVID-19, several relevant corpora from various sources have been presented in the literature that contain millions of data points. While these corpora are valuable in supporting many analyses of this specific pandemic, researchers require additional benchmark corpora that contain other epidemics to facilitate cross-epidemic pattern recognition and trend analysis tasks. During our other efforts on COVID-19 related work, we discovered very few disease-related corpora in the literature that are sizable and rich enough to support such cross-epidemic analysis tasks. In this paper, we present EPIC30M, a large-scale epidemic corpus that contains 30 million micro-blog posts, i.e., tweets crawled from Twitter, from 2006 to 2020. EPIC30M contains a subset of 26.2 million tweets related to three general diseases, namely Ebola, Cholera and Swine Flu, and another subset of 4.7 million tweets from six global epidemic outbreaks, including the 2009 H1N1 Swine Flu, 2010 Haiti Cholera, 2012 Middle East Respiratory Syndrome (MERS), 2013 West African Ebola, 2016 Yemen Cholera and 2018 Kivu Ebola. Furthermore, we explore and discuss the properties of the corpus with statistics of key terms and hashtags and trend analyses for each subset. Finally, we demonstrate the value and impact that EPIC30M could create through a discussion of multiple use cases of cross-epidemic research topics that have attracted growing interest in recent years. These use cases span multiple research areas, such as epidemiological modeling, pattern recognition, natural language understanding and economic modeling.
computer science
We describe quantum limits to field sensing that relate noise, geometry and measurement duration to fundamental constants, with no reference to particle number. We cast the Tesche and Clarke (TC) bound on dc-SQUID sensitivity as such a limit, and find analogous limits for volumetric spin-precession magnetometers. We describe how randomly-arrayed spins, coupled to an external magnetic field of interest and to each other by the magnetic dipole-dipole interaction, execute a spin dynamics that depolarizes the spin ensemble even in the absence of coupling to an external reservoir. We show the resulting spin dynamics are scale invariant, with a depolarization rate proportional to spin number density and thus a number-independent quantum limit on the energy resolution per bandwidth $E_R$. Numerically, we find $E_R \ge \alpha \hbar$, $\alpha \sim 1$, in agreement with the TC limit, for paradigmatic spin-based measurements of static and oscillating magnetic fields.
quantum physics
The Word Mover's Distance (WMD) proposed by Kusner et al. is a distance between documents that takes advantage of semantic relations among words that are captured by their embeddings. This distance proved to be quite effective, obtaining state-of-art error rates for classification tasks, but is also impracticable for large collections/documents due to its computational complexity. For circumventing this problem, variants of WMD have been proposed. Among them, Relaxed Word Mover's Distance (RWMD) is one of the most successful due to its simplicity, effectiveness, and also because of its fast implementations. Relying on assumptions that are supported by empirical properties of the distances between embeddings, we propose an approach to speed up both WMD and RWMD. Experiments over 10 datasets suggest that our approach leads to a significant speed-up in document classification tasks while maintaining the same error rates.
computer science
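For concreteness, the sketch below computes the Relaxed Word Mover's Distance described above: each word's mass moves entirely to its nearest neighbour in the other document, and the larger of the two one-sided relaxations is returned. The random vectors stand in for real word embeddings and the uniform weights for normalized bag-of-words frequencies.

```python
# Minimal sketch of RWMD (Kusner et al.'s relaxation of WMD) on toy embeddings.
import numpy as np
from scipy.spatial.distance import cdist

def one_sided_rwmd(weights_a, emb_a, emb_b):
    d = cdist(emb_a, emb_b)                 # pairwise Euclidean distances between word vectors
    return float(np.sum(weights_a * d.min(axis=1)))

def rwmd(weights_a, emb_a, weights_b, emb_b):
    return max(one_sided_rwmd(weights_a, emb_a, emb_b),
               one_sided_rwmd(weights_b, emb_b, emb_a))

rng = np.random.default_rng(3)
emb_doc1, emb_doc2 = rng.standard_normal((6, 50)), rng.standard_normal((9, 50))
w1 = np.full(6, 1 / 6)                      # normalized bag-of-words weights, doc 1
w2 = np.full(9, 1 / 9)                      # normalized bag-of-words weights, doc 2
print("RWMD:", round(rwmd(w1, emb_doc1, w2, emb_doc2), 3))
```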
We present results from our 47-night imaging campaign of Comet 41P/Tuttle-Giacobini-Kresak conducted from Lowell Observatory between 2017 February 16 and July 2. Coma morphology revealed gas jets, whose appearance and motion as a function of time yielded the rotation period and other properties. All narrowband CN images exhibited either one or two jets; one jet appeared as a partial face-on spiral with clockwise rotation while the second jet evolved from a side-on corkscrew, through face-on, and finally corkscrew again, with only a slow evolution throughout the apparition due to progressive viewing geometry changes. A total of 78 period determinations were made over a 7-week interval, yielding a smooth and accelerating rotation period starting at 24 hr (March 21&22) and passing 48 hr on April 28. While this is by far the fastest rate of change ever measured for a comet nucleus, the torque required is readily within what can exist given likely properties of the nucleus. If the torque remained constant, we estimate that the nucleus could have stopped rotating and/or began to tumble as soon as only two months following perihelion, and will certainly reach this stage by early in the next apparition. Working backwards in time, Tuttle-Giacobini-Kresak would have been rotating near its rotational break-up velocity 3-4 orbits earlier, suggesting that its extreme 7-magnitude outburst observed in 2001 might have been caused by a partial fragmentation at that time, as might the pair of 1973 8-magnitude outbursts if there had been an earlier spin-down and spin-up cycle.
astrophysics
In many practical applications, such as fraud detection, credit risk modeling or medical decision making, classification models for assigning instances to a predefined set of classes are required to be both precise as well as interpretable. Linear modeling methods such as logistic regression are often adopted, since they offer an acceptable balance between precision and interpretability. Linear methods, however, are not well equipped to handle categorical predictors with high-cardinality or to exploit non-linear relations in the data. As a solution, data preprocessing methods such as weight-of-evidence are typically used for transforming the predictors. The binning procedure that underlies the weight-of-evidence approach, however, has been little researched and typically relies on ad-hoc or expert driven procedures. The objective in this paper, therefore, is to propose a formalized, data-driven and powerful method. To this end, we explore the discretization of continuous variables through the binning of spline functions, which allows for capturing non-linear effects in the predictor variables and yields highly interpretable predictors taking only a small number of discrete values. Moreover, we extend upon the weight-of-evidence approach and propose to estimate the proportions using shrinkage estimators. Together, this offers an improved ability to exploit both non-linear and categorical predictors for achieving increased classification precision, while maintaining interpretability of the resulting model and decreasing the risk of overfitting. We present the results of a series of experiments in a fraud detection setting, which illustrate the effectiveness of the presented approach. We facilitate reproduction of the presented results and adoption of the proposed approaches by providing both the dataset and the code for implementing the experiments and the presented approach.
statistics
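A minimal sketch of weight-of-evidence encoding with a shrinkage-style (pseudo-count) estimate of the bin proportions, in the spirit of the approach above, is given below; the exact shrinkage estimator, binning strategy and data used in the paper may differ, so everything here is an illustrative assumption.

```python
# Minimal sketch of weight-of-evidence per bin with pseudo-count shrinkage
# toward the overall event/non-event rates; data and bins are toy values.
import numpy as np

def woe_per_bin(bin_ids, y, n_bins, prior_strength=5.0):
    """Weight of evidence for each bin, shrunk toward the overall class rates."""
    tot_events, tot_non = y.sum(), (1 - y).sum()
    woe = np.zeros(n_bins)
    for b in range(n_bins):
        in_bin = bin_ids == b
        ev = y[in_bin].sum() + prior_strength * tot_events / len(y)       # shrunk event count
        non = (1 - y[in_bin]).sum() + prior_strength * tot_non / len(y)   # shrunk non-event count
        woe[b] = np.log((ev / tot_events) / (non / tot_non))
    return woe

rng = np.random.default_rng(4)
x = rng.random(2000)                                              # a continuous predictor
y = (rng.random(2000) < np.clip(x, 0.02, 0.98)).astype(int)       # fraud-like binary target
bins = np.digitize(x, np.linspace(0, 1, 6)[1:-1])                 # 5 equal-width bins -> ids 0..4
print("WoE per bin:", np.round(woe_per_bin(bins, y, 5), 3))
```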
Research on wireless sensors represents a continuously evolving technological domain thanks to their high flexibility and scalability, fast and economical deployment, pervasiveness in industrial, civil and domestic contexts. However, the maintenance costs and the sensors reliability are strongly affected by the battery lifetime, which may limit their use. In this paper we consider a wireless smart camera, equipped with a low-energy radio receiver, and used to visually detect a moving radio-emitting target. To preserve the camera lifetime without sacrificing the detection capabilities, we design a probabilistic energy-aware controller to switch on/off the camera. The radio signal strength is used to predict the target detectability, via self-supervised Gaussian Process Regression combined with Recursive Bayesian Estimation. The automatic training process minimizes the human intervention, while the controller guarantees high detection accuracy and low energy consumption, as numerical and experimental results show.
electrical engineering and systems science
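The regression stage described above can be sketched with an off-the-shelf Gaussian Process: received signal strength is mapped to the probability that the camera would detect the target, which is what an energy-aware controller needs when deciding whether to switch the camera off. The training pairs below are synthetic stand-ins for the self-supervised labels that, in the paper, come from the camera's own detections.

```python
# Minimal sketch of GP regression from RSSI to target detectability;
# the data-generating model and kernel settings are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
rssi = rng.uniform(-90, -30, size=(200, 1))                       # received signal strength [dBm]
p_true = 1.0 / (1.0 + np.exp(-(rssi[:, 0] + 60) / 5.0))           # stronger signal -> detectable
detected = (rng.random(200) < p_true).astype(float)               # self-supervised 0/1 labels

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(), normalize_y=True)
gp.fit(rssi, detected)

query = np.array([[-75.0], [-45.0]])                              # two candidate RSSI readings
mean, std = gp.predict(query, return_std=True)
print("predicted detectability (mean, std):", list(zip(np.round(mean, 2), np.round(std, 2))))
```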
We present the analysis of the microlensing event OGLE-2018-BLG-1428, which has a short-duration ($\sim 1$ day) caustic-crossing anomaly. The event was caused by a planetary lens system with planet/host mass ratio $q=1.7\times10^{-3}$. Thanks to the detection of the caustic-crossing anomaly, the finite source effect was well measured, but the microlens parallax was not constrained due to the relatively short timescale ($t_{\rm E}=24$ days). From a Bayesian analysis, we find that the host star is a dwarf star $M_{\rm host}=0.43^{+0.33}_{-0.22} \ M_{\odot}$ at a distance $D_{\rm L}=6.22^{+1.03}_{-1.51}\ {\rm kpc}$ and the planet is a Jovian-mass planet $M_{\rm p}=0.77^{+0.77}_{-0.53} \ M_{\rm J}$ with a projected separation $a_{\perp}=3.30^{+0.59}_{-0.83}\ {\rm au}$. The planet orbits beyond the snow line of the host star. Considering the relative lens-source proper motion of $\mu_{\rm rel} = 5.58 \pm 0.38\ \rm mas\ yr^{-1}$, the lens can be resolved by adaptive optics with a 30m telescope in the future.
astrophysics
Airlines today are faced with a number of large scale scheduling problems. One such problem is the tail assignment problem, which is the task of assigning individual aircraft to a given set of flights, minimizing the overall cost. Each aircraft is identified by the registration number on its tail fin. In this article, we simulate the Quantum Approximate Optimization Algorithm (QAOA) applied to instances of this problem derived from real world data. The QAOA is a variational hybrid quantum-classical algorithm recently introduced and likely to run on near-term quantum devices. The instances are reduced to fit on quantum devices with 8, 15 and 25 qubits. The reduction procedure leaves only one feasible solution per instance, which allows us to map the tail assignment problem onto the Exact Cover problem. We find that repeated runs of the QAOA identify the feasible solution with close to unit probability for all instances. Furthermore, we observe patterns in the variational parameters such that an interpolation strategy can be employed which significantly simplifies the classical optimization part of the QAOA. Finally, we empirically find a relation between the connectivity of the problem graph and the single-shot success probability of the algorithm.
quantum physics
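To make the mapping mentioned above explicit, the sketch below writes down the standard Exact Cover penalty that a QAOA phase-separation Hamiltonian can encode, namely the squared deficit in covering each flight exactly once, and checks it by brute force on a tiny route/flight incidence matrix; the instance itself is an illustrative assumption, not one of the paper's reduced instances.

```python
# Minimal sketch of the Exact Cover cost function underlying the tail-assignment
# QAOA instances; the tiny incidence matrix is an illustrative assumption.
import itertools
import numpy as np

# A[r, f] = 1 if candidate route r contains flight f; 4 routes, 3 flights.
A = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]])

def exact_cover_cost(z):
    """Penalty = sum over flights of (1 - number of selected routes covering it)^2."""
    coverage = A.T @ z
    return int(np.sum((1 - coverage) ** 2))

solutions = [z for z in itertools.product([0, 1], repeat=A.shape[0])
             if exact_cover_cost(np.array(z)) == 0]
print("zero-cost (feasible) route selections:", solutions)   # here: routes {0, 1} or {2, 3}
```

Minimizing this penalty over bitstrings is exactly what the QAOA cost layer rewards; a classical brute-force check like the one above is only feasible for such tiny instances, which is where the quantum heuristic is meant to take over.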
We revisit the constraints on the properties of right handed neutrinos from the requirement to explain the observed light neutrino oscillation data in the type-I seesaw model. We use well-known relations to show that there is in general no lower bound on the mixing of a given heavy neutrino with any individual Standard Model generation. Lower bounds quoted in the literature only apply if the masses of the heavy neutrinos are so degenerate that they cannot be distinguished experimentally. A lower bound on the total mixing (summed over Standard Model generations) can be derived for each heavy neutrino individually, but it strongly depends on the mass of the lightest Standard Model neutrino and on the number of heavy neutrinos that contribute to the seesaw mechanism. Our results have implications for the perspectives of future colliders or fixed target experiments to rule out certain mass ranges for heavy neutrinos.
high energy physics phenomenology