Recent years have borne witness to the proliferation of distributed filtering techniques, where a collection of agents communicating over an ad-hoc network aim to collaboratively estimate and track the state of a system. These techniques form the enabling technology of modern multi-agent systems and have gained great importance in the engineering community. Although most distributed filtering techniques come with a set of stability and convergence criteria, the conditions imposed are often unnecessarily restrictive. The paradigm of stability and convergence in distributed filtering is revisited in this manuscript. Accordingly, a general distributed filter is constructed and its estimation error dynamics are formulated. The conducted analysis demonstrates that the conditions for achieving stable filtering operation are the same as those required in the centralized filtering setting. Finally, the concepts are demonstrated in a Kalman filtering framework and validated using simulation examples.
electrical engineering and systems science
Many search processes are conducted in the vicinity of a favored location, i.e., a home, which is visited repeatedly. Foraging animals return to their dens and nests to rest, scouts return to their bases to resupply, and drones return to their docking stations to recharge or refuel. Yet, despite its prevalence, very little is known about search with home returns, as its analysis is much more challenging than that of unconstrained, free-range search. Here, we develop a theoretical framework for search with home returns. This framework makes no assumptions about the underlying search process and is furthermore suited to treat generic return and home-stay strategies. We show that the solution to the home-return problem can then be given in terms of the solution to the corresponding free-range problem---which not only reduces overall complexity but also gives rise to a simple and universal phase diagram for search. The latter reveals that search with home returns outperforms free-range search in conditions of high uncertainty. Thus, when living gets rough, a home will not only provide warmth and shelter but also allow one to locate food and other resources quickly and more efficiently than in its absence.
condensed matter
In this article we investigate a class of linear error correcting codes in relation with the order polytopes. In particular we consider the order polytopes of tree posets and bipartite posets. We calculate the parameters of the associated toric variety codes.
mathematics
In this paper, we show universal relations among the transport coefficients by calculating the electrical conductivity, thermal conductivity and thermo-electric conductivity in the presence of a chemical potential and magnetic fields for Einstein-Maxwell-dilaton-axion system in arbitrary but even dimensional bulk spacetime as well as for Einstein-DBI-dilaton-axion system in $3+1$ dimensional bulk spacetime. Moreover, we have also obtained a new hyperscale violating black hole solution with finite charge density and magnetic fields but with a trivial dilaton field at IR.
high energy physics theory
The ability to capture good quality images in the dark and near-zero lux conditions has been a long-standing pursuit of the computer vision community. The seminal work by Chen et al. [5] has especially caused renewed interest in this area, resulting in methods that build on top of their work in a bid to improve the reconstruction. However, for practical utility and deployment of low-light enhancement algorithms on edge devices such as embedded systems, surveillance cameras, autonomous robots and smartphones, the solution must respect additional constraints such as limited GPU memory and processing power. With this in mind, we propose a deep neural network architecture that aims to strike a balance between the network latency, memory utilization, model parameters, and reconstruction quality. The key idea is to forbid computations in the High-Resolution (HR) space and limit them to a Low-Resolution (LR) space. However, doing the bulk of computations in the LR space causes artifacts in the restored image. We thus propose Pack and UnPack operations, which allow us to effectively transit between the HR and LR spaces without introducing significant artifacts in the restored image. We show that we can enhance a full-resolution (2848 x 4256) extremely dark single image in the ballpark of 3 seconds even on a CPU. We achieve this with 2 - 7x fewer model parameters, 2 - 3x lower memory utilization, 5 - 20x speed-up, and yet maintain a competitive image reconstruction quality compared to the state-of-the-art algorithms.
computer science
The Bell and Wigner inequalities are commonly derived using logically separate procedures. It is not generally appreciated that they are closely related. Their relationship follows from the fact that the Bell inequality describes a constraint on the correlations of random variable pairs and leads to a constraint on the probabilities from which they are computed. In the case of the Bell inequality, the logic of the constraint is further clarified when it is found that the inequality that Bell derived for correlation functions must be identically satisfied by the data sets of +-1s used to compute the correlations. This data-set inequality is independent of the assumptions used by Bell in the course of the derivation of the correlation inequality. Thus, the Bell inequality in its most fundamental form cannot be violated by any data sets used in deriving it, regardless of their individual characteristics. When the Bell inequality is applied to three predicted correlations using properties based on perfect entanglement, the resulting symmetries allow correlations to be replaced by their probabilities in the inequality. The Wigner inequality follows. The two related inequalities are satisfied by correlations and probabilities respectively computed using quantum mechanical principles.
quantum physics
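As an illustrative aside (not taken from the paper), the data-set form of Bell's inequality invoked in the abstract can be checked directly: for any triple of +-1 outcomes the inequality holds identically, so any averages computed from such data satisfy the correlation-function inequality automatically. A minimal Python sketch, using one common (singlet-convention) form of the inequality:

```python
import random
from itertools import product

# Data-set form of the Bell inequality: for any triple of +/-1 outcomes
# (a, b, c), |a*b - a*c| <= 1 - b*c holds identically (in fact with equality).
for a, b, c in product([-1, 1], repeat=3):
    assert abs(a * b - a * c) == 1 - b * c

# Consequently the correlation-function inequality is satisfied by the
# averages over ANY data set of +/-1 triples, however it was generated.
random.seed(0)
data = [tuple(random.choice([-1, 1]) for _ in range(3)) for _ in range(10_000)]
E = lambda i, j: sum(t[i] * t[j] for t in data) / len(data)
assert abs(E(0, 1) - E(0, 2)) <= 1 - E(1, 2) + 1e-12
```

The brute-force enumeration over all eight triples is the whole proof that the per-item inequality is an identity; the averaged inequality then follows by linearity.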
The rational Krylov subspace method (RKSM) and the low-rank alternating directions implicit (LR-ADI) iteration are established numerical tools for computing low-rank solution factors of large-scale Lyapunov equations. In order to generate the basis vectors for the RKSM, or to extend the low-rank factors within the LR-ADI method, the repeated solution of a shifted linear system is necessary. For very large systems this solve is usually implemented using iterative methods, leading to inexact solves within this inner iteration. We derive theory for a relaxation strategy within these inexact solves, both for the RKSM and the LR-ADI method. Practical choices for relaxing the solution tolerance within the inner linear system are then provided. The theory is supported by several numerical examples.
mathematics
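As a minimal sketch of the setting described above (not the paper's relaxed variant), the following Python code runs a basic real-shift LR-ADI iteration for A X + X A^T = -B B^T on a small stable test matrix; the test matrix, right-hand side, and log-spaced shift heuristic are all illustrative choices. Each step performs one shifted solve, which is exactly the solve that becomes inexact at large scale:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 100
# Stable test matrix (negative 1-D Laplacian) and a low-rank right-hand side.
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
B = rng.standard_normal((n, 2))

# Real negative shifts, log-spaced over the spectrum of -A (a simple
# heuristic; Wachspress or adaptive shifts would be used in practice).
ev = np.linalg.eigvalsh(-A)
shifts = -np.geomspace(ev.min(), ev.max(), 16)

# LR-ADI for A X + X A^T = -B B^T: every step requires one solve with the
# shifted matrix (A + p I); at large scale this inner solve is typically
# done inexactly by an iterative method, which is what relaxation addresses.
V = np.sqrt(-2 * shifts[0]) * np.linalg.solve(A + shifts[0] * np.eye(n), B)
Z = V.copy()
for p_prev, p in zip(shifts[:-1], shifts[1:]):
    W = np.linalg.solve(A + p * np.eye(n), V)
    V = np.sqrt(p / p_prev) * (V - (p + p_prev) * W)
    Z = np.hstack([Z, V])

X_adi = Z @ Z.T                      # low-rank approximation X ~ Z Z^T
X_ref = solve_continuous_lyapunov(A, -B @ B.T)
rel_err = np.linalg.norm(X_adi - X_ref) / np.linalg.norm(X_ref)
assert rel_err < 1e-3
```

Note that the low-rank factor Z grows by only two columns per step while the dense reference solution is n x n, which is the point of the low-rank formulation.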
In the context of multiparameter quantum estimation theory, we investigate the construction of linear schemes in order to infer two classical parameters that are encoded in the quadratures of two quantum coherent states. The optimality of the scheme built on two phase-conjugate coherent states is proven with the saturation of the quantum Cram\'er--Rao bound under some global energy constraint. In a more general setting, we consider and analyze a variety of $n$-mode schemes that can be used to encode $n$ classical parameters into $n$ quantum coherent states and then estimate all parameters optimally and simultaneously.
quantum physics
Diffusion-weighted magnetic resonance imaging (DW-MRI) can be used to characterise the microstructure of the nervous tissue, e.g. to delineate brain white matter connections in a non-invasive manner via fibre tracking. Magnetic Resonance Imaging (MRI) in high spatial resolution would play an important role in visualising such fibre tracts in a superior manner. However, obtaining an image of such resolution comes at the expense of longer scan time. Longer scan time can be associated with an increase of motion artefacts, due to the patient's psychological and physical conditions. Single Image Super-Resolution (SISR), a technique aimed at obtaining high-resolution (HR) details from a single low-resolution (LR) input image through deep learning, is the focus of this study. Compared to interpolation techniques or sparse-coding algorithms, deep learning extracts prior knowledge from big datasets and produces superior MRI images from the low-resolution counterparts. In this research, a deep-learning-based super-resolution technique is proposed and has been applied to DW-MRI. Images from the IXI dataset have been used as the ground truth and were artificially downsampled to simulate the low-resolution images. The proposed method has shown statistically significant improvement over the baselines and achieved an SSIM of $0.913\pm0.045$.
electrical engineering and systems science
We propose the approach for a lattice investigation of light-cone distribution amplitudes (LCDA) of heavy-light mesons, such as the $B$-meson, using the formalism of parton pseudo-distributions. A basic ingredient of the approach is the study of short-distance behavior of the $B$-meson Ioffe-time distribution amplitude (ITDA), which is a generalization of the $B$-meson LCDA in coordinate space. We construct a reduced ITDA for the $B$-meson, and derive the matching relation between the reduced ITDA and the LCDA. The reduced ITDA is ultraviolet finite, which guarantees that the continuum limit exists on the lattice.
high energy physics phenomenology
A simple, flexible approach to creating expressive priors in Gaussian process (GP) models makes new kernels from a combination of basic kernels, e.g. summing a periodic and linear kernel can capture seasonal variation with a long term trend. Despite a well-studied link between GPs and Bayesian neural networks (BNNs), the BNN analogue of this has not yet been explored. This paper derives BNN architectures mirroring such kernel combinations. Furthermore, it shows how BNNs can produce periodic kernels, which are often useful in this context. These ideas provide a principled approach to designing BNNs that incorporate prior knowledge about a function. We showcase the practical value of these ideas with illustrative experiments in supervised and reinforcement learning settings.
statistics
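The kernel-combination idea summarized above is easy to verify concretely. The following Python sketch (illustrative only; the hyper-parameters are made up and the BNN side of the paper is not reproduced) builds a linear and a periodic kernel, checks that their sum is still a valid (positive semi-definite) kernel, and draws one GP prior sample that combines seasonal variation with a long-term trend:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)

# Basic kernels on a 1-D grid (hyper-parameters are illustrative only).
def k_linear(a, b, sigma=0.3):
    return sigma**2 * np.outer(a, b)

def k_periodic(a, b, ell=1.0, period=2.0):
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ell**2)

# The sum of two kernels is again a kernel: its Gram matrix is the sum
# of two positive semi-definite Gram matrices, hence PSD itself.
K = k_linear(x, x) + k_periodic(x, x)
assert np.linalg.eigvalsh(K).min() > -1e-8

# One draw from the GP prior with the summed kernel exhibits a periodic
# component riding on a linear trend.
jitter = 1e-8 * np.eye(x.size)
f = rng.multivariate_normal(np.zeros(x.size), K + jitter)
assert f.shape == (200,)
```

The closure of kernels under addition (and multiplication) is what makes this a compositional language for priors; the paper's contribution is to mirror such compositions in BNN architectures.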
The reservoir computing neural network architecture is widely used to test hardware systems for neuromorphic computing. One of the preferred tasks for benchmarking such devices is automatic speech recognition. However, this task requires acoustic transformations from sound waveforms with varying amplitudes to frequency domain maps that can be seen as feature extraction techniques. Depending on the conversion method, these may obscure the contribution of the neuromorphic hardware to the overall speech recognition performance. Here, we quantify and separate the contributions of the acoustic transformations and the neuromorphic hardware to the speech recognition success rate. We show that the non-linearity in the acoustic transformation plays a critical role in feature extraction. We compute the gain in word success rate provided by a reservoir computing device compared to the acoustic transformation only, and show that it is an appropriate benchmark for comparing different hardware. Finally, we experimentally and numerically quantify the impact of the different acoustic transformations for neuromorphic hardware based on magnetic nano-oscillators.
electrical engineering and systems science
I discuss the matching relations for the running renormalizable parameters when the heavy particles (top quark, Higgs scalar, Z and W vector bosons) are simultaneously decoupled from the Standard Model. The complete two-loop order matching for the electromagnetic coupling and all light fermion masses are obtained, augmenting existing results at 4-loop order in pure QCD and complete two-loop order for the strong coupling. I also review the further sequential decouplings of the lighter fermions (bottom quark, tau lepton, and charm quark) from the low-energy effective theory.
high energy physics phenomenology
Quantum fluids of light are the photonic counterpart of Bose gases. They currently attract increasing interest since they are versatile and highly tunable systems for probing quantum many-body phenomena, such as superfluidity. Superfluid flow of light has already been reported in microcavity exciton-polariton condensates, but clear observation of this phenomenon in cavityless systems, that is, for propagating photon fluids, remains elusive. In this thesis, we study the hydrodynamical properties of light propagating close to resonance in hot rubidium vapors. Whereas photons in air do not interact with each other, the situation is different in rubidium vapors, where an effective interaction between them, mediated by the atomic ensemble, appears. The light therefore behaves as a fluid flowing in the plane perpendicular to the optical axis. The primary purpose of this thesis was to show that light in those systems can behave as a superfluid. The first step toward the observation of superfluidity is to measure the dispersion relation of small-amplitude density waves travelling on the propagating photon fluid. I show that this dispersion exhibits a linear trend for small excitation wave-vectors, which is, according to the Landau criterion, a sufficient condition for guaranteeing superfluidity. I then present an all-optical defect experiment - where the photon fluid flows against an optically induced obstacle (namely, a local change of refractive index) - designed to measure the drag-force cancellation at the fluid/superfluid threshold.
physics
We obtain, for the first time, an approximate analytical solution to the quirk equation of motion. Based on it, we study several features of the quirk trajectory more precisely, including the oscillation amplitude, the number of periods, and the thickness of the quirk-pair plane. Moreover, we find an exceptional case in which the quirk crosses at least one of the tracking layers repeatedly. Finally, we consider the effects of ionization energy loss and of a fixed direction of the infracolor string for a few existing searches.
high energy physics phenomenology
When estimating the phase of a single mode, the quantum Fisher information for a pure probe state is proportional to the photon number variance of the probe state. In this work, we point out particular states that offer photon number distributions exhibiting a large variance, which would help to improve the local estimation precision. These theoretical examples are expected to stimulate the community to put more attention to those states that we found, and to work towards their experimental realization and usage in quantum metrology.
quantum physics
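The stated proportionality (for a pure probe, the quantum Fisher information for phase estimation is F_Q = 4 Var(n)) can be illustrated numerically. The sketch below (an editorial illustration, not a state from the paper) compares the photon-number variance of a coherent state with that of an equal superposition of two Fock states having the same mean photon number; the mean photon number and cutoff are arbitrary choices:

```python
import numpy as np
from math import exp, factorial

# For a pure probe state, the quantum Fisher information for phase
# estimation is F_Q = 4 Var(n): larger photon-number variance means
# better local precision.
nbar, N = 5, 60          # illustrative mean photon number and Fock cutoff
n = np.arange(N)

# Coherent state: Poissonian photon statistics, Var(n) = nbar.
p_coh = np.array([exp(-nbar) * nbar**k / factorial(k) for k in range(N)])
var_coh = (n**2 * p_coh).sum() - (n * p_coh).sum() ** 2

# Equal superposition of |0> and |2*nbar>: same mean, but Var(n) = nbar**2.
p_sup = np.zeros(N)
p_sup[0] = p_sup[2 * nbar] = 0.5
var_sup = (n**2 * p_sup).sum() - (n * p_sup).sum() ** 2

assert abs(var_coh - nbar) < 1e-6 and abs(var_sup - nbar**2) < 1e-9
assert 4 * var_sup > 4 * var_coh   # larger QFI for the broader distribution
```

At equal mean photon number, the two-component superposition has variance nbar^2 rather than nbar, which is the kind of variance enhancement the abstract points to.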
Topological materials often exhibit remarkably linear, non-saturating magnetoresistance (LMR), which is both of scientific and technological importance. However, the role of topologically non-trivial states in the emergence of such a behaviour has been difficult to establish in experiments. Here, we show how strong interaction between the topological surface states (TSS) with a positive g-factor and the bulk carriers can lead to a smearing of the Landau levels giving rise to an LMR behavior in a semi-metallic Heusler compound. The role of TSS is established by controllably reducing the surface-bulk coupling by a combination of substitution alloying and the application of high magnetic field, when the LMR behavior transmutes into a quantum Hall phase arising from the TSS. Our work establishes that small changes in the coupling strength between the surface and the bulk carriers can have a profound impact on the magnetotransport behavior in topological materials. In the process, we lay out a strategy to both reveal and manipulate the exotic properties of TSS in compounds with a semi-metallic bulk band structure, as is the case in multi-functional Heusler compounds.
condensed matter
Context. In recent years, an in-depth gamma-ray analysis of the Orion region has been carried out by the AGILE and Fermi-LAT (Large Area Telescope) teams with the aim of estimating the H2-CO conversion factor, XCO. The comparison of the data from both satellites with models of diffuse gamma-ray Galactic emission unveiled an excess at (l,b)=[213.9, -19.5], in a region at a short angular distance from the OB star k-Ori. Possible explanations of this excess are scattering of the so-called "dark gas", non-linearity in the H2-CO relation, or Cosmic-Ray (CR) energization at the k-Ori wind shock. Aims. Concerning this last hypothesis, we want to verify whether cosmic-ray acceleration or re-acceleration could be triggered at the k-Ori forward shock, which we suppose to be interacting with a star-forming shell detected in several wavebands and probably triggered by high-energy particles. Methods. Starting from the AGILE spectrum of the detected gamma-ray excess, shown here for the first time, we developed a physical model for cosmic-ray energization, taking into account re-acceleration, acceleration, energy losses, and the secondary electron contribution. Results. Despite the characteristic low velocity of an OB star forward shock during its "snowplow" expansion phase, we find that the Orion gamma-ray excess could be explained by re-acceleration of pre-existing cosmic rays in the interaction between the forward shock of k-Ori and the CO-detected, star-forming shell swept up by the star expansion. According to our calculations, a possible contribution from freshly accelerated particles is sub-dominant with respect to the re-acceleration contribution. However, a simple adiabatic compression of the shell could also explain the detected gamma-ray emission. Further GeV and TeV observations of this region are highly recommended in order to correctly identify the real physical scenario.
astrophysics
In this work, we seek a more refined understanding of the complexity of local optimum computation for Max-Cut and pure Nash equilibrium (PNE) computation for congestion games with weighted players and linear latency functions. We show that computing a PNE of linear weighted congestion games is PLS-complete either for very restricted strategy spaces, namely when player strategies are paths on a series-parallel network with a single origin and destination, or for very restricted latency functions, namely when the latency on each resource is equal to the congestion. Our results reveal a remarkable gap regarding the complexity of PNE in congestion games with weighted and unweighted players, since in the case of unweighted players, a PNE can be easily computed by either a simple greedy algorithm (for series-parallel networks) or any better-response dynamics (when the latency is equal to the congestion). For the latter of the results above, we need to show first that computing a local optimum of a natural restriction of Max-Cut, which we call \emph{Node-Max-Cut}, is PLS-complete. In Node-Max-Cut, the input graph is vertex-weighted and the weight of each edge is equal to the product of the weights of its endpoints. Due to the very restricted nature of Node-Max-Cut, the reduction requires a careful combination of new gadgets with ideas and techniques from previous work. We also show how to compute efficiently a $(1+\epsilon)$-approximate equilibrium for Node-Max-Cut, if the number of different vertex weights is constant.
computer science
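To make the Node-Max-Cut definition concrete, here is a small Python sketch (an editorial illustration on made-up data, not the paper's reduction): edge weights are products of vertex weights, and simple flip dynamics run until a local optimum, i.e., until no single vertex can switch sides and strictly increase the cut. PLS-completeness concerns worst-case instances; on small random instances this terminates quickly:

```python
import random

random.seed(2)

# Node-Max-Cut instance: vertex weights w, and every edge {u, v} carries
# weight w[u] * w[v] (weights and graph are made-up test data).
n = 30
w = [random.randint(1, 10) for _ in range(n)]
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.2]
nbrs = [[] for _ in range(n)]
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

def cut_value(side):
    return sum(w[i] * w[j] for i, j in edges if side[i] != side[j])

# Local search (flip dynamics): move any vertex to the other side while
# doing so strictly increases the cut value.
side = [random.randint(0, 1) for _ in range(n)]
improved = True
while improved:
    improved = False
    for v in range(n):
        # Gain of flipping v: uncut incident edges become cut and vice versa.
        gain = sum(w[v] * w[u] * (1 if side[u] == side[v] else -1)
                   for u in nbrs[v])
        if gain > 0:
            side[v] ^= 1
            improved = True

# At the local optimum, no single flip improves the cut value.
best = cut_value(side)
for v in range(n):
    side[v] ^= 1
    assert cut_value(side) <= best
    side[v] ^= 1
```

Since the cut value is integral, bounded, and strictly increases at every flip, the dynamics always terminate; the hardness result says only that the number of flips may be exponential in the worst case.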
We study the effects of Snyder-de Sitter commutation relations on relativistic bosons by solving analytically, in the momentum space representation, the Klein-Gordon oscillator in arbitrary dimensions. The exact bound state spectrum and the corresponding momentum space wave functions are obtained using Gegenbauer polynomials in the one-dimensional case and Jacobi polynomials in the D-dimensional case. Finally, we study the thermodynamic properties of the system in the high temperature regime, where we find that the corrections increase the free energy but decrease the energy, the entropy, and the specific heat, which is no longer constant. This work extends the part concerning the Klein-Gordon oscillator for the Snyder-de Sitter case studied in two-dimensional space in J. Math. Phys. 60, 013505 (2019).
quantum physics
In this paper, we discuss the usage and implementation of compressive sensing (CS) for the efficient measurement and analysis of von K\'arm\'an vortices. We consider two different flow fields, the flow fields around a circle and an ellipse. We solve the governing $k-\epsilon$ transport equations numerically in order to model the flow fields around these bodies. Using the time series of the drag, $C_D$, and the lift, $C_L$, coefficients, and their Fourier spectra, we show that compressive sampling can be effectively used to measure and analyze von K\'arm\'an vortices. We discuss the effects of the number of samples on reconstruction and the benefits of using compressive sampling over classical Shannon sampling in flow measurement and analysis where von K\'arm\'an vortices are present. We comment on our findings and indicate their possible usage areas and extensions. Our results can find many important applications, including, but not limited to, the measurement, control, and analysis of vibrations around coastal and offshore structures and bridges, as well as in aerodynamics and Bose-Einstein condensation, just to name a few.
physics
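The core CS idea above - a vortex-shedding time series dominated by a few frequencies is sparse in the Fourier basis, so far fewer samples than Shannon sampling requires can suffice - can be sketched as follows. This is an editorial illustration with a made-up three-frequency signal and a simple orthogonal matching pursuit recovery, not the paper's $k-\epsilon$ simulation pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 96, 6     # signal length, random samples, sparsity level

# A vortex-shedding-like time series: a few dominant oscillation
# frequencies, hence sparse in the Fourier basis (frequencies are made up).
t = np.arange(n)
s = (np.cos(2 * np.pi * 10 * t / n)
     + 0.5 * np.cos(2 * np.pi * 27 * t / n)
     + 0.25 * np.cos(2 * np.pi * 40 * t / n))

# Unitary inverse-DFT dictionary: s = Psi @ x with x exactly 6-sparse
# (each real cosine occupies a conjugate pair of frequency bins).
Psi = np.exp(2j * np.pi * np.outer(t, t) / n) / np.sqrt(n)
idx = rng.choice(n, size=m, replace=False)       # random time samples
A = Psi[idx, :]
y = s[idx].astype(complex)

# Orthogonal matching pursuit: greedily pick the atom most correlated
# with the residual, then re-fit by least squares on the chosen support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n, dtype=complex)
x_hat[support] = coef
s_hat = (Psi @ x_hat).real
assert np.linalg.norm(s_hat - s) < 1e-3 * np.linalg.norm(s)
```

Here 96 random samples recover a length-256 signal, below the uniform Nyquist budget for the highest frequency present; convex $\ell_1$ solvers would be the more standard CS choice, but OMP keeps the sketch self-contained.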
In a general two Higgs doublet model, we study flavor changing neutral Higgs (FCNH) decays into leptons at hadron colliders, $pp \to \phi^0 \to \tau^\mp\mu^\pm +X$, where $\phi^0$ could be a CP-even scalar ($h^0$, $H^0$) or a CP-odd pseudoscalar ($A^0$). The light Higgs boson $h^0$ is found to resemble closely the Standard Model Higgs boson at the Large Hadron Collider. In the alignment limit of $\cos(\beta-\alpha) \cong 0$ for $h^0$--$H^0$ mixing, FCNH couplings of $h^0$ are naturally suppressed, but such couplings of the heavier $H^0, A^0$ are sustained by $\sin(\beta-\alpha) \simeq 1$. We evaluate physics backgrounds from dominant processes with realistic acceptance cuts and tagging efficiencies. We find promising results for $\sqrt{s} = 14$ TeV, which we extend further to $\sqrt{s} = 27$ TeV and 100 TeV future pp colliders.
high energy physics phenomenology
We present a general framework for modifying quantum approximate optimization algorithms (QAOA) to solve constrained network flow problems. By exploiting an analogy between flow-constraints and Gauss' law for electromagnetism, we design lattice quantum electrodynamics (QED) inspired mixing Hamiltonians that preserve flow constraints throughout the QAOA process. This results in an exponential reduction in the size of the configuration space that needs to be explored, which we show through numerical simulations, yields higher quality approximate solutions compared to the original QAOA routine. We outline a specific implementation for edge-disjoint path (EDP) problems related to traffic congestion minimization, numerically analyze the effect of initial state choice, and explore trade-offs between circuit complexity and qubit resources via a particle-vortex duality mapping. Comparing the effect of initial states reveals that starting with an ergodic (unbiased) superposition of solutions yields better performance than beginning with the mixer ground-state, suggesting a departure from the "short-cut to adiabaticity" mechanism often used to motivate QAOA.
quantum physics
Magnetic Particle Imaging (MPI) is an emerging imaging modality that maps the spatial distribution of magnetic nanoparticles. The x-space reconstruction in MPI results in highly blurry images, where the resolution depends on both system parameters and nanoparticle type. Previous techniques to counteract this blurring rely on the knowledge of the imaging point spread function (PSF), which may not be available or may require additional measurements. This work proposes a blind deconvolution algorithm for MPI to recover the precise spatial distribution of nanoparticles. The proposed algorithm exploits the observation that the imaging PSF in MPI has zero phase in Fourier domain. Thus, even though the reconstructed images are highly blurred, phase remains unaltered. We leverage this powerful property to iteratively enforce consistency of phase and bounded l1 energy information, using an orthogonal Projections Onto Convex Sets (POCS) algorithm. To demonstrate the method, comprehensive simulations were performed without and with nanoparticle relaxation effects, and at various noise levels. In addition, imaging experiments were performed on an in-house MPI scanner using a three-vial phantom that contained different nanoparticle types. Image quality was compared with conventional deconvolution methods, Wiener deconvolution and Lucy-Richardson method, which explicitly rely on the knowledge of PSF. Both the simulation results and experimental imaging results show that the proposed blind deconvolution algorithm outperforms the conventional deconvolution methods. Without utilizing the imaging PSF, the proposed algorithm improves image quality and resolution even in the case of different nanoparticle types, while displaying reliable performance against loss of the fundamental harmonic, nanoparticle relaxation effects, and noise.
physics
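The key property the blind deconvolution method exploits - a zero-phase PSF changes Fourier magnitudes but not phases - can be verified in a 1-D toy setting. The sketch below is an editorial illustration with a made-up sparse "nanoparticle distribution" and a symmetric Gaussian PSF; it demonstrates only the phase-preservation property, not the full POCS reconstruction:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256

# Sparse non-negative "nanoparticle distribution" (hypothetical data).
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)

# Symmetric Gaussian PSF centred at index 0 with periodic wrap-around,
# hence zero-phase: its DFT is real and positive.
d = np.minimum(np.arange(n), n - np.arange(n))
h = np.exp(-0.5 * (d / 2.0) ** 2)
h /= h.sum()
H = np.fft.fft(h)
assert np.allclose(H.imag, 0, atol=1e-12) and H.real.min() > 0

# Blurring rescales Fourier magnitudes but leaves the phase unaltered,
# which is the consistency constraint the POCS iteration enforces.
b = np.fft.ifft(np.fft.fft(x) * H).real
X, B = np.fft.fft(x), np.fft.fft(b)
mask = np.abs(B) > 1e-6          # compare phases only where B is non-negligible
assert np.allclose(X[mask] / np.abs(X[mask]),
                   B[mask] / np.abs(B[mask]), atol=1e-5)
```

In the paper's algorithm this phase-consistency projection is alternated with a bounded $\ell_1$-energy projection; the point of the toy example is that the measured (blurred) image already carries the true image's Fourier phase, so no PSF knowledge is needed for that constraint.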
We study the RKKY interaction of magnetic impurities in the $\alpha-\mathcal{T}_3$ model, which hosts pseudospin-1 fermions with two dispersive bands and one flat band. By using the effective low-energy Hamiltonian we calculate the RKKY coupling for impurities placed on the same or different sublattices. We find that there are three types of interaction, which depend on the model parameter defining the relative strength of hoppings between sublattices; two of them can be reduced to the graphene case, while the third one is new and arises from the presence of a flat zero-energy band. We derive general analytical expressions for the RKKY interaction in terms of Mellin-Barnes type integrals and analyze different limiting cases. The cases of finite chemical potential and temperature, as well as the asymptotics at large distances, are considered. We show that the interaction between impurities located at different rim sites displays a very strong temperature dependence at small doping, a direct consequence of the flat band. The subtleties of the theorem for the signs of the RKKY interaction at zero doping, as applied to the $\mathcal{T}_3$ lattice and related to the existence of a dispersionless flat band, are discussed.
condensed matter
Curvit is an open-source Python package that facilitates the creation of light curves from the data collected by the Ultra-Violet Imaging Telescope (UVIT) onboard AstroSat, India's first multi-wavelength astronomical satellite. The input to Curvit is the calibrated events list generated by the UVIT-Payload Operation Center (UVIT-POC) and made available to the principal investigators through the Indian Space Science Data Center. The features of Curvit include (i) automatically detecting sources and generating light curves for all the detected sources and (ii) custom generation of light curve for any particular source of interest. We present here the capabilities of Curvit and demonstrate its usability on the UVIT observations of the intermediate polar FO Aqr as an example. Curvit is publicly available on GitHub at https://github.com/prajwel/curvit.
astrophysics
Despite its short history, the Generative Adversarial Network (GAN) has been extensively studied and used for various tasks, including its original purpose, i.e., synthetic sample generation. However, applying GANs to different data types with diverse neural network architectures has been hindered by their limitation in training, where the model easily diverges. This notorious instability of GAN training is well known and has been addressed in numerous studies. Consequently, in order to make the training of GANs stable, numerous regularization methods have been proposed in recent years. This paper reviews the regularization methods that have been recently introduced, most of which have been published in the last three years. Specifically, we focus on general methods that can be commonly used regardless of neural network architectures. To explore the latest research trends in regularization for GANs, the methods are classified into several groups by their operation principles, and the differences between the methods are analyzed. Furthermore, to provide practical knowledge of using these methods, we investigate popular methods that have been frequently employed in state-of-the-art GANs. In addition, we discuss the limitations of existing methods and propose future research directions.
electrical engineering and systems science
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays. In order to establish trust in the clinical routine, the networks' prediction mechanism needs to be interpretable. One principal approach to interpretation is feature attribution. Feature attribution methods identify the importance of input features for the output prediction. Building on the Information Bottleneck Attribution (IBA) method, for each prediction we identify the chest X-ray regions that have high mutual information with the network's output. The original IBA identifies input regions that have sufficient predictive information. We propose Inverse IBA to identify all informative regions. Thus all predictive cues for pathologies are highlighted on the X-rays, a desirable property for chest X-ray diagnosis. Moreover, we propose Regression IBA for explaining regression models. Using Regression IBA we observe that a model trained on cumulative severity score labels implicitly learns the severity of different X-ray regions. Finally, we propose Multi-layer IBA to generate higher resolution and more detailed attribution/saliency maps. We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics and human-independent feature importance metrics on the NIH ChestX-ray8 and BrixIA datasets. The code is publicly available.
electrical engineering and systems science
Resolved images suggest that asymmetric structures are a common feature of cold debris disks. While planets close to these disks are rarely detected, their hidden presence and gravitational perturbations provide plausible explanations for some of these features. To put constraints on the properties of yet undetected planetary companions, we aim to predict what features such a planet imprints in debris disks undergoing continuous collisional evolution. We discuss the basic equations, analytic approximations and timescales governing collisions, radiation pressure and secular perturbations. In addition, we combine our numerical model of the collisional evolution of the size and spatial distributions in debris disks with the gravitational perturbation by a single planet. We find that the distributions of orbital elements in the disks are strongly dependent on grain sizes. Secular precession is differential with respect to involved semi-major axes and grain sizes. This leads to observable differences between the big grains tracing the parent belt and the small grains in the trailing halo. Observations at different wavelengths can be used to constrain the properties of a possible planet.
astrophysics
This paper investigates an intelligent reflecting surface (IRS)-aided multi-cell multiple-input single-output (MISO) system consisting of several multi-antenna base stations (BSs) each communicating with a single-antenna user, in which an IRS is dedicatedly deployed for assisting the wireless transmission and suppressing the inter-cell interference. Under this setup, we jointly optimize the coordinated transmit beamforming at the BSs and the reflective beamforming at the IRS, for the purpose of maximizing the minimum weighted received signal-to-interference-plus-noise ratio (SINR) at users, subject to the individual maximum transmit power constraints at the BSs and the reflection constraints at the IRS. To solve the difficult non-convex minimum SINR maximization problem, we propose efficient algorithms based on alternating optimization, in which the transmit and reflective beamforming vectors are optimized in an alternating manner. In particular, we use the second-order-cone programming (SOCP) for optimizing the coordinated transmit beamforming, and develop two efficient designs for updating the reflective beamforming based on the techniques of semi-definite relaxation (SDR) and successive convex approximation (SCA), respectively. Numerical results show that the use of IRS leads to significantly higher SINR values than benchmark schemes without IRS or without proper reflective beamforming optimization; while the developed SCA-based solution outperforms the SDR-based one with lower implementation complexity.
electrical engineering and systems science
Context. The frequencies, lifetimes, and eigenfunctions of solar acoustic waves are affected by turbulent convection, which is random in space and in time. Since the correlation time of solar granulation and the periods of acoustic waves ($\sim$5 min) are similar, the medium in which the waves propagate cannot a priori be assumed to be time independent. Aims. We compare various effective-medium solutions with numerical solutions in order to identify the approximations that can be used in helioseismology. For the sake of simplicity, the medium is one dimensional. Methods. We consider the Keller approximation, the second-order Born approximation, and spatial homogenization to obtain theoretical values for the effective wave speed and attenuation (averaged over the realizations of the medium). Numerically, we computed the first and second statistical moments of the wave field over many thousands of realizations of the medium (finite-amplitude sound-speed perturbations are limited to a 30 Mm band and have a zero mean). Results. The effective wave speed is reduced for both the theories and the simulations. The attenuation of the coherent wave field and the wave speed are best described by the Keller theory. The numerical simulations reveal the presence of coda waves, trailing the coherent wave packet. These late arrival waves are due to multiple scattering and are easily seen in the second moment of the wave field. Conclusions. We find that the effective wave speed can be calculated, numerically and theoretically, using a single snapshot of the random medium (frozen medium); however, the attenuation is underestimated in the frozen medium compared to the time-dependent medium. Multiple scattering cannot be ignored when modeling acoustic wave propagation through solar granulation.
astrophysics
We demonstrate a first-principles method to study magnetotransport in materials by solving the Boltzmann transport equation (BTE) in the presence of an external magnetic field. Our approach employs ab initio electron-phonon interactions and takes spin-orbit coupling into account. We apply our method to various semiconductors (Si and GaAs) and two-dimensional (2D) materials (graphene) as representative case studies. The magnetoresistance, Hall mobility and Hall factor in Si and GaAs are in very good agreement with experiments. In graphene, our method predicts a large magnetoresistance, consistent with experiments. Analysis of the steady-state electron occupations in graphene shows the dominant role of optical phonon scattering and the breaking of the relaxation time approximation. Our work provides a detailed understanding of the microscopic mechanisms governing magnetotransport coefficients, establishing the BTE in a magnetic field as a broadly applicable first-principles tool to investigate transport in semiconductors and 2D materials.
condensed matter
Khintchine's theorem is a classical result from metric number theory which relates the Lebesgue measure of certain limsup sets with the convergence/divergence of naturally occurring volume sums. In this paper we ask whether an analogous result holds for iterated function systems (IFSs). We say that an IFS is approximation regular if we observe Khintchine type behaviour, i.e., if the size of certain limsup sets defined using the IFS is determined by the convergence/divergence of naturally occurring sums. We prove that an IFS is approximation regular if it consists of conformal mappings and satisfies the open set condition. The divergence condition we introduce incorporates the inhomogeneity present within the IFS. We demonstrate via an example that such an approach is essential. We also formulate an analogue of the Duffin-Schaeffer conjecture and show that it holds for a set of full Hausdorff dimension. Combining our results with the mass transference principle of Beresnevich and Velani \cite{BerVel}, we prove a general result that implies the existence of exceptional points within the attractor of our IFS. These points are exceptional in the sense that they are "very well approximated". As a corollary of this result, we obtain a general solution to a problem of Mahler, and prove that there are badly approximable numbers that are very well approximated by quadratic irrationals. The ideas put forward in this paper are introduced in the general setting of IFSs that may contain overlaps. We believe that by viewing IFSs from the perspective of metric number theory, one can gain greater insight into the extent to which they overlap. The results of this paper should be interpreted as a first step in this investigation.
mathematics
Photon counting optical time-domain reflectometry (PC-OTDR), based on single-photon detection, is an effective scheme for attaining high spatial resolution in optical fiber fault monitoring. Currently, because the spatial resolution of PC-OTDR is proportional to the pulse width of the laser beam, short laser pulses are essential for high spatial resolution. However, short laser pulses have a large bandwidth, which is widened by the dispersion of the fiber, causing an inevitable deterioration in spatial resolution, especially for long-haul fiber links. In this letter, we propose a scheme for dispersion-independent PC-OTDR based on an infinite backscatter technique. Our experimental results, with more than 50 km of fiber, show that the spatial resolution of the PC-OTDR system is independent of the total dispersion of the fiber under test. Our method provides an avenue for developing long-haul PC-OTDR with high performance.
quantum physics
As electric grids experience high penetration levels of renewable generation, fundamental changes are required to address real-time situational awareness. This paper uses unique traits of tensors to devise a model-free situational awareness and energy forecasting framework for distribution networks. This work formulates the state of the network at multiple time instants as a three-way tensor; hence, recovering full state information of the network is tantamount to estimating all the values of the tensor. Given measurements received from $\mu$phasor measurement units and/or smart meters, the recovery of unobserved quantities is carried out using the low-rank canonical polyadic decomposition of the state tensor---that is, the state estimation task is posed as a tensor imputation problem utilizing observed patterns in measured quantities. Two structured sampling schemes are considered: slab sampling and fiber sampling. For both schemes, we present sufficient conditions on the number of sampled slabs and fibers that guarantee identifiability of the factors of the state tensor. Numerical results demonstrate the ability of the proposed framework to achieve high estimation accuracy in multiple sampling scenarios.
electrical engineering and systems science
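The fiber-sampling recovery described in the abstract above can be illustrated with a small sketch. All names and sizes below are hypothetical, and the single least-squares step stands in for the paper's actual imputation algorithm: given the canonical polyadic (CP) factors of two modes, each observed mode-3 fiber is linear in the third factor, so a handful of fibers suffices to recover it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-rank "state tensor": I buses x J quantities x K time instants,
# built from rank-R canonical polyadic (CP) factors A, B, C.
I, J, K, R = 8, 5, 10, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Fiber sampling: suppose the mode-1 and mode-2 factors are known (e.g. already
# identified from sampled slabs) and we observe only a few mode-3 fibers x_{ij:}.
# Each observed fiber satisfies x_{ij:} = C @ (A[i] * B[j]), linear in C.
obs = [(i, j) for i in range(I) for j in range(J)][: 2 * R]   # a handful of fibers
G = np.array([A[i] * B[j] for i, j in obs])    # Khatri-Rao rows, shape (n_obs, R)
Y = np.array([X[i, j, :] for i, j in obs])     # observed fibers, shape (n_obs, K)
C_hat = np.linalg.lstsq(G, Y, rcond=None)[0].T # recover the third factor

X_hat = np.einsum('ir,jr,kr->ijk', A, B, C_hat)  # impute the full tensor
print(np.allclose(X, X_hat))                     # -> True
```

With at least R generically positioned fibers the third factor, and hence every unobserved entry, is identifiable, mirroring the kind of sufficient conditions the abstract refers to.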
We study a class of topological materials which in their momentum-space band structure exhibit three-fold degeneracies known as triple points. Focusing specifically on $\mathcal{P}\mathcal{T}$-symmetric crystalline solids with negligible spin-orbit coupling, we find that such triple points can be stabilized by little groups containing a three-, four- or six-fold rotation axis, and we develop a classification of all possible triple points as type-A vs. type-B according to the absence vs. presence of attached nodal-line arcs. Furthermore, by employing the recently discovered non-Abelian band topology, we argue that a rotation-symmetry-breaking strain transforms type-A triple points into multi-band nodal links. Although multi-band nodal-line compositions were previously theoretically conceived and related to topological monopole charges, a practical condensed-matter platform for their manipulation and inspection has hitherto been missing. By reviewing the known triple-point materials with weak spin-orbit coupling, and by performing first-principles calculations to predict new ones, we identify suitable candidates for the realization of multi-band nodal links in applied strain. In particular, we report that an ideal compound to study this phenomenon is Li$_2$NaN, in which the conversion of triple points to multi-band nodal links facilitates largely tunable density of states and optical conductivity with doping and strain, respectively.
condensed matter
Deep learning is changing many areas in molecular physics, and it has shown great potential to deliver new solutions to challenging molecular modeling problems. Along with this trend arises an increasing demand for expressive and versatile neural network architectures that are compatible with molecular systems. A new deep neural network architecture, the Molecular Configuration Transformer (Molecular CT), is introduced for this purpose. Molecular CT is composed of a relation-aware encoder module and a computationally universal geometry learning unit; it can thus account for the relational constraints between particles while remaining scalable to different particle numbers and invariant with respect to trans-rotational transforms. The computational efficiency and universality make Molecular CT versatile for a variety of molecular learning scenarios and especially appealing for transferable representation learning across different molecular systems. As examples, we show that Molecular CT enables representation learning for molecular systems at different scales, and achieves comparable or improved results on common benchmarks using a more lightweight structure compared to baseline models.
computer science
It is known that the classical framework of causal models is not general enough to allow for causal reasoning about quantum systems. While the framework has been generalized in a variety of different ways to the quantum case, much of this work leaves open whether causal concepts are fundamental to quantum theory, or only find application at an emergent level of classical devices and measurement outcomes. Here, we present a framework of quantum causal models, with causal relations defined in terms intrinsic to quantum theory, and the central object of study being the quantum process itself. Following Allen et al., Phys. Rev. X 7, 031021 (2017), the approach defines quantum causal relations in terms of unitary evolution, in a way analogous to an approach to classical causal models that assumes underlying determinism and situates causal relations in functional dependences between variables. We show that any unitary quantum circuit has a causal structure corresponding to a directed acyclic graph, and that when marginalising over local noise sources, the resulting quantum process satisfies a Markov condition with respect to the graph. We also prove a converse to this statement. We introduce an intrinsically quantum notion that plays a role analogous to the conditional independence of classical variables, and (generalizing a central theorem of the classical framework) show that d-separation is sound and complete for it in the quantum case. We present generalizations of the three rules of the classical do-calculus, in each case relating a property of the causal structure to a formal property of the quantum process, and to an operational statement concerning the outcomes of interventions. In addition, we introduce and derive similar results for classical split-node causal models, which are more closely analogous to quantum causal models than the classical causal models that are usually studied.
quantum physics
Many models of physics beyond the Standard Model predict a strong first-order phase transition (SFOPT) in the early Universe that leads to observable gravitational waves (GWs). In this paper, we propose a novel method for presenting and comparing the GW signals that are predicted by different models. Our approach is based on the observation that the GW signal has an approximately model-independent spectral shape. This allows us to represent it solely in terms of a finite number of observables, that is, a set of peak amplitudes and peak frequencies. As an example, we consider the GW signal in the real-scalar-singlet extension of the Standard Model (xSM). We construct the signal region of the xSM in the space of observables and show how it will be probed by future space-borne interferometers. Our analysis results in sensitivity plots that are reminiscent of similar plots that are typically shown for dark-matter direct-detection experiments, but which are novel in the context of GWs from a SFOPT. These plots set the stage for a systematic model comparison, the exploration of underlying model-parameter dependencies, and the construction of distribution functions in the space of observables. In our plots, the experimental sensitivities of future searches for a stochastic GW signal are indicated by peak-integrated sensitivity curves. A detailed discussion of these curves, including fit functions, is contained in a companion paper [2002.04615]. The data and code that we used in our analysis can be downloaded from Zenodo [https://doi.org/10.5281/zenodo.3699415].
high energy physics phenomenology
Conversational search is an approach to information retrieval (IR) in which users engage in a dialogue with an agent in order to satisfy their information needs. Previous conceptual work described properties and actions a good agent should exhibit. In contrast, we present a novel conceptual model defined in terms of conversational goals, which enables us to reason about current research practices in conversational search. Based on the literature, we elicit how existing tasks and test collections from the fields of IR, natural language processing (NLP) and dialogue systems (DS) fit into this model. We describe a set of characteristics that an ideal conversational search dataset should have. Lastly, we introduce MANtIS (the code and dataset are available at https://guzpenha.github.io/MANtIS/), a large-scale dataset containing multi-domain and grounded information-seeking dialogues that fulfill all of our dataset desiderata. We provide baseline results for the conversation response ranking and user intent prediction tasks.
computer science
Local windows are routinely used in computer vision, and almost without exception the center of the window is aligned with the pixels being processed. We show that this conventional wisdom is not universally applicable. When a pixel is on an edge, placing the center of the window on the pixel is one of the fundamental reasons that cause many filtering algorithms to blur the edges. Based on this insight, we propose a new Side Window Filtering (SWF) technique which aligns the window's side or corner with the pixel being processed. The SWF technique is surprisingly simple yet theoretically rooted and very effective in practice. We show that many traditional linear and nonlinear filters can be easily implemented under the SWF framework. Extensive analysis and experiments show that implementing the SWF principle can significantly improve their edge-preserving capabilities and achieve state-of-the-art performance in applications such as image smoothing, denoising, enhancement, structure-preserving texture removal, mutual-structure extraction, and HDR tone mapping. In addition to image filtering, we further show that the SWF principle can be extended to other applications involving the use of a local window. Using colorization by optimization as an example, we demonstrate that implementing the SWF principle can effectively prevent artifacts such as color leakage associated with the conventional implementation. Given the ubiquity of window-based operations in computer vision, the new SWF technique is likely to benefit many more applications.
computer science
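The side-window idea from the abstract above can be sketched in one dimension (a toy illustration, not the authors' full 2-D implementation): align the averaging window's end, rather than its center, with the pixel, and keep the candidate mean closest to the input value.

```python
import numpy as np

def side_window_mean(x, r):
    # For each pixel, try the window ending at the pixel and the window
    # starting at it, then keep the mean closest to the pixel's own value.
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        left = x[max(0, i - r): i + 1]     # side window ending at the pixel
        right = x[i: i + r + 1]            # side window starting at the pixel
        cands = np.array([left.mean(), right.mean()])
        out[i] = cands[np.argmin(np.abs(cands - x[i]))]
    return out

def centered_mean(x, r):
    # Conventional box filter: window centered on the pixel.
    return np.array([x[max(0, i - r): i + r + 1].mean() for i in range(len(x))])

step = np.array([0.0] * 8 + [1.0] * 8)     # an ideal edge
print(centered_mean(step, 2)[6:10])        # the edge is blurred
print(side_window_mean(step, 2)[6:10])     # the edge is preserved exactly
```

On the step signal, the centered filter averages across the discontinuity, while the side-window variant reproduces the edge exactly, which is the behavior the abstract exploits.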
The presence of dark matter substructure will boost the signatures of dark matter annihilation. We review recent progress on estimates of this subhalo boost factor---a ratio of the luminosity from annihilation in the subhalos to that originating from the smooth component---based on both numerical $N$-body simulations and semi-analytic modeling. Since subhalos of all scales, ranging from the Earth mass (as expected for, e.g., the supersymmetric neutralino, a prime candidate for cold dark matter) to galaxies or larger, give a substantial contribution to the annihilation rate, it is essential to understand subhalo properties over a large dynamic range of more than twenty orders of magnitude in mass. Even though numerical simulations give the most accurate assessment in resolved regimes, extrapolating the subhalo properties down to sub-grid scales comes with great uncertainties---a straightforward extrapolation yields a very large subhalo boost factor of $\gtrsim$100 for galaxy-size halos. Physically motivated theoretical models based on analytic prescriptions such as the extended Press-Schechter formalism and tidal stripping modeling, which are well tested against the simulation results, predict a more modest boost of order unity for galaxy-size halos. Giving an accurate assessment of the boost factor is essential for indirect dark matter searches; thus, having models calibrated over large ranges of host masses and redshifts is strongly encouraged.
astrophysics
In a September 1976 PRL, Eguchi and Freund considered two topological invariants: the Pontryagin number $P \sim \int d^4x \sqrt{g}R^* R$ and the Euler number $\chi \sim \int d^4x \sqrt{g}R^* R^*$, and posed the question: to what anomalies do they contribute? They found that $P$ appears in the integrated divergence of the axial fermion number current, thus providing a novel topological interpretation of the anomaly found by Kimura in 1969 and Delbourgo and Salam in 1972. However, they found no analogous role for $\chi$. This provoked my interest and, drawing on my April 1976 paper with Deser and Isham on gravitational Weyl anomalies, I was able to show that for Conformal Field Theories the trace of the stress tensor depends on just two constants: \[ g^{\mu\nu}\langle T_{\mu\nu}\rangle=\frac{1}{(4\pi)^2}(cF-aG) \] where $F$ is the square of the Weyl tensor and $\int d^4x\sqrt{g}\, G/(4\pi)^2$ is the Euler number. For free CFTs with $N_s$ massless fields of spin $s$: \[ 720c = 6N_0 + 18N_{1/2} + 72 N_1, \qquad 720a = 2N_0 + 11N_{1/2} + 124 N_1. \]
high energy physics theory
The interaction rate of an ultrarelativistic active neutrino at a temperature below the electroweak crossover plays a role in leptogenesis scenarios based on oscillations between active neutrinos and GeV-scale sterile neutrinos. By making use of a Euclideanization property of a thermal light-cone correlator, we determine the $O(g)$ correction to such an interaction rate in the high-temperature limit $\pi T \gg m_W$, finding a $\sim 15\ldots40\%$ reduction. For a benchmark point, this NLO correction decreases the lepton asymmetries produced by $\sim 1\%$.
high energy physics phenomenology
We introduce the ADMM-pruned Compressive AutoEncoder (CAE-ADMM), which uses the Alternating Direction Method of Multipliers (ADMM) to optimize the trade-off between distortion and efficiency of lossy image compression. Specifically, ADMM in our method is used to promote sparsity and thereby implicitly optimize the bitrate, in contrast to the entropy estimators used in previous research. Experiments on public datasets show that our method outperforms the original CAE and some traditional codecs in terms of SSIM/MS-SSIM metrics, at reasonable inference speed.
computer science
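The sparsity-promoting role of ADMM mentioned in the abstract above can be illustrated on a plain lasso problem (a textbook sketch, not the paper's autoencoder pruning; all sizes and parameter values are arbitrary): the z-update is a soft-thresholding step that zeroes out small coefficients, which is exactly how ADMM induces sparsity.

```python
import numpy as np

# min 0.5*||Ax - b||^2 + lam*||x||_1 solved by ADMM with the splitting x = z.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    P = np.linalg.inv(AtA + rho * np.eye(n))   # cached for the x-update
    for _ in range(iters):
        x = P @ (Atb + rho * (z - u))          # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)   # sparsity-inducing proximal step
        u = u + x - z                          # dual ascent on the constraint x = z
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])      # the sparse support is recovered
```

The returned iterate z is exactly sparse by construction, which is the same mechanism the abstract uses in place of an explicit entropy estimator.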
Spectra of unbound electron-positron pairs (dielectrons, in brief) and photons from decays of parapositronia produced in ultraperipheral collisions of electrically charged objects are calculated. Their shapes at energies of the NICA collider are demonstrated. Soft dielectrons and photons are abundantly produced. The relevance of these processes to the astrophysical problem of cooling electron-positron pairs and the intense emission of 511 keV photons from the Galactic center is discussed.
high energy physics phenomenology
Histology images are inherently symmetric under rotation, where each orientation is equally as likely to appear. However, this rotational symmetry is not widely utilised as prior knowledge in modern Convolutional Neural Networks (CNNs), resulting in data hungry models that learn independent features at each orientation. Allowing CNNs to be rotation-equivariant removes the necessity to learn this set of transformations from the data and instead frees up model capacity, allowing more discriminative features to be learned. This reduction in the number of required parameters also reduces the risk of overfitting. In this paper, we propose Dense Steerable Filter CNNs (DSF-CNNs) that use group convolutions with multiple rotated copies of each filter in a densely connected framework. Each filter is defined as a linear combination of steerable basis filters, enabling exact rotation and decreasing the number of trainable parameters compared to standard filters. We also provide the first in-depth comparison of different rotation-equivariant CNNs for histology image analysis and demonstrate the advantage of encoding rotational symmetry into modern architectures. We show that DSF-CNNs achieve state-of-the-art performance, with significantly fewer parameters, when applied to three different tasks in the area of computational pathology: breast tumour classification, colon gland segmentation and multi-tissue nuclear segmentation.
electrical engineering and systems science
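The rotation-equivariance property underlying the abstract above can be demonstrated with a plain C4 group convolution, i.e., correlation with four rotated copies of a filter followed by orientation pooling. This is a simplification of the dense steerable filters in the paper, and the tiny sizes below are arbitrary.

```python
import numpy as np

def corr2d(img, f):
    # Valid 2-D cross-correlation, written out explicitly to stay dependency-free.
    H, W = img.shape
    k = f.shape[0]
    return np.array([[np.sum(img[i:i + k, j:j + k] * f)
                      for j in range(W - k + 1)] for i in range(H - k + 1)])

def c4_response(img, f):
    # Group convolution over C4: correlate with all four np.rot90 copies of the
    # filter, then max-pool over orientations.
    return np.max([corr2d(img, np.rot90(f, r)) for r in range(4)], axis=0)

rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
f = rng.standard_normal((3, 3))

# Equivariance: rotating the input rotates the pooled response map.
lhs = c4_response(np.rot90(img), f)
rhs = np.rot90(c4_response(img, f))
print(np.allclose(lhs, rhs))                   # -> True
```

Because the filter set is closed under 90-degree rotations, no orientation has to be learned from data, which is the capacity saving the abstract describes.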
This paper introduces a large-scale multimodal and multilingual dataset that aims to facilitate research on grounding words to images in their contextual usage in language. The dataset consists of images selected to unambiguously illustrate concepts expressed in sentences from movie subtitles. The dataset is a valuable resource as (i) the images are aligned to text fragments rather than whole sentences; (ii) multiple images are possible for a text fragment and a sentence; (iii) the sentences are free-form and real-world like; (iv) the parallel texts are multilingual. We set up a fill-in-the-blank game for humans to evaluate the quality of the automatic image selection process of our dataset. We show the utility of the dataset on two automatic tasks: (i) fill-in-the blank; (ii) lexical translation. Results of the human evaluation and automatic models demonstrate that images can be a useful complement to the textual context. The dataset will benefit research on visual grounding of words especially in the context of free-form sentences.
computer science
The progenitors of Type Ia supernovae (SNe Ia) are debated, particularly the evolutionary state of the binary companion that donates mass to the exploding carbon-oxygen white dwarf. In previous work, we presented hydrodynamic models and optically thin radio synchrotron light-curves of SNe Ia interacting with detached, confined shells of CSM, representing CSM shaped by novae. In this work, we extend these light-curves to the optically thick regime, considering both synchrotron self-absorption and free-free absorption. We obtain simple formulae to describe the evolution of optical depth seen in the simulations, allowing optically thick light-curves to be approximated for arbitrary shell properties. We then demonstrate the use of this tool by interpreting published radio data. First, we consider the non-detection of PTF11kx - an SN Ia known to have a detached, confined shell - and find that the non-detection is consistent with current models for its CSM, and that observations at a later time would have been useful for this event. Secondly, we statistically analyze an ensemble of radio non-detections for SNe Ia with no signatures of interaction, and find that shells with masses $(10^{-4}-0.3)~M_\odot$ located $(10^{15}-10^{16})$ cm from the progenitor are currently not well constrained by radio datasets, due to their dim, rapidly-evolving light-curves.
astrophysics
By making use of the nice behavior of Hawking masses of slices of a weak solution of inverse mean curvature flow in three-dimensional asymptotically hyperbolic manifolds, we are able to show that each slice of the flow is star-shaped after a long time, and we thereby obtain the regularity of the weak solution of inverse mean curvature flow in asymptotically hyperbolic manifolds. As an application, we prove that the limit of the Hawking mass of the slices of a weak solution of inverse mean curvature flow with any connected $C^2$-smooth surface as initial data in asymptotically AdS-Schwarzschild manifolds with positive mass is greater than or equal to the total mass, which is completely different from the situation in the asymptotically flat case.
mathematics
We analyze the singularities of the two-point function in a conformal field theory at finite temperature. In a free theory, the only singularity is along the boundary light cone. In the holographic limit, a new class of singularities emerges since two boundary points can be connected by a nontrivial null geodesic in the bulk, encircling the photon sphere of the black hole. We show that these new singularities are resolved by tidal effects due to the black hole curvature, by solving the string worldsheet theory in the Penrose limit. Singularities in the asymptotically flat black hole geometry are also discussed.
high energy physics theory
Gamma-ray burst (GRB) 150910A was detected by {\it Swift}/BAT, and then rapidly observed by {\it Swift}/XRT, {\it Swift}/UVOT, and ground-based telescopes. We report Lick Observatory spectroscopic and photometric observations of GRB~150910A, and we investigate the physical origins of both the optical and X-ray afterglows, incorporating data obtained with BAT and XRT. The light curves show that the jet emission episode lasts $\sim 360$~s with a sharp pulse from BAT to XRT (Episode I). In Episode II, the optical emission has a smooth onset bump followed by a normal decay ($\alpha_{\rm R,2} \approx -1.36$), as predicted in the standard external shock model, while the X-ray emission exhibits a plateau ($\alpha_{\rm X,1} \approx -0.36$) followed by a steep decay ($\alpha_{\rm X,2} \approx -2.12$). The light curves show obvious chromatic behavior with an excess in the X-ray flux. Our results suggest that GRB 150910A is an unusual GRB driven by a newly born magnetar with its extremely energetic magnetic dipole (MD) wind in Episode II, which overwhelmingly dominates the observed early X-ray plateau. The radiative efficiency of the jet prompt emission is $\eta_{\gamma} \approx 11\%$. The MD wind emission was detected in both the BAT and XRT bands, making it the brightest among the current sample of MD winds seen by XRT. We infer the surface polar cap magnetic field strength ($B_p$) and the initial spin period ($P_0$) of the magnetar to be $1.02 \times 10^{15}~{\rm G} \leq B_{p} \leq 1.80 \times 10^{15}~{\rm G}$ and 1~ms $\leq P_{0} \leq 1.77$~ms, and the radiative efficiency of the wind is $\eta_w \geq 32\%$.
astrophysics
Recent work explores the candidate phases of the 4d adjoint quantum chromodynamics (QCD$_4$) with an SU(2) gauge group and two massless adjoint Weyl fermions. Both Cordova-Dumitrescu and Bi-Senthil propose possible low energy 4d topological quantum field theories (TQFTs) to saturate the higher 't Hooft anomalies of adjoint QCD$_4$ under a renormalization-group (RG) flow from high energy. In this work, we generalize the symmetry-extension method [arXiv:1705.06728] to higher symmetries, and formulate higher group cohomology and cobordism theory approach to construct higher-symmetric TQFTs. We prove that the symmetry-extension method saturates certain anomalies, but also prove that neither $A \mathcal{P}_2(B_2)$ nor $\mathcal{P}_2(B_2)$ can be fully trivialized, with the background 1-form field $A$, Pontryagin square $\mathcal{P}_2$ and 2-form field $B_2$. Surprisingly, this indicates an obstruction to constructing a fully 1-form center and 0-form chiral symmetry (full discrete axial symmetry) preserving 4d TQFT with confinement, a no-go scenario via symmetry-extension for specific higher anomalies. We comment on the implications and constraints on deconfined quantum critical points (dQCP), quantum spin liquids (QSL) or quantum fermionic liquids in condensed matter, and ultraviolet-infrared (UV-IR) duality in 3+1 spacetime dimensions.
high energy physics theory
The exceptional Drinfel'd algebra (EDA) is a Leibniz algebra introduced to provide an algebraic underpinning with which to explore generalised notions of U-duality in M-theory. In essence it provides an M-theoretic analogue of the way a Drinfel'd double encodes generalised T-dualities of strings. In this note we detail the construction of the EDA in the case where the regular U-duality group is $E_{6(6)}$. We show how the EDA can be realised geometrically as a generalised Leibniz parallelisation of the exceptional generalised tangent bundle for a six-dimensional group manifold $G$, endowed with a Nambu-Lie structure. When the EDA is of coboundary type, we show how a natural generalisation of the classical Yang-Baxter equation arises. The construction is illustrated with a selection of examples including some which embed Drinfel'd doubles and others that are not of this type.
high energy physics theory
With millions of images that are shared online on social networking sites, effective methods for image privacy prediction are highly needed. In this paper, we propose an approach for fusing object, scene context, and image tags modalities derived from convolutional neural networks for accurately predicting the privacy of images shared online. Specifically, our approach identifies the set of most competent modalities on the fly, according to each new target image whose privacy has to be predicted. The approach considers three stages to predict the privacy of a target image, wherein we first identify the neighborhood images that are visually similar and/or have similar sensitive content as the target image. Then, we estimate the competence of the modalities based on the neighborhood images. Finally, we fuse the decisions of the most competent modalities and predict the privacy label for the target image. Experimental results show that our approach predicts the sensitive (or private) content more accurately than the models trained on individual modalities (object, scene, and tags) and prior privacy prediction works. Also, our approach outperforms strong baselines, that train meta-classifiers to obtain an optimal combination of modalities.
computer science
With their continued increase in coverage and quality, data collected from personal air quality monitors have become an increasingly valuable tool to complement existing public health monitoring systems over urban areas. However, the potential of using such `citizen science data' for automatic early warning systems is hampered by the lack of models able to capture the high-resolution, nonlinear spatio-temporal features stemming from local emission sources such as traffic, residential heating and commercial activities. In this work, we propose a machine learning approach to forecast high-frequency spatial fields which has two distinctive advantages over standard neural network methods in time: 1) sparsity of the neural network via a spike-and-slab prior, and 2) a small parametric space. The introduction of stochastic neural networks generates additional uncertainty, and in this work we propose a fast approach for forecast calibration, both marginal and spatial. We focus on assessing exposure to urban air pollution in San Francisco, and our results suggest an improvement of 35.7% in the mean squared error over a standard time series approach with a calibrated forecast for up to 5 days.
statistics
Acoustic Event Detection (AED), which aims at detecting categories of events based on audio signals, has found application in many intelligent systems. Recently, deep neural networks have significantly advanced this field and reduced detection errors to a large extent. However, how to efficiently execute deep models in AED has received much less attention. Meanwhile, state-of-the-art AED models are based on large deep models, which are computationally demanding and challenging to deploy on devices with constrained computational resources. In this paper, we present a simple yet effective compression approach which jointly leverages knowledge distillation and quantization to compress a larger network (teacher model) into a compact network (student model). Experimental results show that the proposed technique not only lowers the error rate of the original compact network by 15% through distillation but also further reduces its model size to a large extent (2% of the teacher, 12% of the full-precision student) through quantization.
electrical engineering and systems science
Cobalt orthogermanate (GeCo$_2$O$_4$) is a unique system in the family of cobalt spinels ACo$_2$O$_4$ (A= Sn, Ti, Ru, Mn, Al, Zn, Fe, etc.) in which magnetic Co ions stabilize on the pyrochlore lattice exhibiting a large degree of orbital frustration. Due to the complexity of the low-temperature antiferromagnetic (AFM) ordering and long-range magnetic exchange interactions, the lattice dynamics and magnetic structure of the GeCo$_2$O$_4$ spinel have remained puzzling. To address this issue, here we present theoretical and experimental investigations of the highly frustrated magnetic structure, and the infrared (IR) and Raman-active phonon modes in the spinel GeCo$_2$O$_4$, which exhibits an AFM ordering below the N\'eel temperature $T_N$ ~21 K, followed by a cubic ($Fd{\bar 3}m$) to tetragonal ($I4_{1}/amd$) structural phase transition at $T_S$ ~16 K. Our density-functional theory (DFT+U) calculations reveal that one needs to consider magnetic-exchange interactions up to the third nearest neighbors to get an accurate description of the low-temperature AFM order in GeCo$_2$O$_4$. At room temperature three distinct IR-active modes ($T_{1u}$) are observed at frequencies 680, 413, and 325 cm$^{-1}$ along with four Raman-active modes $A_{1g}$, $T_{2g}(1)$, $T_{2g}(2)$, and $E_{g}$ at frequencies 760, 647, 550, and 308 cm$^{-1}$, respectively, which match reasonably well with our DFT+U calculated values. All the IR-active and Raman-active phonon modes exhibit signatures of moderate spin-phonon coupling. The temperature dependence of various parameters, such as the shift, width, and intensity, of the Raman-active modes, is also discussed. Noticeable changes around $T_N$ and $T_S$ are observed in the Raman line parameters of the $E_{g}$ and $T_{2g}$ modes, which are associated with the modulation of the Co-O bonds in CoO$_6$ octahedra during the excitations of these modes.
condensed matter
Recently, sequence-to-sequence models have started to achieve state-of-the-art performance on standard speech recognition tasks when processing audio data in batch mode, i.e., when the complete audio data is available at the start of processing. However, when it comes to performing run-on recognition on an input stream of audio data while producing recognition results in real-time and with low word-based latency, these models face several challenges. For many techniques, the whole audio sequence to be decoded needs to be available at the start of the processing, e.g., for the attention mechanism or the bidirectional LSTM (BLSTM). In this paper, we propose several techniques to mitigate these problems. We introduce an additional loss function controlling the uncertainty of the attention mechanism, a modified beam search identifying partial, stable hypotheses, ways of working with BLSTMs in the encoder, and the use of chunked BLSTMs. Our experiments show that with the right combination of these techniques, it is possible to perform run-on speech recognition with low word-based latency without sacrificing word error rate performance.
electrical engineering and systems science
We present a delay-compensating control method that transforms exponentially stabilizing controllers for an undelayed system into a sample-based predictive controller with numerical integration. Our method handles both first-order and transport delays in actuators and trades off numerical accuracy against computation delay to guarantee stability under hardware limitations. Through hybrid stability analysis and numerical simulation, we demonstrate the efficacy of our method from both theoretical and simulation perspectives.
electrical engineering and systems science
It has been suggested that low energy effective field theories should satisfy given conditions in order to be successfully embedded into string theory. In the case of a single canonically normalized scalar field this translates into conditions on its potential and the derivatives thereof. In this Letter we revisit stochastic models of small field inflation and study the compatibility of the swampland constraints with entropy considerations. We show that stochastic inflation either violates entropy bounds or the swampland criterion on the slope of the scalar field potential. Furthermore, we illustrate that such models are faced with a graceful exit problem: any patch of space which exits the region of eternal inflation is either not large enough to explain the isotropy of the cosmic microwave background, or has a spectrum of fluctuations with an unacceptably large red tilt.
high energy physics theory
We present a simple modification of the Altarelli-Feruglio $A_4$ flavor symmetry model using a minimal number of parameters congruous with the symmetries of the original theory. The resulting model is consistent with all presently known data on neutrino masses and mixings. Furthermore, it makes testable tight predictions for future experiments aimed at improving the measurements of $\sin^2\theta_{12}$, $\sin^2\theta_{13}$, $\delta_{\rm CP}$ and the neutrinoless double-beta decay parameter $|m_{ee}|$. Our model exploits the unique possibility of multiple allowed, yet qualitatively different, contractions of fields charged under the $A_4$ discrete symmetry.
high energy physics phenomenology
We investigate the Morita equivalences of (4+1)-dimensional topological orders. We show that any (4+1)-dimensional super (fermionic) topological order admits a gapped boundary condition -- in other words, all (4+1)-dimensional super topological orders are Morita trivial. As a result, there are no inherently gapless super (3+1)-dimensional theories. On the other hand, we show that there are infinitely many algebraically Morita-inequivalent bosonic (4+1)-dimensional topological orders.
high energy physics theory
In this paper, we study convex optimization problems where agents of a network cooperatively minimize a global objective function which consists of multiple local objective functions. Different from most of the existing works, the local objective function of each agent is presented as the average of finitely many instantaneous functions. The intention of this work is to solve large-scale optimization problems where the local objective functions are complicated and numerous. Integrating the gradient tracking algorithm with stochastic averaging gradient technology, we propose a novel distributed stochastic gradient tracking (termed S-DIGing) algorithm. At each time instant, only one randomly selected gradient of an instantaneous function is computed and applied to approximate the gradient of the local objective function. Based on a primal-dual interpretation of the S-DIGing algorithm, we show that the S-DIGing algorithm linearly converges to the global optimal solution when the step-size lies in an explicit interval, under the assumptions that the instantaneous functions are strongly convex and have Lipschitz-continuous gradients. Numerical experiments on the logistic regression problem are presented to demonstrate the practicability of the algorithm and the correctness of the theoretical results.
mathematics
It is a challenging task to extract the best of both worlds by combining the spatial characteristics of a visible image and the spectral content of an infrared image. In this work, we propose a spatially constrained adversarial autoencoder that extracts deep features from the infrared and visible images to obtain a more exhaustive and global representation. Specifically, we propose a residual autoencoder architecture, regularised by a residual adversarial network, to generate a more realistic fused image. The residual module serves as the primary building block for the encoder, decoder and adversarial network; in addition, the symmetric skip connections perform the functionality of embedding the spatial characteristics directly from the initial layers of the encoder structure to the decoder part of the network. The spectral information in the infrared image is incorporated by adding the feature maps over several layers in the encoder part of the fusion structure, which makes inference on both the visual and infrared images separately. In order to efficiently optimize the parameters of the network, we propose an adversarial regulariser network which performs supervised learning on the fused image and the original visual image.
statistics
In this work, we consider secure communications in wireless multi-user (MU) multiple-input single-output (MISO) systems with channel coding in the presence of a multi-antenna eavesdropper (Eve). In this setting, we exploit machine learning (ML) tools to design soft and hard decoding schemes by using precoded pilot symbols as training data. In this context, we propose ML frameworks for decoders that allow an Eve to determine the transmitted message with high accuracy. We thereby show that MU-MISO systems are vulnerable to such eavesdropping attacks even when relatively secure transmission techniques are employed, such as symbol-level precoding (SLP). To counteract this attack, we propose two novel SLP-based schemes that increase the bit-error rate at Eve by impeding the learning process. We design these two security-enhanced schemes to meet different requirements regarding complexity, security, and power consumption. Simulation results validate both the ML-based eavesdropping attacks as well as the countermeasures, and show that the gain in security is achieved without affecting the decoding performance at the intended users.
electrical engineering and systems science
We classify generic coadjoint orbits for symplectomorphism groups of compact symplectic surfaces with or without boundary. We also classify simple Morse functions on such surfaces.
mathematics
Inflationary perturbations are approximately Gaussian and deviations from Gaussianity are usually calculated using in-in perturbation theory. This method, however, fails for unlikely events on the tail of the probability distribution: in this regime non-Gaussianities are important and perturbation theory breaks down for $|\zeta| \gtrsim |f_{\rm \scriptscriptstyle NL}|^{-1}$. In this paper we show that this regime is amenable to a semiclassical treatment, $\hbar \to 0$. In this limit the wavefunction of the Universe can be calculated in saddle-point, corresponding to a resummation of all the tree-level Witten diagrams. The saddle can be found by solving numerically the classical (Euclidean) non-linear equations of motion, with prescribed boundary conditions. We apply these ideas to a model with an inflaton self-interaction $\propto \lambda \dot\zeta^4$. Numerical and analytical methods show that the tail of the probability distribution of $\zeta$ goes as $\exp(-\lambda^{-1/4}\zeta^{3/2})$, with a clear non-perturbative dependence on the coupling. Our results are relevant for the calculation of the abundance of primordial black holes.
high energy physics theory
We explore the interplay of New Physics (NP) effects in $(g-2)_\ell$ and $h \to \ell^+ \ell^-$ within the Standard Model Effective Field Theory (SMEFT) framework, including one-loop Renormalization Group (RG) evolution of the Wilson coefficients as well as matching to the observables below the electroweak symmetry breaking scale. We include both the leading dimension six chirality flipping operators including a Higgs and $SU(2)_L$ gauge bosons as well as four-fermion scalar and tensor operators, forming a closed operator set under the SMEFT RG equations. We compare present and future experimental sensitivity to different representative benchmark scenarios. We also consider two simple UV completions, a Two Higgs Doublet Model and a single scalar LeptoQuark extension of the SM, and show how tree level matching to SMEFT followed by the one-loop RG evolution down to the electroweak scale can reproduce with high accuracy the $(g-2)_\ell$ and $h \to \ell^+ \ell^-$ contributions obtained by the complete one- and even two-loop calculations in the full models.
high energy physics phenomenology
We consider the principal subspaces of certain level $k\geqslant 1$ integrable highest weight modules and generalized Verma modules for the untwisted affine Lie algebras in types $D$, $E$ and $F$. Generalizing the approach of G. Georgiev we construct their quasi-particle bases. We use the bases to derive presentations of the principal subspaces, calculate their character formulae and find some new combinatorial identities.
mathematics
This work presents a block triple-relaxation-time (B-TriRT) lattice Boltzmann model for simulating melting in a rectangular cavity heated from below at a high Rayleigh number (Ra = $10^8$). A benchmark test shows that the present B-TriRT can dramatically reduce the numerical diffusion across the phase interface. In addition, the influence of the location of the heated region is investigated. The results indicate that the location of the heated region plays an essential role in the melting rate, and that full melting occurs earliest when the heated region is located in the middle region.
physics
The iron stress-induced protein A (IsiA) is a source of interest and debate in biological research. The IsiA super-complex, binding over 200 chlorophylls, assembles in multimeric rings around photosystem I (PSI). Recently, the IsiA-PSI structure was resolved to 3.48 {\AA}. Based on this structure, we created a model simulating a single excitation event in an IsiA monomer. This model enabled us to calculate the fluorescence and the localisation of the excitation in the IsiA structure. To further examine this system, noise was introduced to the model in two forms -- thermal and positional. Introducing noise highlights the functional differences in the system between cryogenic temperatures and biologically relevant temperatures. Our results show that the energetics of the IsiA pigment-protein complex are very robust at room temperature. Nevertheless, shifts in the position of specific chlorophylls lead to large changes in their optical and fluorescence properties. Based on these results we discuss the implications of highly robust structures, with potential for serving different roles in a context-dependent manner, on our understanding of the function and evolution of photosynthetic processes.
physics
The subtle and unique imprint of dark matter substructure on extended arcs in strong lensing systems contains a wealth of information about the properties and distribution of dark matter on small scales and, consequently, about the underlying particle physics. However, teasing out this effect poses a significant challenge since the likelihood function for realistic simulations of population-level parameters is intractable. We apply recently-developed simulation-based inference techniques to the problem of substructure inference in galaxy-galaxy strong lenses. By leveraging additional information extracted from the simulator, neural networks are efficiently trained to estimate likelihood ratios associated with population-level parameters characterizing substructure. Through proof-of-principle application to simulated data, we show that these methods can provide an efficient and principled way to simultaneously analyze an ensemble of strong lenses, and can be used to mine the large sample of lensing images deliverable by near-future surveys for signatures of dark matter substructure.
astrophysics
We discuss the Cottingham formula and evaluate the proton-neutron electromagnetic mass difference exploiting the state-of-the-art phenomenological input. We decompose individual contributions to the mass splitting into Born, inelastic and subtraction terms. We evaluate the subtraction-function contribution from the experimental input only and the Born term accounting for the modern low-$Q^2$ data.
high energy physics phenomenology
The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive. Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding. The goal of this work is to make generalization more intuitive. Using visualization methods, we discuss the mystery of generalization, the geometry of loss landscapes, and how the curse (or, rather, the blessing) of dimensionality causes optimizers to settle into minima that generalize well.
computer science
In this paper we propose a refinement of Sims conjecture concerning the cardinality of the point stabilizers in finite primitive groups and we make some progress towards this refinement. In this process, when dealing with primitive groups of diagonal type, we construct a finite primitive group $G$ on $\Omega$ and two distinct points $\alpha,\beta\in \Omega$ with $G_{\alpha\beta}\unlhd G_\alpha$ and $G_{\alpha\beta}\ne 1$, where $G_{\alpha}$ is the stabilizer of $\alpha$ in $G$ and $G_{\alpha\beta}$ is the stabilizer of $\alpha$ and $\beta$ in $G$. In particular, this example gives an answer to a question raised independently by Peter Cameron and by Alexander Fomin.
mathematics
The special Galileon stands out amongst scalar field theories due to its soft limits, non-linear symmetries and scattering amplitudes. This prompts the question what the origin of its underlying symmetry is. We show that it is intimately connected to general relativity: the special Galileon is the Goldstone mode of the affine group, consisting of linear coordinate transformations, analogous to the dilaton for conformal symmetries. We construct the corresponding metric, and discuss various relations to gravity, Yang-Mills and the non-linear sigma-model.
high energy physics theory
Being able to predict when invoices will be paid is valuable in multiple industries and supports decision-making processes in most financial workflows. However, due to the complexity of data related to invoices and the fact that the decision-making process is not registered in the accounts receivable system, performing this prediction becomes a challenge. In this paper, we present a prototype able to support collectors in predicting the payment of invoices. This prototype is part of a solution developed in partnership with a multinational bank and it has reached up to 81% prediction accuracy, which improved the prioritization of customers and supported the daily work of collectors. Our simulations show that the adoption of our model to prioritize the work of collectors saves up to ~1.75 million dollars per month. The methodology and results presented in this paper will help researchers and practitioners deal with the problem of invoice payment prediction, providing insights and examples of how to tackle issues present in real data.
computer science
In this four-part research notebook (quantum computation and applications, quantum computation and algorithms, quantum communication protocols, and universal quantum computation), written for quantum engineers, researchers, and scientists, we discuss and summarize the core principles and practical application areas of quantum computation. We first discuss the historical perspective from which quantum computing emerged in the early days of computing, before the dominance of modern microprocessors, and the re-emergence of that quest with the sunset of Moore's law in the current decade. The mapping of computation onto the behavior of physical systems is a historical challenge, vividly illustrated by considering how quantum bits may be realized with a wide variety of physical systems, spanning from atoms to photons, using semiconductors and superconductors. The computing algorithms also change with the underlying variety of physical systems, owing to the possibility of encoding information in quantum systems, compared to ordinary classical computers, and to the new abilities afforded by quantum systems. We also consider the emerging engineering, science, technology, business, and social implications of these advancements. We describe the substantial differences between the quantum and classical computation paradigms, and then discuss the engineering challenges currently faced by developers of real quantum computation systems. We evaluate the essential technology required for quantum computers to function correctly. Finally, we discuss the potential business applications that can be touched by these new computation capabilities. We utilize the IBM Quantum Experience to run a real-world problem, although on a small scale.
quantum physics
Hierarchies permeate the structure of real networks, whose nodes can be ranked according to different features. However, networks are far from tree-like structures and the detection of hierarchical ordering remains a challenge, hindered by the small-world property and the presence of a large number of cycles, in particular clustering. Here, we use geometric representations of undirected networks to achieve an enriched interpretation of hierarchy that integrates features defining popularity of nodes and similarity between them, such that the more similar a node is to a less popular neighbor the higher the hierarchical load of the relationship. The geometric approach allows us to measure the local contribution of nodes and links to the hierarchy within a unified framework. Additionally, we propose a link filtering method, the similarity filter, able to extract hierarchical backbones containing the links that represent statistically significant deviations with respect to the maximum entropy null model for geometric heterogeneous networks. We applied our geometric approach to the detection of similarity backbones of real networks in different domains and found that the backbones preserve local topological features at all scales. Interestingly, we also found that similarity backbones favor cooperation in evolutionary dynamics modelling social dilemmas.
physics
The ability to generate samples of the random effects from their conditional distributions is fundamental for inference in mixed effects models. Random walk Metropolis is widely used to perform such sampling, but this method is known to converge slowly for medium dimensional problems, or when the joint structure of the distributions to sample is spatially heterogeneous. The main contribution consists of an independent Metropolis-Hastings (MH) algorithm based on a multidimensional Gaussian proposal that takes into account the joint conditional distribution of the random effects and does not require any tuning. Indeed, this distribution is automatically obtained thanks to a Laplace approximation of the incomplete data model. Such approximation is shown to be equivalent to linearizing the structural model in the case of continuous data. Numerical experiments based on simulated and real data illustrate the performance of the proposed methods. For fitting nonlinear mixed effects models, the suggested MH algorithm is efficiently combined with a stochastic approximation version of the EM algorithm for maximum likelihood estimation of the global parameters.
statistics
Swampland conjectures are a set of proposed necessary conditions for a low-energy effective field theory to have a UV completion inside a theory of quantum gravity. Swampland conjectures have interesting phenomenological consequences, and conversely phenomenological considerations are useful guidelines in sharpening our understanding of quantum gravity.
high energy physics phenomenology
The concept of Higgs inflation can be elegantly incorporated in the Next-to-Minimal Supersymmetric Standard Model (NMSSM). A linear combination of the two Higgs-doublet fields plays the role of the inflaton which is non-minimally coupled to gravity. This non-minimal coupling appears in the low-energy effective superpotential and changes the phenomenology at the electroweak scale. While the field content of the inflation-inspired model is the same as in the NMSSM, there is another contribution to the $\mu$ term in addition to the vacuum expectation value of the singlet. We explore this extended parameter space and point out scenarios with phenomenological differences compared to the pure NMSSM. A special focus is set on the electroweak vacuum stability and the parameter dependence of the Higgs and neutralino sectors. We highlight regions which yield a SM-like $125\,$GeV Higgs boson compatible with the experimental observations and are in accordance with the limits from searches for additional Higgs bosons. Finally, we study the impact of the non-minimal coupling to gravity on the Higgs mixing and in turn on the decays of the Higgs bosons in this model.
high energy physics phenomenology
Medical image segmentation is routinely performed to isolate regions of interest, such as organs and lesions. Currently, deep learning is the state of the art for automatic segmentation, but it is usually limited by the need for supervised training with large datasets that have been manually segmented by trained clinicians. The goal of semi-supervised and unsupervised image segmentation is to greatly reduce, or even eliminate, the need for training data and therefore to minimize the burden on clinicians when training segmentation models. To this end, we introduce a novel network architecture capable of unsupervised and semi-supervised image segmentation called TricycleGAN. This approach uses three generative models to learn translations between medical images and segmentation maps, using edge maps as an intermediate step. Distinct from other approaches based on generative networks, TricycleGAN relies on shape priors rather than colour and texture priors. As such, it is particularly well-suited for several domains of medical imaging, such as ultrasound imaging, where commonly used visual cues may be absent. We present experiments with TricycleGAN on a clinical dataset of kidney ultrasound images and the benchmark ISIC 2018 skin lesion dataset.
electrical engineering and systems science
This paper is devoted to linear space representations of contextual probabilities in a generalized Fock space. This makes it possible to use the calculus of creation and annihilation operators to express probabilistic dynamics in the Fock space (in particular, a wide class of classical kinetic equations). In this way we reproduce the Doi-Peliti formalism. The context-dependence of probabilities can be quantified, with the aid of the generalized formula of total probability, by the magnitude of the interference term.
quantum physics
Small amplitude dipolar oscillations are considered in artificial spin ice on a square lattice in two dimensions. The net magnetic moment of each elongated magnetic island in the spin ice is assumed to have Heisenberg-like dynamics. Each island's magnetic moment is assumed to be influenced by shape anisotropies and by the dipolar interactions with its nearest neighbors. The magnetic dynamics is linearized around one of the ground states, leading to an $8\times 8$ matrix to be diagonalized for the magnetic spin wave modes. Analytic solutions are found and classified as antisymmetric and symmetric with regard to their in-plane dynamic fluctuations. Although only the leading dipolar interactions are included, modes similar to these may be observable experimentally.
condensed matter
Frege's definition of the real numbers, as envisaged in the second volume of \textit{Grundgesetze der Arithmetik}, is fatally flawed by the inconsistency of Frege's ill-fated \textit{Basic Law V}. We restate Frege's definition in a consistent logical framework and investigate whether it can provide a logical foundation of real analysis. Our conclusion will deem it doubtful that such a foundation along the lines of Frege's own indications is possible at all.
mathematics
In this article, we describe the spectral sheaves of algebras of commuting differential operators of genus one and rank two with singular spectral curve, solving a problem posed by Previato and Wilson. We also classify all indecomposable semi-stable sheaves of slope one and ranks two or three on a cuspidal Weierstrass cubic.
mathematics
We construct the classical double copy formalism for M-theory. This extends the current state of the art by including the three form potential of eleven dimensional supergravity along with the metric. The key for this extension is to construct a Kerr-Schild type Ansatz for exceptional field theory. This Kerr-Schild Ansatz then allows us to find the solutions of charged objects such as the membrane from a set of single copy fields. The exceptional field theory formalism then automatically produces the IIB Kerr-Schild ansatz allowing the construction of the single copy for the fields of IIB supergravity (with manifest $SL(2)$ symmetry).
high energy physics theory
In this work we consider the Landau-de Gennes model for liquid crystals with an external electromagnetic field to model the occurrence of the Saturn ring effect under the assumption of rotational equivariance. After a rescaling of the energy, a variational limit is derived. Our analysis relies on precise estimates around the singularities and the study of a radial auxiliary problem in regions where a continuous director field exists. Studying the limit problem, we explain the transition between the dipole and Saturn ring configurations and the occurrence of a hysteresis phenomenon, giving a rigorous explanation of what was conjectured previously by [H. Stark, Eur. Phys. J. B 10, 311-321 (1999)].
mathematics
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure." However, as researchers, practitioners and system designers, a key challenge to anticipating risks is overcoming what Clarke (1962) called 'failures of imagination.' The growing research on bias, fairness, and transparency in computational systems aims to illuminate and mitigate harms, and could thus help inform reflections on possible negative impacts of particular pieces of technical work. The prevalent notion of computational harms -- narrowly construed as either allocational or representational harms -- does not fully capture the open, context dependent, and unobservable nature of harms across the wide range of AI infused systems. The current literature focuses on a small range of examples of harms to motivate algorithmic fixes, overlooking the wider scope of probable harms and the way these harms might affect different stakeholders. The system affordances may also exacerbate harms in unpredictable ways, as they determine stakeholders' control (including of non-users) over how they use and interact with a system output. To effectively assist in anticipating harmful uses, we argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
computer science
We present candidates for non-pulsating stars lying in the classical Cepheid instability strip based on OGLE photometric maps combined with Str\"omgren photometry obtained with the 4.1-m SOAR telescope, and Gaia DR2 data in four fields in the Large Magellanic Cloud. We selected 19 candidates in total. After analysis of their light curves from OGLE surveys we found that all these stars appear to be photometrically stable at the level of a few mmag. Our results show that non-pulsating stars might constitute to about 21%-30% of the whole sample of giant stars located in the classical instability strip. Furthermore, we identified potential candidates for classical Cepheids with hot companions based on their Str\"omgren colours.
astrophysics
Responsive Survey Design (RSD) aims to increase the efficiency of survey data collection via live monitoring of paradata and the introduction of protocol changes when survey errors and increased costs seem imminent. Daily predictions of response propensity for all active sampled cases are among the most important quantities for live monitoring of data collection outcomes, making sound predictions of these propensities essential for the success of RSD. Because it relies on real-time updates of prior beliefs about key design quantities, such as predicted response propensities, RSD stands to benefit from Bayesian approaches. However, empirical evidence of the merits of these approaches is lacking in the literature, and the derivation of informative prior distributions is required for these approaches to be effective. In this paper, we evaluate the ability of two approaches to deriving prior distributions for the coefficients defining daily response propensity models to improve predictions of daily response propensity in a real data collection employing RSD. The first approach involves analyses of historical data from the same survey, and the second approach involves literature review. We find that Bayesian methods based on these two approaches result in higher-quality predictions of response propensity than more standard approaches ignoring prior information. This is especially true during the early-to-middle periods of data collection when interventions are often considered in an RSD framework.
statistics
We present a data-driven method for solving the linear quadratic regulator problem for systems with multiplicative disturbances whose distribution is known only through sample estimates. We adopt a distributionally robust approach to cast the controller synthesis problem as a semidefinite program. Using results from high-dimensional statistics, the proposed methodology ensures that its solution provides mean-square stabilizing controllers with high probability even for low sample sizes. As the sample size increases, the closed-loop cost approaches that of the optimal controller obtained when the distribution is known. We demonstrate the practical applicability and performance of the method through a numerical experiment.
electrical engineering and systems science
As part of the CALYPSO large programme, we constrain the properties of protostellar jets and outflows in a sample of 21 Class 0 protostars with internal luminosities, Lint, from 0.035 to 47 Lsun. We analyse high angular resolution (~0.5"-1") IRAM PdBI observations in CO (2-1), SO ($5_6-4_5$), and SiO (5-4). CO (2-1), which probes outflowing gas, is detected in all the sources (for the first time in SerpS-MM22 and SerpS-MM18b). Collimated high-velocity jets in SiO (5-4) are detected in 67% of the sources (for the first time in IRAS4B2, IRAS4B1, L1448-NB, SerpS-MM18a), and 77% of these also show jet/outflow emission in SO ($5_6-4_5$). In 5 sources (24% of the sample) SO ($5_6-4_5$) probes the inner envelope and/or the disk. The CALYPSO survey shows that the outflow phenomenon is ubiquitous and that the detection rate of high-velocity jets increases with protostellar accretion, with at least 80% of the sources with Lint>1 Lsun driving a jet. The protostellar flows exhibit an onion-like structure, in which the SiO jet (opening angle ~10$^o$) is nested within a wider-angle SO (~15$^o$) and CO (~25$^o$) outflow. On scales >300 au the SiO jets are less collimated than atomic jets from Class II sources (~3$^o$). Velocity asymmetries between the two jet lobes are detected in one third of the sources, as in Class II atomic jets, suggesting that the same launching mechanism is at work. Most of the jets are SiO-rich (SiO/H2 from >2.4e-7 to >5e-6), which indicates efficient release of >1%-10% of silicon into the gas phase, likely in dust-free winds launched from inside the dust sublimation radius. The mass-loss rates (from ~7e-8 to ~3e-6 Msun/yr) are larger than those measured for Class II jets. Similarly to Class II sources, the mass-loss rates are ~1%-50% of the mass accretion rates, suggesting that the correlation between ejection and accretion in young stars holds from 1e4 yr up to a few Myr.
astrophysics
We apply torque equilibrium spin wave theory (TESWT) to investigate an anisotropic XXZ antiferromagnetic model with Dzyaloshinskii-Moriya (DM) interaction on a triangular lattice. Taking the quasiparticle vacuum as our reference, we provide an accurate analysis of the non-collinear ground state of a frustrated triangular lattice magnet using the TESWT formalism. We elucidate the effects of quantum fluctuations on the ordering wave vector as a function of the model system parameters. We study the single-magnon dispersion, the two-magnon continuum via the spectral function, and the Raman spectrum of bimagnon and trimagnon excitations. We present the dependence of the bimagnon and trimagnon excitation spectra on the $HH$, $VV$, and $HV$ Raman polarization geometries, where $H$ ($V$) denotes horizontal (vertical) polarization. Our calculations show that both the $HH$ and the $HV$ polarization spectra can be used to determine the degree of anisotropy of our system. We calculate the Raman spectra of Ba$_3$CoSb$_2$O$_9$ and Cs$_2$CuCl$_4$.
condensed matter
Inverse problems exist in many domains such as phase imaging, image processing, and computer vision. These problems are often solved with application-specific algorithms, even though their nature remains the same: mapping input image(s) to output image(s). Deep convolutional neural networks have shown great potential for highly variable tasks across many image-based domains, but are usually difficult to train due to their strong internal non-linearities. We propose a novel neural network architecture featuring fast convergence as a generic solution to image(s)-to-image(s) inverse problems across different domains. Here we show that this approach is effective at predicting phases from direct intensity measurements, imaging objects from diffused reflections, and denoising scanning transmission electron microscopy images, with only different training datasets. This opens a way to solve problems statistically through big data, in contrast to implementing explicit inversion algorithms from their mathematical formulas. Previous works have focused much more on \textit{how} we can reconstruct rather than \textit{what} can be reconstructed. Our strategy offers a paradigm shift.
physics
We investigate the dynamical properties of the two-boson quantum walk in systems with different degrees of coherence, where the effect of coherence on the two-boson quantum walk can be introduced naturally. A general analytical expression of the two-boson correlation function for both pure and mixed states is given. We propose a possible two-photon quantum-walk scheme with a mixed initial state and find that the two-photon correlation function and the average distance between the two photons can be influenced by the initial photon distribution, the relative phase, or the degree of coherence. The propagation features of our numerical results can be explained by our analytical two-photon correlation function.
quantum physics
A comprehensive theoretical analysis of the broadband spectral energy distributions (SEDs) of large-scale jet knots in 3C 273 is presented to reveal their X-ray radiation mechanism. We show that these SEDs cannot be explained with a single electron population model, regardless of whether the Doppler boosting effect is considered. By adding a more energetic electron (the leptonic model) or proton (the hadronic model) population, the SEDs of all knots are well represented. In the leptonic model, the electron population that contributes the X-ray emission is more energetic than the one responsible for the radio-optical emission by almost two orders of magnitude; the derived equipartition magnetic field strengths (B_eq) are ~0.1 mG. In the hadronic model, protons with energies of ~20 PeV are required to interpret the observed X-rays; the B_eq values are several mG, larger than those in the leptonic model. Based on the facts that no resolved substructures are observed in these knots and that the fast cooling time of the high-energy electrons makes it difficult to explain the observed X-ray morphologies, we argue that two distinct electron populations accelerated in these knots are implausible and that their X-ray emission should be attributed to synchrotron radiation from protons accelerated in these knots. If these knots have relativistic motion towards the observer, the super-Eddington issue of the hadronic model could be avoided. Multiwavelength polarimetry and gamma-ray observations with high resolution may help discriminate between these models.
astrophysics
We propose a 4D convolutional neural network (CNN) for the segmentation of retrospective ECG-gated cardiac CT, a series of single-channel volumetric data over time. While only a small subset of volumes in the temporal sequence is annotated, we define a sparse loss function on available labels to allow the network to leverage unlabeled images during training and generate a fully segmented sequence. We investigate the accuracy of the proposed 4D network to predict temporally consistent segmentations and compare with traditional 3D segmentation approaches. We demonstrate the feasibility of the 4D CNN and establish its performance on cardiac 4D CCTA.
electrical engineering and systems science