text | label |
---|---|
Entanglement is a fundamental feature of quantum mechanics, considered a key resource in quantum information processing. Measuring entanglement is an essential step in a wide range of applied and foundational quantum experiments. When a two-particle quantum state is not pure, standard methods to measure the entanglement require detection of both particles. We introduce a method in which detection of only one of the particles is required to characterize the entanglement of a two-particle mixed state. Our method is based on the principle of quantum interference. We use two identical sources of a two-photon mixed state and generate a set of single-photon interference patterns. The entanglement of the two-photon quantum state is characterized by the visibility of the interference patterns. Our experiment thus opens up a distinct avenue for verifying and measuring entanglement, and can allow for mixed state entanglement characterization even when one particle in the pair cannot be detected. | quantum physics |
We present theoretical results of calculations of the optical functions for a Cu$_2$O quantum well (QW) with Rydberg excitons in an external homogeneous electric field of arbitrary strength. Two configurations of the external electric field, perpendicular and parallel to the QW planes, are considered in the energy region of discrete excitonic states and of continuum states. With the help of the real density matrix approach, which enables the derivation of analytical expressions for the QW electro-optical functions, absorption spectra are calculated for the case of excitation energy below the gap energy. | condensed matter |
We consider exact distance oracles for directed weighted planar graphs in the presence of failing vertices. Given a source vertex $u$, a target vertex $v$ and a set $X$ of $k$ failed vertices, such an oracle returns the length of a shortest $u$-to-$v$ path that avoids all vertices in $X$. We propose oracles that can handle any number $k$ of failures. More specifically, for a directed weighted planar graph with $n$ vertices, any constant $k$, and for any $q \in [1,\sqrt n]$, we propose an oracle of size $\tilde{\mathcal{O}}(\frac{n^{k+3/2}}{q^{2k+1}})$ that answers queries in $\tilde{\mathcal{O}}(q)$ time. In particular, we show an $\tilde{\mathcal{O}}(n)$-size, $\tilde{\mathcal{O}}(\sqrt{n})$-query-time oracle for any constant $k$. This matches, up to polylogarithmic factors, the fastest failure-free distance oracles with nearly linear space. For single vertex failures ($k=1$), our $\tilde{\mathcal{O}}(\frac{n^{5/2}}{q^3})$-size, $\tilde{\mathcal{O}}(q)$-query-time oracle improves over the previously best known tradeoff of Baswana et al. [SODA 2012] by polynomial factors for $q = \Omega(n^t)$, $t \in (1/4,1/2]$. For multiple failures, no planarity exploiting results were previously known. | computer science |
Dark matter subhalos, predicted in large numbers in the cold dark matter scenario, should have an impact on particle dark matter searches. Recent results show that tidal disruption of these objects in computer simulations is over-efficient due to numerical artifacts and resolution effects. Accounting for these results, we re-estimate the subhalo abundance in the Milky Way using semi-analytical techniques. In particular, we show that the boost factor for gamma rays and cosmic-ray antiprotons is increased by roughly a factor of two. | astrophysics |
Radio maps are important enablers for many applications in wireless networks, ranging from network planning and optimization to fingerprint-based localization. Sampling the complete map is prohibitively expensive in practice, so methods for reconstructing the complete map from a subset of measurements are increasingly gaining attention in the literature. In this paper, we propose two algorithms for this purpose, which build on existing approaches that aim at minimizing the tensor rank while additionally enforcing smoothness of the radio map. Experimental results with synthetic measurements derived via ray tracing show that our algorithms outperform state-of-the-art techniques. | electrical engineering and systems science |
With growing amounts of available textual data, the development of algorithms capable of automatic analysis, categorization and summarization of these data has become a necessity. In this research we present a novel algorithm for keyword identification, i.e., the extraction of single- or multi-word phrases representing key aspects of a given document, called Transformer-based Neural Tagger for Keyword IDentification (TNT-KID). By adapting the transformer architecture for the specific task at hand and leveraging language model pretraining on a domain-specific corpus, the model is capable of overcoming deficiencies of both supervised and unsupervised state-of-the-art approaches to keyword extraction by offering competitive and robust performance on a variety of different datasets while requiring only a fraction of the manually labeled data required by the best performing systems. This study also offers a thorough error analysis with valuable insights into the inner workings of the model and an ablation study measuring the influence of specific components of the keyword identification workflow on the overall performance. | computer science |
We generalize the Hart-Shelah example \cite{HaSh:323} to higher infinitary logics. We build, for each natural number $k\geq 2$ and for each infinite cardinal $\lambda$, a sentence $\psi_k^\lambda$ of the logic $L_{(2^\lambda)^+,\omega}$ that (modulo mild set theoretical hypotheses around $\lambda$ and assuming $2^\lambda < \lambda^{+m}$) is categorical in $\lambda^+,\dots,\lambda^{+k-1}$ but not in $\beth_{k+1}(\lambda)^+$ (or beyond); we study the dimensional encoding of combinatorics involved in the construction of this sentence and study various model-theoretic properties of the resulting abstract elementary class ${\mathcal K}^*(\lambda,k)=(Mod(\psi_k^\lambda),\prec_{(2^\lambda)^+,\omega})$ in the finite interval of cardinals $\lambda,\lambda^+,\dots,\lambda^{+k}$. | mathematics |
D-instanton world-volume theory has open string zero modes describing collective coordinates of the instanton. The usual perturbative amplitudes in the D-instanton background suffer from infra-red divergences due to the presence of these zero modes, and the usual approach of analytic continuation in momenta does not work since all open string states on a D-instanton carry strictly zero momentum. String field theory is well-suited for tackling these issues. However, we find a new subtlety due to the existence of additional zero modes in the ghost sector. This causes a breakdown of the Siegel gauge, but a different gauge fixing consistent with the BV formalism renders the perturbation theory finite and unambiguous. At each order, this produces an extra contribution to the amplitude besides what is obtained from integration over the moduli space of Riemann surfaces. | high energy physics theory |
Micro-inverter technologies are becoming increasingly popular as a choice of grid connection for small-scale photovoltaic systems. Efficiently harvesting the maximum energy from a photovoltaic system reduces the levelized cost of solar energy, enhancing its role in combating climate change. Various topologies proposed in the research literature are summarised in this paper. Furthermore, this paper investigates two popular Maximum Power Point Tracking (MPPT) methods through simulation in Matlab Simulink. | electrical engineering and systems science |
For $G$ a topological group, existence theorems by Milnor (1956), Gelfand-Fuks (1968), and Segal (1975) of classifying spaces for principal $G$-bundles are generalized to $G$-spaces with torsion. Namely, any $G$-space approximately covered by tubes (a generalization of local trivialization) is the pullback of a universal space indexed by the orbit types of tubes and cardinality of the cover. For $G$ a Lie group, via a metric model we generalize the corresponding uniqueness theorem by Palais (1960) and Bredon (1972) for compact $G$. Namely, the $G$-homeomorphism types of proper $G$-spaces over a metric space correspond to stratified-homotopy classes of orbit classifying maps. The former existence result is enabled by Segal's clever but esoteric use of non-Hausdorff spaces. The latter uniqueness result is enabled by our own development of equivariant ANR theory for noncompact Lie $G$. Applications include the existence part of classification for unstructured fiber bundles with locally compact Hausdorff fiber and with locally connected base or fiber, as well as for equivariant principal bundles which in certain cases via other models is due to Lashof-May (1986) and to L\"uck-Uribe (2014). From a categorical perspective, our general model $E_\mathcal{F}^\kappa G$ is a final object inspired by the formulation of the Baum-Connes conjecture (1994). | mathematics |
This paper aims at studying the Iwasawa $\lambda$-invariant of the $p$-primary Selmer group. We study the growth behaviour of $p$-primary Selmer groups in $p$-power degree extensions over non-cyclotomic $\mathbb{Z}_p$-extensions of a number field, and prove a generalization of Kida's formula in this setting. Unlike in the cyclotomic $\mathbb{Z}_p$-extension, where all primes are finitely decomposed, in the $\mathbb{Z}_p$-extensions we consider primes may be infinitely decomposed. In the second part of the paper, we study the relationship of Iwasawa invariants with respect to congruences, obtaining refinements of the results of R. Greenberg-V. Vatsal and K. Kidwell. As an application, we provide an algorithm for constructing elliptic curves with large anticyclotomic $\lambda$-invariant. Our results are illustrated by explicit computation. | mathematics |
We report results from the X-ray and optical monitoring of the black hole candidate MAXI J1820+070 (= ASASSN-18ey) over the entire period of its outburst from March to October 2018. In this outburst, the source exhibited two sets of `fast rise and slow decay'-type long-term flux variations. We found that the 1--100 keV luminosities at the two peaks were almost the same, although a significant spectral softening was only seen in the second flux rise. This confirms that the state transition from the low/hard state to the high/soft state is not determined by the mass accretion rate alone. The X-ray spectrum was reproduced with the disk blackbody emission and its Comptonization, and the long-term spectral variations seen in this outburst were consistent with a disk truncation model. The Comptonization component, with a photon index of 1.5-1.9 and electron temperature of >~40 keV, was dominant during the low/hard state periods, and its contribution rapidly decreased (increased) during the spectral softening (hardening). During the high/soft state period, in which the X-ray spectrum became dominated by the disk blackbody component, the inner disk radius was almost constant, suggesting that the standard disk was present down to the innermost stable circular orbit. The long-term evolution of the optical and X-ray luminosities and their correlation suggest that jets substantially contributed to the optical emission in the low/hard state, while they were quenched and the outer disk emission dominated the optical flux in the intermediate state and the high/soft state. | astrophysics |
We give a simple proof of the fact that a finite measure $\mu$ on the unit disk is a Carleson measure for the Dirichlet space if it satisfies the Carleson one-box condition $\mu(S(I))=O(\phi(|I|))$, where $\phi:(0,2\pi]\to(0,\infty)$ is an increasing function such that $\int_0^{2\pi}(\phi(x)/x)\,dx<\infty$. We further show that the integral condition on $\phi$ is sharp. | mathematics |
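For instance (our illustration, not from the abstract), the power gauges $\phi(x)=x^\beta$ with $0<\beta\le 1$ are increasing and satisfy the integral hypothesis, so the stated result applies to them directly:

```latex
% Example gauge: \phi(x) = x^{\beta}, 0 < \beta \le 1, increasing on (0, 2\pi].
\int_0^{2\pi} \frac{\phi(x)}{x}\,dx
  = \int_0^{2\pi} x^{\beta-1}\,dx
  = \frac{(2\pi)^{\beta}}{\beta} < \infty
% Hence any finite measure with \mu(S(I)) = O(|I|^{\beta}) is a Carleson
% measure for the Dirichlet space by the stated result.
```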
Proietti et al. (arXiv:1902.05080) reported on an experiment designed to settle, or at least to throw light upon, the paradox of Wigner's friend. Without questioning the rigor or ingenuity of the experimental protocol, I argue that its relevance to the paradox itself is rather limited. | quantum physics |
This paper investigates the impacts of competition in autonomous mobility-on-demand systems. By adopting a network-flow based formulation, we first determine the optimal strategies of profit-maximizing platform operators in monopoly and duopoly markets, including the optimal prices of rides. Furthermore, we characterize the platform operator's profits and the consumer surplus. We show that for the duopoly, the equilibrium prices for rides have to be symmetric between the firms. Then, in order to study the benefits of introducing competition in the market, we derive universal theoretical bounds on the ratio of prices for rides, aggregate demand served, profits of the firms, and consumer surplus between the monopolistic and the duopolistic setting. We discuss how consumers' firm loyalty affects each of the aforementioned metrics. Finally, using the Manhattan network and demand data, we quantify the efficacy of static pricing and routing policies and compare it to real-time model predictive policies. | mathematics |
On the product elliptic threefold $X = C \times S$ where $C$ is an elliptic curve and $S$ is a K3 surface of Picard rank 1, we define a notion of limit tilt stability, which satisfies the Harder-Narasimhan property. We show that under the Fourier-Mukai transform $\Phi$ on $D^b(X)$ induced by the classical Fourier-Mukai transform on $D^b(C)$, a slope stable torsion-free sheaf satisfying a vanishing condition in codimension 2 (e.g. a reflexive sheaf) is taken to a limit tilt stable object. We also show that a limit tilt semistable object on $X$ is taken by $\Phi$ to a slope semistable sheaf, up to modification by the transform of a codimension 2 sheaf. | mathematics |
Physical contact between hands and objects plays a critical role in human grasps. We show that optimizing the pose of a hand to achieve expected contact with an object can improve hand poses inferred via image-based methods. Given a hand mesh and an object mesh, a deep model trained on ground truth contact data infers desirable contact across the surfaces of the meshes. Then, ContactOpt efficiently optimizes the pose of the hand to achieve desirable contact using a differentiable contact model. Notably, our contact model encourages mesh interpenetration to approximate deformable soft tissue in the hand. In our evaluations, our methods result in grasps that better match ground truth contact, have lower kinematic error, and are significantly preferred by human participants. Code and models are available online. | computer science |
We construct new examples of Einstein metrics by perturbing the conformal infinity of geometrically finite hyperbolic metrics and by applying the inverse function theorem in suitable weighted H\"older spaces. | mathematics |
We systematically study the perturbative anomaly inflow by the bulk Chern-Simons (CS) theory in five-dimensional anti-de Sitter spacetime ($\text{AdS}_5$). When the bulk geometry is chosen to be AdS, along with the standard bulk-boundary interplay, an additional, holographic dual description emerges. The introduction of UV and IR 3-branes makes the anomaly story remarkably rich, and many interesting aspects can be obtained. With Neumann boundary conditions (BC) on the IR brane, the dual CFT has an unbroken symmetry, which then can either be weakly gauged by choosing Neumann UV-BC, or kept as a purely global symmetry with Dirichlet UV-BC. This corresponds to the holographic realization of 't Hooft anomaly matching, either for ABJ or 't Hooft anomalies, or both. On the other hand, when the IR-BC breaks the bulk gauge group $G$ down to a subgroup $H_1$, the dual 4D CFT has a spontaneously broken symmetry. In this case, we describe how the (gauged) Wess-Zumino-Witten action emerges naturally from the bulk CS action. In particular, we discuss that, unlike in the case of Neumann IR-BC where 5D gauge invariance is restored by IR brane-localized fermions, with $G/H_1$ IR-BC no localized modes are required. Nevertheless, anomaly matching is fulfilled by delocalized modes, namely Wilson lines along the fifth dimension, and these are Goldstone bosons (GB) in the dual 4D theory. When some part of $G$ is weakly gauged, we show that, thanks to a proper field redefinition of the corresponding source fields, the "would-be" GBs can be completely removed, consistently with our standard expectation. We demonstrate how the most general case, a typical situation occurring in models of dynamical symmetry breaking, may be analyzed with our formalism. Finally, we discuss the quantization condition of the CS level, both with Neumann and Dirichlet BC. | high energy physics theory |
A long-lived multi-mode qubit register is an enabling technology for modular quantum computing architectures. For interfacing with superconducting qubits, such a quantum memory should be able to store incoming quantum microwave fields at the single-photon level for long periods of time, and retrieve them on-demand. Here, we demonstrate the partial absorption of a train of weak microwave fields in an ensemble of bismuth donor spins in silicon, their storage for 100 ms, and their retrieval, using a Hahn-echo-like protocol. The long storage time is obtained by biasing the bismuth donors at a clock transition. Phase coherence and quantum statistics are preserved in the storage. | quantum physics |
In this paper, we show that a multi-mode antenna (MMA) is an interesting alternative to a conventional phased antenna array for direction-of-arrival (DoA) estimation. By MMA we mean a single physical radiator with multiple ports, which excite different characteristic modes. In contrast to phased arrays, a closed-form mathematical model of the antenna response, like a steering vector, is not straightforward to define for MMAs. Instead one has to rely on calibration measurement or electromagnetic field (EMF) simulation data, which is discrete. To perform DoA estimation, array interpolation technique (AIT) and wavefield modeling (WM) are suggested as methods with inherent interpolation capabilities, fully taking antenna nonidealities like mutual coupling into account. We present a non-coherent DoA estimator for low-cost receivers and show how coherent DoA estimation and joint DoA and polarization estimation can be performed with MMAs. Utilizing these methods, we assess the DoA estimation performance of an MMA prototype in simulations for both 2D and 3D cases. The results show that WM outperforms AIT for high SNR. Coherent estimation is superior to non-coherent, especially in 3D, because non-coherent suffers from estimation ambiguities. In conclusion, DoA estimation with a single MMA is feasible and accurate. | electrical engineering and systems science |
For substitutional crystalline solids, typically treated as classical discrete systems under constant composition, the macroscopic structure in the thermodynamic equilibrium state can typically be obtained through the canonical average, where the set of microscopic structures dominantly contributing to the average depends on temperature and on the many-body interaction through the Boltzmann factor, exp(-βE). Despite these facts, our recent study reveals that, based on configurational geometry, a few specially selected microscopic structures (called projection states, PSs), independent of temperature and of the many-body interaction, can reasonably characterize the temperature dependence of the macroscopic structure. Here we further modify the representation of the canonical average by using the same PSs, based on (i) a transformation of the multivariate third-order moment matrix by one of the PSs, and (ii) Pade approximation. We prove that the former always results in a better representation of the canonical average than the non-transformed one, confirmed by performing hypersphere integration, while the latter approximation can provide a better representation except when, e.g., its own singular point lies within the considered temperature range, which can be known a priori. | condensed matter |
Spatial symmetries of quantum systems lead to important effects in spectroscopy, such as selection rules and dark states. Motivated by the increasing strength of light-matter interaction achieved in recent experiments, we investigate a set of dynamically-generalized symmetries for quantum systems which are subject to a strong periodic driving. Based on Floquet response theory, we study rotational, particle-hole, chiral and time-reversal symmetries and their signatures in spectroscopy, including symmetry-protected dark states (spDS), a Floquet band selection rule (FBSR), and symmetry-induced transparency (siT). Specifically, a dynamical rotational symmetry establishes dark state conditions, as well as selection rules for inelastic light scattering processes; a particle-hole symmetry introduces dark states for symmetry-related Floquet states and also a transparency effect at quasienergy crossings; chiral symmetry and time-reversal symmetry alone do not imply dark state conditions, but can be combined with the particle-hole symmetry. Our predictions reveal new physical phenomena when a quantum system reaches the strong light-matter coupling regime, important for superconducting qubits, atoms and molecules in optical or plasmonic field cavities, and optomechanical systems. | quantum physics |
Tensors are becoming prevalent in modern applications such as medical imaging and digital marketing. In this paper, we propose a sparse tensor additive regression (STAR) that models a scalar response as a flexible nonparametric function of tensor covariates. The proposed model effectively exploits the sparse and low-rank structures in the tensor additive regression. We formulate the parameter estimation as a non-convex optimization problem, and propose an efficient penalized alternating minimization algorithm. We establish a non-asymptotic error bound for the estimator obtained from each iteration of the proposed algorithm, which reveals an interplay between the optimization error and the statistical rate of convergence. We demonstrate the efficacy of STAR through extensive comparative simulation studies, and an application to the click-through-rate prediction in online advertising. | statistics |
A system of smooth "frozen" Janus-type disks is studied. Such disks cannot rotate and are divided by their diameter into two sides of different inelasticities. Taking as a reference a system of colored elastic disks, we find differences in the behavior of the collisions once the anisotropy is included. A homogeneous state, akin to the homogeneous cooling state of granular gases, is seen to arise and the singular behavior of both the collisions and the precollisional correlations are highlighted. | condensed matter |
The quantum Hall effect is studied in a spherical geometry using the Dirac operator for non-interacting fermions in a background magnetic field, which is supplied by a Wu-Yang magnetic monopole at the centre of the sphere. Wave functions are cross-sections of a non-trivial $U(1)$ bundle; the zero point energy then vanishes and no perturbations can lower the energy. The Atiyah-Singer index theorem constrains the degeneracy of the ground state. The fractional quantum Hall effect is also studied in the composite fermion model. Vortices of the statistical gauge field are supplied by Dirac strings associated with the monopole field. A unique ground state is attained only if the vortices have an even number of flux units and act to counteract the background field, reducing the effective field seen by the composite fermions. There is a unique gapped ground state and, for large particle numbers, fractions $\nu=\frac{1}{2 k+1}$ are recovered. | high energy physics theory |
Weyl's unitary matrices, which were introduced in Weyl's 1927 paper on group theory and quantum mechanics, are $p\times p$ unitary matrices given by the diagonal matrix whose entries are the $p$-th roots of unity and the cyclic shift matrix. Weyl's unitaries, which we denote by $\mathfrak u$ and $\mathfrak v$, satisfy $\mathfrak u^p=\mathfrak v^p=1_p$ (the $p\times p$ identity matrix) and the commutation relation $\mathfrak u\mathfrak v=\zeta \mathfrak v\mathfrak u$, where $\zeta$ is a primitive $p$-th root of unity. We prove that Weyl's unitary matrices are universal in the following sense: if $u$ and $v$ are any $d\times d$ unitary matrices such that $u^p= v^p=1_d$ and $ u v=\zeta vu$, then there exists a unital completely positive linear map $\phi:\mathcal M_p(\mathbb C)\rightarrow\mathcal M_d(\mathbb C)$ such that $\phi(\mathfrak u)= u$ and $\phi(\mathfrak v)=v$. We also show, moreover, that any two pairs of $p$-th order unitary matrices that satisfy the Weyl commutation relation are completely order equivalent. When $p=2$, the Weyl matrices are two of the three Pauli matrices from quantum mechanics. It was recently shown that $g$-tuples of Pauli-Weyl-Brauer unitaries are universal for all $g$-tuples of anticommuting selfadjoint unitary matrices; however, we show here that the analogous result fails for positive integers $p>2$. Finally, we show that the Weyl matrices are extremal in their matrix range, using recent ideas from noncommutative convexity theory. | mathematics |
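The two matrices are explicit enough to construct directly. Below is a minimal numpy sketch (ours, not from the paper) that builds $\mathfrak u$ and $\mathfrak v$ for a given $p$ and checks the stated relations $\mathfrak u^p=\mathfrak v^p=1_p$ and $\mathfrak u\mathfrak v=\zeta\mathfrak v\mathfrak u$:

```python
import numpy as np

def weyl_unitaries(p: int):
    """Weyl's clock matrix u (diagonal of p-th roots of unity)
    and shift matrix v (cyclic shift), both p x p."""
    zeta = np.exp(2j * np.pi / p)       # primitive p-th root of unity
    u = np.diag(zeta ** np.arange(p))   # clock matrix
    v = np.roll(np.eye(p), 1, axis=0)   # cyclic shift: v e_j = e_{j+1 mod p}
    return u, v, zeta

p = 3
u, v, zeta = weyl_unitaries(p)
I = np.eye(p)
assert np.allclose(np.linalg.matrix_power(u, p), I)  # u^p = 1_p
assert np.allclose(np.linalg.matrix_power(v, p), I)  # v^p = 1_p
assert np.allclose(u @ v, zeta * v @ u)              # u v = zeta v u
```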
We study the entropy production in non-equilibrium quantum systems without dissipation, which is generated exclusively by the spontaneous breaking of time-reversal invariance. Systems which preserve the total energy and particle number and are in contact with two heat reservoirs are analysed. Focussing on point-like interactions, we derive the probability distribution induced by the entropy production operator. We show that all its moments are positive in the zero frequency limit. The analysis covers both Fermi and Bose statistics. | condensed matter |
We apply the Tremaine-Weinberg method to 19 nearby galaxies using stellar mass surface densities and velocities derived from the PHANGS-MUSE survey, to calculate (primarily bar) pattern speeds ($\Omega_{\rm P}$). After quality checks, we find that around half (10) of these stellar mass-based measurements are reliable. For those galaxies, we find good agreement between our results and previously published pattern speeds, and use rotation curves to calculate major resonance locations (co-rotation radii and Lindblad resonances). We also compare these stellar-mass derived pattern speeds with H$\alpha$ (from MUSE) and CO($J=2{-}1$) emission from the PHANGS-ALMA survey. We find that in the case of these clumpy ISM tracers, this method erroneously gives a signal that is simply the angular frequency at a representative radius set by the distribution of these clumps ($\Omega_{\rm clump}$), and that this $\Omega_{\rm clump}$ is significantly different to $\Omega_{\rm P}$ ($\sim$20% in the case of H$\alpha$, and $\sim$50% in the case of CO). Thus, we conclude that it is inadvisable to use "pattern speeds" derived from ISM kinematics. Finally, we compare our derived pattern speeds and co-rotation radii, along with bar properties, to the global parameters of these galaxies. Consistent with previous studies, we find that galaxies with a later Hubble type have a larger ratio of co-rotation radius to bar length, more molecular-gas rich galaxies have higher $\Omega_{\rm P}$, and more bulge-dominated galaxies have lower $\Omega_{\rm P}$. Unlike earlier works, however, there are no clear trends between the bar strength and $\Omega_{\rm P}$, nor between the total stellar mass surface density and the pattern speed. | astrophysics |
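For orientation, the Tremaine-Weinberg measurement reduces to luminosity-weighted integrals along pseudo-slits parallel to the major axis, with $\Omega_{\rm P}\sin i$ given by the slope of $\langle V\rangle$ against $\langle X\rangle$. A schematic numpy sketch of that core step (our illustration, not PHANGS code; assumes each row of the maps is one slit and that x runs along the major axis):

```python
import numpy as np

def tremaine_weinberg(sigma, v_los, x, sin_i):
    """Pattern speed from a surface-density map `sigma` and a
    line-of-sight velocity map `v_los` (same 2D shape); `x` holds
    positions along the major axis, one entry per column."""
    flux = sigma.sum(axis=1)                   # per-slit normalization
    X = (sigma * x).sum(axis=1) / flux         # luminosity-weighted position
    V = (sigma * v_los).sum(axis=1) / flux     # luminosity-weighted velocity
    slope = np.polyfit(X, V, 1)[0]             # <V> = Omega_P sin(i) <X>
    return slope / sin_i
```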
Stimulated Raman adiabatic passage (STIRAP) is a widely-used technique of coherent state-to-state manipulation for many applications in physics, chemistry, and beyond. The adiabatic evolution of the state involved in STIRAP, called adiabatic passage, guarantees its robustness against control errors, but also leads to problems of low efficiency and decoherence. Here we propose and experimentally demonstrate an alternative approach, termed stimulated Raman "user-defined" passage (STIRUP), where a parameterized state is employed for constructing desired evolutions to replace the adiabatic passage in STIRAP. The user-defined passages can be flexibly designed to optimize different objectives for different tasks, e.g., minimizing leakage error. To experimentally benchmark its performance, we apply STIRUP to the task of coherent state transfer in a superconducting Xmon qutrit. We found that STIRUP completed the transfer more than four times faster than STIRAP with enhanced robustness, and achieved a fidelity of 99.5%, which is the highest among all recent experiments based on STIRAP and its variants. In practice, STIRUP differs from STIRAP only in the design of the driving pulses; therefore, most existing applications of STIRAP can be readily implemented with STIRUP. | quantum physics |
We propose a general Bayesian approach to modeling epidemics such as COVID-19. The approach grew out of specific analyses conducted during the pandemic, in particular an analysis concerning the effects of non-pharmaceutical interventions (NPIs) in reducing COVID-19 transmission in 11 European countries. The model parameterizes the time-varying reproduction number $R_t$ through a regression framework in which covariates can, for example, be governmental interventions or changes in mobility patterns. This allows a joint fit across regions and partial pooling to share strength. This innovation was critical to our timely estimates of the impact of lockdown and other NPIs in the European epidemics, whose validity was borne out by the subsequent course of the epidemic. Our framework provides a fully generative model for latent infections and observations deriving from them, including deaths, cases, hospitalizations, ICU admissions and seroprevalence surveys. One issue surrounding our model's use during the COVID-19 pandemic is the confounded nature of NPIs and mobility. We use our framework to explore this issue. We have open-sourced an R package, epidemia, implementing our approach in Stan. Versions of the model are used by New York State, Tennessee and Scotland to estimate the current situation and make policy decisions. | statistics |
Although environmental radioactivity is all around us, the collective public imagination often associates a negative feeling to this natural phenomenon. To increase the familiarity with this phenomenon we have designed, implemented, and tested an interdisciplinary educational activity for pre-collegiate students in which nuclear engineering and computer science are ancillary to the comprehension of basic physics concepts. Teaching and training experiences are performed by using a 4" x 4" NaI(Tl) detector for in-situ and laboratory $\gamma$-ray spectroscopy measurements. Students are asked to directly assemble the experimental setup and to manage the data-taking with a dedicated Android app, which exploits a client-server system that is based on the Bluetooth communication protocol. The acquired $\gamma$-ray spectra and the experimental results are analyzed using a multiple-platform software environment and they are finally shared on an open access Web-GIS service. These all-round activities combining theoretical background, hands-on setup operations, data analysis, and critical synthesis of the results were demonstrated to be effective in increasing students' awareness in quantitatively investigating environmental radioactivity. Supporting information to the basic physics concepts provided in this article can be found at http://www.fe.infn.it/radioactivity/educational. | physics |
Some popular functions used to test global optimization algorithms have multiple local optima, all with the same value. That is all local optima are also global optima. This paper suggests that such functions are easily fortified by adding a localized bump at the location of one of the optima, making the functions more difficult to optimize due to the multiple competing local optima. This process is illustrated here for the Branin-Hoo function, which has three global optima. We use the popular Python SciPy differential evolution (DE) optimizer for the illustration. DE also allows the use of the gradient-based BFGS local optimizer for final convergence. By making a large number of replicate runs we establish the probability of reaching a global optimum with the original and fortified Branin-Hoo. With the original function we find 100% probability of success with a moderate number of function evaluations. With the fortified version, the probability of getting trapped in a non-global optimum could be made small only with a much larger number of function evaluations. However, since the probability of ending up at the global optimum is usually 1/3 or more, it may be beneficial to perform multiple inexpensive optimizations rather than one expensive optimization. Then the probability of one of them hitting the global optimum can be made high. We found that for the most challenging global optimum, multiple runs reduced substantially the extra cost for the fortified function compared to the original Branin-Hoo. | mathematics |
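To make the procedure concrete, here is a small sketch (our reconstruction, not the paper's code) of the fortification idea: a narrow Gaussian bump is subtracted from the standard Branin-Hoo function at one of its three optima so that only that one remains global, and SciPy's differential evolution (with its default local polishing step) is run repeatedly to estimate the success probability:

```python
import numpy as np
from scipy.optimize import differential_evolution

def branin(z):
    x, y = z
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5 / np.pi
    r, s, t = 6.0, 10.0, 1 / (8 * np.pi)
    return a * (y - b * x**2 + c * x - r)**2 + s * (1 - t) * np.cos(x) + s

def fortified_branin(z, center=(-np.pi, 12.275), depth=1.0, width=0.5):
    """Branin-Hoo with a localized Gaussian bump deepening one optimum,
    so the other two optima become competing local traps."""
    d2 = (z[0] - center[0])**2 + (z[1] - center[1])**2
    return branin(z) - depth * np.exp(-d2 / (2 * width**2))

bounds = [(-5, 10), (0, 15)]
hits, runs = 0, 20
for seed in range(runs):
    res = differential_evolution(fortified_branin, bounds, seed=seed,
                                 polish=True)  # polish runs a local optimizer
    # success = the fortified (now unique global) optimum was found
    hits += np.hypot(res.x[0] + np.pi, res.x[1] - 12.275) < 0.1
print(f"global-optimum hit rate: {hits / runs:.2f}")
```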
We consider the Big Bang Nucleosynthesis (BBN) bounds on light dark matter whose cross section off nucleons is sufficiently large to enable acceleration by scattering off of cosmic rays in the local galaxy. Such accelerated DM could then deposit energy in terrestrial detectors. Since this signal involves DM of mass ~ keV - 100 MeV and requires large cross sections > 10^-31 cm^2 in a relativistic kinematic regime, we find that the DM population in this scenario is generically equilibrated with Standard Model particles in the early universe. For sufficiently low DM masses < 10 MeV, corresponding to the bulk of the favored region of many cosmic-ray upscattering studies, this equilibrated DM population adds an additional component to the relativistic energy density around T ~ few MeV and thereby spoils the successful predictions of BBN. In the remaining ~ 10-100 MeV mass range, the large couplings required in this scenario are either currently excluded or within reach of current or future accelerator-based searches. | high energy physics phenomenology |
We prove a class of modified paraboloid restriction estimates with a loss of angular derivatives for the full set of paraboloid restriction conjecture indices. This result generalizes the paraboloid restriction estimate in radial case from [Shao, Rev. Mat. Iberoam. 25(2009), 1127-1168], as well as the result from [Miao et al. Proc. AMS 140(2012), 2091-2102]. As an application, we show a local smoothing estimate for a solution of the linear Schr\"odinger equation under the assumption that the initial datum has additional angular regularity. | mathematics |
Observational evidence and theoretical arguments postulate that outflows may play a significant role in the advection-dominated accretion discs (ADAFs). While the azimuthal viscosity is the main focus of most previous studies in this context, recent studies indicated that disc structure can also be affected by the radial viscosity. In this work, we incorporate these physical ingredients and the toroidal component of the magnetic field to explore their roles in the steady-state structure of ADAFs. We thereby present a set of similarity solutions where outflows contribute to the mass loss, angular momentum removal, and the energy extraction. Our solutions indicate that the radial viscosity causes the disc to rotate with a slower rate, whereas the radial gas velocity increases. For strong winds, the infall velocity may be of order the Keplerian speed if the radial viscosity is considered and the saturated conduction parameter is high enough. We show that the strength of magnetic field and of wind can affect the effectiveness of radial viscosity. | astrophysics |
The electrical characteristics and microstructures of $\beta$-Ga$_2$O$_3$ Schottky barrier diode (SBD) devices irradiated with swift heavy ions (2096 MeV Ta ions) have been studied. It was found that the $\beta$-Ga$_2$O$_3$ SBD devices showed reliability degradation after irradiation, including in the turn-on voltage $V_{\rm on}$, on-resistance $R_{\rm on}$, ideality factor $n$ and reverse leakage current density $J_{\rm r}$. In addition, the carrier concentration of the drift layer was decreased significantly and the calculated carrier removal rates were $5\times10^{6}$-$1.3\times10^{7}$ cm$^{-1}$. Latent tracks induced by the swift heavy ions were observed visually in the whole $\beta$-Ga$_2$O$_3$ matrix. Furthermore, the crystal structure of the tracks was completely amorphized. The latent tracks induced by the Ta ion bombardment were found to be the reason for the decrease in carrier mobility and carrier concentration. Eventually, these defects caused the degradation of the electrical characteristics of the devices. By comparing carrier removal rates, the $\beta$-Ga$_2$O$_3$ SBD devices were found to be more sensitive to swift heavy ion irradiation than SiC and GaN devices. | condensed matter |
We consider a cognitive radio-based Internet-of-Things (CR-IoT) network consisting of one primary IoT (PIoT) system and one secondary IoT (SIoT) system. The IoT devices of both the PIoT and the SIoT respectively monitor one physical process and send randomly generated status updates to their associated access points (APs). The timeliness of the status updates is important as the systems are interested in the latest condition (e.g., temperature, speed and position) of the IoT device. In this context, two natural questions arise: (1) How to characterize the timeliness of the status updates in CR-IoT systems? (2) Which scheme, overlay or underlay, is better in terms of the timeliness of the status updates. To answer these two questions, we adopt a new performance metric, named the age of information (AoI). We analyze the average peak AoI of the PIoT and the SIoT for overlay and underlay schemes, respectively. Simple asymptotic expressions of the average peak AoI are also derived when the PIoT operates at high signal-to-noise ratio (SNR). Based on the asymptotic expressions, we characterize a critical generation rate of the PIoT system, which can determine the superiority of overlay and underlay schemes in terms of the average peak AoI of the SIoT. Numerical results validate the theoretical analysis and uncover that the overlay and underlay schemes can outperform each other in terms of the average peak AoI of the SIoT for different system setups. | electrical engineering and systems science |
In this paper we study compacta Y that are resolvable by a free p-adic action on a compactum of a lower dimension and focus on compacta Y whose cohomological dimension with respect to the group Z[1/p] is 1. | mathematics |
Organic ferroelectric materials are in demand in the growing field of environmentally friendly, lightweight electronics. Donor-acceptor charge transfer crystals have recently been proposed as a new class of organic ferroelectrics, which may possess a new kind of ferroelectricity, the so-called electronic ferroelectricity, which may be larger and show faster polarity switching than conventional (inorganic or organic) ferroelectrics. The current research aimed at achieving electronic ferroelectricity under ambient conditions in organic charge transfer crystals is shortly reviewed, in such a way as to evidence the emerging criteria that have to be fulfilled to reach this challenging goal. | condensed matter |
Recent worldwide events shed light on the need for human-centered systems engineering in the healthcare domain. These systems must be prepared to evolve quickly but safely, according to unpredicted environments and ever-changing pathogens that spread ruthlessly. Such scenarios suffocate hospitals' infrastructure and disable healthcare systems that are not prepared to deal with unpredicted environments without costly re-engineering. In the face of these challenges, we offer the SA-BSN -- Self-Adaptive Body Sensor Network -- prototype to explore the rather dynamic monitoring of a patient's health status. The exemplar is focused on self-adaptation and comes with scenarios that stress the interplay between system reliability and battery consumption, which is reported after each execution. Also, we provide: (i) a noise injection mechanism, (ii) file-based configuration of patient profiles, (iii) six healthcare sensor simulations, and (iv) an extensible/reusable controller implementation for self-adaptation. The artifact is implemented in ROS (Robot Operating System), which embraces principles such as ease of use and relies on active support from an open source community. | computer science |
In recent years, cosmic shear has emerged as a powerful tool to study the statistical distribution of matter in our Universe. Apart from the standard two-point correlation functions, several alternative methods like peak count statistics offer competitive results. Here we show that persistent homology, a tool from topological data analysis, can extract more cosmological information than previous methods from the same dataset. For this, we use persistent Betti numbers to efficiently summarise the full topological structure of weak lensing aperture mass maps. This method can be seen as an extension of the peak count statistics, in which we additionally capture information about the environment surrounding the maxima. We first demonstrate the performance in a mock analysis of the KiDS+VIKING-450 data: we extract the Betti functions from a suite of $w$CDM $N$-body simulations and use these to train a Gaussian process emulator that provides rapid model predictions; we next run a Markov-Chain Monte Carlo analysis on independent mock data to infer the cosmological parameters and their uncertainty. When comparing our results, we recover the input cosmology and achieve a constraining power on $S_8 \equiv \sigma_8\sqrt{\Omega_\mathrm{m}/0.3}$ that is 5% tighter than that of peak count statistics. Performing the same analysis on 100 deg$^2$ of Euclid-like simulations, we are able to improve the constraints on $S_8$ and $\Omega_\mathrm{m}$ by 18% and 10%, respectively, while breaking some of the degeneracy between $S_8$ and the dark energy equation of state. To our knowledge, the methods presented here are the most powerful topological tools to constrain cosmological parameters with lensing data. | astrophysics |
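As a toy illustration of the summary statistic (ours, not the authors' pipeline), the Betti-0 curve of a 2D map can be approximated by counting connected components of superlevel sets over a sweep of thresholds; true persistent homology additionally tracks when components merge, but this simplified version already extends peak counts with environment information:

```python
import numpy as np
from scipy import ndimage

def betti0_curve(field: np.ndarray, thresholds: np.ndarray):
    """Number of connected components of {field >= t} for each threshold t:
    a (non-persistent) approximation of the Betti-0 function of a
    weak-lensing aperture mass map."""
    return np.array([ndimage.label(field >= t)[1] for t in thresholds])

# Demo on a smoothed Gaussian random field standing in for a mass map.
rng = np.random.default_rng(0)
kappa = ndimage.gaussian_filter(rng.normal(size=(256, 256)), sigma=4)
ts = np.linspace(kappa.min(), kappa.max(), 50)
b0 = betti0_curve(kappa, ts)   # feature vector to feed an emulator/MCMC
```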
Cardiovascular diseases are the leading cause of death worldwide, accounting for 17.3 million deaths per year. The electrocardiogram (ECG) is a non-invasive technique widely used for the detection of cardiac diseases. To increase diagnostic sensitivity, the ECG is acquired during exercise stress tests or in an ambulatory way. Under these acquisition conditions, the ECG is strongly affected by some types of noise, mainly by baseline wander (BLW). In this work, nine methods widely used for the elimination of BLW were implemented: interpolation using cubic splines, FIR filtering, IIR filtering, least mean squares adaptive filtering, moving-average filtering, independent component analysis, interpolation and successive subtraction of median values in the RR interval, empirical mode decomposition, and wavelet filtering. For the quantitative evaluation, the following similarity metrics were used: absolute maximum distance, sum of squares of distances, and percentage root-mean-square difference. Several experiments were performed using synthetic ECG signals generated by the ECGSYN software, real ECG signals from the QT Database, artificial BLW generated by software, and real BLW from the Noise Stress Test Database. The best results were obtained by the method based on an FIR high-pass filter with a cut-off frequency of 0.67 Hz. | electrical engineering and systems science |
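A minimal sketch of the best-performing approach as described (our own illustration, not the paper's code; the 250 Hz sampling rate matches the QT Database but is an assumption here): a linear-phase FIR high-pass filter with a 0.67 Hz cut-off, applied forwards and backwards to avoid phase distortion of the ECG waves:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 250.0      # Hz, assumed sampling rate (QT Database records use 250 Hz)
cutoff = 0.67   # Hz, high-pass cut-off for baseline wander removal
numtaps = 1001  # odd tap count gives a valid type-I high-pass FIR

# Linear-phase FIR high-pass; filtfilt yields zero net phase shift.
taps = firwin(numtaps, cutoff, pass_zero='highpass', fs=fs)

def remove_baseline_wander(ecg: np.ndarray) -> np.ndarray:
    """Return the ECG with baseline wander suppressed."""
    return filtfilt(taps, [1.0], ecg)
```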
In this work, we prove a novel one-shot multi-sender decoupling theorem generalising Dupuis' result. We start off with a multipartite quantum state, say on A1 A2 R, where A1, A2 are treated as the two sender systems and R is the reference system. We apply independent Haar random unitaries in tensor product on A1 and A2 and then send the resulting systems through a quantum channel. We want the channel output B to be almost in tensor product with the untouched reference R. Our main result shows that this is indeed the case if suitable entropic conditions are met. An immediate application of our main result is to obtain a one-shot simultaneous decoder for sending quantum information over a k-sender entanglement-unassisted quantum multiple access channel (QMAC). The rate region achieved by this decoder is the natural one-shot quantum analogue of the pentagonal classical rate region. Assuming a simultaneous smoothing conjecture, this one-shot rate region approaches the optimal rate region of Yard et al. in the asymptotic iid limit. Our work is the first one to obtain a non-trivial simultaneous decoder for the QMAC with limited entanglement assistance in both one-shot and asymptotic iid settings; previous works used unlimited entanglement assistance. | quantum physics |
Cosmic rays, along with stellar radiation and magnetic fields, are known to make up a significant fraction of the energy density of galaxies such as the Milky Way. When cosmic rays interact in the interstellar medium, they produce gamma-ray emission which provides an important indication of how the cosmic rays propagate. Gamma rays from the Andromeda Galaxy (M31), located 785 kpc away, provide a unique opportunity to study cosmic-ray acceleration and diffusion in a galaxy with a structure and evolution very similar to the Milky Way. Using 33 months of data from the High Altitude Water Cherenkov Observatory, we search for TeV gamma rays from the galactic plane of M31. We also investigate past and present evidence of galactic activity in M31 by searching for Fermi Bubble-like structures above and below the galactic nucleus. No significant gamma-ray emission is observed, so we use the null result to compute upper limits on the energy density of cosmic rays $>10$ TeV in M31. The computed upper limits are approximately ten times higher than expected from the extrapolation of the Fermi LAT results. | astrophysics |
Single top quark production through weak interactions is an important source of charged Higgs bosons in the Minimal Supersymmetric Standard Model. In s-channel single top production, the charged Higgs, which has the largest cross-section, may appear as a propagator in the form of a heavy resonance state decaying to a pair of top and bottom quarks. In this paper the channel under consideration is $pp \rightarrow H^{\pm}\rightarrow tb \rightarrow b\bar{b}W^{\pm} \rightarrow b\bar{b}\tau^{\pm} \nu_{\tau}$, where the top quark decays exclusively into a pair of b quark and W boson while the W boson subsequently decays to a $\tau$ jet and a neutrino. The final state is thus characterized by the presence of two b jets, a hadronic $\tau$ decay and missing transverse energy. It is demonstrated that observability of the charged Higgs signal is possible within the available MSSM parameter space (tan$\beta$, $m_{H^{\pm}}$) respecting all experimental and theoretical constraints, despite the presence of QCD multijet and electroweak background events at the LHC. The charged Higgs signal can be observed or excluded in a wide range of the phase space (in particular tan$\beta > 35$ at 500 fb$^{-1}$, tan$\beta > 25$ at 1000 fb$^{-1}$, tan$\beta > 15$ at 3000 fb$^{-1}$) at $\sqrt{s} = 14$ TeV. | high energy physics phenomenology |
The relationship between a screening test's positive predictive value, $\rho$, and its target prevalence, $\phi$, is proportional - though not linear in all but a special case. In consequence, there is a point of local extremum of curvature, defined only as a function of the sensitivity $a$ and specificity $b$, beyond which the rate of change of a test's $\rho$ drops precipitously relative to $\phi$. Herein, we present the mathematical model exploring this phenomenon and define the $prevalence$ $threshold$ ($\phi_e$) point where this change occurs as: $\phi_e=\frac{\sqrt{a(1-b)}+b-1}{\varepsilon-1}$, where $\varepsilon = a+b$. From the prevalence threshold we deduce a more generalized relationship between prevalence and positive predictive value as a function of $\varepsilon$, which represents a fundamental theorem of screening, herein defined as: $\lim_{\varepsilon \to 2} \int_{0}^{1} \rho(\phi)\,d\phi = 1$. Understanding the concepts described in this work can help contextualize the validity of screening tests in real time, and help guide the interpretation of different clinical scenarios in which screening is undertaken. | statistics |
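A quick numeric illustration (ours): writing the positive predictive value via Bayes' rule as $\rho(\phi) = a\phi / (a\phi + (1-b)(1-\phi))$, the threshold formula above can be evaluated directly for any sensitivity/specificity pair:

```python
import numpy as np

def ppv(phi, a, b):
    """Positive predictive value via Bayes' rule:
    rho = a*phi / (a*phi + (1-b)*(1-phi))."""
    return a * phi / (a * phi + (1 - b) * (1 - phi))

def prevalence_threshold(a, b):
    """phi_e = (sqrt(a*(1-b)) + b - 1) / (eps - 1), eps = a + b."""
    eps = a + b
    return (np.sqrt(a * (1 - b)) + b - 1) / (eps - 1)

a, b = 0.90, 0.80                    # sensitivity, specificity
phi_e = prevalence_threshold(a, b)   # ~0.32 for these values
print(phi_e, ppv(phi_e, a, b))       # below phi_e, rho falls off steeply
```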
A novel inverse relaxation technique for supercapacitor characterization is developed, modeled numerically, and experimentally tested on a number of commercial supercapacitors. It consists in shorting a supercapacitor for a short time $\tau$, then switching to the open circuit regime and measuring an initial rebound and long-time relaxation. The results obtained are the ratio of "easy" and "hard" to access capacitance, and the dependence $C(\tau)$, which determines the capacitance with which the system responds at time-scale $\tau$; this can be viewed as an alternative to the approach, used by some manufacturers, of characterizing a supercapacitor by a fixed capacitance and a time-scale dependent internal resistance. Among the advantages of the proposed technique is that it does not require a source of fixed current, which simplifies the setup and allows a high discharge current regime. The approach can be used as a replacement for low-frequency impedance measurements and measurements of the IEC 62391 type; it can be effectively applied to the characterization of supercapacitors and other relaxation-type systems with porous internal structure. The technique can be completely automated by a microcontroller to measure, analyze, and output the results. | physics |
Some publications indicate that poly(methyl methacrylate) (PMMA) and polytetrafluoroethylene (PTFE) exhibit low levels of photoluminescence (fluorescence and/or phosphorescence) when irradiated with photons in the ultraviolet (UV) to visible range. PMMA (also known as acrylic) and PTFE are commonly used to contain the liquid argon (LAr) or xenon (LXe) target material in rare-event search experiments. LAr and LXe scintillate in the vacuum UV region, and the PMMA and PTFE can be directly illuminated by these photons. Photoluminescence from support materials could cause unexpected signals in these detectors. We investigate photoluminescence in the 400 nm to 550 nm region in response to excitation with UV light between 130 nm and 250 nm at levels relevant to rare-event search experiments. Measurements are done at room temperature and the signal intensity is time-integrated over several minutes. We tested PMMA and PTFE samples from the batches used in the DEAP-3600 and LUX experiments and observed no photoluminescence signal. We put limits on the efficiency of the plastics to shift UV photons to the wavelength region of 400 nm to 550 nm at 0.05% to 0.35% relative to the wavelength-shifting efficiency of tetraphenyl-butadiene. | astrophysics |
We consider polynomials of degree $d$ with only real roots and a fixed value of discriminant, and study the problem of minimizing the absolute value of polynomials at a fixed point off the real line. There are two explicit families of polynomials that turn out to be extremal in terms of this problem. The first family has a particularly simple expression as a linear combination of $d$-th powers of two linear functions. Moreover, if the value of the discriminant is not too small, then the roots of the extremal polynomial and the smallest absolute value in question can be found explicitly. The second family is related to generalized Jacobi (or Gegenbauer) polynomials, which helps us to find the associated discriminants. We also investigate the dual problem of maximizing the value of discriminant, while keeping the absolute value of polynomials at a point away from the real line fixed. Our results are then applied to problems on the largest disks contained in lemniscates, and to the minimum energy problems for discrete charges on the real line. | mathematics |
We consider scalar-tensor gravity with nonminimal derivative coupling and a Born-Infeld electromagnetic field minimally coupled to gravity. Since the cosmological constant is taken into account, we are able not only to derive a static black hole with a spherical horizon but also to obtain topological solutions with non-spherical horizons. The obtained metrics are thoroughly analyzed, namely for different distances and types of horizon topology. To investigate the singularities of the metrics the Kretschmann scalar is used, and it is shown that the character of the singularity depends on the type of horizon topology and the dimension of space. We also investigate the black hole's thermodynamics; namely, we obtain and examine the black hole's temperature. To derive the first law of black hole thermodynamics, Wald's approach is applied. Although this approach is well established, there is an ambiguity in the definition of the black hole's entropy, which can be resolved only by means of some independent approach. | high energy physics theory |
The method for testing equal predictive accuracy for pairs of forecasting models proposed by Giacomini and White (2006) has found widespread use in empirical work. The procedure assumes that the parameters of the underlying forecasting models are estimated using a rolling window of fixed width and incorporates the effect of parameter estimation in the null hypothesis that two forecasts have identical conditionally expected loss. We show that this null hypothesis cannot be valid under a rolling window estimation scheme and even fails in the absence of parameter estimation for many types of stochastic processes in common use. This means that the approach does not guarantee appropriate comparisons of predictive accuracy of forecasting models. We also show that the Giacomini-White approach can lead to substantial size distortions in tests of equal unconditional predictive accuracy and propose an alternative procedure with better properties. | statistics |
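For orientation, the Giacomini-White procedure being discussed tests $E[h_t \Delta L_{t+1}] = 0$ via a Wald statistic on instrumented loss differentials; a bare-bones sketch of the one-step-ahead case (our illustration, with instruments $h_t = (1, d_t)$) looks like this:

```python
import numpy as np
from scipy import stats

def gw_test(loss1, loss2):
    """One-step-ahead Giacomini-White-style test of equal conditional
    predictive ability, using instruments h_t = (1, d_t)."""
    d = np.asarray(loss1) - np.asarray(loss2)           # loss differential
    h = np.column_stack([np.ones(len(d) - 1), d[:-1]])  # instruments
    z = h * d[1:, None]                                 # moment contributions
    n, q = z.shape
    zbar = z.mean(axis=0)
    # Sample covariance suffices at horizon 1 (z_t is an m.d.s. under H0).
    omega = np.cov(z, rowvar=False, bias=True)
    stat = n * zbar @ np.linalg.solve(omega, zbar)
    return stat, stats.chi2.sf(stat, df=q)              # statistic, p-value
```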
Air pollution has altered the Earth radiation balance, disturbed the ecosystem, and increased human morbidity and mortality. Accordingly, a full-coverage high-resolution air pollutant dataset with timely updates and historical long-term records is essential to support both research and environmental management. Here, for the first time, we develop a near real-time air pollutant database known as Tracking Air Pollution in China (TAP, tapdata.org) that combines information from multiple data sources, including ground measurements, satellite retrievals, dynamically updated emission inventories, operational chemical transport model simulations and other ancillary data. Daily full-coverage PM2.5 data at a spatial resolution of 10 km is our first near real-time product. The TAP PM2.5 is estimated based on a two-stage machine learning model coupled with the synthetic minority oversampling technique and a tree-based gap-filling method. Our model has an averaged out-of-bag cross-validation R2 of 0.83 for different years, which is comparable to those of other studies, but improves performance at high pollution levels and fills the gaps in missing AOD on a daily scale. The full coverage and near real-time updates of the daily PM2.5 data allow us to track the day-to-day variations in PM2.5 concentrations over China in a timely manner. The long-term records of PM2.5 data since 2000 will also support policy assessments and health impact studies. The TAP PM2.5 data are publicly available through our website for sharing with the research and policy communities. | physics |
We propose an expression for a local planetesimal formation rate proportional to the instantaneous radial pebble flux. The result --- a radial planetesimal distribution --- can be used as an initial condition to study the formation of planetary embryos. We follow the idea that one needs particle traps to locally enhance the dust-to-gas ratio sufficiently such that particle-gas interactions can no longer prevent planetesimal formation on small scales. The locations of these traps can emerge everywhere in the disk. Their occurrence and lifetime are the subject of ongoing research, thus they are implemented via free parameters. This enables us to study the influence of the disk properties on the formation of planetesimals, predicting their time-dependent formation rates and the locations of primary pebble accretion. We show that large $\alpha$-values of $0.01$ (strong turbulence) prevent the formation of planetesimals in the inner part of the disk, arguing for lower values of around $0.001$ (moderate turbulence), at which planetesimals form quickly at all places where they are needed for proto-planets. Planetesimals form as soon as dust has grown to pebbles ($\sim\mathrm{mm}$ to $\mathrm{dm}$) and the pebble flux reaches a critical value, which is after a few thousand years at $2-3\,$AU and after a few hundred thousand years at $20-30\,$AU. Planetesimal formation lasts until the pebble supply has decreased below a critical value. The final spatial planetesimal distribution is steeper than the initial dust and gas distribution, which helps to explain the discrepancy between the minimum mass solar nebula and viscous accretion disks. | astrophysics |
We report on a possible cloud-cloud collision in the DR 21 region, which we found through molecular observations with the Nobeyama 45-m telescope. We mapped an area of 8'x12' around the region with twenty molecular lines including the 12CO(J=1-0) and 13CO(J=1-0) emission lines, and sixteen of them were significantly detected. Based on the 12CO and 13CO data, we found five distinct velocity components in the observed region, and we refer to the molecular gas associated with these components as the -42, -22, -3, 9, and 17 km/s clouds, after their typical radial velocities. The -3 km/s cloud is the main filamentary cloud (31,000 Mo) associated with young massive stars such as DR21 and DR21(OH), and the 9 km/s cloud is a smaller cloud (3,400 Mo) which may be an extension of the W75 region in the north. The other clouds are much smaller. We found a clear anticorrelation in the distributions of the -3 and 9 km/s clouds, and detected faint 12CO emission having intermediate velocities bridging the two clouds at their intersection. These facts strongly indicate that the two clouds are colliding with each other. In addition, we found that DR21 and DR21(OH) are located in the periphery of the densest part of the 9 km/s cloud, which is consistent with the results of recent numerical simulations of cloud-cloud collisions. We therefore suggest that the -3 and 9 km/s clouds are colliding, and that the collision induced the massive star formation in the DR21 cloud. The interaction of the -3 and 9 km/s clouds was previously suggested by Dickel et al. (1978), and our results strongly support their interaction hypothesis. | astrophysics |
We provide an alternative proof of the expression for the Bellman function of the dyadic maximal operator in connection with the Dyadic Carleson Imbedding Theorem, which appears in [10]. We also state and prove a sharp integral inequality for this operator in connection with the above Bellman function, and give an application. | mathematics |
We propose a novel supersymmetry-inspired scheme for achieving robust single mode lasing in arrays of coupled microcavities, based on factorizing a given array Hamiltonian into its "supercharge" partner array. Pumping a single sublattice of the partner array preferentially induces lasing of an unpaired zero mode. A chiral symmetry protects the zero mode similar to 1D topological arrays, but it need not be localized to domain walls or edges. We demonstrate single mode lasing over a wider parameter regime by designing the zero mode to have a uniform intensity profile. | physics |
Although the segmentation of brain structures in ultrasound helps initialize image-based registration, assists brain shift compensation, and provides interventional decision support, the task of segmenting grey and white matter in cranial ultrasound is very challenging and has not been addressed yet. We train a multi-scale fully convolutional neural network simultaneously for two classes in order to segment real clinical 3D ultrasound data. Parallel pathways working at different levels of resolution account for high-frequency speckle noise and global 3D image features. To ensure reproducibility, the publicly available RESECT dataset is utilized for training and cross-validation. Due to the absence of a ground truth, we train with weakly annotated labels. We implement label transfer from MRI to US, which is prone to a residual but inevitable registration error. To further improve results, we perform transfer learning using synthetic US data. The resulting method leads to excellent Dice scores of 0.7080, 0.8402 and 0.9315 for grey matter, white matter and background. Our proposed methodology sets an unparalleled standard for white and grey matter segmentation in 3D intracranial ultrasound. | computer science |
Topological heterogeneities of social networks have a strong impact on the individuals embedded in those networks. One of the interesting phenomena driven by such heterogeneities is the friendship paradox (FP), stating that the mean degree of one's neighbors is larger than the degree of oneself. Alternatively, one can use the median degree of neighbors as well as the fraction of neighbors having a higher degree than oneself. Each of these reflects how people perceive their neighborhoods, i.e., their perception models, hence how they feel peer pressure. In our paper, we study the impact of perception models on the FP by comparing three versions of the perception model in networks generated with a given degree distribution and a tunable degree-degree correlation or assortativity. Increasing assortativity is expected to decrease network-level peer pressure, yet we find nontrivial behavior only for the mean-based perception model. By simulating opinion formation, in which the opinion adoption probability of an individual is given as a function of individual peer pressure, we find that it takes the longest time to reach consensus when individuals adopt the median-based perception model, compared to the other versions. Our findings suggest that one needs to consider the proper perception model for better modeling of human behaviors and social dynamics. | physics |
In this work we study the scattering and transfer matrices for electric fields defined with respect to an angular spectrum of plane waves. For these matrices, we derive the constraints that are enforced by conservation of energy, reciprocity and time reversal symmetry. Notably, we examine the general case of vector fields in three dimensions and allow for evanescent field components. Moreover, we consider fields described by both continuous and discrete angular spectra, the latter being more relevant to practical applications, such as optical scattering experiments. We compare our results to better-known constraints, such as the unitarity of the scattering matrix for far-field modes, and show that previous results follow from our framework as special cases. Finally, we demonstrate our results numerically with a simple example of wave propagation at a planar glass-air interface, including the effects of total internal reflection. Our formalism makes minimal assumptions about the nature of the scattering medium and is thus applicable to a wide range of scattering problems. | physics |
We consider a spin-$1$ resonance produced with an arbitrary spectrum of velocities and decaying into a pair of massless leptons, and we study the probability density function of the energy of the leptons in the laboratory frame. A special case is represented by the production of $W$ bosons in proton-proton collisions, for which the energy of the charged lepton from the decaying $W$ can be measured with sufficient accuracy for a high-precision measurement of $M_W$. We find that half of the resonance mass is a special value of the lepton energy, since the probability density function at this point is in general not analytic for a narrow-width resonance. In particular, the higher-order derivatives of the density function are likely to develop singularities, such as cusps or poles. A finite width of the resonance restores the regularity, for example by smearing cusps and poles into local stationary points. The quest for such points offers a handle to estimate the resonance mass with much reduced dependence on the underlying production and decay dynamics of the resonance. | high energy physics phenomenology |
Many technological applications depend on the response of materials to electric fields, but available databases of such responses are limited. Here, we explore the infrared, piezoelectric and dielectric properties of inorganic materials by combining high-throughput density functional perturbation theory and machine learning approaches. We compute {\Gamma}-point phonons, infrared intensities, Born-effective charges, piezoelectric, and dielectric tensors for 5015 non-metallic materials in the JARVIS-DFT database. We find 3230 and 1943 materials with at least one far and mid-infrared mode, respectively. We identify 577 high-piezoelectric materials, using a threshold of 0.5 C/m2. Using a threshold of 20, we find 593 potential high-dielectric materials. Importantly, we analyze the chemistry, symmetry, dimensionality, and geometry of the materials to find features that help explain variations in our datasets. Finally, we develop high-accuracy regression models for the highest infrared frequency and maximum Born-effective charges, and classification models for maximum piezoelectric and average dielectric tensors to accelerate discovery. | condensed matter |
Training in supervised deep learning is computationally demanding, and the convergence behavior is usually not fully understood. We introduce and study a second-order stochastic quasi-Gauss-Newton (SQGN) optimization method that combines ideas from stochastic quasi-Newton methods, Gauss-Newton methods, and variance reduction to address this problem. SQGN provides excellent accuracy without the need for experimenting with many hyper-parameter configurations, which is often computationally prohibitive given the number of combinations and the cost of each training process. We discuss the implementation of SQGN with TensorFlow, and we compare its convergence and computational performance to selected first-order methods using the MNIST benchmark and a large-scale seismic tomography application from Earth science. | computer science |
Coronal Mass Ejections (CMEs) are one of the primary drivers of extreme space weather. They are large eruptions of mass and magnetic field from the solar corona and can travel the distance between Sun and Earth in half a day to a few days. Predictions of CMEs at 1 Astronomical Unit (AU), in terms of both arrival time and magnetic field configuration, are very important for predicting space weather. Magnetohydrodynamic (MHD) modeling of CMEs using flux-rope-based models is a promising tool for achieving this goal. In this study, we present one such model for CME simulations, based on the spheromak magnetic field configuration. We have modified the spheromak solution to allow for independent input of poloidal and toroidal fluxes. The motivation for this is the possibility of estimating these fluxes from solar magnetograms and extreme ultraviolet (EUV) data through a number of different approaches. We estimate the poloidal flux of the CME using post-eruption arcades (PEAs) and the toroidal flux from the coronal dimming. In this modified spheromak, we also have an option to control the helicity sign of flux ropes, which can be derived from solar disk magnetograms using the magnetic tongue approach. We demonstrate the applicability of this model by simulating the 12 July 2012 CME in the solar corona. | astrophysics |
We introduce the notion of genus-one data for theories in (1+1)-dimensions with an anomalous finite group global symmetry. We outline the groups for which genus-one data is effective in detecting the anomaly, and also show that genus-one data is insufficient to detect the anomaly for dicyclic groups. | high energy physics theory |
The view exists that Bell tests are only about local incompatibility of quantum observables and that quantum non-locality is an unnecessary concept in physics. In this note, we emphasize that it is not incompatibility at the local level that is important for the violation of the Bell-CHSH inequality, but incompatibility at the non-local level of the joint measurements. Hence, non-locality remains a necessary concept to properly interpret the outcomes of certain joint quantum measurements. | quantum physics |
Despite the virtues of the Jones and Mueller formalisms for the representation of polarimetric properties, for some purposes in both Optics and SAR Polarimetry, the concept of the coherency vector associated with a nondepolarizing medium has proven to be a useful mathematical structure that inherits certain symmetries underlying the nature of linear polarimetric transformations of the states of polarization of light caused by its interaction with material media. While the Jones and Mueller matrices of a serial combination of devices are given by the respective conventional matrix products, the composition of coherency vectors of such serial combinations requires a specific and unconventional mathematical rule. In this work, a vector product of coherency vectors is presented that satisfies, in a meaningful and consistent manner, the indicated requirements. As a result, a new algebraic formalism is built in which the representation of polarization states of electromagnetic waves through Stokes vectors is preserved, while nondepolarizing media are represented by coherency vectors and general media are represented by coherency matrices generated by partially coherent compositions of the coherency vectors of the components. | physics |
HD 81817 is known as a hybrid star. Hybrid stars have both cool stellar wind properties and ultraviolet (UV) or even X-ray emission features of highly ionized atoms in their spectra. A white dwarf companion has been suggested as the source of the UV or X-ray features. HD 81817 has been observed since 2004 as part of a radial velocity (RV) survey program to search for exoplanets around K giant stars using the Bohyunsan Observatory Echelle Spectrograph at the 1.8 m telescope of Bohyunsan Optical Astronomy Observatory in Korea. We obtained 85 RV measurements between 2004 and 2019 for HD 81817 and found two periodic RV variations. The amplitudes of the RV variations are around 200 m s^-1, which are significantly lower than those expected from a closely orbiting white dwarf companion. Photometric data and relevant spectral lines were also analyzed to help determine the origin of the periodic RV variations. We conclude that the 627.4-day RV variations are caused by intrinsic stellar activities such as long-term pulsations or rotational modulations of surface activities, based on H{\alpha} equivalent width (EW) variations of a similar period. On the other hand, the 1047.1-day periodic RV variations are likely to be caused by a brown dwarf or substellar companion, which is corroborated by a recent GAIA proper motion anomaly for HD 81817. The Keplerian fit yields a minimum mass of 27.1 M_Jup, a semimajor axis of 3.3 AU, and an eccentricity of 0.17 for the stellar mass of 4.3 M_sun for HD 81817. The inferred mass puts HD 81817 b in the brown dwarf desert. | astrophysics |
In this work, we first consider distributed convex constrained optimization problems where the objective function is encoded by multiple local and possibly nonsmooth objectives privately held by a group of agents, and propose a distributed subgradient method with double averaging (abbreviated as ${\rm DSA_2}$) that only requires peer-to-peer communication and local computation to solve the global problem. The algorithmic framework builds on dual methods and dynamic average consensus; the sequence of test points is formed by iteratively minimizing a local dual model of the overall objective where the coefficients, i.e., approximated subgradients of the objective, are supplied by the dynamic average consensus scheme. We theoretically show that ${\rm DSA_2}$ enjoys non-ergodic convergence properties, i.e., the local minimizing sequence itself is convergent, a distinct feature that cannot be found in existing results. Specifically, we establish a convergence rate of $O(\frac{1}{\sqrt{t}})$ in terms of objective function error. Then, extensions are made to tackle distributed optimization problems with coupled functional constraints by combining ${\rm DSA_2}$ and dual decomposition. This is made possible by Lagrangian relaxation that transforms the coupling in constraints of the primal problem into that in cost functions of the dual, thus allowing us to solve the dual problem via ${\rm DSA_2}$. Both the dual objective error and the quadratic penalty for the coupled constraint are proved to converge at a rate of $O(\frac{1}{\sqrt{t}})$, and the primal objective error asymptotically vanishes. Numerical experiments and comparisons are conducted to illustrate the advantage of the proposed algorithms and validate our theoretical findings. | mathematics |
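As a rough illustration of the idea behind ${\rm DSA_2}$, the sketch below runs classic distributed dual averaging, the scheme the paper builds on, for a toy nonsmooth consensus problem. The ring topology, mixing matrix $P$, prox function $\psi(x)=x^2/2$ and step size $\alpha_t = 1/\sqrt{t}$ are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Toy distributed dual averaging (sketch, not the paper's exact DSA_2):
# agents on a ring minimize f(x) = sum_i |x - b_i| (optimum: median of b).
# Each agent mixes dual variables with a doubly stochastic matrix P, adds its
# local subgradient, and maps back to the primal with step size 1/sqrt(t).

n, T = 8, 5000
rng = np.random.default_rng(0)
b = rng.normal(size=n)                      # private data of each agent

P = np.zeros((n, n))                        # doubly stochastic mixing on a ring
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

z = np.zeros(n)                             # dual (accumulated subgradient) variables
x = np.zeros(n)                             # primal iterates
for t in range(1, T + 1):
    g = np.sign(x - b)                      # subgradient of |x_i - b_i| at x_i
    z = P @ z + g                           # consensus on duals + local subgradient
    x = -z / np.sqrt(t)                     # prox step: argmin <z,x> + sqrt(t)*x^2/2

print("agents' iterates:", np.round(x, 3))
print("median(b):       ", round(float(np.median(b)), 3))
```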
Galactic-scale structure is of particular interest since it provides important clues to dark matter properties and its observation is improving. Weakly interacting massive particles (WIMPs) behave as cold dark matter on galactic scales, while beyond-WIMP candidates suppress galactic-scale structure formation. Suppression in the linear matter power spectrum has conventionally been characterized by a single parameter, the thermal warm dark matter mass. On the other hand, the shape of the suppression depends on the underlying mechanism. It is necessary to introduce multiple parameters to cover a wide range of beyond-WIMP models. Once multiple parameters are introduced, it becomes harder to share results from one side to the other. In this work, we propose adopting a neural network technique to facilitate the communication between the two sides. To demonstrate how this works out in a concrete manner, we consider a simplified model of light feebly interacting massive particles. | astrophysics |
For $k \in \mathbb{N}$, write $S(k)$ for the largest natural number such that there is a $k$-colouring of $\{1,\dots,S(k)\}$ with no monochromatic solution to $x-y=z^2$. That $S(k)$ exists is a result of Bergelson, and a simple example shows that $S(k) \geq 2^{2^{k-1}}$. The purpose of this note is to show that $S(k)\leq 2^{2^{2^{O(k)}}}$. | mathematics |
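As a quick illustration of the definition above, the sketch below brute-forces small cases by searching for $k$-colorings of $\{1,\dots,n\}$ with no monochromatic solution to $x-y=z^2$. One loud assumption: we require $x$, $y$, $z$ to be distinct, the convention under which the simple lower-bound example $S(1)\geq 2$ holds; the search cap is purely for runtime.

```python
# Brute-force exploration of S(k): the largest n such that {1..n} admits a
# k-coloring with no monochromatic solution to x - y = z^2. We assume x, y, z
# must be distinct (the convention that makes the simple bound S(1) >= 2 work).

def has_mono_solution(cls):
    s = set(cls)
    for z in cls:
        for y in cls:
            x = y + z * z
            if x in s and len({x, y, z}) == 3:
                return True
    return False

def colorable(n, k):
    """DFS over colorings: can {1..n} be k-colored avoiding such solutions?"""
    classes = [[] for _ in range(k)]
    def place(m):
        if m > n:
            return True
        for c in classes:
            c.append(m)
            if not has_mono_solution(c) and place(m + 1):
                return True
            c.pop()
        return False
    return place(1)

CAP = 20                                    # runtime cap only
for k in (1, 2):
    n = 0
    while n < CAP and colorable(n + 1, k):
        n += 1
    print(f"k={k}: {{1..{n}}} is {k}-colorable with no monochromatic x-y=z^2"
          f" (lower bound 2^(2^(k-1)) = {2 ** (2 ** (k - 1))}, cap {CAP})")
```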
Parametric models for galaxy star-formation histories (SFHs) are widely used, though they are known to impose strong priors on physical parameters. This has consequences for measurements of the galaxy stellar-mass function (GSMF), star-formation-rate density (SFRD) and star-forming main sequence (SFMS). We investigate the effects of the exponentially declining, delayed exponentially declining, lognormal and double power law SFH models using BAGPIPES. We demonstrate that each of these models imposes strong priors on specific star-formation rates (sSFRs), potentially biasing the SFMS, and also imposes a strong prior preference for young stellar populations. We show that stellar mass, SFR and mass-weighted age inferences from high-quality mock photometry vary with the choice of SFH model by at least 0.1, 0.3 and 0.2 dex respectively. However, the biases with respect to the true values depend more on the true SFH shape than on the choice of model. We also demonstrate that photometric data cannot discriminate between SFH models, meaning it is important to perform independent tests to find well-motivated priors. We finally fit a low-redshift, volume-complete sample of galaxies from the Galaxy and Mass Assembly (GAMA) Survey with each model. We demonstrate that our stellar masses and SFRs at redshift $z\sim0.05$ are consistent with other analyses. However, our inferred cosmic SFRDs peak at $z\sim0.4$, approximately 6 Gyr later than direct observations suggest, meaning our mass-weighted ages are significantly underestimated. This makes the use of parametric SFH models for understanding mass assembly in galaxies challenging. In a companion paper we consider non-parametric SFH models. | astrophysics |
We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks. Our regularizers are based on a sparsification of the graph Laplacian which holds with high probability when the data is sparse in high dimensions, as is common in deep learning. Empirically, our networks exhibit stability in a diverse set of perturbation models, including $\ell_2$, $\ell_\infty$, and Wasserstein-based perturbations; in particular, we achieve 40% adversarial accuracy on CIFAR-10 against an adaptive PGD attack using $\ell_\infty$ perturbations of size $\epsilon = 8/255$, and state-of-the-art verified accuracy of 21% in the same perturbation model. Furthermore, our techniques are efficient, incurring overhead on par with two additional parallel forward passes through the network. | statistics |
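For intuition, the sketch below computes the dense graph-Laplacian smoothness penalty that such manifold regularizers approximate, $R(f) = \mathrm{tr}(F^\top L F) = \tfrac12\sum_{ij} W_{ij}\|f_i-f_j\|^2$, on a random batch; the Gaussian affinities and bandwidth are illustrative, and the paper's sparsification of $L$ is not reproduced here.

```python
import numpy as np

# Dense graph-Laplacian regularizer on a batch (sketch): penalize differences
# between network outputs f_i, f_j for nearby inputs x_i, x_j.

def laplacian_regularizer(X, F, sigma=1.0):
    """X: (n,d) inputs, F: (n,c) network outputs for those inputs."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise ||x_i - x_j||^2
    W = np.exp(-sq / (2 * sigma ** 2))                    # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                             # graph Laplacian L = D - W
    return np.trace(F.T @ L @ F)                          # = 0.5 * sum_ij W_ij ||f_i - f_j||^2

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 10))
F = rng.normal(size=(32, 3))
print("R(f) =", laplacian_regularizer(X, F))
```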
We determine the quantum Cram\'er-Rao bound for the precision with which the oscillator frequency and damping constant of a damped quantum harmonic oscillator in an arbitrary Gaussian state can be estimated. This goes beyond standard quantum parameter estimation of a single mode Gaussian state for which typically a mode of fixed frequency is assumed. We present a scheme through which the frequency estimation can nevertheless be based on the known results for single-mode quantum parameter estimation with Gaussian states. Based on these results, we investigate the optimal measurement time. For measuring the oscillator frequency, our results unify previously known partial results and constitute an explicit solution for a general single-mode Gaussian state. Furthermore, we show that with existing carbon nanotube resonators (see J. Chaste et al., Nature Nanotechnology 7, 301 (2012)) it should be possible to achieve a mass sensitivity of the order of an electron mass $\text{Hz}^{-1/2}$. | quantum physics |
Semi-supervised machine learning models learn from a (small) set of labeled training examples, and a (large) set of unlabeled training examples. State-of-the-art models can reach within a few percentage points of fully-supervised training, while requiring 100x less labeled data. We study a new class of vulnerabilities: poisoning attacks that modify the unlabeled dataset. In order to be useful, unlabeled datasets are given strictly less review than labeled datasets, and adversaries can therefore poison them easily. By inserting maliciously-crafted unlabeled examples totaling just 0.1% of the dataset size, we can manipulate a model trained on this poisoned dataset to misclassify arbitrary examples at test time (as any desired label). Our attacks are highly effective across datasets and semi-supervised learning methods. We find that more accurate methods (thus more likely to be used) are significantly more vulnerable to poisoning attacks, and as such better training methods are unlikely to prevent this attack. To counter this we explore the space of defenses, and propose two methods that mitigate our attack. | computer science |
We derive the masses acquired at one loop by massless scalars in the Neumann-Dirichlet sector of open strings, when supersymmetry is spontaneously broken. This is done by computing two-point functions of "boundary-changing vertex operators" inserted on the boundaries of the annulus and M\"obius strip. This requires the evaluation of correlators of "excited boundary-changing fields," which are analogous to excited twist fields for closed strings. We work in the type IIB orientifold theory compactified on $T^2\times T^4/\mathbb{Z}_2$, where $\mathcal{N}=2$ supersymmetry is broken to $\mathcal{N}=0$ by the Scherk-Schwarz mechanism implemented along $T^2$. Even though the full expression of the squared masses is complicated, it reduces to a very simple form when the lowest scale of the background is the supersymmetry breaking scale $M_{3/2}$. We apply our results to analyze in this regime the stability at the quantum level of the moduli fields arising in the Neumann-Dirichlet sector. This completes the study of Ref. [32], where the quantum masses of all other types of moduli arising in the open- or closed-string sectors are derived. Ultimately, we identify all brane configurations that produce backgrounds without tachyons at one loop and yield an effective potential exponentially suppressed, or strictly positive with runaway behavior of $M_{3/2}$. | high energy physics theory |
Elliptic nozzle geometry is attractive for mixing enhancement of supersonic jets. However, jet dynamics such as flapping give rise to high-intensity tonal sound. We experimentally manipulate the supersonic elliptic jet morphology by using two sharp-tipped lobes. The lobes are placed on either end of the minor axis in an elliptic nozzle. The design Mach number and the aspect ratio of the elliptic nozzle and the lobed nozzle are 2.0 and 1.65. The supersonic jet is exhausted into the ambient at almost perfectly expanded conditions. Time-resolved schlieren imaging, longitudinal and cross-sectional planar laser Mie-scattering imaging, planar Particle Image Velocimetry (PIV), and near-field microphone measurements are performed to assess the fluidic behavior of the two nozzles. Dynamic Mode and Proper Orthogonal Decomposition (DMD and POD) analyses are carried out on the schlieren and the Mie-scattering images. Mixing characteristics are extracted from the Mie-scattering images through image processing routines. The flapping elliptic jet exhibits two dominant DMD modes, while the lobed nozzle has only one dominant mode, and the flapping is suppressed. Microphone measurements show the associated noise reduction. The jet column bifurcates in the lobed nozzle, enabling a larger surface contact area with the ambient fluid and higher mixing rates in the near-field of the nozzle exit. The jet width growth rate of the two-lobed nozzle is about twice that of the elliptic jet in the near-field, and there is a 40\% reduction in the potential core length. PIV contours substantiate the results. | physics |
We discuss consistent truncations of eleven-dimensional supergravity on a six-dimensional manifold $M$, preserving minimal $\mathcal{N}=2$ supersymmetry in five dimensions. These are based on $G_S \subseteq USp(6)$ structures for the generalised $E_{6(6)}$ tangent bundle on $M$, such that the intrinsic torsion is a constant $G_S$ singlet. We spell out the algorithm defining the full bosonic truncation ansatz and then apply this formalism to consistent truncations that contain warped AdS$_5 \times_{\rm w}M$ solutions arising from M5-branes wrapped on a Riemann surface. The generalised $U(1)$ structure associated with the $\mathcal{N}=2$ solution of Maldacena-Nu\~nez leads to five-dimensional supergravity with four vector multiplets, one hypermultiplet and $SO(3)\times U(1)\times \mathbb{R}$ gauge group. The generalised structure associated with "BBBW" solutions yields two vector multiplets, one hypermultiplet and an abelian gauging. We argue that these are the most general consistent truncations on such backgrounds. | high energy physics theory |
In this paper we present BioFaceNet, a deep CNN that learns to decompose a single face image into biophysical parameter maps, diffuse and specular shading maps, as well as estimating the spectral power distribution of the scene illuminant and the spectral sensitivity of the camera. The network comprises a fully convolutional encoder for estimating the spatial maps with a fully connected branch for estimating the vector quantities. The network is trained using a self-supervised appearance loss computed via a model-based decoder. The task is highly underconstrained, so we impose a number of model-based priors: skin spectral reflectance is restricted to a biophysical model; we impose a statistical prior on camera spectral sensitivities, a physical constraint on illumination spectra, a sparsity prior on specular reflections and direct supervision on diffuse shading using a rough shape proxy. We show convincing qualitative results on in-the-wild data and introduce a benchmark for quantitative evaluation on this new task. | computer science |
We introduce a new generalization of the Pauli channels using the mutually unbiased measurement operators. The resulting channels are bistochastic but their eigenvectors are not unitary. We analyze the channel properties, such as complete positivity, entanglement breaking, and multiplicativity of maximal output purity. We illustrate our results with the maps constructed from the Gell-Mann matrices and the Heisenberg-Weyl observables. | quantum physics |
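One of the properties analyzed above, complete positivity, can be checked numerically for any given map via its Choi matrix. The sketch below applies this generic test to an ordinary qubit Pauli channel as a stand-in; it is not the paper's MUM-based construction, and the probability vector is arbitrary.

```python
import numpy as np

# Complete-positivity check via the Choi matrix (sketch). A linear map Phi on
# d x d matrices is completely positive iff its Choi matrix
# J(Phi) = sum_{ij} |i><j| (x) Phi(|i><j|) is positive semidefinite.

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def pauli_channel(rho, p=(0.7, 0.1, 0.1, 0.1)):
    """Example map: a qubit Pauli channel with mixing probabilities p."""
    return sum(pi * S @ rho @ S.conj().T for pi, S in zip(p, (I, X, Y, Z)))

def choi(channel, d):
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            J += np.kron(E, channel(E))
    return J

eigs = np.linalg.eigvalsh(choi(pauli_channel, 2))
print("Choi eigenvalues:", np.round(eigs, 6))
print("completely positive:", bool(np.all(eigs > -1e-12)))
```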
In surface mount technology (SMT), components mounted on soldered pads are subject to movement during the reflow process. This capability is known as self-alignment and is the result of the fluid dynamic behaviour of molten solder paste. This capability is critical in SMT because inaccurate self-alignment causes defects such as overhanging, tombstoning, etc., while on the other hand, it can enable components to be perfectly self-assembled on or near the desired position. The aim of this study is to develop a machine learning model that predicts the movement of components during reflow in the x and y-directions as well as in rotation. Our study is composed of two steps: (1) experimental data are studied to reveal the relationships between self-alignment and various factors including component geometry, pad geometry, etc.; (2) advanced machine learning prediction models are applied to predict the distance and the direction of component shift using support vector regression (SVR), neural networks (NN), and random forest regression (RFR). As a result, RFR can predict component shift with average fitness of 99%, 99%, and 96% and with average prediction errors of 13.47 (um), 12.02 (um), and 1.52 (deg.) for component shift in the x, y, and rotational directions, respectively. This enhancement provides the future capability of parameter optimization in the pick-and-place machine to control the best placement location and minimize the intrinsic defects caused by the self-alignment. | electrical engineering and systems science |
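A minimal sketch of the regression setup follows, using scikit-learn's random forest with out-of-bag scoring as in the study. The feature names and the synthetic data-generating process are hypothetical stand-ins for the paper's experimental measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the paper's predictors (component geometry, pad
# geometry, initial placement offset, ...) and the x-direction shift target.

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.5, 3.0, n),    # component length (mm)  -- hypothetical
    rng.uniform(0.25, 1.5, n),   # pad width (mm)         -- hypothetical
    rng.uniform(-50, 50, n),     # initial x offset (um)  -- hypothetical
])
y = 0.4 * X[:, 2] + 30 * (X[:, 1] / X[:, 0]) + rng.normal(0, 5, n)  # x-shift (um)

model = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
model.fit(X, y)
print("OOB R^2:", round(model.oob_score_, 3))
print("CV R^2 :", round(float(cross_val_score(model, X, y, cv=5).mean()), 3))
```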
Crossing symmetry asserts that particles are indistinguishable from anti-particles traveling back in time. In quantum field theory, this statement translates to the long-standing conjecture that probabilities for observing the two scenarios in a scattering experiment are described by one and the same function. Why could we expect it to be true? In this work we examine this question in a simplified setup and take steps towards illuminating a possible physical interpretation of crossing symmetry. To be more concrete, we consider planar scattering amplitudes involving any number of particles with arbitrary spins and masses to all loop orders in perturbation theory. We show that by deformations of the external momenta one can smoothly interpolate between pairs of crossing channels without encountering singularities or violating mass-shell conditions and momentum conservation. The analytic continuation can be realized using two types of moves. The first one makes use of an $i\varepsilon$ prescription for avoiding singularities near the physical kinematics and allows us to adjust the momenta of the external particles relative to one another within their lightcones. The second, more violent, step involves a rotation of subsets of particle momenta via their complexified lightcones from the future to the past and vice versa. We show that any singularity along such a deformation would have to correspond to two beams of particles scattering off each other. For planar Feynman diagrams, these kinds of singularities are absent because of the particular flow of energies through their propagators. We prescribe a five-step sequence of such moves that combined together proves crossing symmetry for planar scattering amplitudes in perturbation theory, paving a way towards settling this question for more general scattering processes in quantum field theories. | high energy physics theory |
In the area of military simulations, a multitude of different approaches is available. Close Combat Tactical Trainer, Joint Tactical Combat Training System, Battle Force Tactical Training or Warfighter's Simulation 2000 are just some examples within the history of the large DoD Development Program in Modelling and Simulation, representing just a small piece of the variety of diverse solutions. Individual simulators are often unique, so it is difficult to classify military simulations even for experienced users. This circumstance is further aggravated by the fact that in the field of military simulations - unlike in other areas - no general classification for military simulations exists. To address this shortcoming, this publication is dedicated to the idea of providing a first contribution to the development of a commonly accepted taxonomy in the area of military simulations. To this end, the problem field is structured into three main categories (general functional requirements for simulators, special military requirements for simulators and non-functional requirements for simulators). Based upon that, individual categories are provided with appropriate classes. For a better understanding, the taxonomy is also applied to a concrete example (NetLogo Rebellion). | computer science |
What do Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been raised and debated, and several ethical principles and guidelines have been suggested as governance policies for the tech industry. However, would this be the role of AI Ethics? To serve as a soft and ambiguous version of the law? I would like to promote in this article a more Cyberpunk way of doing AI Ethics, with a more anarchic way of governance. In this study, I will seek to expose some of the deficits of the underlying power structures of our society and suggest that AI governance be subject to public opinion, so that good AI can become good AI for all. | computer science |
The nanoparticle packed bed (NPB) is a promising class of thermal insulation materials that has drawn wide attention because of its quite low thermal conductivity (k). In this paper, based on the concept that the NPB morphology enables low k, we propose a general method, a hybrid strategy, to further reduce k and enhance mechanical strength simultaneously. The lowest effective thermal conductivity (k_e) of the hybrid NPB can be as low as 0.018 Wm-1K-1, which is much lower than the k of free air and of most common thermal insulation materials, due to the quite low solid-phase thermal conductivity (k_s), the negligible thermal conductivity of the confined air (k_a) and the small radiative thermal conductivity (k_r). Neglecting k_a, the minimum k_e occurs at the porosity where the dominant role changes from k_s to k_r. In addition, excellent mechanical strength, nearly 40-50 % of that of bulk silica, was also achieved in the silica hybrid NPB. This study is expected to supply useful information for thermal insulation material design. | physics |
The arms race between attacks and defenses for machine learning models has come to the forefront in recent years, in both the security community and the privacy community. However, one big limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards resolving this limitation by combining the two domains. In particular, we measure the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples (i.e., evasion attacks). Membership inference attacks determine whether or not an individual data record has been part of a model's training set. The accuracy of such attacks reflects the information leakage of training algorithms about individual members of the training set. Adversarial defense methods against adversarial examples influence the model's decision boundaries such that model predictions remain unchanged for a small area around each input. However, this objective is optimized on training data. Thus, individual data records in the training set have a significant influence on robust models. This makes the models more vulnerable to inference attacks. To perform the membership inference attacks, we leverage the existing inference methods that exploit model predictions. We also propose two new inference methods that exploit structural properties of robust models on adversarially perturbed data. Our experimental evaluation demonstrates that compared with the natural training (undefended) approach, adversarial defense methods can indeed increase the target model's risk against membership inference attacks. | statistics |
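For readers unfamiliar with the attack family, the sketch below shows the standard confidence-thresholding membership inference baseline (one of the existing prediction-based methods such work builds on, not this paper's new structural attacks). The synthetic confidence distributions simply mimic an overfit model that is more confident on training points.

```python
import numpy as np

# Confidence-based membership inference (sketch): predict "member" when the
# model's confidence on the true label exceeds a threshold.

def confidence_attack(member_conf, nonmember_conf, tau=None):
    scores = np.concatenate([member_conf, nonmember_conf])
    labels = np.concatenate([np.ones_like(member_conf), np.zeros_like(nonmember_conf)])
    taus = np.unique(scores) if tau is None else [tau]
    best = max(taus, key=lambda t: ((scores >= t) == labels).mean())
    acc = ((scores >= best) == labels).mean()
    return best, acc

rng = np.random.default_rng(0)
members = np.clip(rng.normal(0.95, 0.05, 1000), 0, 1)     # high train confidence
nonmembers = np.clip(rng.normal(0.80, 0.15, 1000), 0, 1)  # lower test confidence
tau, acc = confidence_attack(members, nonmembers)
print(f"threshold={tau:.3f}, attack accuracy={acc:.3f} (0.5 = no leakage)")
```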
The directionality of optical signals provides an opportunity for efficient space reuse of optical links in visible light communication (VLC). Space reuse in VLC can enable multiple access communication from multiple light emitting transmitters. However, traditional VLC system design using photo-receptors requires at least one receiving photodetector element for each light emitter, thus constraining VLC to always require a light emitter-to-light receiving element pair. In this paper, we propose, design and evaluate a novel architecture for VLC that can enable multiple-access reception using a photoreceptor receiver that uses only a single photodiode. The novel design includes a liquid-crystal-display (LCD) based shutter system that can be automated to control and enable selective reception of light beams from multiple transmitters. We evaluate the feasibility of multiple access on a single photodiode from two light emitting diode (LED) transmitters and the performance of the communication link using bit-error-rate (BER) and packet-error-rate (PER) metrics. Our experiment and trace based evaluation reveals the feasibility of multiple LED reception on a single photodiode and estimated throughput of the order of Mbps. | electrical engineering and systems science |
Gradient-based adversarial attacks on neural networks can be crafted in a variety of ways by varying either how the attack algorithm relies on the gradient, the network architecture used for crafting the attack, or both. Most recent work has focused on defending classifiers in a case where there is no uncertainty about the attacker's behavior (i.e., the attacker is expected to generate a specific attack using a specific network architecture). However, if the attacker is not guaranteed to behave in a certain way, the literature lacks methods for devising a strategic defense. We fill this gap by simulating the attacker's noisy perturbation using a variety of attack algorithms based on gradients of various classifiers. We perform our analysis using a pre-processing Denoising Autoencoder (DAE) defense that is trained with the simulated noise. We demonstrate significant improvements in post-attack accuracy, using our proposed ensemble-trained defense, compared to a situation where no effort is made to handle uncertainty. | computer science |
KamLAND-Zen, a liquid-scintillator-based experiment, has set a lower limit on the neutrinoless double beta decay half-life, and the upgrade project KamLAND-Zen 800 started in 2019. Unfortunately, this project expects some backgrounds, and one of the main backgrounds is beta/gamma rays from 214Bi in the container of the xenon-loaded liquid scintillator (mini-balloon). In order to reject this background, we suggest using scintillation film for the future mini-balloon. If we can tag alpha rays from 214Po by scintillation detection, we can eliminate 214Bi events by delayed coincidence analysis. Recently, it was reported that polyethylene naphthalate (PEN) can be used as a scintillator with blue photon emission. PEN is chemically compatible with strong solvents, thus it can potentially be used in liquid scintillator. In this presentation, we will report the results of feasibility studies on transparency and emission spectra, light yield, radioactivity, strength of the film, etc. We also show a test-sized scintillation balloon with an 800-mm diameter and discuss how to use the scintillation balloon in KamLAND. | physics |
Plant reflectance spectra - the profile of light reflected by leaves across different wavelengths - supply the spectral signature for a species at a spatial location to enable estimation of functional and taxonomic diversity for plants. We consider leaf spectra as "responses" to be explained spatially. These spectra/reflectances are functions over a wavelength band that respond to the environment. Our motivating data are gathered for several families from the Cape Floristic Region (CFR) in South Africa and lead us to develop rich novel spatial models that can explain spectra for genera within families. Wavelength responses for an individual leaf are viewed as a function of wavelength, leading to functional data modeling. Local environmental features become covariates. We introduce wavelength-covariate interaction, since the response to environmental regressors may vary with wavelength; so may the variance. Formal spatial modeling enables prediction of reflectances for genera at unobserved locations with known environmental features. We incorporate spatial dependence, wavelength dependence, and space-wavelength interaction (in the spirit of space-time interaction). We implement out-of-sample validation to select a best model, discovering that the model features listed above are all informative for the functional data analysis. We then supply interpretation of the results under the selected model. | statistics |
We consider the effects of weak gravitational lensing on observations of 196 spectroscopically confirmed Type Ia Supernovae (SNe Ia) from years 1 to 3 of the Dark Energy Survey (DES). We simultaneously measure both the angular correlation function and the non-Gaussian skewness caused by weak lensing. This approach has the advantage of being insensitive to the intrinsic dispersion of SNe Ia magnitudes. We model the amplitude of both effects as a function of $\sigma_8$, and find $\sigma_8 = 1.2^{+0.9}_{-0.8}$. We also apply our method to a subsample of 488 SNe from the Joint Light-curve Analysis (JLA) (chosen to match the redshift range we use for this work), and find $\sigma_8 = 0.8^{+1.1}_{-0.7}$. The comparable uncertainty in $\sigma_8$ between DES-SN and the larger number of SNe from JLA highlights the benefits of homogeneity of the DES-SN sample, and improvements in the calibration and data analysis. | astrophysics |
With the ease of deployment, capabilities of evading the jammers and obscuring their existence, unmanned aerial vehicles (UAVs) are one of the most suitable candidates to perform surveillance. There exists a body of literature in which the inspectors follow a deterministic trajectory to conduct surveillance, which results in a predictable environment for malicious entities. Thus, introducing randomness to the surveillance is of particular interest. In this work, we propose a novel framework for stochastic UAV-assisted surveillance that i) inherently considers the battery constraints of the UAVs, ii) proposes random moving patterns modeled via random walks, and iii) adds another degree of randomness to the system via considering probabilistic inspections. We formulate the problem of interest, i.e., obtaining the energy-efficient random walk and inspection policies of the UAVs subject to probabilistic constraints on the inspection criteria of the sites and the battery consumption of the UAVs, which turns out to be a signomial program that is highly non-convex. To solve it, we propose a centralized and a distributed algorithm along with their performance guarantees. This work contributes to both the UAV-assisted surveillance and the classic random walk literature by designing random walks with random inspection policies on weighted graphs with energy-limited random walkers. | electrical engineering and systems science |
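To make the moving pattern concrete, here is a minimal sketch of one energy-limited walker doing a weight-proportional random walk on a small site graph, inspecting each visited site with probability q. The edge weights, inspection probability and energy costs are illustrative, not the paper's optimized policies.

```python
import numpy as np

# One UAV random-walking on a weighted site graph with probabilistic inspections
# and a battery budget (sketch; parameters are illustrative assumptions).

rng = np.random.default_rng(3)
W = np.array([[0, 2, 1, 0],          # symmetric edge weights between 4 sites
              [2, 0, 3, 1],
              [1, 3, 0, 2],
              [0, 1, 2, 0]], float)
q, battery, cost_per_move = 0.5, 30.0, 1.0

site, visits, inspections = 0, np.zeros(4), np.zeros(4)
while battery >= cost_per_move:
    p = W[site] / W[site].sum()      # transition probability proportional to weights
    site = rng.choice(4, p=p)
    battery -= cost_per_move
    visits[site] += 1
    if rng.random() < q:             # probabilistic inspection of the current site
        inspections[site] += 1

print("visits:     ", visits)
print("inspections:", inspections)
```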
In this paper, we address an optimal management problem of community energy storage in the real-time electricity market under a stochastic renewable environment. In a real-time electricity market, complete market information may not be accessible to a strategic participant, hence we propose a paradigm that uses partial information, including the forecast of real-time prices and slopes of the aggregate supply curve, to model the price impact of storage use in the price-maker storage management problem. As a price maker, the community energy storage can not only earn profits through energy arbitrage but also smooth price trajectories and further influence social welfare. We formulate the problem as a finite-horizon Markov decision process that aims to maximize the energy arbitrage and social welfare of the prosumer-based community. The advantage of the management scheme is that the optimal policy has a threshold structure. The structure has an analytic form that can guide the energy storage to charge/discharge by comparing its current marginal value and the expected future marginal value. Case studies indicate that welfare-maximizing storage earns more benefits than profit-maximizing storage. The proposed threshold-based algorithm can guarantee optimality and largely decrease the computational complexity of standard stochastic dynamic programming. | electrical engineering and systems science |
Decoherence in quantum searches, and in the Grover search in particular, has already been extensively studied; it very quickly leads to the loss of the quadratic speedup over the classical case when searching for some target (marked) element within a set of size $N$. The noise models used were, however, global. In this paper we study the Grover search under the influence of localized partially dephasing noise of rate $p$. We find that when the size $k$ of the affected subspace is much smaller than $N$, and the target is unaffected by the noise, namely when $kp\ll\sqrt{N}$, the quadratic speedup is retained. Once these restrictions are not met, the quadratic speedup is lost. In particular, if the target is affected by the noise, the noise rate needs to scale as $1/\sqrt{N}$ in order to keep the speedup. We also observe an intermediate region where, if $k\sim N^\mu$ and the target is unaffected, the speedup seems to obey $N^\mu$, which for $\mu>0.5$ is worse than the quantum, but better than the classical case. We also put the obtained results for quantum searches into the perspective of quantum walks and searches on graphs. | quantum physics |
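A small density-matrix simulation illustrates the setting. The dephasing model below (damping off-diagonal elements that touch the affected subspace by a factor 1-p after each Grover iteration) is one simple choice of localized partial dephasing, not necessarily the paper's exact channel; N, p and the affected set S are arbitrary.

```python
import numpy as np

# Grover search with partial dephasing of rate p applied only to a subset S of
# basis states after each iteration (sketch of localized noise).

N, target, p = 64, 5, 0.05
S = set(range(1, 4))                        # affected states (target not in S)

psi0 = np.full(N, 1 / np.sqrt(N))
D = 2 * np.outer(psi0, psi0) - np.eye(N)    # diffusion operator
O = np.eye(N); O[target, target] = -1       # oracle
G = D @ O                                   # one Grover iteration (real orthogonal)

damp = np.ones((N, N))                      # Schur multiplier for local dephasing
for i in range(N):
    for j in range(N):
        if i != j and (i in S or j in S):
            damp[i, j] = 1 - p

rho = np.outer(psi0, psi0)
steps = int(np.round(np.pi / 4 * np.sqrt(N)))
for _ in range(steps):
    rho = G @ rho @ G.T
    rho = rho * damp                        # elementwise damping of coherences

print(f"P(target) after {steps} steps: {rho[target, target]:.3f}")
```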
While most approaches in formal methods address system correctness, ensuring robustness has remained a challenge. In this paper we introduce the logic rLTL which provides a means to formally reason about both correctness and robustness in system design. Furthermore, we identify a large fragment of rLTL for which the verification problem can be efficiently solved, i.e., verification can be done by using an automaton, recognizing the behaviors described by the rLTL formula $\varphi$, of size at most $\mathcal{O} \left( 3^{ |\varphi|} \right)$, where $|\varphi|$ is the length of $\varphi$. This result improves upon the previously known bound of $\mathcal{O}\left(5^{|\varphi|} \right)$ for rLTL verification and is closer to the LTL bound of $\mathcal{O}\left( 2^{|\varphi|} \right)$. The usefulness of this fragment is demonstrated by a number of case studies showing its practical significance in terms of expressiveness, the ability to describe robustness, and the fine-grained information that rLTL brings to the process of system verification. Moreover, these advantages come at a low computational overhead with respect to LTL verification. | computer science |
In the graph signal processing (GSP) literature, it has been shown that signal-dependent graph Laplacian regularizer (GLR) can efficiently promote piecewise constant (PWC) signal reconstruction for various image restoration tasks. However, for planar image patches, like total variation (TV), GLR may suffer from the well-known "staircase" effect. To remedy this problem, we generalize GLR to gradient graph Laplacian regularizer (GGLR) that provably promotes piecewise planar (PWP) signal reconstruction for the image interpolation problem -- a 2D grid with randomly missing pixels that requires completion. Specifically, we first construct two higher-order gradient graphs to connect local horizontal and vertical gradients. Each local gradient is estimated using structure tensor, which is robust using known pixels in a small neighborhood, mitigating the problem of larger noise variance when computing gradient of gradients. Moreover, unlike total generalized variation (TGV), GGLR retains the quadratic form of GLR, leading to an unconstrained quadratic programming (QP) problem per iteration that can be solved quickly using conjugate gradient (CG). We derive the means-square-error minimizing weight parameter for GGLR, trading off bias and variance of the signal estimate. Experiments show that GGLR outperformed competing schemes in interpolation quality for severely damaged images at a reduced complexity. | electrical engineering and systems science |
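The quadratic form makes the per-iteration solve cheap, as the sketch below illustrates for the interpolation problem: a sparse system (H^T H + mu L) x = H^T y solved with conjugate gradient. A plain 2-D grid Laplacian stands in for the paper's gradient-graph construction (GGLR), and mu and the 30% sampling rate are arbitrary choices.

```python
import numpy as np
from scipy.sparse import diags, eye, kron
from scipy.sparse.linalg import cg

# Quadratic Laplacian-regularized interpolation solved by CG (sketch).

n = 32
D = diags([-1, 1], [0, 1], shape=(n - 1, n))          # 1-D finite difference
Lp = (D.T @ D).tocsr()                                # path-graph Laplacian
L = kron(eye(n), Lp) + kron(Lp, eye(n))               # 2-D grid Laplacian

rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
x_true = (0.05 * xx + 0.02 * yy).ravel()              # a planar "image"
mask = (rng.random(n * n) < 0.3).astype(float)        # 30% of pixels observed
H = diags(mask)                                       # sampling operator, H^T H = H
y = H @ x_true

mu = 0.1
A = (H + mu * L).tocsr()                              # (H^T H + mu L) x = H^T y
x_hat, info = cg(A, y, maxiter=5000)
rmse = np.sqrt(np.mean((x_hat - x_true) ** 2))
print("CG converged:", info == 0, " RMSE:", round(float(rmse), 4))
```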
Agreement protocols have typically been deployed at small scale, e.g., using three to five machines. This is because these protocols seem to suffer from a sharp performance decay. More specifically, as the size of a deployment---i.e., degree of replication---increases, the protocol performance greatly decreases. There is not much experimental evidence for this decay in practice, however, notably for larger system sizes, e.g., beyond a handful of machines. In this paper we execute agreement protocols on up to 100 machines and observe their performance decay. We consider well-known agreement protocols that are part of mature systems, such as Apache ZooKeeper, etcd, and BFT-Smart, as well as a chain and a novel ring-based agreement protocol which we implement ourselves. We provide empirical evidence that current agreement protocols execute gracefully on 100 machines. We observe that throughput decay is initially sharp (consistent with previous observations); but intriguingly---as each system grows beyond a few tens of replicas---the decay dampens. For chain- and ring-based replication, this decay is slower than for the other systems. The positive takeaway from our evaluation is that mature agreement protocol implementations can sustain out-of-the-box 300 to 500 requests per second when executing on 100 replicas on a wide-area public cloud platform. Chain- and ring-based replication can reach between 4K and 11K (up to 20x improvements) depending on the fault assumptions. | computer science |
In a recent paper by Jafarov, Nagiyev, Oste and Van der Jeugt (2020 J. Phys. A 53 485301), a confined model of the non-relativistic quantum harmonic oscillator, where the effective mass and the angular frequency are dependent on the position, was constructed and it was shown that the confinement parameter gets quantized. By using a point canonical transformation starting from the constant-mass Schr\"odinger equation for the Rosen-Morse II potential, it is shown here that similar results can be easily obtained without quantizing the confinement parameter. In addition, an extension to a confined shifted harmonic oscillator directly follows from the same point canonical transformation. | quantum physics |
Investigation of the eigenvector localization properties of complex networks is not only important for gaining insight into fundamental network problems such as network centrality measures, spectral partitioning, and the development of approximation algorithms, but is also crucial for understanding many real-world phenomena such as disease spreading and criticality in brain network dynamics. For a network, an eigenvector is said to be localized when most of its components take values near zero, with a few components taking very high values. In this article, we devise a methodology to construct a principal eigenvector (PEV) localized network from a given input network. The methodology relies on adding a small component having a wheel graph to the given input network. By extensive numerical simulation and an analytical formulation based on the largest eigenvalue of the input network, we compute the size of the wheel graph required to localize the PEV of the combined network. Using the susceptible-infected-susceptible model, we demonstrate the success of this method for various model and real-world networks considered as input networks. We show that on such PEV localized networks, the disease gets localized within a small region of the network structure before the outbreak. The study is relevant to controlling spreading processes on complex systems represented by networks. | physics |
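The construction is easy to reproduce in miniature, as the sketch below shows: attach a wheel graph to an input network and watch the PEV localize, measured by the inverse participation ratio IPR = sum_i v_i^4 (roughly 1/N when delocalized, O(1) when localized). The single connecting edge and the specific sizes are illustrative assumptions; the wheel is simply sized so that its largest eigenvalue, 1 + sqrt(rim size + 1), exceeds that of the input graph.

```python
import networkx as nx
import numpy as np

# Attach a wheel graph to an ER input network and compare PEV localization.

def pev_ipr(G):
    A = nx.to_numpy_array(G)
    _, V = np.linalg.eigh(A)
    v = np.abs(V[:, -1])                  # eigenvector of the largest eigenvalue
    return float(np.sum(v ** 4))

er = nx.erdos_renyi_graph(200, 0.03, seed=1)   # input network, lambda_1 ~ 7
wheel = nx.wheel_graph(50)                     # hub + 49-cycle, lambda_1 = 1 + sqrt(50)
combined = nx.disjoint_union(er, wheel)        # wheel nodes relabeled to 200..249
combined.add_edge(0, 200)                      # one edge from the network to the hub

print("IPR of input network:      ", round(pev_ipr(er), 4))
print("IPR after attaching wheel: ", round(pev_ipr(combined), 4))
```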
Modern text-to-speech systems are able to produce natural and high-quality speech, but speech contains factors of variation (e.g. pitch, rhythm, loudness, timbre) that text alone cannot capture. In this work we move towards a speech synthesis system that can produce diverse speech renditions of a text by allowing (but not requiring) explicit control over the various factors of variation. We propose a new neural vocoder that offers control of such factors of variation. This is achieved by employing differentiable digital signal processing (DDSP) (previously used only for music rather than speech), which exposes these factors of variation. The results show that the proposed approach can produce natural speech with realistic timbre, and individual factors of variation can be freely controlled. | electrical engineering and systems science |