text | label
---|---
We describe how categorical BPS data including chain complexes of solitons, CPT pairings, and interior amplitudes jump across a wall of marginal stability in two-dimensional $\mathcal{N}=(2,2)$ models. We show that our jump formulas hold if and only if the $A_{\infty}$-categories of half-BPS branes constructed on either side of the wall are homotopy equivalent. These results can be viewed as categorical enhancements of the Cecotti-Vafa wall-crossing formula.
|
high energy physics theory
|
We analytically solve the Sudakov-suppressed Balitsky-Kovchegov evolution equation with fixed and running coupling constants in the saturation region. The analytic solution of the $S$-matrix shows that the $\exp(\mathcal{O}(\eta^2))$ rapidity dependence of the fixed-coupling solution is replaced by an $\exp(\mathcal{O}(\eta^{3/2}))$ dependence in the smallest-dipole running-coupling case, rather than obeying the law found in our previous publication, in which all solutions of the next-to-leading-order evolution equations comply with an $\exp(\mathcal{O}(\eta))$ rapidity dependence once the QCD coupling is switched from the fixed coupling to the smallest-dipole running-coupling prescription. This finding indicates that the corrections from the sub-leading double logarithms in the Sudakov-suppressed evolution equation are significant: they compensate part of the decrease of the dipole amplitude caused by the running-coupling effect. To test the analytic findings, we compute numerical solutions of the Sudakov-suppressed evolution equation; the numerical results confirm the analytic outcomes. Moreover, we use the numerical solutions of the evolution equation to fit the HERA data and find that the Sudakov-suppressed evolution equation gives a good-quality fit.
|
high energy physics phenomenology
|
The magneto-transport properties of phosphorene are investigated by employing the generalized tight-binding model to calculate the energy bands. For bilayer phosphorene, a composite magnetic and electric field is shown to induce a feature-rich Landau level (LL) spectrum which includes two subgroups of low-lying LLs. The two subgroups possess distinct features in level spacings, quantum numbers, and field dependencies. These together lead to anomalous quantum Hall (QH) conductivities which include well-shaped, staircase, and composite quantum structures with steps of varying heights and widths. The Fermi energy-magnetic field-Hall conductivity ($E_{F}-B_{z}-\sigma_{xy}$) and Fermi energy-electric field-Hall conductivity ($E_{F}-E_{z}-\sigma_{xy}$) phase diagrams clearly exhibit oscillatory behaviors and a crossover from the integer to the half-integer QH effect. The predicted results should be verifiable by magneto-transport measurements in a dual-gated system.
|
condensed matter
|
Baker devised a technique to obtain approximation schemes for many optimization problems restricted to planar graphs; her technique was later extended to more general graph classes. In particular, using Baker's technique and the minor structure theorem, Dawar et al. gave Polynomial-Time Approximation Schemes (PTAS) for all monotone optimization problems expressible in first-order logic when restricted to a proper minor-closed class of graphs. We define a Baker game formalizing the notion of repeated application of Baker's technique interspersed with vertex removal, prove that monotone optimization problems expressible in first-order logic admit a PTAS when restricted to graph classes in which the Baker game can be won in a constant number of rounds, and prove, without use of the minor structure theorem, that all proper minor-closed classes of graphs have this property.
|
computer science
|
Beamforming is traditionally associated with coherent summation of signals from antenna elements of the same polarization, here referred to as single-polarization beamforming (SPBF). In this paper we focus on a new method, called dual-polarization beamforming (DPBF), to design beam patterns. Instead of using only a single element-polarization, as in SPBF, a desired beam pattern is designed as the sum of powers for two orthogonal element-polarizations. With DPBF, the focus is thus on total radiated power beam patterns. The DPBF technique provides additional degrees of freedom to form a desired beam pattern, such that amplitude variations in the beamforming vector can often be significantly reduced, potentially to uniform amplitude. Phase-only beamforming, possibly with only minor amplitude variations, is particularly attractive for active antennas, since it offers the potential of full power-amplifier utilization. In this paper we apply DPBF to uniform linear arrays (ULAs) as well as uniform rectangular arrays (URAs). In communication systems, it is often desired to have a second beam with an identical beam pattern but orthogonal polarization in all directions. We show how such beams are designed with DPBF, both for ULAs and URAs.
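As a minimal illustration of the DPBF idea, the sketch below forms a total radiated power pattern for a ULA by summing the powers of two orthogonal element-polarizations; the half-wavelength spacing and the phase-only weight vectors are illustrative assumptions, not the paper's designs.

```python
# Total radiated power pattern for a ULA under DPBF: sum of powers from
# two orthogonal element-polarizations. Spacing and weights are
# illustrative, not the paper's designs.
import numpy as np

N, d = 16, 0.5                                   # elements, spacing in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)  # observation angles
n = np.arange(N)
A = np.exp(2j * np.pi * d * np.outer(n, np.sin(theta)))  # steering matrix (N x angles)

# Two phase-only (uniform-amplitude) weight vectors, one per polarization.
w1 = np.exp(1j * np.pi * n ** 2 / N) / np.sqrt(N)
w2 = np.exp(-1j * np.pi * n ** 2 / N) / np.sqrt(N)

P_spbf = np.abs(w1.conj() @ A) ** 2              # single polarization only
P_dpbf = P_spbf + np.abs(w2.conj() @ A) ** 2     # total radiated power
print(P_dpbf.max() / P_dpbf.min())               # pattern dynamic range
```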
|
electrical engineering and systems science
|
Motivated by the problem of classifying individuals with a disease versus controls using functional genomic attributes as input, we encode the input as a string of 1s (presence) or 0s (absence) of the genomic attribute across the genome. Blocks of physical regions in the subdivided genome serve as the feature dimensions, which takes full advantage of remaining in the computational basis of a quantum computer. Given that a natural distance between two binary strings is the Hamming distance and that this distance shares properties with an inner product between qubits, we developed two Hamming-distance-like classifiers which apply two different kinds of inner products ("active" and "symmetric") to directly and efficiently classify a test input into either of two training classes. To account for multiple disease and control samples, each training class can be composed of an arbitrary number of bit strings (i.e., number of samples) that can be compressed into one input string per class. Thus, our circuits require the same number of qubits regardless of the number of training samples. These algorithms, which implement a training bisection decision plane, were simulated and implemented on IBMQX4 and IBMQX16. The latter allowed for encoding of $64$ training features across the genome for $2$ (disease and control) training classes.
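For intuition, here is a purely classical sketch of the decision rule that the quantum circuits implement: compress each training class to one representative bit string and assign a test string to the class at smaller Hamming distance. The majority-vote compression and the toy data are assumptions for illustration.

```python
# Classical sketch of the Hamming-distance decision rule implemented by the
# quantum circuits; the majority-vote compression and toy data are assumptions.
import numpy as np

def hamming(a, b):
    """Number of genome blocks where two binary feature strings differ."""
    return int(np.sum(a != b))

def compress(samples):
    """Fold many training bit strings into one representative per class."""
    return (samples.mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(0)
n_features = 64                                  # genome blocks, as on IBMQX16

disease = compress(rng.integers(0, 2, size=(10, n_features)))
control = compress(rng.integers(0, 2, size=(12, n_features)))
test = rng.integers(0, 2, size=n_features)

# Bisection decision plane: assign the test string to the nearer class.
label = "disease" if hamming(test, disease) < hamming(test, control) else "control"
print(label)
```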
|
quantum physics
|
We address the personalized policy learning problem using longitudinal mobile health application usage data. A personalized policy represents a paradigm shift from a single policy that may prescribe personalized decisions by tailoring. Specifically, we aim to develop the best policy, one per user, based on estimating random effects under a generalized linear mixed model. With many random effects, we consider a new estimation method and a penalized objective to circumvent the high-dimensional integrals required for marginal likelihood approximation. We establish the consistency and optimality of our method with endogenous app usage. We apply our method to develop personalized push ("prompt") schedules in 294 app users, with the goal of maximizing the prompt response rate given past app usage and other contextual factors. We found that the best push schedule given the same covariates varied among the users, thus calling for personalized policies. Using the estimated personalized policies would have achieved a mean prompt response rate of 23% in these users at 16 weeks or later: this is a remarkable improvement on the observed rate (11%), while the literature suggests 3%-15% user engagement at 3 months after download. The proposed method compares favorably to existing estimation methods, including the R function "glmer", in a simulation study.
|
statistics
|
Yellowballs (YBs) were first discovered during the Milky Way Project citizen-science initiative (MWP; Simpson et al. 2012). MWP users noticed compact, yellow regions in Spitzer Space Telescope mid-infrared (MIR) images of the Milky Way plane and asked professional astronomers to explain these "yellow balls." Follow-up work by Kerton et al. (2015) determined that YBs likely trace compact photo-dissociation regions associated with massive and intermediate-mass star formation. YBs were included as target objects in a version of the Milky Way Project launched in 2016 (Jayasinghe et al. 2016), which produced a listing of over 6000 YB locations. We have measured distances, cross-match associations, physical properties, and MIR colors of ~500 YBs within a pilot region covering l = 30-40 degrees, b = +/- 1 degree of the Galactic plane. We find that 20-30% of YBs in our pilot region contain high-mass star formation capable of producing expanding H II regions that create MIR bubbles. A majority of YBs represent intermediate-mass star-forming regions whose placement in evolutionary diagrams suggests they are still actively accreting and may be precursors to optically revealed Herbig Ae/Be nebulae. Many of these intermediate-mass YBs were missed by surveys of massive star-formation tracers, and thus this catalog provides information on many new sites of star formation. Future work will expand this pilot-region analysis to the entire YB catalog.
|
astrophysics
|
We study the effect of strong disorder on topology and entanglement in quench dynamics. Although disorder-induced topological phases have been well studied in equilibrium, disorder-induced topology in quench dynamics has not been explored. In this work, we predict a disorder-induced topology of post-quench states characterized by the quantized dynamical Chern number and by crossings in the entanglement spectrum in (1 + 1) dimensions. The dynamical Chern number undergoes transitions from zero to unity, and back to zero, when increasing the disorder strength. The boundaries between different dynamical Chern numbers are determined by delocalized critical points of the strongly disordered post-quench Hamiltonian. An experimental realization in quantum walks is discussed.
|
condensed matter
|
Given a $2$-vertex-twinless connected directed graph $G=(V,E)$, the minimum $2$-vertex-twinless connected spanning subgraph problem is to find a minimum cardinality edge subset $E^{t} \subseteq E$ such that the subgraph $(V,E^{t})$ is $2$-vertex-twinless connected. Let $G^{1}$ be a minimal $2$-vertex-connected subgraph of $G$. In this paper we present a $(2+a_{t}/2)$-approximation algorithm for the minimum $2$-vertex-twinless connected spanning subgraph problem, where $a_{t}$ is the number of twinless articulation points in $G^{1}$.
|
computer science
|
The analysis of single-mode photon fluctuations and their counting statistics at the superradiant phase transition is presented. The study concerns the equilibrium Dicke model in a regime where the Rabi frequency, related to the coupling of the photon mode with a finite-number qubit environment, plays the role of the transition's control parameter. We use the effective Matsubara action formalism based on the representation of Pauli operators as bilinear forms with complex and Majorana fermions. We then address fluctuations of the superradiant order parameter and of quasiparticles. The average photon number, the fluctuational Ginzburg-Levanyuk region of the phase transition, and the Fano factor are evaluated. We determine the cumulant generating function which describes the full counting statistics of the equilibrium photon number. Exact numerical simulation of the superradiant transition demonstrates quantitative agreement with the analytical calculations.
|
condensed matter
|
Paper materials are natural composite materials and are well known to be hydrophilic unless chemical and mechanical processing treatments are undertaken. The relative humidity impacts the fiber elasticity, the fiber-fiber bonds, and the failure mechanism. In this work, we present a comprehensive experimental and computational study on the mechanical and failure behaviour of the fiber and the fiber network under humidity influence. A manually extracted cellulose fiber is exposed to different levels of humidity and then mechanically characterized using Atomic Force Microscopy, which delivers the humidity-dependent longitudinal Young's modulus. The obtained relationship allows calculation of the fiber elastic modulus at any humidity level. Moreover, using Confocal Laser Scanning Microscopy, the coefficient of hygroscopic expansion of the fibers is determined. In parallel, we present a finite element model to simulate the deformation and failure of the fiber network. The model includes the fiber anisotropy and the hygroscopic expansion using the experimentally determined constants. In addition, it accounts for fiber-fiber bonding and damage using a humidity-dependent cohesive zone interface model. Finite element simulations on exemplary fiber network samples are performed to demonstrate the influence of different aspects, including relative humidity and fiber-fiber bonding parameters, on mechanical features such as force-elongation curves, wet strength, extensibility, and local fiber-fiber debonding. Meanwhile, fiber network failure in a locally wetted region is revealed by tracking individually stained fibers using in-situ imaging techniques. Both the experimental data and the cohesive finite element simulations demonstrate the pull-out of fibers and imply the significant role of fiber-fiber debonding in the failure process of wet paper.
|
physics
|
Using samples drawn from the Sloan Digital Sky Survey, we study for the first time the relation between large-scale environments (clusters, groups, and voids) and the stellar Initial Mass Function (IMF). We adopt an observational approach based on the comparison of IMF-sensitive indices of quiescent galaxies of similar mass in varying environments. These galaxies are selected within a narrow redshift interval ($0.020 < z < 0.055$) and span a range in velocity dispersion from 100 to 200 km s$^{-1}$. The results of this paper are based upon analysis of composite spectra created by stacking the spectra of galaxies binned by their velocity dispersion and redshift. The trends of spectral indices measured from the stacked spectra, with respect to velocity dispersion, are compared in different environments. We find a lack of dependence of the IMF on the environment in the intermediate-mass galaxy regime. We verify this finding by providing a more quantitative measurement of the IMF variations among galactic environments using MILES stellar population models, with a precision of $\Delta\Gamma_{b}\sim0.2$.
|
astrophysics
|
Given functional data from a survival process with time-dependent covariates, we derive a smooth convex representation for its nonparametric log-likelihood functional and obtain its functional gradient. From this we devise a generic gradient boosting procedure for estimating the hazard function nonparametrically. An illustrative implementation of the procedure using regression trees is described to show how to recover the unknown hazard. We show that the generic estimator is consistent if the model is correctly specified; alternatively an oracle inequality can be demonstrated for tree-based models. To avoid overfitting, boosting employs several regularization devices. One of them is step-size restriction, but the rationale for this is somewhat mysterious from the viewpoint of consistency. Our work brings some clarity to this issue by revealing that step-size restriction is a mechanism for preventing the curvature of the risk from derailing convergence.
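As a minimal illustration of the generic procedure, the sketch below runs functional gradient boosting with regression trees and step-size restriction (shrinkage); squared error stands in for the survival log-likelihood functional, and all data and hyperparameters are illustrative.

```python
# Functional gradient boosting with regression trees and step-size
# restriction (shrinkage); squared error stands in for the survival
# log-likelihood functional, and the data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=500)

nu = 0.1                       # restricted step size (shrinkage)
F = np.zeros(len(y))           # current estimate of the unknown function
trees = []
for _ in range(200):
    residual = y - F           # negative functional gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    F += nu * tree.predict(X)  # small step along the fitted gradient
    trees.append(tree)

def predict(X_new):
    return nu * sum(t.predict(X_new) for t in trees)

print("training MSE:", np.mean((y - F) ** 2))
```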
|
statistics
|
The cost of wind energy can be reduced by using SCADA data to detect faults in wind turbine components. Normal-behavior models are one of the main fault detection approaches, but there is a lack of consensus on how different input features affect the results. In this work, a new taxonomy based on the causal relations between the input features and the target is presented. Based on this taxonomy, the impact of different input feature configurations on the modelling and fault detection performance is evaluated. To this end, a framework that formulates the detection of faults as a classification problem is also presented.
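A minimal sketch of the classification formulation is given below; the SCADA-style features, the synthetic fault mechanism, and the random-forest classifier are illustrative stand-ins, not the paper's framework.

```python
# Fault detection cast as binary classification on SCADA-style features;
# the features, fault mechanism, and classifier are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
n = 2000
wind_speed = rng.weibull(2.0, n) * 8                # environmental driver
power = np.clip(wind_speed, 0, 12) ** 3 / 30        # turbine output
bearing_temp = 40 + 0.5 * power + rng.normal(0, 1, n)
fault = (bearing_temp + rng.normal(0, 2, n) > 52).astype(int)  # hidden fault label

X = np.column_stack([wind_speed, power, bearing_temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, fault, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```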
|
electrical engineering and systems science
|
The pandemic caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has affected countries all across the world, heavily burdening the medical infrastructure with the growing number of patients affected by the coronavirus disease (COVID-19). With ventilators in limited supply, this public health emergency highlights the need for safe, fast, reliable, and economical alternatives to high-end commercial devices and has prompted the development of easy-to-use and mass-producible ventilators. Here, we detail the design of the HDvent Emergency Ventilator System. The device performs ventilation through mechanical compression of manual resuscitators and includes control electronics, flow and pressure sensors, and an external data visualization and monitoring unit. We demonstrate its suitability for open loop, pressure- and volume-controlled ventilation. The system has not undergone clinical testing and has not been approved for use as a medical device. The project documentation needed to reproduce the prototype is freely available and will contribute to the development of open source ventilation systems.
|
physics
|
These results solve the historical problem of the contamination of the CPV asymmetry for neutrino oscillations by matter effects. Vacuum is CPT-symmetric and matter is T-symmetric; the goal is accomplished by using this guiding principle. Independent of the theoretical framework for the dynamics of the active neutrino flavors, we prove the Disentanglement Theorem A(CP) = A(CP, T) + A(CP, CPT) for the experimental CPV asymmetry, with A(CP, T) genuine T-odd and A(CP, CPT) fake CPT-odd. For the effective Hamiltonian written as the sum of free mass propagation plus the matter potential for electron-neutrinos, the two components have definite parities under the baseline L, the matter potential "a", the imaginary part $\sin\delta$ of the PMNS mixing matrix, and the hierarchy "h" $= \pm 1$ in the neutrino mass ordering: A(CP, T) is odd in L and $\sin\delta$ and even in a and h; A(CP, CPT) is even in L and $\sin\delta$ and odd in a and almost odd in h. For present terrestrial accelerator sources of muon-neutrinos and antineutrinos, the two components of the appearance CPV asymmetry A(CP) can be disentangled by either baseline dependence (HKK) or energy dependence (DUNE). At the DUNE baseline, the higher-energy region above the first oscillation node provides a dominant matter-induced A(CP, CPT) component, and the sign of the experimental asymmetry A(CP) gives the hierarchy in the neutrino mass ordering. On the contrary, there is a "magic energy" E around the second oscillation maximum at which the fake A(CP, CPT) component has a first-rank zero whereas the genuine A(CP, T) component has a maximum proportional to $\sin\delta$. With a modest energy resolution $\Delta E \sim 200$ MeV, an effective zero remains in the matter-induced A(CP, CPT).
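The total appearance asymmetry entering the theorem can be computed numerically by evolving the three-flavor Hamiltonian in constant-density matter, as in the sketch below; the oscillation parameters, matter density, and DUNE-like baseline are illustrative textbook values, and the sketch computes only the total A(CP), not its T-odd and CPT-odd components.

```python
# Numerical sketch of the total CPV asymmetry P(nu_mu->nu_e) - P(antinu)
# in constant-density matter; all parameter values are illustrative.
import numpy as np
from scipy.linalg import expm

def pmns(t12, t23, t13, delta):
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    U23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)], [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], complex)
    U12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex)
    return U23 @ U13 @ U12

def p_mue(E, L, delta, h=+1, rho_Ye=1.4, antineutrino=False):
    """P(nu_mu -> nu_e); E in GeV, L in km, constant matter density."""
    dm21, dm31 = 7.4e-5, h * 2.5e-3               # eV^2; h = mass ordering
    U = pmns(0.583, 0.785, 0.150, -delta if antineutrino else delta)
    a = 7.63e-5 * rho_Ye * E                      # 2*sqrt(2)*G_F*N_e*E in eV^2
    if antineutrino:
        a = -a
    H = U @ np.diag([0.0, dm21, dm31]) @ U.conj().T + np.diag([a, 0, 0])
    S = expm(-1j * 2.534 * H * L / E)             # 2.534 = 1 eV^2 km / (2 GeV)
    return abs(S[0, 1]) ** 2                      # flavor order (e, mu, tau)

L, delta = 1300.0, -np.pi / 2                     # DUNE-like baseline
for E in (0.8, 1.5, 2.5):                         # GeV
    A_cp = p_mue(E, L, delta) - p_mue(E, L, delta, antineutrino=True)
    print(f"E = {E} GeV: unnormalized A(CP) = {A_cp:+.4f}")
```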
|
high energy physics phenomenology
|
The vast majority of stars with mass similar to the Sun are expected to only destroy lithium over the course of their lives, via low-temperature nuclear burning. This has now been supported by observations of hundreds of thousands of red giant stars (Brown et al. 1989, Kumar et al. 2011, Deepak et al. 2019, Singh et al. 2019, Casey et al. 2019). Here we perform the first large-scale systematic investigation into the Li content of stars in the red clump phase of evolution, which directly follows the red giant branch phase. Surprisingly we find that all red clump stars have high levels of lithium for their evolutionary stage. On average the lithium content increases by a factor of 40 after the end of the red giant branch stage. This suggests that all low-mass stars undergo a lithium production phase between the tip of the red giant branch and the red clump. We demonstrate that our finding is not predicted by stellar theory, revealing a stark tension between observations and models. We also show that the heavily studied (Brown et al. 1989, Reddy et al. 2005, Kumar et al. 2011, Singh et al. 2019, Casey et al. 2019) very Li-rich giants, with A(Li) $> +1.5$ dex, represent only the extreme tail of the lithium enhancement distribution, comprising 3% of red clump stars. Our findings suggest a new definition limit for Li-richness in red clump stars, A(Li) $> -0.9$ dex, which is much lower than the limit of A(Li) $> +1.5$ dex used over many decades (Brown et al. 1989, Castilho et al. 1995, Reddy et al. 2005, Carlberg et al. 2016, Casey et al. 2019, Holanda et al. 2020).
|
astrophysics
|
Inverse design arises in a variety of areas in engineering such as acoustic, mechanics, thermal/electronic transport, electromagnetism, and optics. Topology optimization is a major form of inverse design, where we optimize a designed geometry to achieve targeted properties and the geometry is parameterized by a density function. This optimization is challenging, because it has a very high dimensionality and is usually constrained by partial differential equations (PDEs) and additional inequalities. Here, we propose a new deep learning method -- physics-informed neural networks with hard constraints (hPINNs) -- for solving topology optimization. hPINN leverages the recent development of PINNs for solving PDEs, and thus does not rely on any numerical PDE solver. However, all the constraints in PINNs are soft constraints, and hence we impose hard constraints by using the penalty method and the augmented Lagrangian method. We demonstrate the effectiveness of hPINN for a holography problem in optics and a fluid problem of Stokes flow. We achieve the same objective as conventional PDE-constrained optimization methods based on adjoint methods and numerical PDE solvers, but find that the design obtained from hPINN is often simpler and smoother for problems whose solution is not unique. Moreover, the implementation of inverse design with hPINN can be easier than that of conventional methods.
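To make the hard-constraint idea concrete, the toy loop below applies the augmented Lagrangian method to a scalar equality constraint; the quadratic objective and linear constraint are stand-ins for the design objective and PDE constraint of hPINN.

```python
# Augmented-Lagrangian loop imposing a hard equality constraint, mirroring
# how hPINN hardens the soft PDE penalty; objective and constraint are
# illustrative scalar stand-ins, solved with finite-difference gradients.
import numpy as np

def objective(v):                    # design objective J
    return (v[0] - 2.0) ** 2 + (v[1] - 1.0) ** 2

def constraint(v):                   # equality constraint F(v) = 0
    return v[0] + v[1] - 1.0

def grad(f, v, eps=1e-6):            # central finite differences
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v); e[i] = eps
        g[i] = (f(v + e) - f(v - e)) / (2 * eps)
    return g

x, lam, mu = np.zeros(2), 0.0, 1.0
for outer in range(12):
    def aug_lagrangian(v):
        F = constraint(v)
        return objective(v) + lam * F + 0.5 * mu * F ** 2
    lr = 0.5 / (1.0 + mu)            # step size shrinks as the penalty stiffens
    for _ in range(2000):
        x = x - lr * grad(aug_lagrangian, x)
    lam += mu * constraint(x)        # multiplier update
    mu *= 2.0                        # gradually harden the constraint
print(x, constraint(x))              # -> approximately (1.0, 0.0) with F ~ 0
```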
|
physics
|
The present work computes the computational complexity of hairy anti-de Sitter black holes in four dimensions according to the Complexity equals Action (CA) conjecture in a generalized scalar-tensor theory. To this end, the system contains a particle moving on the boundary of AdS$_{4}$, corresponding to the insertion of a fundamental string in the bulk. The effect of the string is given by the Nambu-Goto term, and we analyze the time development of this system, which is affected by the parameters of this class of theories.
|
high energy physics theory
|
We study a transition between a homogeneous and an inhomogeneous phase in a system of one-dimensional, Raman tunnel-coupled Bose gases. The homogeneous phase shows a flat density and phase profile, whereas the inhomogeneous ground state is characterized by periodic density ripples and a soliton staircase in the phase difference. We show that under experimentally viable conditions the transition can be tuned by the wavevector difference $Q$ of the Raman beams and can be described by the Pokrovsky-Talapov model for the relative phase between the two condensates. Local imaging available in atom chip experiments allows direct observation of the soliton lattice, while modulation spectroscopy can be used to explore collective modes, such as the phonon mode arising from the breaking of translation symmetry by the soliton lattice. In addition, we investigate regimes where the cold atom experiment deviates from the Pokrovsky-Talapov field theory. We predict unusual mesoscopic effects arising from the finite size of the system, such as quantized injection of solitons upon increasing $Q$ or the system size. For moderate values of $Q$ above criticality, we find that the density modulations in the two gases interplay with the relative phase profile and introduce novel features in the spatial structure of the mode wave-functions. Using an inhomogeneous Bogoliubov theory, we show that spatial quantum fluctuations are intertwined with the emerging soliton staircase. Finally, we comment on the prospects of the ultra-cold atom setup as a tunable platform for studying quantum aspects of the Pokrovsky-Talapov theory in and out of equilibrium.
|
condensed matter
|
Although models for count data with over-dispersion have been widely considered in the literature, models for under-dispersion -- the opposite phenomenon -- have received less attention, as under-dispersion is relatively common only in particular research fields such as biodosimetry and ecology. The Good distribution is a flexible alternative for modelling count data showing either over-dispersion or under-dispersion, although, to the best of our knowledge, no R package for it has been available until now. We present the R package good, which computes the standard probabilistic functions (i.e., probability density function, cumulative distribution function, and quantile function) and generates random samples from a population following a Good distribution. The package also provides a function for Good regression, including covariates, in a similar way to the standard glm function. Finally, we illustrate the use of the package with real-world data examples addressing both over-dispersion and, especially, under-dispersion.
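For orientation, the sketch below evaluates the Good probability mass function under one common parameterization, $P(X=x) \propto z^x/(x+1)^s$ for $x = 0, 1, 2, \dots$ with $0 < z < 1$, normalized by a (numerically truncated) Lerch transcendent; consult the package documentation for its exact parameterization.

```python
# One common parameterization of the Good pmf, normalized numerically;
# a sketch of what the package's dgood() computes, not its actual code.
import numpy as np

def dgood(x, z, s, tail=100_000):
    """P(X = x) = z**x / (x + 1)**s / Phi(z, s, 1), x = 0, 1, 2, ..."""
    k = np.arange(tail)
    norm = np.sum(z ** k / (k + 1.0) ** s)   # truncated Lerch transcendent
    return z ** np.asarray(x) / (np.asarray(x) + 1.0) ** s / norm

x = np.arange(8)
p = dgood(x, z=0.6, s=2.0)
print(p, p.sum())

# Dispersion check: the variance-to-mean ratio reveals over- or
# under-dispersion for a given (z, s).
k = np.arange(100_000)
pk = dgood(k, z=0.6, s=2.0)
mean = np.sum(k * pk)
print("mean:", mean, "variance:", np.sum((k - mean) ** 2 * pk))
```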
|
statistics
|
Deep generative models are attracting great attention as a new promising approach for molecular design. All models reported so far are based on either a variational autoencoder (VAE) or a generative adversarial network (GAN). Here we propose a new type of model based on an adversarially regularized autoencoder (ARAE). It uses latent variables like a VAE, but the distribution of the latent variables is obtained by adversarial training as in a GAN. The latter is intended to avoid both the inappropriate approximation of the posterior distribution in VAEs and the difficulty of handling discrete variables in GANs. Our benchmark study showed that ARAE indeed outperforms conventional models in terms of validity, uniqueness, and novelty per generated molecule. We also demonstrate successful conditional generation of drug-like molecules with ARAE for both single- and multiple-property control. As a potential real-world application, we could generate EGFR inhibitors sharing the scaffolds of known active molecules while simultaneously satisfying drug-like conditions.
|
physics
|
The closure problem in fluid modeling is a well-known challenge for modelers aiming to accurately describe their system of interest. Over many years, analytic formulations have been presented for a wide range of regimes, but a practical, generalized fluid closure for magnetized plasmas remains an elusive goal. In this study, as a first step towards constructing a novel data-based approach to this problem, we apply ever-maturing machine learning methods to assess the capability of neural network architectures to reproduce crucial physics inherent in popular magnetized plasma closures. We find encouraging results, indicating the applicability of neural networks to closure physics, but also arrive at recommendations on how one should choose appropriate network architectures for the locality properties dictated by the underlying physics of the plasma.
|
physics
|
Cybersecurity of discrete event systems (DES) has been gaining more and more attention recently, due to its high relevance to the so-called 4th industrial revolution, which heavily relies on data communication among networked systems. One key challenge is how to ensure system resilience to sensor and/or actuator attacks, which may compromise data integrity and service availability. In this paper we focus on some key decidability issues related to smart sensor attacks. We first present a necessary and sufficient condition that ensures the existence of a smart sensor attack, which reveals a novel demand-supply relationship between an attacker and a controlled plant, represented as a set of risky pairs. Each risky pair consists of a damage string desired by the attacker and an observable sequence feasible in the supervisor such that the latter induces a sequence of control patterns which allows the damage string to happen. It turns out that each risky pair can induce a smart weak sensor attack. Next, we show that, when the plant, supervisor, and damage language are regular, it is computationally feasible to remove all such risky pairs from the plant behaviour via a genuine encoding scheme, upon which we are able to establish our key result that the existence of a nonblocking supervisor resilient to smart sensor attacks is decidable. To the best of our knowledge, this is the first result of its kind in the DES literature on cyber attacks. The proposed decision process yields a specific synthesis procedure that is guaranteed to compute a resilient supervisor whenever one exists, which so far has not been achieved in the literature.
|
computer science
|
How does system-level information impact the ability of an adversary to degrade performance in a networked control system? How does the complexity of an adversary's strategy affect its ability to degrade performance? This paper focuses on these questions in the context of graphical coordination games, where an adversary can influence a given fraction of the agents in the system and the agents follow log-linear learning, a well-known distributed learning algorithm. Focusing on a class of homogeneous ring graphs of various connectivity, we begin by demonstrating that minimally connected ring graphs are the most susceptible to adversarial influence. We then proceed to characterize how both (i) the sophistication of the attack strategies (static vs dynamic) and (ii) the informational awareness about the network structure can be leveraged by an adversary to degrade system performance. Focusing on the set of adversarial policies that induce stochastically stable states, our findings demonstrate that the relative importance of sophistication and information changes depending on the influence power of the adversary. In particular, sophistication far outweighs informational awareness in degrading system performance when the adversary's influence power is relatively weak. However, the opposite is true when the adversary's influence power is more substantial.
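The sketch below simulates log-linear learning on a ring with a static adversary that pins a fraction of the agents to the inferior convention; the payoff values, temperature, and adversary placement are illustrative assumptions.

```python
# Log-linear learning on a ring coordination game with a static adversary
# that fixes some agents to the inferior convention; values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, beta, alpha = 60, 4.0, 0.2

def pair_payoff(a, b):
    """Matched pairs: (x, x) -> 1 + alpha (superior), (y, y) -> 1, else 0."""
    if a == b:
        return 1.0 + alpha if a == 0 else 1.0
    return 0.0

a = rng.integers(0, 2, size=n)            # actions: 0 = x, 1 = y
adversarial = rng.choice(n, size=6, replace=False)
a[adversarial] = 1                        # static adversary pins these to y

for _ in range(20_000):
    i = rng.integers(n)
    if i in adversarial:
        continue                          # influenced agents never revise
    nbrs = [(i - 1) % n, (i + 1) % n]
    u = np.array([sum(pair_payoff(z, a[j]) for j in nbrs) for z in (0, 1)])
    p = np.exp(beta * u)
    a[i] = rng.choice(2, p=p / p.sum())   # log-linear (Gibbs) update

print("fraction on superior convention x:", np.mean(a == 0))
```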
|
electrical engineering and systems science
|
The values of the Hubble constant obtained from direct measurements by various independent local observations and the value inferred from the cosmic microwave background within the $\Lambda$-cold-dark-matter model are in tension with persistent significance. In this \textit{Letter}, we propose a late-time inhomogeneous resolution in which a chameleon field coupled to a local overdensity of matter could be trapped at a higher potential energy density, acting as an effective cosmological constant that drives the local expansion rate faster than that of the background with lower matter density. We illustrate this mechanism in a toy model in which a region with only a $20\%$ matter overdensity is sufficient to resolve the Hubble tension, and the Hubble constant measured by the local distance ladders could be accommodated by the chameleon coupled to the overdensities observed in large-scale structure surveys.
|
astrophysics
|
Motivation: Agent-based modeling is an indispensable tool for studying complex biological systems. However, existing simulators do not always take full advantage of modern hardware and often have a field-specific software design. Results: We present a novel simulation platform called BioDynaMo that alleviates both of these problems. BioDynaMo features a general-purpose and high-performance simulation engine. We demonstrate that BioDynaMo can be used to simulate use cases in: neuroscience, oncology, and epidemiology. For each use case we validate our findings with experimental data or an analytical solution. Our performance results show that BioDynaMo performs up to three orders of magnitude faster than the state-of-the-art baseline. This improvement makes it feasible to simulate each use case with one billion agents on a single server, showcasing the potential BioDynaMo has for computational biology research. Availability: BioDynaMo is an open-source project under the Apache 2.0 license and is available at www.biodynamo.org. Instructions to reproduce the results are available in supplementary information. Contact: [email protected], [email protected], [email protected], [email protected] Supplementary information: Available at https://doi.org/10.5281/zenodo.4501515
|
computer science
|
Exciton dynamics, lifetimes and scattering are directly related to the exciton dispersion, or bandstructure. While electron and phonon bandstructures are well understood and can be easily calculated from first principles, the exciton bandstructure is commonly conflated with the underlying electronic bandstructure, where the exciton dispersion is assumed to follow the same dispersion as the electron and hole bands from which it is composed (i.e., the effective mass model). Here, we present a general theory of exciton bandstructure within both ab initio and model Hamiltonian approaches. We show that contrary to common assumption, the exciton bandstructure contains non-analytical discontinuities -- a feature which is impossible to obtain from the electronic bandstructure alone. These discontinuities are purely quantum phenomena, arising from the exchange scattering of electron-hole pairs. We show that the degree of these discontinuities depends on materials' symmetry and dimensionality, with jump discontinuities occurring in 3D and different orders of removable discontinuities in 2D and 1D. We connect these unexpected features to the early stages of exciton dynamics, which shows remarkable correspondence with recent experimental observations and suggests that the measured diffusion patterns are influenced by the underlying exciton bandstructure.
|
condensed matter
|
We analyze the low-$Q^2$ behavior of the axial form factor $G_A(Q^2)$, the induced pseudoscalar form factor $G_P(Q^2)$, and the axial nucleon-to-$\Delta$ transition form factors $C^A_5(Q^2)$ and $C^A_6(Q^2)$. Building on the results of chiral perturbation theory, we first discuss $G_A(Q^2)$ in a chiral effective-Lagrangian model including the $a_1$ meson and determine the relevant coupling parameters from a fit to experimental data. With this information, the form factor $G_P(Q^2)$ can be predicted. For the determination of the transition form factor $C^A_5(Q^2)$ we make use of an SU(6) spin-flavor quark-model relation to fix two coupling constants such that only one free parameter is left. Finally, the transition form factor $C^A_6(Q^2)$ can be predicted in terms of $G_P(Q^2)$, the mean-square axial radius $\langle r^2_A\rangle$, and the mean-square axial nucleon-to-$\Delta$ transition radius $\langle r^2_{AN\Delta}\rangle$.
|
high energy physics phenomenology
|
Reversible jump Markov chain Monte Carlo (RJMCMC) is a Bayesian model estimation method which has been used for trans-dimensional sampling. In this study, we propose a utilization of RJMCMC beyond trans-dimensional sampling. This new interpretation, which we call trans-space RJMCMC, reveals the undiscovered potential of RJMCMC by exploiting the original formulation to explore spaces of different classes or structures. This provides flexibility in using different types of candidate classes in the combined model space, such as spaces of linear and nonlinear models or of various distribution families. As an application of the proposed method, we perform a special case of trans-space sampling, namely trans-distributional RJMCMC, in impulsive data modeling. In many areas, such as seismology, radar, and imaging, using Gaussian models is common practice due to analytical ease. However, many noise processes do not follow a Gaussian character and generally exhibit events too impulsive to be successfully described by the Gaussian model. We test the proposed method in choosing between various impulsive distribution families to model both synthetically generated noise processes and real-life measurements of power line communications (PLC) impulsive noise and 2-D discrete wavelet transform (2-D DWT) coefficients.
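A stripped-down illustration of trans-distributional sampling is given below: the chain jumps between a Gaussian and a Laplace noise family for the same data. Because both families share the same (location, scale) parameters, the cross-model move is dimension-preserving and the reversible-jump Jacobian is unity; this is a toy under those assumptions, not the paper's full RJMCMC machinery.

```python
# Toy trans-distributional sampler: jump between Gaussian and Laplace noise
# families with shared (mu, b), so the cross-model Jacobian is unity.
# Equal model priors are assumed; values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.laplace(loc=1.0, scale=0.8, size=300)   # impulsive-ish data

def loglik(model, mu, b):
    if b <= 0:
        return -np.inf
    dist = stats.norm(mu, b) if model == 0 else stats.laplace(mu, b)
    return dist.logpdf(data).sum()

model, mu, b = 0, 0.0, 1.0
counts = [0, 0]
for it in range(20_000):
    # Within-model random-walk update of (mu, b).
    mu_p, b_p = mu + 0.05 * rng.normal(), b + 0.05 * rng.normal()
    if np.log(rng.uniform()) < loglik(model, mu_p, b_p) - loglik(model, mu, b):
        mu, b = mu_p, b_p
    # Cross-model move: propose the other family with the same parameters.
    other = 1 - model
    if np.log(rng.uniform()) < loglik(other, mu, b) - loglik(model, mu, b):
        model = other
    counts[model] += 1

print("posterior model probs (gauss, laplace):", np.array(counts) / sum(counts))
```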
|
electrical engineering and systems science
|
The pursuit of superconducting-based quantum computers has advanced the fabrication of and experimentation with custom lattices of qubits and resonators. Here, we describe a roadmap to use present experimental capabilities to simulate an interacting many-body system of bosons and measure quantities that are exponentially difficult to calculate numerically. We focus on the two-dimensional hard-core Bose-Hubbard model implemented as an array of floating transmon qubits. We describe a control scheme for such a lattice that can perform individual qubit readout and show how the scheme enables the preparation of a highly excited many-body state, in contrast with atomic implementations restricted to the ground state or thermal equilibrium. We discuss what observables could be accessed and how they could be used to better understand the properties of many-body systems, including the observation of the transition of eigenstate entanglement entropy scaling from area-law to volume-law behavior.
|
quantum physics
|
In this paper we study a hysteretic phase transition from a weak localization phase to a hysteretic magnetoconductance phase using gauge/gravity duality. This hysteretic phase is triggered by a spontaneous magnetization related to $\mathbb{Z}_2$ symmetry and time reversal symmetry in a 2+1 dimensional system with momentum relaxation. We derive thermoelectric conductivity formulas describing the non-hysteretic and hysteretic phases. At low temperatures, this magnetoconductance shows phase transitions similar to those of topological insulator surface states. We also obtain hysteresis curves of the Seebeck coefficient and the Nernst signal. It turns out that our impurity parameter plays the role of a magnetic impurity. This is justified by showing that the susceptibility and the spontaneous magnetization increase with the impurity parameter.
|
high energy physics theory
|
We introduce a quantum interferometric scheme that uses states that are sharp in frequency and delocalized in position. The states are frequency modes of a quantum field that is trapped at all times in a finite-volume potential, such as a small box potential. This allows for significant miniaturization of interferometric devices. Since the modes are in contact at all times, it is possible to estimate physical parameters of global multi-mode channels. As an example, we introduce a three-mode scheme and calculate precision bounds for the estimation of parameters of two-mode Gaussian channels. This scheme can be implemented in several systems, including superconducting circuits, cavity QED, and cold atoms. We consider a concrete implementation using the ground state and two phononic modes of a trapped Bose-Einstein condensate. We apply this to show that frequency interferometry can improve the sensitivity of phononic gravitational wave detectors by several orders of magnitude, even if squeezing is much smaller than previously assumed and the system suffers from short phononic lifetimes. Other applications range from magnetometry, gravimetry, and gradiometry to dark matter/energy searches.
|
quantum physics
|
We analyze the cold dark matter density profiles of 54 galaxy halos simulated with FIRE-2 galaxy formation physics, each resolved within $0.5\%$ of the halo virial radius. These halos contain galaxies with masses that range from ultra-faint dwarfs ($M_\star \simeq 10^{4.5} M_{\odot}$) to the largest spirals ($M_\star \simeq 10^{11} M_{\odot}$) and have density profiles that are both cored and cuspy. We characterize our results using a new analytic density profile that extends the standard Einasto form to allow for a pronounced constant-density core within the resolved innermost radius. With one additional core-radius parameter, $r_{c}$, this "core-Einasto" profile is able to characterize the shape and normalization of our feedback-impacted dark matter halos. In order to enable comparisons with observations, we provide fitting functions for $r_{c}$ and other profile parameters as a function of both $M_\star$ and $M_{\star}/M_{\rm halo}$. In agreement with similar studies in the literature, we find that dark matter core formation is most efficient at the characteristic stellar-mass to halo-mass ratio $M_\star/M_{\rm halo} \simeq 5 \times 10^{-3}$, or $M_{\star} \sim 10^9 \, M_{\odot}$, with cores that are roughly the size of the galaxy half-light radius, $r_{c} \simeq 1-5$ kpc. Furthermore, we find no evidence for core formation at radii $\gtrsim 100\ \rm pc$ in galaxies with $M_{\star}/M_{\rm halo} < 5\times 10^{-4}$ or $M_\star \lesssim 10^6 \, M_{\odot}$. For Milky Way-size galaxies, baryonic contraction often makes halos significantly more concentrated and dense at the stellar half-light radius than dark-matter-only runs. However, even at the Milky Way scale, FIRE-2 galaxy formation still produces small dark matter cores of $\simeq 0.5-2$ kpc in size. Recent evidence for a ${\sim} 2$ kpc core in the Milky Way's dark matter halo is consistent with this expectation.
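One natural realization of such a profile, consistent with the abstract's description, shifts the Einasto radius by a core radius so the density flattens inside $r_c$; the functional form and parameter values below are illustrative, and the paper should be consulted for the exact definition and fits.

```python
# A cored variant of the Einasto profile, flattening to a constant density
# inside r_c; one plausible form matching the abstract's description,
# with illustrative (not fitted) parameter values.
import numpy as np

def core_einasto(r, rho_s, r_s, alpha, r_c):
    """rho(r) = rho_s * exp(-(2/alpha) * (((r + r_c)/r_s)**alpha - 1))."""
    return rho_s * np.exp(-(2.0 / alpha) * (((r + r_c) / r_s) ** alpha - 1.0))

r = np.logspace(-2, 2, 200)                                       # kpc
rho = core_einasto(r, rho_s=1e7, r_s=10.0, alpha=0.17, r_c=2.0)   # Msun/kpc^3
print(rho[0], rho[-1])   # flat core at small r, Einasto falloff at large r
```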
|
astrophysics
|
Understanding the physics of strongly correlated electronic systems has been a central issue in condensed matter physics for decades. In transition metal oxides, the strong correlations characteristic of narrow $d$ bands are at the origin of such remarkable properties as the Mott gap opening, enhanced effective mass, and anomalous vibronic coupling, to mention a few. SrVO$_3$, with V$^{4+}$ in a $3d^1$ electronic configuration, is the simplest example of a 3D correlated metallic electronic system. Here, we focus on the observation of a (roughly) quadratic temperature dependence of the inverse electron mobility of this seemingly simple system, an intriguing property shared by other metallic oxides. A systematic analysis of electronic transport in SrVO$_3$ thin films discloses the limitations of the simplest picture of e-e correlations in a Fermi liquid; instead, we show that the quasi-2D topology of the Fermi surface and a strong electron-phonon coupling, which dresses the carriers with a phonon cloud, play a pivotal role in the reported electron spectroscopic, optical, thermodynamic, and transport data. The picture that emerges is not restricted to SrVO$_3$ but can be shared by other $3d$ and $4d$ metallic oxides.
|
condensed matter
|
This study proposes different approaches for static and dynamic gesture analysis and imitation with the social robot Nao.
|
computer science
|
One of the cornerstones of extremal graph theory is a result of F\"uredi, later reproved and given due prominence by Alon, Krivelevich and Sudakov, saying that if $H$ is a bipartite graph with maximum degree $r$ on one side, then there is a constant $C$ such that every graph with $n$ vertices and $C n^{2 - 1/r}$ edges contains a copy of $H$. This result is tight up to the constant when $H$ contains a copy of $K_{r,s}$ with $s$ sufficiently large in terms of $r$. We conjecture that this is essentially the only situation in which F\"uredi's result can be tight and prove this conjecture for $r = 2$. More precisely, we show that if $H$ is a $C_4$-free bipartite graph with maximum degree $2$ on one side, then there are positive constants $C$ and $\delta$ such that every graph with $n$ vertices and $C n^{3/2 - \delta}$ edges contains a copy of $H$. This answers a question of Erd\H{o}s from 1988. The proof relies on a novel variant of the dependent random choice technique which may be of independent interest.
|
mathematics
|
Federated learning (FL) is a machine learning technique that aims at training an algorithm across decentralized entities holding their local data private. Wireless mobile networks allow users to communicate with other fixed or mobile users. The road traffic network represents an infrastructure-based configuration of a wireless mobile network in which Connected and Automated Vehicles (CAV) are the communicating entities. Applying FL in a wireless mobile network setting gives rise to a new threat in the mobile environment that is very different from those in traditional fixed networks. The threat stems from the intrinsic characteristics of the wireless medium and of vehicular networks, such as high node mobility and rapidly changing topology. Most cyber defense techniques depend on highly reliable and connected networks. This paper explores falsified-information attacks, which target the FL process ongoing at the roadside unit (RSU). We identify a number of attack strategies conducted by malicious CAVs to disrupt the training of the global model in vehicular networks. We show that the attacks are able to increase the convergence time and decrease the accuracy of the model. We demonstrate that our attacks bypass FL defense strategies in their primary form and highlight the need for novel poisoning-resilience defense mechanisms in the wireless mobile setting of future road networks.
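The sketch below shows one such falsified-information attack in a minimal federated-averaging round: a subset of clients flips labels before local training. The logistic model, synthetic data, and attacker fraction are toy stand-ins for the vehicular-network setting.

```python
# Minimal FedAvg rounds with label-flipping attackers; model, data, and
# attacker strategy are toy stand-ins for the vehicular-network setting.
import numpy as np

rng = np.random.default_rng(5)
d, n_clients, n_malicious = 5, 10, 3
w_true = rng.normal(size=d)

def client_data(m=50):
    X = rng.normal(size=(m, d))
    y = (X @ w_true > 0).astype(float)
    return X, y

def local_sgd(w, X, y, lr=0.1, epochs=5):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # logistic regression
        w = w - lr * X.T @ (p - y) / len(y)
    return w

w = np.zeros(d)
for rnd in range(30):
    updates = []
    for c in range(n_clients):
        X, y = client_data()
        if c < n_malicious:
            y = 1.0 - y                      # attacker flips labels
        updates.append(local_sgd(w.copy(), X, y))
    w = np.mean(updates, axis=0)             # plain FedAvg at the RSU

X_te, y_te = client_data(2000)
acc = np.mean((X_te @ w > 0) == y_te.astype(bool))
print("global model accuracy under attack:", acc)
```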
|
computer science
|
Xenon time projection chambers (TPCs) have become a well-established detection technology for neutrinoless double beta decay searches in $^{136}$Xe. I discuss the motivations for this choice. I describe the status and prospects of both liquid and gaseous xenon TPC projects for double beta decay.
|
physics
|
Recurrent neural networks (RNNs) are powerful tools for sequential modeling, but typically require significant overparameterization and regularization to achieve optimal performance. This leads to difficulties in the deployment of large RNNs in resource-limited settings, while also introducing complications in hyperparameter selection and training. To address these issues, we introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell using a lightweight tensor-train (TT) factorization. This approach represents a novel form of weight sharing which reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs. Experiments on image classification and speaker verification tasks demonstrate further benefits for reducing inference times and stabilizing model training and hyperparameter selection.
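To illustrate the compression at the heart of the approach, the sketch below stores a weight matrix as tensor-train cores and reconstructs it by contracting them; the mode sizes and TT ranks are illustrative, and the paper factorizes the concatenated recurrent weight matrices rather than a single random matrix.

```python
# Tensor-train (TT) compression of a weight matrix: small cores are stored
# and the dense matrix is reconstructed by contraction. Shapes and ranks
# are illustrative assumptions.
import numpy as np

# Target: a 64 x 64 weight matrix viewed as a (4,4,4) x (4,4,4) tensor.
in_modes, out_modes, ranks = [4, 4, 4], [4, 4, 4], [1, 3, 3, 1]

rng = np.random.default_rng(6)
cores = [rng.normal(scale=0.1,
                    size=(ranks[k], in_modes[k], out_modes[k], ranks[k + 1]))
         for k in range(3)]

def tt_to_matrix(cores, in_modes, out_modes):
    full = cores[0]
    for core in cores[1:]:
        # Contract the trailing rank index with the next core's leading rank.
        full = np.tensordot(full, core, axes=([-1], [0]))
    full = full.squeeze()                    # drop boundary ranks of size 1
    d = len(in_modes)
    # Reorder (i1,o1,i2,o2,...) -> (i1,...,id,o1,...,od), then flatten.
    perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
    return full.transpose(perm).reshape(int(np.prod(in_modes)),
                                        int(np.prod(out_modes)))

W = tt_to_matrix(cores, in_modes, out_modes)
n_tt = sum(c.size for c in cores)
print(W.shape, f"TT params: {n_tt} vs dense: {W.size}")
```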
|
computer science
|
Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora. One of the fundamental components of pre-trained language models is the vocabulary, especially for training multilingual models on many different languages. In this technical report, we present our practices on training multilingual pre-trained language models with BBPE: Byte-Level BPE (i.e., Byte Pair Encoding). In our experiments, we adopted the architecture of NEZHA as the underlying pre-trained language model, and the results show that NEZHA trained with byte-level subwords consistently outperforms Google multilingual BERT and vanilla NEZHA by a notable margin in several multilingual NLU tasks. We release the source code of our byte-level vocabulary building tools and the multilingual pre-trained language models.
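For intuition, here is a minimal byte-level BPE learner: text is first mapped to its UTF-8 bytes, so a base vocabulary of 256 symbols covers any language, and merges are then learned over byte sequences. This is a toy illustration of the idea, not the vocabulary-building tool released with the report.

```python
# Minimal byte-level BPE: learn merges over UTF-8 bytes so any text in any
# language is covered by the 256 base byte symbols. Toy illustration only.
from collections import Counter

def learn_bbpe(texts, n_merges=50):
    # Start from raw UTF-8 bytes; tokens are ints or nested merge tuples.
    corpus = [tuple(t.encode("utf-8")) for t in texts]
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for seq in corpus:
            pairs.update(zip(seq, seq[1:]))   # count adjacent symbol pairs
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]     # most frequent pair to merge
        merges.append(best)
        new_corpus = []
        for seq in corpus:
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                    out.append(best)          # merged symbol replaces the pair
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_corpus.append(tuple(out))
        corpus = new_corpus
    return merges, corpus

merges, corpus = learn_bbpe(["低资源语言 low-resource text", "low low lower"])
print(len(merges), corpus[1])
```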
|
computer science
|
In this paper, we propose a method called the superpixel tensor pooling tracker, which fuses multiple mid-level cues captured by superpixels into sparse pooled tensor features. Our method first adopts the superpixel method to generate different patches (superpixels) from the target template or candidates. Then, for each superpixel, it encodes different mid-level cues, including HSI color, RGB color, and spatial coordinates, into a histogram matrix to construct a new feature space. Next, these matrices are formed into a third-order tensor. After that, the tensor is pooled into the sparse representation. Then incremental learning of the positive and negative subspaces is performed. Our method inherits the good characteristics of both mid-level cues and sparse representation, and hence is more robust to large appearance variations and can capture a compact and informative appearance of the target object. To validate the proposed method, we compare it with state-of-the-art methods on 24 sequences with multiple visual tracking challenges. Experimental results demonstrate that our method outperforms them significantly.
|
electrical engineering and systems science
|
During the first few hundred days after the explosion, core-collapse supernovae (SNe) emit down-scattered X-rays and gamma-rays originating from radioactive line emissions, primarily from the $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe chain. We use SN models based on three-dimensional neutrino-driven explosion simulations of single stars and mergers to compute this emission and compare the predictions with observations of SN 1987A. A number of models are clearly excluded, showing that high-energy emission is a powerful way of discriminating between models. The best models are almost consistent with the observations, but differences that cannot be matched by a suitable choice of viewing angle are evident. Therefore, our self-consistent models suggest that neutrino-driven explosions are able to produce, in principle, sufficient mixing, although remaining discrepancies may require small changes to the progenitor structures. The soft X-ray cutoff is primarily determined by the metallicity of the progenitor envelope. The main effect of asymmetries is to vary the flux level by a factor of ${\sim}$3. For the more asymmetric models, the shapes of the light curves also change. In addition to the models of SN 1987A, we investigate two models of Type II-P SNe and one model of a stripped-envelope Type IIb SN. The Type II-P models have similar observables as the models of SN 1987A, but the stripped-envelope SN model is significantly more luminous and evolves faster. Finally, we make simple predictions for future observations of nearby SNe.
|
astrophysics
|
We study the nucleon's partonic angular momentum (AM) content at peripheral transverse distances $b = \mathcal{O}(M_\pi^{-1})$, where the structure is governed by chiral dynamics. We compute the nucleon form factors of the energy-momentum tensor in chiral effective field theory (ChEFT) and construct the transverse densities of AM at fixed light-front time. In the periphery the spin density is suppressed, and the AM is predominantly orbital. In the first-quantized representation of ChEFT in light-front form, the field-theoretical AM density coincides with the quantum-mechanical orbital AM density of the soft pions in the nucleon's periphery.
|
high energy physics phenomenology
|
In this paper, curved fronts are constructed for spatially periodic bistable reaction-diffusion equations under the a priori assumption that there exist pulsating fronts in every direction. Some sufficient conditions and some necessary conditions for the existence of curved fronts are given. Furthermore, the curved front is proved to be unique and stable. Finally, a curved front with varying interfaces is also constructed. Despite the effect of the spatial heterogeneity, these results establish the existence of curved fronts for spatially periodic bistable reaction-diffusion equations, as is known for the homogeneous case.
|
mathematics
|
In this paper I examine snow crystal growth near -4 C in comparison with a comprehensive model that includes Structure-Dependent Attachment Kinetics (SDAK). Together with the previous paper in this series, which investigated growth near -14 C, I show that a substantial body of experimental data now supports the existence of pronounced 'SDAK dips' on basal surfaces near -4 C and on prism surfaces near -14 C. In both cases, the model suggests that edge-associated surface diffusion greatly reduces the nucleation barrier on narrow facet surfaces relative to that found on broad facets. The remarkable quantitative similarities in the growth behaviors near -4 C and -14 C suggest that these two SDAK features arise from essentially the same physical mechanism occurring at different temperatures on the two principal facets. When applied to atmospheric snow crystal formation, this comprehensive model can explain the recurrent morphological transitions between platelike and columnar growth seen in the Nakaya diagram.
|
condensed matter
|
We study the problem of learning personalized decision policies from observational data while accounting for possible unobserved confounding. Previous approaches, which assume unconfoundedness, i.e., that no unobserved confounders affect both the treatment assignment as well as outcome, can lead to policies that introduce harm rather than benefit when some unobserved confounding is present, as is generally the case with observational data. Instead, since policy value and regret may not be point-identifiable, we study a method that minimizes the worst-case estimated regret of a candidate policy against a baseline policy over an uncertainty set for propensity weights that controls the extent of unobserved confounding. We prove generalization guarantees that ensure our policy will be safe when applied in practice and will in fact obtain the best-possible uniform control on the range of all possible population regrets that agree with the possible extent of confounding. We develop efficient algorithmic solutions to compute this confounding-robust policy. Finally, we assess and compare our methods on synthetic and semi-synthetic data. In particular, we consider a case study on personalizing hormone replacement therapy based on observational data, where we validate our results on a randomized experiment. We demonstrate that hidden confounding can hinder existing policy learning approaches and lead to unwarranted harm, while our robust approach guarantees safety and focuses on well-evidenced improvement, a necessity for making personalized treatment policies learned from observational data reliable in practice.
|
computer science
|
Accurate calculation of the ion-ion recombination rate coefficient has been of long-standing interest, as it controls the ion concentration in gas-phase systems and in aerosols. We describe the development of a hybrid continuum-molecular dynamics approach to determine the ion-ion recombination rate coefficient. The approach is based on the limiting sphere method classically used for transition regime collision phenomena in aerosols. When ions are sufficiently far from one another, ion-ion relative motion is described by diffusion equations, while within a critical distance, molecular dynamics (MD) simulations are used to model ion-ion motion. MD simulations are parameterized using the AMBER force field as well as by considering partial charges on atoms. Ion-neutral gas collisions are modeled in two mutually exclusive cubic domains composed of 10^3 gas atoms each, which remain centered on the recombining ions throughout calculations. Example calculations are reported for NH4+ recombination with NO2- in He, across a pressure range from 10 kPa to 10,000 kPa. Excellent agreement is found in comparison of calculations to literature values for the 100 kPa recombination rate coefficient (1.0 x 10^-12 m^3 s^-1) in He. We also recover the experimentally observed increase in recombination rate coefficient with pressure at sub-atmospheric pressures, and the observed decrease in recombination rate coefficient in the high-pressure continuum limit. We additionally find that non-dimensionalized forms of rate coefficients are consistent with recently developed equations for the dimensionless charged particle-ion collision rate coefficient based on Langevin dynamics simulations.
|
physics
|
We consider the current operators of one dimensional integrable models. These operators describe the flow of the conserved charges of the models, and they play a central role in Generalized Hydrodynamics. We present the key statements about the mean currents in finite volume and in the thermodynamic limit, and we review the various proofs of the exact formulas. We also present a few new results in this review. New contributions include a computation of the currents of the Heisenberg spin chains using the string hypothesis, and simplified formulas in the thermodynamic limit. We also discuss implications of our results for the asymptotic behaviour of dynamical correlation functions.
|
condensed matter
|
Nanomechanical resonators are used in building ultra-sensitive mass and force sensors. In a widely used resonator based sensing paradigm, each modal resonance frequency is tracked with a phase-locked loop (PLL) based system. There is great interest in deciphering the fundamental sensitivity limitations due to inherent noise and fluctuations in PLL based resonant sensors to improve their performance. In this paper, we present a precise, first-principles based theory for the analysis of PLL based resonator tracking systems. Based on this theory, we develop a general, rigorously-derived noise analysis framework for PLL based sensors. We apply this framework to a setting where the sensor performance is mainly limited by the thermomechanical noise of the nanomechanical resonator. The results that are deduced through our analysis framework are in complete agreement with the ones we obtain from extensive, carefully run stochastic simulations of a PLL based sensor system. We compare the conclusions we derive with the recent results in the literature. Our theory and analysis framework can be used in assessing PLL based sensor performance with other sources of noise, e.g., from the electronic components, actuation and sensing mechanisms, and due to the signal generator, as well as for a variety of PLL based sensor configurations such as multi-mode and nonlinear sensing.
|
physics
|
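To make the tracking loop concrete, here is a toy discrete-time simulation assuming a linearized resonator phase response near resonance (slope $-2Q/f_0$) and white phase noise as a crude stand-in for thermomechanical noise; the gains, drift, and noise level are arbitrary illustrative values rather than the paper's models.

```python
import numpy as np

Q, f0 = 10_000.0, 1e6     # quality factor and initial resonance frequency [Hz]
f_drive = f0              # PLL's current drive frequency
Ki = 2.0                  # integral gain of the loop filter [Hz per rad]
rng = np.random.default_rng(1)

errors = []
for n in range(5000):
    f_res = f0 + 0.1 * n                       # slowly drifting resonance
    # linearized phase response near resonance plus white phase noise
    phase_err = -2 * Q * (f_drive - f_res) / f_res + rng.normal(0.0, 0.01)
    f_drive += Ki * phase_err                  # integral controller nulls phase
    errors.append(f_drive - f_res)             # instantaneous tracking error

print("rms tracking error [Hz]:", np.std(errors))
```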
We discuss the connection between the origin of neutrino masses and the properties of dark matter candidates in the context of gauge extensions of the Standard Model. We investigate minimal gauge theories for neutrino masses where the neutrinos are predicted to be Dirac or Majorana fermions. We find that the upper bound on the effective number of relativistic species provides a strong constraint in the scenarios with Dirac neutrinos. In the context of theories where the lepton number is a local gauge symmetry spontaneously broken at the low scale, the existence of dark matter is predicted from the condition of anomaly cancellation. Applying the cosmological bound on the dark matter relic density, we find an upper bound on the symmetry breaking scale in the multi-TeV region. These results imply that simple gauge theories for neutrino masses could be tested at current or future experiments.
|
high energy physics phenomenology
|
The quantum anomalies at the edges correspond to the topological phases in the system, and the chiral edge states can reflect the bulk bands' topological properties. In this paper, we demonstrate a simulation of a Floquet system's chiral edge states in a position-shaken finite-size honeycomb optical lattice. Through the periodic shaking, we break the time-reversal symmetry of the system and obtain topologically non-trivial states with non-zero Chern number. In the topologically non-trivial region, we find chiral edge states on different sides of the lattice, and the locations of the chiral edge states change with the topological phase. Further, gapless boundary excitations are found to appear at the topological phase transition points. This provides a new scheme to simulate chiral edge states in Floquet systems and promotes the study of gapless boundary excitations.
|
condensed matter
|
We introduce the problem of learning-based attacks in a simple abstraction of cyber-physical systems---the case of a discrete-time, linear, time-invariant plant that may be subject to an attack that overrides the sensor readings and the controller actions. The attacker attempts to learn the dynamics of the plant and subsequently override the controller's actuation signal, to destroy the plant without being detected. The attacker can feed fictitious sensor readings to the controller using its estimate of the plant dynamics and mimic the legitimate plant operation. The controller, on the other hand, is constantly on the lookout for an attack; once the controller detects an attack, it immediately shuts the plant off. In the case of scalar plants, we derive an upper bound on the attacker's deception probability for any measurable control policy when the attacker uses an arbitrary learning algorithm to estimate the system dynamics. We then derive lower bounds for the attacker's deception probability for both scalar and vector plants by assuming a specific authentication test that inspects the empirical variance of the system disturbance. We also show how the controller can improve the security of the system by superimposing a carefully crafted privacy-enhancing signal on top of the "nominal control policy." Finally, for nonlinear scalar dynamics that belong to the Reproducing Kernel Hilbert Space (RKHS), we investigate the performance of attacks based on nonlinear Gaussian-processes (GP) learning algorithms.
|
electrical engineering and systems science
|
A unique filament is identified in the {\it Herschel} maps of the Orion A giant molecular cloud. The filament, which we name the Stick, is ruler-straight and at an early evolutionary stage. Transverse position-velocity diagrams show two velocity components closing in on the Stick. The filament shows consecutive rings/forks in C$^{18}$O(1-0) channel maps, which is reminiscent of structures generated by magnetic reconnection. We propose that the Stick formed via collision-induced magnetic reconnection (CMR). We use the magnetohydrodynamics (MHD) code Athena++ to simulate the collision between two diffuse molecular clumps, each carrying an anti-parallel magnetic field. The clump collision produces a narrow, straight, dense filament with a factor of $>$200 increase in density. The production of the dense gas is seven times faster than free-fall collapse. The dense filament shows ring/fork-like structures in radiative transfer maps. Cores in the filament are confined by surface magnetic pressure. CMR can be an important dense-gas-producing mechanism in the Galaxy and beyond.
|
astrophysics
|
The derivation of the complete anti-D3-brane low energy effective action in KKLT is reviewed. All worldvolume fields are included, together with the background moduli. The result is recast into a manifest supersymmetric form in terms of the three independent functions of $\mathcal{N}=1$ supergravity in four dimensions: the K\"ahler potential, the superpotential and the gauge kinetic function. The latter differs from the expression one would expect by analogy with the D3-brane case.
|
high energy physics theory
|
In addition to ever-present thermal noise, various communication and sensor systems can contain significant amounts of interference with outlier (e.g. impulsive) characteristics. Such outlier noise can be efficiently mitigated in real-time using intermittently nonlinear filters. Depending on the noise nature and composition, improvements in the quality of the signal of interest will vary from "no harm" to substantial. In this paper, we explain in detail why the underlying outlier nature of interference often remains obscured, discussing the many challenges and misconceptions associated with state-of-the-art analog and/or digital nonlinear mitigation techniques, especially when addressing complex practical interference scenarios. We then focus on the methodology and tools for real-time outlier noise mitigation, demonstrating how the "excess band" observation of outlier noise enables its efficient in-band mitigation. We introduce the basic real-time nonlinear components that are used for outlier noise filtering, and provide examples of their implementation. We further describe complementary nonlinear filtering arrangements for wide- and narrow-band outlier noise reduction, providing several illustrations of their performance and the effect on channel capacity. Finally, we outline "effectively analog" digital implementations of these filtering structures, discuss their broader applications, and comment on the ongoing development of the platform for their demonstration and testing.
|
electrical engineering and systems science
|
The measurements of the ratios $R_{K^{(*)}}$ along with $R_{D^{(*)}}$ hint towards lepton flavor non-universality, in disagreement with the standard model. In this work, we reanalyze the four new physics models which are widely studied in the literature as candidates for the simultaneous explanation of these measurements. These are the standard model like vector boson (VB), $SU(2)_L$-singlet vector leptoquark ($U_1$), $SU(2)_L$-triplet scalar leptoquark ($S_3$) and $SU(2)_L$-triplet vector leptoquark ($U_3$) models. We assume a coupling only to the third generation in the weak basis, so that the $b \to s \mu^+ \mu^-$ transition is generated only via mixing effects. Performing a global fit to all relevant data, we show that the vector boson model violates the current upper bound on ${Br}(\tau \to 3\mu)$ and hence is inconsistent with the present data. Further, we show that within this framework, the $U_1$ leptoquark model cannot simultaneously accommodate the $R_{K^{(*)}}$ and $R_{D^{(*)}}$ measurements. We emphasize that this conclusion is independent of the additional constraints coming from renormalization group running effects and high-$p_T$ searches. In addition, we show that the $S_3$ and $U_3$ models are highly disfavored by the constraints coming from $b\to s \nu \bar \nu$ data. Finally, we find that the hypothesis of two LQ particles is also challenged by $b\to s \nu \bar \nu$ data.
|
high energy physics phenomenology
|
Soon, the combination of electromagnetic and gravitational signals will open the door to a new era of gravitational-wave (GW) cosmology. It will allow us to test the propagation of tensor perturbations across cosmic time and study the distribution of their sources over large scales. In this work, we show how machine learning techniques can be used to reconstruct new physics by leveraging the spatial correlation between GW mergers and galaxies. We explore the possibility of jointly reconstructing the modified GW propagation law and the linear bias of GW sources, as well as breaking the slight degeneracy between them by combining multiple techniques. We show predictions roughly based on a network of Einstein Telescopes combined with a high-redshift galaxy survey ($z\lesssim3$). Moreover, we investigate how these results can be re-scaled to other instrumental configurations. In the long run, we find that obtaining accurate and precise luminosity distance measurements (extracted directly from the individual GW signals) will be the most important factor to consider when maximizing constraining power.
|
astrophysics
|
In this paper, we present an overview of different types of random walk strategies with local and non-local transitions on undirected connected networks. We present a general approach to analyzing these strategies by defining the dynamics as a discrete time Markovian process with probabilities of transition expressed in terms of a symmetric matrix of weights. In the first part, we describe the matrices of weights that define local random walk strategies like the normal random walk, biased random walks, random walks in the context of digital image processing and maximum entropy random walks. In addition, we explore non-local random walks like L\'evy flights on networks, fractional transport and applications in the context of human mobility. Explicit relations for the stationary probability distribution, the mean first passage time and global times to characterize the random walk strategies are obtained in terms of the elements of the matrix of weights and its respective eigenvalues and eigenvectors. Finally, we apply the results to the analysis of particular local and non-local random walk strategies; we discuss their efficiency and capacity to explore different types of structures. Our results allow us to study and compare on the same basis the global dynamics of different types of random walk strategies.
|
condensed matter
|
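A minimal sketch of the shared machinery, assuming an undirected, connected, aperiodic graph: the transition matrix follows from the symmetric weight matrix as $P_{ij} = w_{ij}/s_i$ with strength $s_i = \sum_j w_{ij}$, the stationary distribution is $\pi_i = s_i/\sum_j s_j$ by detailed balance, and mean first passage times are computed here via the Kemeny-Snell fundamental matrix (the review instead expresses them through the eigenvalues and eigenvectors of the weight matrix).

```python
import numpy as np

def walk_quantities(W):
    """Transition matrix, stationary distribution and mean first passage times
    of the discrete-time random walk defined by a symmetric weight matrix W."""
    s = W.sum(axis=1)                        # generalized strength of each node
    P = W / s[:, None]                       # P_ij = w_ij / s_i
    pi = s / s.sum()                         # stationary distribution
    n = len(W)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    T = (np.diag(Z)[None, :] - Z) / pi[None, :]   # T_ij = (Z_jj - Z_ij) / pi_j
    return P, pi, T

# normal random walk: the weights are the adjacency matrix itself
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
P, pi, T = walk_quantities(A)
print(pi)          # proportional to node degree for the normal walk
print(T[0, 3])     # mean number of steps from node 0 to node 3
```

Biased or non-local strategies fit the same template by changing only W, e.g. $w_{ij} = A_{ij}(k_i k_j)^{\beta}$ for degree-biased walks, or a decaying power of the graph distance for L\'evy-flight-like transitions.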
The realization of air-stable 2D metals epitaxial to SiC and capped by graphene creates a potentially immense chemical space of 2D metals and alloys that could expand the variety of solid-state excitations unique to 2D metals beyond what is known for graphene and niobium/tantalum chalcogenides. We perform a high-throughput computational survey from first-principles predicting the structures and stabilities of all metals in the periodic table when they intercalate graphene/SiC. Our results not only agree with all experimentally known metal/SiC structures explored so far, but also reveal conspicuous trends related to metal cohesive energies and metal-silicon bonding. For special groups of metals, a small bandgap opens, relying on appropriate electron filling and substrate-induced symmetry breaking. From this gap-opening stabilization, we derive alloying rules unique to 2D metals.
|
condensed matter
|
In recent months the world has been surprised by the rapid advance of COVID-19. In order to face this disease and minimize its socio-economic impacts, in addition to surveillance and treatment, diagnosis is a crucial procedure. However, this is hampered by delays in and limited access to laboratory tests, demanding new strategies to carry out case triage. In this scenario, deep learning models are being proposed as a possible option to assist the diagnostic process based on chest X-ray and computed tomography images. Therefore, this research aims to automate the process of detecting COVID-19 cases from chest images, using convolutional neural networks (CNNs) through deep learning techniques. The results can contribute to expand access to other forms of detection of COVID-19 and to speed up the process of identifying this disease. All databases used, the codes built, and the results obtained from the models' training are available for open access. This action facilitates the involvement of other researchers in enhancing these models, since it can contribute to the improvement of results and, consequently, the progress in confronting COVID-19.
|
electrical engineering and systems science
|
In scientific machine learning, regression networks have been recently applied to approximate solution maps (e.g., the potential-ground state map of the Schr\"odinger equation). To reduce the generalization error, the regression network needs to be fit on a large number of training samples (e.g., a collection of potential-ground state pairs). The training samples can be produced by running numerical solvers, which takes much time in many applications. In this paper, we aim to reduce the generalization error without spending more time in generating training samples. Inspired by few-shot learning techniques, we develop the Multi-Level Fine-Tuning algorithm by introducing levels of training: we first train the regression network on samples generated at the coarsest grid and then successively fine-tune the network on samples generated at finer grids. Within the same amount of time, numerical solvers generate more samples on coarse grids than on fine grids. We demonstrate a significant reduction of generalization error in numerical experiments on challenging problems with oscillations, discontinuities, or rough coefficients. Further analysis can be conducted in the Neural Tangent Kernel regime, and we provide practical estimators of the generalization error. The number of training samples at different levels can be optimized for the smallest estimated generalization error under the constraint of the budget for training data. The optimized distribution of budget over levels provides practical guidance with theoretical insight, as in the celebrated Multi-Level Monte Carlo algorithm.
|
mathematics
|
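The level structure can be sketched as below, with a toy solver whose bias shrinks at finer levels standing in for a real numerical solver and a tiny hand-rolled network in place of the regression networks; the sample counts, learning rate, and architecture are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
true_map = lambda x: np.sin(3 * x)          # the ideal solution map

def solver(x, level):
    """Stand-in for a numerical solver: finer levels are more accurate but,
    in a real application, more expensive; here the bias halves per level."""
    return true_map(x) + (0.3 / 2 ** level) * rng.normal(size=np.shape(x))

# one-hidden-layer network trained by plain SGD on squared error
W1 = rng.normal(0, 1, (16, 1)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (1, 16)); b2 = np.zeros(1)

def sgd_epoch(xs, ys, lr):
    global W1, b1, W2, b2
    for x, y in zip(xs, ys):
        h = np.tanh(W1 @ [x] + b1)
        g = 2 * (W2 @ h + b2 - y)            # dLoss/dprediction
        gh = W2.T @ g * (1 - h ** 2)
        W2 -= lr * np.outer(g, h); b2 -= lr * g
        W1 -= lr * np.outer(gh, [x]); b1 -= lr * gh

# multi-level fine-tuning: many cheap coarse samples first, few fine ones last
for level, n_samples in enumerate([2000, 500, 100]):
    xs = rng.uniform(-1, 1, n_samples)
    ys = solver(xs, level)
    for _ in range(5):
        sgd_epoch(xs, ys, lr=0.01)
```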
We establish a margin based data dependent generalization error bound for a general family of deep neural networks in terms of the depth and width, as well as the Jacobian of the networks. Through introducing a new characterization of the Lipschitz properties of the neural network family, we achieve significantly tighter generalization bounds than existing results. Moreover, we show that the generalization bound can be further improved for bounded losses. Aside from the general feedforward deep neural networks, our results can be applied to derive new bounds for popular architectures, including convolutional neural networks (CNNs) and residual networks (ResNets). When achieving the same generalization error as previous results, our bounds allow for the choice of larger parameter spaces of weight matrices, inducing potentially stronger expressive ability for neural networks. Numerical evaluation is also provided to support our theory.
|
computer science
|
We investigate the maximum purity that can be achieved by k-uniform mixed states of N parties. Such N-party states have the property that all their k-party reduced states are maximally mixed. A scheme to explicitly construct k-uniform states using a set of specific N-qubit Pauli matrices is proposed. We provide several different examples of such states and demonstrate that in some cases the state corresponds to a particular orthogonal array. The obtained states, despite being mixed, reveal strong non-classical properties such as genuine multipartite entanglement or violation of Bell inequalities.
|
quantum physics
|
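The defining property can be verified directly by checking that every k-party reduced state has the purity of the maximally mixed state; a sketch is below, using the three-qubit GHZ state as a standard illustration (1-uniform but not 2-uniform) rather than an example from the paper.

```python
import numpy as np
from itertools import combinations

def reduced_purity(rho, keep, n):
    """Purity Tr(rho_S^2) of the reduced state of rho on the qubit subset keep."""
    rho = rho.reshape([2] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        # trace out qubit q: its row axis is q, its column axis is q + ndim/2
        rho = np.trace(rho, axis1=q, axis2=q + rho.ndim // 2)
    d = 2 ** len(keep)
    rho_S = rho.reshape(d, d)
    return float(np.real(np.trace(rho_S @ rho_S)))

def is_k_uniform(rho, n, k, tol=1e-10):
    """All k-party marginals maximally mixed <=> every purity equals 2**-k."""
    return all(abs(reduced_purity(rho, S, n) - 2.0 ** -k) < tol
               for S in combinations(range(n), k))

n = 3
psi = np.zeros(2 ** n); psi[0] = psi[-1] = 2 ** -0.5    # GHZ state
rho = np.outer(psi, psi)
print(is_k_uniform(rho, n, k=1))   # True: all single-qubit marginals are I/2
print(is_k_uniform(rho, n, k=2))   # False: two-qubit marginals have purity 1/2
```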
Numerous antenna design approaches for wearable applications have been investigated in the literature. As on-body wearable communications become more ingrained in our daily activities, investigating the impact of these networks becomes a major requirement. In this study, we investigate the human electromagnetic field (EMF) exposure effect from on-body wearable devices at 2.4 GHz and 60 GHz, and compare the results to illustrate how the technology evolution to higher frequencies from wearable communications can impact our health. Our results suggest the average specific absorption rate (SAR) at 60 GHz can exceed the regulatory guidelines within a certain separation distance between a wearable device and the human skin surface. To the best of the authors' knowledge, this is the first work that explicitly compares the human EMF exposure at different operating frequencies for on-body wearable communications, which provides a direct roadmap in the design of wearable devices to be deployed in the Internet of Battlefield Things (IoBT).
|
electrical engineering and systems science
|
Investigation of phosphate species adsorption/desorption processes was performed on Ag(100) and Ag(111) electrodes in H$_{3}$PO$_{4}$, KH$_{2}$PO$_{4}$ and K$_{3}$PO$_{4}$ solutions by Current-Potential ($j-V$) profiles and Electrochemical Impedance Spectroscopy ($EIS$). We used the equivalent circuit method to fit the impedance spectra. Different electrical equivalent circuits ($EECs$) were employed depending on the potential region that was analyzed. For potentials more negative than the onset of the hydrogen evolution reaction ($her$), a charge transfer resistance (R$_{ct}$) in parallel to the $(RC)$ branches was included. Peaks from $j-V$ profiles were integrated to estimate surface coverage. A reversible process was observed for Ag(hkl)/KH$_{2}$PO$_{4}$ systems, where a value of 0.07 ML was obtained. For Ag(111)/H$_{3}$PO$_{4}$, a coverage of about 0.024 ML was calculated from anodic/cathodic $j-V$ profiles, whereas for Ag(hkl)/K$_{3}$PO$_{4}$ systems different values were obtained from integration of anodic/cathodic peaks because highly irreversible processes were observed. In the case of Ag(hkl)/K$_{3}$PO$_{4}$, the capacitance (C$_{(\phi)}$) plots are well differentiated for the two faces, and co-adsorption of OH$^{-}$ was evaluated from resistance parameters. Characteristic face-specific relaxation times are obtained for each electrode. In addition, it was found that the onset potential of $her$ for Ag(111) at pH=1.60 was about 100 mV more negative compared to Ag(100).
|
condensed matter
|
Statistical methods are required to evaluate and quantify the uncertainty in environmental processes, such as land and sea surface temperature, in a changing climate. Typically, annual harmonics are used to characterize the variation in the seasonal temperature cycle. However, an often overlooked feature of the climate seasonal cycle is the semi-annual harmonic, which can account for a significant portion of the variance of the seasonal cycle and varies in amplitude and phase across space. Together, the spatial variation in the annual and semi-annual harmonics can play an important role in driving processes that are tied to seasonality (e.g., ecological and agricultural processes). We propose a multivariate spatio-temporal model to quantify the spatial and temporal change in minimum and maximum temperature seasonal cycles as a function of the annual and semi-annual harmonics. Our approach captures spatial dependence, temporal dynamics, and multivariate dependence of these harmonics through spatially and temporally-varying coefficients. We apply the model to minimum and maximum temperature over North America for the years 1979 to 2018. Formal model inference within the Bayesian paradigm enables the identification of regions experiencing significant changes in minimum and maximum temperature seasonal cycles due to the relative effects of changes in the two harmonics.
|
statistics
|
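A worked form of the harmonic decomposition that such models build on, written in generic notation (in the paper the coefficients vary over space $s$ and time $t$, and minimum and maximum temperature are modeled jointly):

```latex
T(s,t) = \beta_0(s,t)
       + \underbrace{\beta_1(s,t)\cos\Big(\frac{2\pi t}{365}\Big)
                   + \beta_2(s,t)\sin\Big(\frac{2\pi t}{365}\Big)}_{\text{annual harmonic}}
       + \underbrace{\beta_3(s,t)\cos\Big(\frac{4\pi t}{365}\Big)
                   + \beta_4(s,t)\sin\Big(\frac{4\pi t}{365}\Big)}_{\text{semi-annual harmonic}}
       + \varepsilon(s,t)
```

The annual amplitude and phase follow as $A_1=\sqrt{\beta_1^2+\beta_2^2}$ and $\phi_1=\operatorname{atan2}(\beta_2,\beta_1)$, and analogously for the semi-annual harmonic, which is what makes spatial maps of the two harmonics directly comparable.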
We present a modelling framework for the investigation of supervised learning in non-stationary environments. Specifically, we model two example types of learning systems: prototype-based Learning Vector Quantization (LVQ) for classification and shallow, layered neural networks for regression tasks. We investigate so-called student-teacher scenarios in which the systems are trained from a stream of high-dimensional, labeled data. Properties of the target task are considered to be non-stationary due to drift processes while the training is performed. Different types of concept drift are studied, which affect the density of example inputs only, the target rule itself, or both. By applying methods from statistical physics, we develop a modelling framework for the mathematical analysis of the training dynamics in non-stationary environments. Our results show that standard LVQ algorithms are already suitable for the training in non-stationary environments to a certain extent. However, the application of weight decay as an explicit mechanism of forgetting does not improve the performance under the considered drift processes. Furthermore, we investigate gradient-based training of layered neural networks with sigmoidal activation functions and compare with the use of rectified linear units (ReLU). Our findings show that the sensitivity to concept drift and the effectiveness of weight decay differs significantly between the two types of activation function.
|
computer science
|
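For reference, the basic LVQ1 prototype update that underlies such systems is sketched below on a toy drifting stream; the drift form and all constants are illustrative and unrelated to the statistical-physics analysis of the paper.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, eta):
    """One LVQ1 update: attract the winning prototype if its class matches
    the example's label, repel it otherwise."""
    j = np.argmin(((prototypes - x) ** 2).sum(axis=1))   # winner-takes-all
    sign = 1.0 if proto_labels[j] == y else -1.0
    prototypes[j] += sign * eta * (x - prototypes[j])
    return prototypes

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))              # one prototype per class
L = np.array([0, 1])
for t in range(10_000):                  # stream with slowly drifting means
    y = int(rng.integers(2))
    center = (2 * y - 1) + 1e-4 * t      # real drift: class means move in time
    x = rng.normal(loc=center, scale=0.5, size=2)
    W = lvq1_step(W, L, x, y, eta=0.01)
print(W)
```

Weight decay, as studied in the paper, would add a multiplicative shrink of the prototypes (or network weights) at every step as an explicit forgetting mechanism.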
Quantitative susceptibility mapping (QSM) is an MRI phase-based post-processing method that quantifies tissue magnetic susceptibility distributions. However, QSM acquisitions are relatively slow, even with parallel imaging. Incoherent undersampling and compressed sensing reconstruction techniques have been used to accelerate traditional magnitude-based MRI acquisitions; however, most do not recover the full phase signal due to its non-convex nature. In this study, a learning-based Deep Complex Residual Network (DCRNet) is proposed to recover both the magnitude and phase images from incoherently undersampled data, enabling high acceleration of QSM acquisition. Magnitude, phase, and QSM results from DCRNet were compared with two iterative and one deep learning methods on retrospectively undersampled acquisitions from six healthy volunteers, one intracranial hemorrhage patient, and one multiple sclerosis patient, as well as one prospectively undersampled healthy subject using a 7T scanner. Peak signal to noise ratio (PSNR), structural similarity (SSIM) and region-of-interest susceptibility measurements are reported for numerical comparisons. The proposed DCRNet method substantially reduced artifacts and blurring compared to the other methods and resulted in the highest PSNR and SSIM on the magnitude, phase, local field, and susceptibility maps. It led to 4.0% to 8.8% accuracy improvements in deep grey matter susceptibility over some existing methods when the acquisition was accelerated four times. The proposed DCRNet also dramatically shortened the reconstruction time by a factor of nearly ten thousand for each scan, from around 80 hours using conventional approaches to only 30 seconds.
|
electrical engineering and systems science
|
The ratio $P(S_n=x)/P(Z_n=x)$ is investigated in three cases: (a) when $S_n$ is a sum of 1-dependent non-negative integer-valued random variables (rvs) satisfying some moment conditions and $Z_n$ is a Poisson rv; (b) when $S_n$ is a statistic of 2-runs and $Z_n$ is a negative binomial rv; and (c) when $S_n$ is a statistic of $N(1,1)$-events and $Z_n$ is a binomial rv. We also consider the approximation of $P(S_n\geqslant x)$ by a Poisson distribution with parameter depending on $x$.
|
mathematics
|
Observational searches for dual active galactic nuclei (dAGNs) at kiloparsec separations are crucial for understanding the role of galaxy mergers in the evolution of galaxies. In addition, kpc-scale dAGNs may serve as the parent population of merging massive black hole (MBH) binaries, an important source of gravitational waves. We use a semi-analytical model to describe the orbital evolution of unequal mass MBH pairs under the influence of stellar and gaseous dynamical friction in post-merger galaxies. We quantify how the detectability of approximately 40,000 kpc-scale dAGNs depends on the structure of their host galaxies and the orbital properties of the MBH pair. Our models indicate that kpc-scale dAGNs are most likely to be detected in gas-rich post-merger galaxies with smaller stellar bulges and relatively massive, rapidly rotating gas disks. The detectability is also increased in systems with MBHs of comparable masses following low eccentricity prograde orbits. In contrast, dAGNs with retrograde, low eccentricity orbits are some of the least detectable systems among our models. The dAGNs in models in which the accreting MBHs are allowed to exhibit radiative feedback are characterized by a significantly lower overall detectability. The suppression in detectability is most pronounced in gas-rich merger remnant galaxies, where radiation feedback is more likely to arise. If so, then large, relatively gas poor galaxies may be the best candidates for detecting dAGNs.
|
astrophysics
|
We point out a new configuration in the Witten-Sakai-Sugimoto model, allowing baryons in the pointlike approximation to coexist with fundamental quarks. The resulting phase is a holographic realization of quarkyonic matter, which is predicted to occur in QCD at a large number of colors, and possibly plays a role in real-world QCD as well. We find that holographic quarkyonic matter is chirally symmetric and that, for large baryon chemical potentials, it is energetically preferred over pure nuclear matter and over pure quark matter. The zero-temperature transition from nuclear matter to the quarkyonic phase is of first order in the chiral limit and for a realistic pion mass. For pion masses far beyond the physical point we observe a quark-hadron continuity due to the presence of quarkyonic matter.
|
high energy physics theory
|
The recent discovery of a Higgs boson at the LHC, while establishing the Higgs mechanism as the way of electroweak symmetry breaking, started an era of precision measurements involving the Higgs boson. In an effective Lagrangian framework, we consider the $e^+e^-\rightarrow ZHH$ process at the ILC running at a centre of mass energy of 500 GeV to investigate the effect of the $ZZH$ and $ZZHH$ couplings on the sensitivity to the $HHH$ coupling in this process. Our results show that the sensitivity to the trilinear Higgs self-coupling in this process depends rather strongly on the Higgs-gauge boson couplings. Single- and two-parameter reaches of the ILC with an integrated luminosity of 1000 fb$^{-1}$ are obtained for all the effective couplings, indicating how these limits are affected by the presence of anomalous $ZZH$ and $ZZHH$ couplings. The kinematic distributions studied to understand the effect of the anomalous couplings, again, show a strong influence of the $Z$-$H$ couplings on the dependence of these distributions on the $HHH$ coupling. Similar results are found for the process $e^+e^-\rightarrow \nu\bar \nu HH$, considered at a centre of mass energy of 2 TeV, where the cross section is large enough. The effect of the $WWH$ and $WWHH$ couplings on the sensitivity to the $HHH$ coupling is clearly established through our analyses of this process.
|
high energy physics phenomenology
|
Spatial-temporal data, that is, information about objects that exist at a particular location and time period, are rich in value and, as a consequence, the target of many research efforts. Clustering approaches aim at grouping data points based on similar properties for classification tasks. These approaches have been widely used in domains such as human mobility, ecology, health and astronomy. However, clustering approaches typically address only the static nature of a cluster, and do not take into consideration its dynamic aspects. A desirable approach needs to investigate relations between dynamic clusters and their elements that can be used to derive new insights about what happened to the clusters during their lifetimes. A fundamental step towards this goal is to provide a formal definition of spatial-temporal cluster relations. This report introduces, describes, and formalizes 14 novel spatial-temporal cluster relations that may occur during the existence of a cluster and involve both trajectory-cluster membership conditions and cluster-cluster comparisons. We evaluate the proposed relations with a discussion of how they are able to interpret complex cases that are difficult to distinguish without a formal relation specification. We conclude the report by summarizing our results and describing avenues for further research.
|
computer science
|
Improving coherence times of quantum bits is a fundamental challenge in the field of quantum computing. With long-lived qubits it becomes, however, inefficient to wait until the qubits have relaxed to their ground state after completion of an experiment. Moreover, for error-correction schemes it is important to rapidly re-initialize ancilla parity-check qubits. We present a simple pulsed qubit reset protocol based on a two-pulse sequence. A first pulse transfers the excited state population to a higher excited qubit state and a second pulse into a lossy environment provided by a low-Q transmission line resonator, which is also used for qubit readout. We show that the remaining excited state population can be suppressed to $2.2\pm0.8\%$ and utilize the pulsed reset protocol to carry out experiments at enhanced rates.
|
quantum physics
|
Intelligent reflecting surface (IRS) technology offers more feasible propagation paths for millimeter-wave (mmWave) communication systems to overcome blockage than existing technologies. In this paper, we consider an IRS-assisted downlink wireless system and formulate a joint power allocation and beamforming design problem to maximize the weighted sum-rate, which is a multi-variable optimization problem. To solve the problem, we propose a novel alternating manifold optimization based beamforming algorithm. Simulation results show that our proposed optimization algorithm outperforms existing algorithms significantly in improving the weighted sum-rate of the wireless communication system.
|
electrical engineering and systems science
|
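One ingredient can be sketched in isolation: Riemannian gradient ascent on the unit-modulus (complex circle) manifold of IRS phase shifts, shown here for the simplified objective $|v^H\theta|^2$, which has a closed-form optimum to check against. The alternation with power allocation and the weighted sum-rate objective of the paper are not reproduced, and all quantities are randomly generated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
v = rng.normal(size=N) + 1j * rng.normal(size=N)    # effective cascaded channel
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # unit-modulus phase shifts

def f(theta):                                       # received power |v^H theta|^2
    return abs(np.vdot(v, theta)) ** 2

alpha = 0.25 / np.linalg.norm(v) ** 2               # conservative step size
for _ in range(500):
    egrad = v * np.vdot(v, theta)                   # Euclidean gradient (wrt conj)
    rgrad = egrad - np.real(egrad * np.conj(theta)) * theta  # tangent projection
    theta = theta + alpha * rgrad
    theta = theta / np.abs(theta)                   # retraction: back to |.| = 1

print(f(theta), f(np.exp(1j * np.angle(v))))        # vs the closed-form optimum
```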
It has been shown in the literature that the technological factors limiting the communication rates of quantum cryptography systems based on single photons are mainly related to the choice of the encoding method. In fact, the efficiency of the sources used is very limited, at best of the order of a few percent for single photon sources, and photon counters cannot be operated beyond a certain speed and suffer from low detection efficiency. In order to partially overcome these drawbacks, it is advantageous to use continuous quantum states as an alternative to standard encodings based on quantum qubits. In this context, we propose a new reconciliation method based on Turbo codes. Our theoretical model assumptions are supported by experimental results. Indeed, our method leads to a significant improvement of the protocol security and a large decrease of the QBER. The gain is obtained with a reasonable complexity increase. A further novelty of our work is that the reconciliation method was tested on a photonic system simulated with VPItransmissionMaker.
|
quantum physics
|
We compute the spectrum of anomalous dimensions of non-derivative composite operators with an arbitrary number of fields $n$ in the $O(N)$ vector model with cubic anisotropy at the one-loop order in the $\epsilon$-expansion. The complete closed-form expression for the anomalous dimensions of the operators which do not undergo mixing effects is derived and the structure of the general solution to the mixing problem is outlined. As examples, the full explicit solution for operators with up to $n=6$ fields is presented and a sample of the OPE coefficients is calculated. The main features of the spectrum are described, including an interesting pattern pointing to the deeper structure.
|
high energy physics theory
|
Recommender systems have played an increasingly important role in providing users with tailored suggestions based on their preferences. However, conventional offline recommender systems cannot handle the ubiquitous data stream well. To address this issue, Streaming Recommender Systems (SRSs) have emerged in recent years, which incrementally train recommendation models on newly received data for effective real-time recommendations. Focusing only on new data benefits the handling of concept drift, i.e., the changing user preferences towards items. However, it impedes the capture of long-term user preferences. In addition, the commonly existing underload and overload problems should be well tackled for higher accuracy of streaming recommendations. To address these problems, we propose a Stratified and Time-aware Sampling based Adaptive Ensemble Learning framework, called STS-AEL, to improve the accuracy of streaming recommendations. In STS-AEL, we first devise stratified and time-aware sampling to extract representative data from both new data and historical data to address concept drift while capturing long-term user preferences. Incorporating the historical data also helps utilize the idle resources in the underload scenario more effectively. After that, we propose adaptive ensemble learning to efficiently process the overloaded data in parallel with multiple individual recommendation models, and then effectively fuse the results of these models with a sequential adaptive mechanism. Extensive experiments conducted on three real-world datasets demonstrate that STS-AEL, in all the cases, significantly outperforms the state-of-the-art SRSs.
|
computer science
|
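The sampling component might be sketched as follows, assuming exponential recency weights over the historical interactions; the per-user stratification and the adaptive ensemble of STS-AEL are omitted, and every name and default here is an illustrative assumption.

```python
import numpy as np

def sample_training_batch(new_data, hist_data, hist_times, now,
                          batch_size, new_frac=0.7, decay=0.01, rng=None):
    """Mix newly received interactions with historical ones, drawing older
    interactions with exponentially decaying (time-aware) probability."""
    rng = rng or np.random.default_rng()
    n_new = min(len(new_data), int(batch_size * new_frac))
    n_hist = batch_size - n_new
    new_idx = rng.choice(len(new_data), size=n_new, replace=False)
    w = np.exp(-decay * (now - np.asarray(hist_times)))   # recency weights
    hist_idx = rng.choice(len(hist_data), size=n_hist, replace=False,
                          p=w / w.sum())
    return [new_data[i] for i in new_idx] + [hist_data[i] for i in hist_idx]

batch = sample_training_batch(list(range(100)), list(range(1000)),
                              hist_times=np.arange(1000), now=1000,
                              batch_size=32)
print(len(batch))
```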
Modern requirements for machine learning (ML) models include both high predictive performance and model interpretability. A growing number of techniques provide model interpretations, but can lead to wrong conclusions if applied incorrectly. We illustrate pitfalls of ML model interpretation such as bad model generalization, dependent features, feature interactions or unjustified causal interpretations. Our paper addresses ML practitioners by raising awareness of pitfalls and pointing out solutions for correct model interpretation, as well as ML researchers by discussing open issues for further research.
|
statistics
|
Governments and cities around the world are currently facing rapid growth in the use of Electric Vehicles and therewith the need for Charging Infrastructure. For these cities, the struggle remains how to further roll out charging infrastructure in the most efficient way, both in terms of cost and use. Forecasting models are not able to predict longer-term developments, and as such, more complex simulation models offer opportunities to explore various scenarios. Agent based simulation models provide insight into the effects of incentives and roll-out strategies before they are implemented in practice and thus allow for scenario testing. This paper describes the build-up of an agent based model that enables policy makers to anticipate charging infrastructure development. The model is able to simulate charging transactions of individual users and is both calibrated and validated using a dataset of charging transactions from the public charging infrastructure of the four largest cities in the Netherlands.
|
physics
|
Bayesian Decision Trees are known for their probabilistic interpretability. However, their construction can sometimes be costly. In this article we present a general Bayesian Decision Tree algorithm applicable to both regression and classification problems. The algorithm does not rely on Markov chain Monte Carlo sampling and does not require a pruning step. While it is possible to construct a weighted probability tree space, we find that one particular tree, the greedy-modal tree (GMT), explains most of the information contained in the numerical examples. This approach seems to perform similarly to Random Forests.
|
statistics
|
Text to speech (TTS) and automatic speech recognition (ASR) are two dual tasks in speech processing and both achieve impressive performance thanks to the recent advance in deep learning and large amount of aligned speech and text data. However, the lack of aligned data poses a major practical problem for TTS and ASR on low-resource languages. In this paper, by leveraging the dual nature of the two tasks, we propose an almost unsupervised learning method that only leverages few hundreds of paired data and extra unpaired data for TTS and ASR. Our method consists of the following components: (1) a denoising auto-encoder, which reconstructs speech and text sequences respectively to develop the capability of language modeling both in speech and text domain; (2) dual transformation, where the TTS model transforms the text $y$ into speech $\hat{x}$, and the ASR model leverages the transformed pair $(\hat{x},y)$ for training, and vice versa, to boost the accuracy of the two tasks; (3) bidirectional sequence modeling, which addresses error propagation especially in the long speech and text sequence when training with few paired data; (4) a unified model structure, which combines all the above components for TTS and ASR based on Transformer model. Our method achieves 99.84% in terms of word level intelligible rate and 2.68 MOS for TTS, and 11.7% PER for ASR on LJSpeech dataset, by leveraging only 200 paired speech and text data (about 20 minutes audio), together with extra unpaired speech and text data.
|
electrical engineering and systems science
|
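The dual-transformation component can be conveyed with a deliberately tiny PyTorch toy in which "speech" and "text" are random fixed-length vectors and both models are small MLPs: it only illustrates how pseudo-pairs produced by each model train its dual alongside the few paired examples; the denoising auto-encoder, bidirectional sequence modeling, and Transformer backbone of the paper are not reproduced.

```python
import torch
import torch.nn as nn

# stand-ins: "speech" lives in R^16, "text" in R^8; real systems are seq2seq
tts = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))
asr = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(list(tts.parameters()) + list(asr.parameters()), lr=1e-3)
mse = nn.MSELoss()

paired_x, paired_y = torch.randn(100, 16), torch.randn(100, 8)     # few pairs
unpaired_x, unpaired_y = torch.randn(2000, 16), torch.randn(2000, 8)

for step in range(1000):
    # supervised loss on the small paired set
    sup = mse(tts(paired_y), paired_x) + mse(asr(paired_x), paired_y)
    # dual transformation: each model pseudo-labels data for its dual
    with torch.no_grad():
        x_hat = tts(unpaired_y)       # synthesize "speech" for unpaired text
        y_hat = asr(unpaired_x)       # transcribe unpaired "speech"
    dual = mse(asr(x_hat), unpaired_y) + mse(tts(y_hat), unpaired_x)
    opt.zero_grad()
    (sup + dual).backward()
    opt.step()
```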
We study the moduli space of three-dimensional $\mathcal{N}=2$ SQCD with $SU(N)$ gauge group and $F<N$ massless flavors. In the case of an $SU(2)$ theory with a single massless flavor, we explicitly calculate the quantum constraint $YM=1$ and generalize the calculation to models with arbitrary $N$ and $F=N-1$ flavors. In theories with $F<N-1$ flavors, we find that analogous constraints exist in locally defined coordinate charts of the moduli space. The existence of such constraints allows us to show that the Coulomb branch superpotential generated by single monopole effects is equivalent to the superpotential generated by multi-monopole contributions on the mixed Higgs-Coulomb branch. As a check for our result, we implement the local constraints as Lagrange multiplier terms in the superpotential and verify that deformations of a theory by a large holomorphic mass term for the matter fields results in a flow of the superpotential from the $F$-flavor model to the superpotential of an $(F-1)$-flavor model.
|
high energy physics theory
|
String theory realisations of the QCD axion are often said to belong to the anthropic window where the decay constant is around the GUT scale and the initial misalignment angle has to be tuned close to zero. In this paper we revisit this statement by studying the statistics of axion physics in the string landscape. We take moduli stabilisation properly into account since the stabilisation of the saxions is crucial to determine the physical properties of the corresponding axionic partners. We focus on the model-independent case of closed string axions in type IIB flux compactifications and find that their decay constants and mass spectrum feature a logarithmic, instead of a power-law, distribution. In the regime where the effective field theory is under control, most of these closed string axions are ultra-light axion-like particles, while axions associated to blow-up modes can naturally play the role of the QCD axion. Hence, the number of type IIB flux vacua with a closed string QCD axion with an intermediate scale decay constant and a natural value of the misalignment angle is only logarithmically suppressed. In a recent paper we found that this correlates also with a logarithmic distribution of the supersymmetry breaking scale, providing the intriguing indication that most, if not all, of the phenomenologically interesting quantities in the string landscape might feature a logarithmic distribution.
|
high energy physics theory
|
Inspired by semi-quantum protocols, this paper defines lightweight quantum security protocols, in which lightweight participants can perform only two of four very lightweight quantum operations. Subsequently, this study proposes a Lightweight Mediated Quantum Key Distribution (LMQKD) protocol as an example to demonstrate the feasibility and advantages of lightweight quantum protocols. In the proposed protocol, a dishonest third party (TP) with complete quantum capabilities helps two lightweight quantum users establish a secure key. The lightweight quantum users are allowed to perform only: (1) unitary operations and (2) reflecting qubits without disturbance. The proposed protocol is shown to be robust under the collective attack.
|
quantum physics
|
We prove that the Galerkin finite element solution $u_h$ of the Laplace equation in a convex polyhedron $\varOmega$, with a quasi-uniform tetrahedral partition of the domain and with finite elements of polynomial degree $r\ge 1$, satisfies the following weak maximum principle: \begin{align*} \left\|u_{h}\right\|_{L^{\infty}(\varOmega)} \le C\left\|u_{h}\right\|_{L^{\infty}(\partial \varOmega)} , \end{align*} with a constant $C$ independent of the mesh size $h$. By using this result, we show that the Ritz projection operator $R_h$ is stable in $L^\infty$ norm uniformly in $h$ for $r\geq 2$, i.e. \begin{align*} \|R_hu\|_{L^{\infty}(\varOmega)} \le C\|u\|_{L^{\infty}(\varOmega)} . \end{align*} Thus we remove a logarithmic factor appearing in the previous results for convex polyhedral domains.
|
mathematics
|
This paper delves into designing stabilizing feedback control gains for continuous linear systems with unknown state matrix, in which the control is subject to a general structural constraint. We bring forth the ideas from reinforcement learning (RL) in conjunction with sufficient stability and performance guarantees in order to design these structured gains using the trajectory measurements of states and controls. We first formulate a model-based framework using dynamic programming (DP) to embed the structural constraint to the Linear Quadratic Regulator (LQR) gain computation in the continuous-time setting. Subsequently, we transform this LQR formulation into a policy iteration RL algorithm that can alleviate the requirement of known state matrix in conjunction with maintaining the feedback gain structure. Theoretical guarantees are provided for stability and convergence of the structured RL (SRL) algorithm. The introduced RL framework is general and can be applied to any control structure. A special control structure enabled by this RL framework is distributed learning control which is necessary for many large-scale cyber-physical systems. As such, we validate our theoretical results with numerical simulations on a multi-agent networked linear time-invariant (LTI) dynamic system.
|
electrical engineering and systems science
|
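As a model-based reference point for the evaluate/improve loop, here is a Kleinman-style policy iteration for the continuous-time LQR with a naive projection onto a diagonal gain structure; the plant is illustrative, the masking does not by itself carry the paper's guarantees, and in the SRL algorithm the Lyapunov solve is replaced by estimates from state and input trajectories since the state matrix is unknown.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [1.0, -1.0]])          # unstable plant (unknown in the RL setting)
B = np.eye(2)
Q, R = np.eye(2), np.eye(2)
mask = np.eye(2)                     # structural constraint: diagonal gains only

K = 3.0 * np.eye(2)                  # initial stabilizing structured gain
for _ in range(20):
    Ak = A - B @ K
    # policy evaluation: Ak' P + P Ak + Q + K' R K = 0
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K = mask * np.linalg.solve(R, B.T @ P)   # improvement + structure projection

print("structured gain:\n", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```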
We present a near-infrared study of the Seyfert 2 galaxy NGC\,6300, based on subarcsecond images and long slit spectroscopy obtained with Flamingos-2 at Gemini South. We have found that the peak of the nuclear continuum emission in the $K_s$ band and the surrounding nuclear disk are 25\,pc off-center with respect to the center of symmetry of the larger scale circumnuclear disk, suggesting that this black hole is still not fixed in the galaxy potential well. The molecular gas radial velocity curve yields a central black hole upper mass estimation of $M_{SMBH}^{upper}=(6\pm 2) \times 10^{7}\,\Msun$. The Pa$\beta$ emission line has a strongly asymmetric profile with a blueshifted broad component that we associate with a nuclear ionized gas outflow. We have found in the $K_s$-band spectra that the slope of the continuum becomes steeper with increasing radii, which can be explained as the presence of large amounts of hot dust not only in the nucleus but also in the circumnuclear region up to $r=27$\,pc. In fact, the nuclear red excess obtained after subtracting the stellar contribution resembles that of a blackbody with temperatures around 1200\,K. This evidence supports the idea that absorbing material located around the nucleus, but not close enough to be the torus of the unified model, could be responsible for at least part of the nuclear obscuration in this Seyfert 2 nucleus.
|
astrophysics
|
The classification of imbalanced data has presented a significant challenge for most well-known classification algorithms, which were often designed for data with relatively balanced class distributions. Nevertheless, skewed class distributions are a common feature in real world problems. They are especially prevalent in certain application domains with great need for machine learning and better predictive analysis, such as disease diagnosis, fraud detection, bankruptcy prediction, and suspect identification. In this paper, we propose a novel tree-based algorithm based on the area under the precision-recall curve (AUPRC) for variable selection in the classification context. Our algorithm, named the "Precision-Recall Curve classification tree", or simply the "PRC classification tree", modifies two crucial stages in tree building. The first stage is to maximize the area under the precision-recall curve in node variable selection. The second stage is to maximize the harmonic mean of recall and precision (F-measure) for threshold selection. We find that the proposed PRC classification tree and its subsequent extension, the PRC random forest, work well especially for class-imbalanced data sets. We have demonstrated that our methods outperform their classic counterparts, the usual CART and random forest, for both synthetic and real data. Furthermore, the ROC classification tree previously proposed by our group has shown good performance on imbalanced data. The combination of the two, the PRC-ROC tree, also shows great promise in identifying the minority class.
|
statistics
|
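A sketch of the two modified stages on a toy imbalanced problem: each variable is scored by the AUPRC obtained from thresholding it directly, and the cut point then maximizes the F-measure. A full recursive tree, and the sign handling for variables that rank the minority class downwards, are omitted.

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

def auprc_score(x, y):
    """Stage one: score a candidate split variable by the area under the
    precision-recall curve of using x itself as the ranking score."""
    p, r, _ = precision_recall_curve(y, x)
    return auc(r, p)

def best_f_threshold(x, y):
    """Stage two: choose the split point that maximizes the F-measure."""
    p, r, t = precision_recall_curve(y, x)
    f = 2 * p * r / np.maximum(p + r, 1e-12)
    return t[np.argmax(f[:-1])]        # the last (p, r) pair has no threshold

rng = np.random.default_rng(0)
y = (rng.random(500) < 0.1).astype(int)            # ~10% minority class
X = rng.normal(size=(500, 3)); X[:, 0] += 2 * y    # only feature 0 is informative
j = int(np.argmax([auprc_score(X[:, j], y) for j in range(3)]))
print("chosen variable:", j, "threshold:", best_f_threshold(X[:, j], y))
```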
Most tough hydrogels are reinforced by introducing sacrificial structures that can dissipate input energy. However, since the sacrificial damage cannot recover instantly, the toughness of these gels drops substantially during consecutive cyclic loadings. Here, we propose a new damageless reinforcement strategy for hydrogels utilizing strain-induced crystallization (SIC). In Slide-Ring (SR) gels with freely movable cross-links, crystallites repeatedly form upon elongation and melt upon relaxation, resulting in both excellent toughness of 5.5 - 25.2 MJ/m$^3$ and 87% - 95% instant recovery of the extension energy between two consecutive 11-fold loading-unloading cycles. Moreover, SIC occurs "on demand" at the crack-tip area, where strain amplification and stress concentration take place, and forces the crack to turn sideways. These instantly recoverable tough hydrogels are promising candidates for applications in artificial connective tissues such as tendons and ligaments.
|
condensed matter
|
We have studied the fundamental rotational relaxation and excitation collisions of OH$^-$ ($J = 0 \leftrightarrow 1$) with helium at different collision energies. Using state-selected photodetachment in a cryogenic ion trap, the collisional excitation of the first excited rotational state of OH$^-$ has been investigated and absolute inelastic collision rate coefficients have been extracted for collision temperatures between 20 and 35 K. The rates are compared with accurate quantum scattering calculations for three different potential energy surfaces. Good agreement is found within the experimental accuracy, but the experimental trend of increasing collision rates with temperature is only in part reflected in the calculations.
|
physics
|
The main candidate explanations of the recent XENON1T observation are solar axions, a neutrino magnetic moment and tritium. In this short note we suggest crosschecking whether the observation is related to dark matter (DM) streams, by searching for a planetary dependence of the observed excess. If such a correlation is derived, this hint ($<3.5\sigma$) can become the overlooked direct DM discovery. To do this it is necessary to analyze the time distribution of all the XENON1T data, and in particular the electronic events with their time stamps and energies. Notably, the velocities of the dark sector allow for planetary focusing effects towards the Earth, either by a single celestial body or combined by the whole solar system. Surprisingly, this possibility has not yet been applied in the field of direct dark matter searches, even though DM velocities fit in well with planetary gravitational lensing effects. The widely used signature of a direct dark matter search needs to be redefined, while, with luck, such an analysis might confirm or exclude the solar origin of the observed excess. Therefore, we suggest that XENON1T and DAMA release their data.
|
high energy physics phenomenology
|
This paper reports a comprehensive study of the distributional uncertainty in a few socio-economic indicators across the various states of India over the years 2001-2011. We show that the DGB distribution, a typical rank order distribution, provides excellent fits to the district-wise empirical data for the population size, literacy rate (LR) and work participation rate (WPR) within every state in India, through its two distributional parameters. Moreover, resorting to the entropy formulation of the DGB distribution, a proposed uncertainty percentage (UP) unveils the dynamics of the uncertainty of LR and WPR in all states of India. We have also commented on the changes in the estimated parameters and the UP values from the years 2001 to 2011. Additionally, a gender-based analysis of the distribution of these important socio-economic variables within different states of India has also been discussed. Interestingly, it has been observed that, although the distributions of the numbers of literate and working people have a direct (linear) correspondence with that of the population size, the literacy and work-participation rates are distributed independently of the population distributions.
|
statistics
|
Double-Weyl fermions, as novel topological states of matter, have mostly been discussed in nonmagnetic materials. Here, based on density-functional theory and symmetry analysis, we propose the realization of fully spin-polarized double-Weyl fermions in a family of ferromagnetic materials X2RhF6 (X = K, Rb, Cs). These materials have half-metal ground states, where only the bands from the spin-down channel are present near the Fermi energy. The spin-down bands form a pair of triply degenerate nodal points (TDNPs) if spin-orbit coupling (SOC) is not included. Under SOC, one TDNP splits into two double-Weyl points featuring quadratic dispersion along two momentum directions, and they are protected by the three-fold rotation (C3) symmetry. Unlike most double-Weyl semimetals, the Weyl points proposed here have type-III dispersion, with one of the crossing bands being saddle-shaped. An effective model is constructed, which describes well the nature of the Weyl points. These Weyl points are fully spin-polarized and are characterized by double Fermi arcs in the surface spectrum. Breaking the C3 symmetry by lattice strain can shift one double-Weyl point into a pair of type-II single-Weyl points. The X2RhF6 materials proposed here are excellent candidates to investigate the novel properties of type-III double-Weyl fermions in ferromagnetic systems, as well as to generate potential applications in spintronics.
|
condensed matter
|
Spiral arms have been observed in more than a dozen protoplanetary disks, yet the origin of nearly all of these systems is under debate. Multi-epoch monitoring of spiral arm morphology offers a dynamical way of distinguishing between the two leading arm formation mechanisms: companion driving and gravitational instability induction, since these mechanisms predict distinct motion patterns. By analyzing multi-epoch J-band observations of the SAO 206462 system using the SPHERE instrument on the Very Large Telescope (VLT) in 2015 and 2016, we measure the pattern motion of its two prominent spiral arms in polarized light. On one hand, if both arms are comoving, they can be driven by a planet at $86_{-13}^{+18}$ au on a circular orbit, with gravitational instability motion ruled out. On the other hand, they can be driven by two planets at $120_{-30}^{+30}$ au and $49_{-5}^{+6}$ au, offering tentative evidence (3.0$\sigma$) that the two spirals are moving independently. The independent arm motion is possibly supported by our analysis of a re-reduction of archival observations using the NICMOS instrument onboard the Hubble Space Telescope (HST) in 1998 and 2005, yet artifacts including shadows can manifest as spurious arm motion in HST observations. We expect future re-observations to better constrain the motion mechanism for the SAO 206462 spiral arms.
|
astrophysics
|
We discuss some finite homogeneous structures, addressing the question of universality of their automorphism groups. We also study the existence of so-called Kat\v{e}tov functors in finite categories of embeddings or homomorphisms.
|
mathematics
|
Di-lepton searches for Beyond the Standard Model (BSM) Z' bosons that rely on the analysis of the Breit-Wigner (BW) line shape are appropriate in the case of narrow resonances, but likely not sufficient in scenarios featuring Z' states with large widths. Conversely, alternative experimental strategies applicable to wide Z' resonances are much more dependent than the default bump search analyses on the modelling of QCD higher-order corrections to the production processes, for both signal and background. For heavy Z' boson searches in the di-lepton channel at the CERN Large Hadron Collider (LHC), the transverse momentum q_T of the di-lepton system peaks at q_T \ltap 10^{-2} M_{ll}, where M_{ll} is the di-lepton invariant mass. We exploit this to treat the QCD corrections by using the logarithmic resummation methods in M_{ll} / q_T to all orders in the strong coupling constant \alpha_s. We carry out studies of Z' states with large width at the LHC by employing the program {\tt reSolve}, which performs QCD transverse momentum resummation up to Next-to-Next-to-Leading Logarithmic (NNLL) accuracy. We consider two benchmark BSM scenarios, based on the Sequential Standard Model (SSM) and dubbed `SSM wide' and `SSM enhanced'. We present results for the shape and size of Z' boson signals at the differential level, mapped in both cross section (\sigma) and Forward-Backward Asymmetry (A_{\rm FB}), and perform numerical investigations of the experimental sensitivity at the LHC Run 3 and High-Luminosity LHC (HL-LHC).
|
high energy physics phenomenology
|
In this paper, we investigate the Pauli equation in a two-dimensional noncommutative phase-space by considering a constant magnetic field perpendicular to the plane. We map the noncommutative problem to an equivalent commutative one through a set of two-dimensional Bopp-shift transformations. The energy spectrum and the wave function of the two-dimensional noncommutative Pauli equation are found, with the problem in question mapped to the Landau problem. Further, within the classical limit, we derive the noncommutative semi-classical partition function of the two-dimensional Pauli system for one-particle and N-particle systems. Consequently, we study its thermodynamic properties, i.e. the Helmholtz free energy, mean energy, specific heat and entropy, in noncommutative and commutative phase-spaces. The impact of the phase-space noncommutativity on the Pauli system is successfully examined.
|
high energy physics theory
|
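For reference, the standard two-dimensional Bopp shift (conventions and factor placements vary between papers) realizes $[\hat{x},\hat{y}]=i\theta$ and $[\hat{p}_x,\hat{p}_y]=i\eta$ on ordinary commuting variables:

```latex
\hat{x}   = x   - \frac{\theta}{2\hbar}\,p_y , \qquad
\hat{y}   = y   + \frac{\theta}{2\hbar}\,p_x , \qquad
\hat{p}_x = p_x + \frac{\eta}{2\hbar}\,y , \qquad
\hat{p}_y = p_y - \frac{\eta}{2\hbar}\,x .
```

The price is a deformed canonical pair, $[\hat{x},\hat{p}_x]=[\hat{y},\hat{p}_y]=i\hbar\left(1+\frac{\theta\eta}{4\hbar^2}\right)$, i.e. an effective Planck constant, which is why the mapped problem becomes Landau-like with shifted parameters.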