text (string, lengths 11 to 9.77k) | label (string, lengths 2 to 104)
---|---
Given a principal bundle on an orientable closed surface with compact connected structure group, we endow the space of based gauge equivalence classes of smooth connections relative to smooth based gauge transformations with the structure of a Fr\'echet manifold. Using Wilson loop holonomies and a certain characteristic class determined by the topology of the bundle, we then impose suitable constraints on that Fr\'echet manifold that single out the based gauge equivalence classes of central Yang-Mills connections but do not directly involve the Yang-Mills equation. We also explain how our theory yields the based and unbased gauge equivalence classes of all Yang-Mills connections and deduce the stratified symplectic structure on the space of unbased gauge equivalence classes of central Yang-Mills connections. The crucial new technical tool is a slice analysis in the Fr\'echet setting.
|
mathematics
|
We consider abstraction-based design of output-feedback controllers for non-linear dynamical systems against specifications over state-based predicates in linear-time temporal logic (LTL). In this context, our contribution is two-fold: (I) we generalize feedback-refinement relations for abstraction-based output-feedback control to systems with arbitrary predicate and observation maps, and (II) we introduce a new algorithm for the synthesis of abstract output-feedback controllers w.r.t. LTL specifications over unobservable state-based predicates. Our abstraction-based output-feedback controller synthesis algorithm consists of two steps. First, we compute a finite state abstraction of the original system using existing techniques. This process typically leads to an abstract system with non-deterministic predicate and observation maps which are not necessarily related to each other. Second, we introduce an algorithm to compute an output-feedback controller for such abstract systems. Our algorithm is inspired by reactive synthesis under partial observation and utilizes bounded synthesis.
|
electrical engineering and systems science
|
We consider a generic Fibonacci topological wave function on a square lattice, and the norm of this wave function can be mapped into the partition function of two coupled $\phi^{2}$-state Potts models with $\phi =(\sqrt{5}+1)/2$ the golden ratio. A global phase diagram is thus established to display non-abelian topological phase transitions. The Fibonacci topological phase corresponds to an emergent new phase of the two coupled Potts models, and it changes continuously into two non-topological phases, which are dual to each other and divided by a first-order phase transition line. Under the self-duality, the Fibonacci topological state enters the first-order transition state at a quantum tricritical point, where two continuous quantum phase transitions bifurcate. All the topological phase transitions are driven by the condensation of anyonic bosons consisting of a Fibonacci anyon and its conjugate. Moreover, a fractional supersymmetry is displayed at the quantum tricritical point, characterized by a coset conformal field theory.
|
condensed matter
|
We present a re-analysis of transit depths of KELT-19Ab, WASP-156b, and WASP-121b, including data from the Transiting Exoplanet Survey Satellite (TESS). The large $\sim21''$ TESS pixels and point spread function result in significant contamination of the stellar flux by nearby objects. We use Gaia data to fit for and remove this contribution, providing general-purpose software for this correction. We find that all three sources have a larger inclination compared to earlier work. For WASP-121b, we find significantly smaller values (13.5 degrees) of the inclination when using the 30-minute cadence data compared to the 2-minute cadence data. We demonstrate via simulation that 30-minute binned data in general yields a smaller inclination than the true input value, at the 25$\sigma$ level. We also find that the inclination and semi-major axis are biased low when a larger sampling time interval is applied, which is particularly important for deriving sub-percent transit differences between bands. If we constrain the inclination to previous work, we find the radius ratio of exoplanet to star ($R_{p}/R_{\ast}$) in the broad TESS band is 3.5$\sigma$ smaller than previous work for KELT-19Ab, and consistent to within $\sim$2$\sigma$ for WASP-156b and WASP-121b. The result for KELT-19Ab favors a haze-dominated atmosphere. We do not find statistically significant evidence for the $\sim$0.95\,$\mu$m water feature contaminating the transit depths in the TESS band for these stars, but show that with a photometric precision of 500 ppm and a sampling of about 200 observations across the entire transit, this feature could be detectable in a narrower $z$-band. Furthermore, we find that with the inclusion of the TESS photometry, WASP-121b can be fitted just as well with a cloud-free atmosphere with water vapor as with an opaque featureless model.
|
astrophysics
|
Dynamics in AdS spacetimes is characterized by various time-periodicities. The most obvious of these is the time-periodic evolution of linearized fields, whose normal frequencies form integer-spaced ladders as a direct consequence of the structure of representations of the conformal group. There are also explicitly known time-periodic phenomena on much longer time scales inversely proportional to the coupling in the weakly nonlinear regime. We ask what would correspond to these long time periodicities in a holographic CFT, provided that such a CFT reproducing the AdS bulk dynamics in the large central charge limit has been found. The answer is a very large family of multiparticle operators whose conformal dimensions form simple ladders with spacing inversely proportional to the central charge. We give an explicit demonstration of these ideas in the context of a toy model holography involving a $\phi^4$ probe scalar field in AdS, but we expect the applicability of the underlying structure to be much more general.
|
high energy physics theory
|
LSST will open new vistas for cosmology in the next decade, but it cannot reach its full potential without data from other telescopes. Cosmological constraints can be greatly enhanced using wide-field ($>20$ deg$^2$ total survey area), highly-multiplexed optical and near-infrared multi-object spectroscopy (MOS) on 4-15m telescopes. This could come in the form of suitably-designed large surveys and/or community access to add new targets to existing projects. First, photometric redshifts can be calibrated with high precision using cross-correlations of photometric samples against spectroscopic samples at $0 < z < 3$ that span thousands of sq. deg. Cross-correlations of faint LSST objects and lensing maps with these spectroscopic samples can also improve weak lensing cosmology by constraining intrinsic alignment systematics, and will also provide new tests of modified gravity theories. Large samples of LSST strong lens systems and supernovae can be studied most efficiently by piggybacking on spectroscopic surveys covering as much of the LSST extragalactic footprint as possible (up to $\sim20,000$ square degrees). Finally, redshifts can be measured efficiently for a high fraction of the supernovae in the LSST Deep Drilling Fields (DDFs) by targeting their hosts with wide-field spectrographs. Targeting distant galaxies, supernovae, and strong lens systems over wide areas in extended surveys with (e.g.) DESI or MSE in the northern portion of the LSST footprint or 4MOST in the south could realize many of these gains; DESI, 4MOST, Subaru/PFS, or MSE would all be well-suited for DDF surveys. The most efficient solution would be a new wide-field, highly-multiplexed spectroscopic instrument in the southern hemisphere with $>6$m aperture. In two companion white papers we present gains from deep, small-area MOS and from single-target imaging and spectroscopy.
|
astrophysics
|
Protecting quantum states from the decohering effects of the environment is of great importance for the development of quantum computation devices and quantum simulators. Here, we introduce a continuous dynamical decoupling protocol that enables us to protect an entangling gate operation between two qubits from environmental noise. We present a simple model involving two qubits that interact with each other, with a strength that depends on their mutual distance, generating entanglement between them, while also being in contact with an environment. The nature of the environment, that is, whether it acts as an individual or a common bath to the qubits, is also controlled by the effective distance of the qubits. Our results indicate that the introduced continuous dynamical decoupling scheme works well in protecting the entangling operation. Furthermore, under certain circumstances, the dynamics of the qubits naturally leads them into a decoherence-free subspace, which can be used as a complement to the continuous dynamical decoupling.
|
quantum physics
|
This study investigated the problem posed by using ordinary least squares (OLS) to estimate the parameters of simple linear regression in the specific context of special relativity, where the independent variable is restricted to an open interval, (-c, c). It is found that the OLS estimate of the slope coefficient is not invariant under the Lorentz velocity transformation. Accordingly, an alternative estimator for the parameters of linear regression under special relativity is proposed. This estimator can be considered a generalization of the OLS estimator under special relativity; as c approaches infinity, the proposed estimator and its variance converge to the OLS estimator and its variance, respectively. The variance of the proposed estimator is larger than that of the OLS estimator, which implies that hypothesis testing using the OLS estimator and its variance may result in a liberal test under special relativity.
|
statistics
|
The paper deals with a sharing economy system with various management factors, using a bulk-input G/M/1-type queueing model. Effective management of operating costs is vital for controlling a sharing economy platform, and this research builds the theoretical background for understanding the sharing economy business model. Analytically, the techniques include a classical Markov process for the single-channel queueing system, a semi-Markov process, and a semi-regenerative process. Stochastic congruence properties are used to find the probability distribution of the number of contractors on the sharing economy platform. The explicit formulas obtained demonstrate the use of functionals for the main stochastic characteristics, including sharing expenses due to over-contracted resources, and the optimization of the corresponding objective function.
|
mathematics
|
We investigate the existence of solutions to the fractional nonlinear Schr\"{o}dinger equation $(-\Delta)^s u = f(u)$ with prescribed $L^2$-norm $\int_{\mathbb{R}^N} |u|^2 \, dx =m$ in the Sobolev space $H^s(\mathbb{R}^N)$. Under fairly general assumptions on the nonlinearity $f$, we prove the existence of a ground state solution and a multiplicity result in the radially symmetric case.
|
mathematics
|
Outflows driven by large-scale magnetic fields likely play an important role in the evolution and dispersal of protoplanetary disks, and in setting the conditions for planet formation. We extend our 2-D axisymmetric non-ideal MHD model of these outflows by incorporating radiative transfer and simplified thermochemistry, with the twin aims of exploring how heating influences wind launching, and illustrating how such models can be tested through observations of diagnostic spectral lines. Our model disks launch magnetocentrifugal outflows primarily through magnetic tension forces, so the mass-loss rate increases only moderately when thermochemical effects are switched on. For typical field strengths, thermochemical and irradiation heating are more important than magnetic dissipation. We furthermore find that the entrained vertical magnetic flux diffuses out of the disk on secular timescales as a result of non-ideal MHD. Through post-processing line radiative transfer, we demonstrate that spectral line intensities and moment-1 maps of atomic oxygen, the HCN molecule, and other species show potentially observable differences between a model with a magnetically driven outflow and one with a weaker, photoevaporative outflow. In particular, the line shapes and velocity asymmetries in the moment-1 maps could enable the identification of outflows emanating from the disk surface.
|
astrophysics
|
Statistical modeling is a key component in the extraction of physical results from lattice field theory calculations. Although the general models used are often strongly motivated by physics, their precise form is typically ill-determined, and many model variations can be plausibly considered for the same lattice data. Model averaging, which amounts to a probability-weighted average over all model variations, can incorporate systematic errors associated with model choice without being overly conservative. We discuss the framework of model averaging from the perspective of Bayesian statistics, and give useful formulas and approximations for the particular case of least-squares fitting, commonly used in modeling lattice results. In addition, we frame the common problem of data subset selection (e.g. choice of minimum and maximum time separation for fitting a two-point correlation function) as a model selection problem, and study model averaging as a straightforward alternative to manual selection of fit ranges. Numerical examples involving both mock and real lattice data are given.
|
statistics
|
In this paper, the problem of restoring non-negative sparse signals is addressed in the Bayesian framework. We introduce a new hierarchical probabilistic prior, based on the Generalized Hyperbolic (GH) distribution, which explicitly accounts for sparsity. This new prior allows us, on the one hand, to take non-negativity into account and, on the other hand, thanks to the decomposition of GH distributions as continuous Gaussian mean-variance mixtures, to propose a partially collapsed Gibbs sampler (PCGS), which is shown to be more efficient in terms of convergence time than the classical Gibbs sampler.
|
electrical engineering and systems science
|
We introduce methods for deriving analytic solutions to differential-algebraic systems of equations (DAEs), as well as methods for deriving governing equations for analytic characterization, which is currently limited to very small systems because it is carried out by hand. Analytic solutions to the system and analytic characterization through governing equations provide insights into the behaviors of DAEs as well as the parametric regions of operation for each potential behavior. For each system of DAEs and choice of dependent variable, there is a corresponding governing equation, which is a univariate ODE or PDE typically of higher order than the constitutive equations of the system. We first introduce a direct formulation for representing systems of linear DAEs. Unlike state-space formulations, our formulation follows directly from the system of constitutive equations without the need to introduce state variables or singular matrices. Using this formulation for the system of constitutive equations (DAEs), we develop methods for deriving analytic expressions for the full solution (complementary and particular) for all dependent variables of systems consisting of constant-coefficient ordinary DAEs and special cases of partial DAEs. We also develop methods for deriving the governing equation for a chosen dependent variable for constant-coefficient ordinary and partial DAEs, as well as for special cases of variable-coefficient DAEs. The methods can be automated in symbolic computing environments, allowing systems of any size to be handled while retaining their analytic nature. This is relevant for interpretable modeling, analytic characterization and estimation, and engineering design, in which the objective is to tune parameter values to achieve a specific behavior. Such insights cannot be obtained directly from numerical simulations.
|
mathematics
|
Dimension reduction techniques for multivariate time series decompose the observed series into a few useful independent/orthogonal univariate components. We develop a spectral domain method for multivariate second-order stationary time series that linearly transforms the observed series into several groups of lower-dimensional multivariate subseries. These multivariate subseries have non-zero spectral coherence among components within a group but have zero spectral coherence among components across groups. The observed series is expressed as a sum of frequency components whose variances are proportional to the spectral matrices at the respective frequencies. The demixing matrix is then estimated using an eigendecomposition on the sum of the variance matrices of these frequency components and its asymptotic properties are derived. Finally, a consistent test on the cross-spectrum of pairs of components is used to find the desired segmentation into the lower-dimensional subseries. The numerical performance of the proposed method is illustrated through simulation examples and an application to modeling and forecasting wind data is presented.
|
statistics
|
A fundamental concept in two-arm non-parametric survival analysis is the comparison of observed versus expected numbers of events on one of the treatment arms (the choice of which arm is arbitrary), where the expectation is taken assuming that the true survival curves in the two arms are identical. This concept is at the heart of the counting-process theory that provides a rigorous basis for methods such as the log-rank test. It is natural, therefore, to maintain this perspective when extending the log-rank test to deal with non-proportional hazards, for example by considering a weighted sum of the "observed - expected" terms, where larger weights are given to time periods where the hazard ratio is expected to favour the experimental treatment. In doing so, however, one may stumble across some rather subtle issues, related to the difficulty in ascribing a causal interpretation to hazard ratios, that may lead to strange conclusions. An alternative approach is to view non-parametric survival comparisons as permutation tests. With this perspective, one can easily improve on the efficiency of the log-rank test, whilst thoroughly controlling the false positive rate. In particular, for the field of immuno-oncology, where researchers often anticipate a delayed treatment effect, sample sizes could be substantially reduced without loss of power.
|
statistics
|
For intelligent vehicles, sensing the 3D environment is the first and crucial step. In this paper, we build a real-time advanced driver assistance system based on a low-power mobile platform. The system is a real-time, multi-scheme integrated system that combines a stereo matching algorithm with a machine-learning-based obstacle detection approach and takes advantage of the distributed computing capabilities of a mobile platform with a GPU and CPUs. First, a multi-scale fast MPV (Multi-Path Viterbi) stereo matching algorithm is proposed, which can generate robust and accurate disparity maps. Then a machine learning approach, based on the fusion of monocular and binocular cues, is applied to detect obstacles. We also propose a fast automatic calibration mechanism based on Zhang's calibration method. Finally, distributed computing and careful data flow programming are applied to ensure the operational efficiency of the system. The experimental results show that the system achieves robust and accurate real-time environment perception for intelligent vehicles and can be used directly in commercial real-time intelligent driving applications.
|
computer science
|
Recently it was shown that continuous Matrix Product States (cMPS) cannot express the continuum limit state of any Matrix Product State (MPS), according to a certain natural definition of the latter. The missing element is a projector in the transfer matrix of the MPS. Here we provide a generalised ansatz of cMPS that is capable of expressing the continuum limit of any MPS. It consists of a sum of cMPS with different boundary conditions, each attached to an ancilla state. This new ansatz can be interpreted as the concatenation of a state which is at the closure of the set of cMPS together with a standard cMPS. The former can be seen as a cMPS in the thermodynamic limit, or with matrices of unbounded norm. We provide several examples and discuss the result.
|
quantum physics
|
An enhanced microgrid power flow (EMPF) is devised to incorporate hierarchical control effects. The new contributions are threefold: 1) an advanced-hierarchical-control-based Newton approach is established to accurately assess power sharing and voltage regulation effects; 2) a modified Jacobian matrix is derived to incorporate droop control and various secondary control modes; and 3) the secondary adjustment is calculated on top of the droop-control-based power flow results to ensure a robust Newton solution. Case studies validate that EMPF is efficacious and efficient and can serve as a powerful tool for microgrid operation and monitoring, especially for those highly meshed microgrids in urban areas.
|
electrical engineering and systems science
|
We present a compact \textit{in-situ} electromagnet with an active cooling system for use in ultra-high vacuum environments. The active cooling enhances the thermal stability and increases the electric current that can be applied through the coil, promoting the generation of the homogeneous magnetic fields required for applications in real-time deposition experiments. The electromagnet has been integrated into a reflectance difference magneto-optic Kerr effect (RD-MOKE) spectroscopy system that allows the synchronous measurement of the optical anisotropy and the magneto-optic response in polar MOKE geometry. Proof-of-principle studies have been performed in real time during the deposition of ultra-thin Ni films on Cu(110)-(2$\times$1)O surfaces, corroborating the extremely sharp spin reorientation transition above a critical coverage of 9 monolayers and demonstrating the potential of the setup for real-time and \textit{in-situ} investigations of magnetic thin films and interfaces.
|
physics
|
{\it $(N,\gamma)$-hyperelliptic} semigroups were introduced by Fernando Torres to encapsulate the most salient properties of Weierstrass semigroups associated to totally-ramified points of $N$-fold covers of curves of genus $\gamma$. Torres characterized $(2,\gamma)$-hyperelliptic semigroups of maximal weight whenever their genus is large relative to $\gamma$. Here we do the same for $(3,\gamma)$-hyperelliptic semigroups, and we formulate a conjecture about the general case whenever $N \geq 3$ is prime.
|
mathematics
|
For a classical group $G$ and a Coxeter element $c$ of the Weyl group, it is known that the coordinate ring $\mathbb{C}[G^{e,c^2}]$ of the double Bruhat cell $G^{e,c^2}:=B\cap B_-c^2B_-$ has the structure of a cluster algebra of finite type, where $B$ and $B_-$ are opposite Borel subgroups. In this article, we consider the case where $G$ is of type ${\rm B}_r$, ${\rm C}_r$ or ${\rm D}_r$ and describe all the cluster variables in $\mathbb{C}[G^{e,c^2}]$ as monomial realizations of certain Demazure crystals.
|
mathematics
|
The R software has become popular among researchers due to its flexibility and open-source nature. However, researchers in the fields of public health and epidemiology are more accustomed to commercial statistical software such as SAS, SPSS, and Stata. This paper provides a comprehensive comparison of the analysis of health survey data using the R survey package, SAS, SPSS, and Stata. We describe detailed R code, and the corresponding procedures for the other software packages, for commonly encountered statistical analyses, such as estimation of population means and regression analysis, using datasets from the Canadian Longitudinal Study on Aging (CLSA). It is hoped that the paper stimulates interest among health science researchers in carrying out data analysis using R and also serves as a cookbook for statistical analysis using different software packages.
|
statistics
|
Exploration of structure-property relationships as a function of dopant concentration is commonly based on mean field theories for solid solutions. However, such theories that work well for semiconductors tend to fail in materials with strong correlations, either in electronic behavior or chemical segregation. In these cases, the details of atomic arrangements are generally not explored and analyzed. The knowledge of the generative physics and chemistry of the material can obviate this problem, since defect configuration libraries as stochastic representation of atomic level structures can be generated, or parameters of mesoscopic thermodynamic models can be derived. To obtain such information for improved predictions, we use data from atomically resolved microscopic images that visualize complex structural correlations within the system and translate them into statistical mechanical models of structure formation. Given the significant uncertainties about the microscopic aspects of the material's processing history along with the limited number of available images, we combine model optimization techniques with the principles of statistical hypothesis testing. We demonstrate the approach on data from a series of atomically-resolved scanning transmission electron microscopy images of Mo$_x$Re$_{1-x}$S$_2$ at varying ratios of Mo/Re stoichiometries, for which we propose an effective interaction model that is then used to generate atomic configurations and make testable predictions at a range of concentrations and formation temperatures.
|
condensed matter
|
Quantum nonlocality and quantum steering are fundamental correlations of quantum systems that cannot be created using classical resources alone. Nonlocality describes the ability to influence the possible results of measurements carried out in distant systems, whereas in quantum steering Alice remotely steers Bob's state. Research in nonlocality and steering is of fundamental interest for the development of quantum information and for many applications requiring nonlocal resources, such as quantum key distribution. On the other hand, the Stern-Gerlach experiment holds an important place in the history, development, and teaching of quantum mechanics and quantum information. In particular, the thought experiment of consecutive Stern-Gerlach experiments is commonly used to exemplify the concept of non-commutativity between quantum operators. However, to the best of our knowledge, consecutive Stern-Gerlach experiments have not yet been treated in a fully quantum manner, and it is a widely accepted idea that atoms crossing consecutive Stern-Gerlach apparatuses follow classical paths. Here we demonstrate that two consecutive Stern-Gerlach experiments generate nonlocality and steering; these nonlocal effects strongly modify the usual understanding of this experiment. We also discuss the implications of this result and its relation to entanglement. This suggests the use of quantum correlations of massive particles to generate nonlocal tasks using this venerable experiment.
|
quantum physics
|
Rapid advancements in spatial technologies including Geographic Information Systems (GIS) and remote sensing have generated massive amounts of spatially referenced data in a variety of scientific and data-driven industrial applications. These advancements have led to a substantial, and still expanding, literature on the modeling and analysis of spatially oriented big data. In particular, Bayesian inferences for high-dimensional spatial processes are being sought in a variety of remote-sensing applications including, but not limited to, modeling next generation Light Detection and Ranging (LiDAR) systems and other remotely sensed data. Massively scalable spatial processes, in particular Gaussian processes (GPs), are being explored extensively for the increasingly encountered big data settings. Recent developments include GPs constructed from sparse Directed Acyclic Graphs (DAGs) with a limited number of neighbors (parents) to characterize dependence across the spatial domain. The DAG can be used to devise fast algorithms for posterior sampling of the latent process, but these may exhibit pathological behavior in estimating covariance parameters. While these issues are mitigated by considering marginalized samplers that exploit the underlying sparse precision matrix, these algorithms are slower, less flexible, and oblivious of structure in the data. The current article introduces the Grid-Parametrize-Split (GriPS) approach for conducting Bayesian inference in spatially oriented big data settings by a combination of careful model construction and algorithm design to effectuate substantial improvements in MCMC convergence. We demonstrate the effectiveness of our proposed methods through simulation experiments and subsequently undertake the modeling of LiDAR outcomes and production of their predictive maps using G-LiHT and other remotely sensed variables.
|
statistics
|
We consider the implications of the swampland conjectures on scalar-tensor theories defined in the Einstein frame in which the scalar interaction is screened. We show that chameleon models are not in the swampland provided the coupling to matter is larger than unity and the mass of the scalar field is much larger than the Hubble rate. We apply these conditions to the inverse power law chameleon and the symmetron. We then focus on the dilaton of string theory in the strong coupling limit, as defined in the string frame. We show that solar system tests of gravity imply that viable dilaton models are not in the swampland. In the future of the Universe, if the low energy description with a single scalar is still valid and the coupling to matter remains finite, we find that the scalar field energy density must vanish for models with the chameleon and symmetron mechanisms. Hence in these models dark energy is only a transient phenomenon. This is not the case for the strongly coupled dilaton, which keeps evolving slowly, leading to a quasi de Sitter space-time.
|
high energy physics theory
|
Many recent political events, like the 2016 US Presidential elections or the 2018 Brazilian elections have raised the attention of institutions and of the general public on the role of Internet and social media in influencing the outcome of these events. We argue that a safe democracy is one in which citizens have tools to make them aware of propaganda campaigns. We propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.
|
computer science
|
We study the complexity of holographic superconductors (Einstein-Maxwell-complex scalar actions in $d+1$ dimensions) by the `complexity = volume' (CV) conjecture. First, there seems to be a universal property: the superconducting phase always has a smaller complexity than the unstable normal phase below the critical temperature, which is similar to a free energy. We investigate the temperature dependence of the complexity. In the low temperature limit, the complexity (of formation) scales as $T^\alpha$, where $\alpha$ is a function of the complex scalar mass $m^2$, the $U(1)$ charge $q$, and the dimension $d$. In particular, for $m^2=0$, we find $\alpha=d-1$, independent of $q$, which can be explained by the near-horizon geometry of the low temperature holographic superconductor. Next, we develop a general numerical method to compute the time-dependent complexity by the CV conjecture. Using this method, we compute the time-dependent complexity of holographic superconductors. In both the normal and superconducting phases, the complexity increases as time goes on, and the growth rate saturates to a temperature-dependent constant. The higher the temperature, the larger the growth rate. However, the growth rates do not violate Lloyd's bound in any case, and saturate Lloyd's bound in the high temperature limit at late times.
|
high energy physics theory
|
We propose a novel framework for the model specification test in regression using unlabeled test data. In many cases, statistical inference is conducted under the assumption that the model is correctly specified. However, it is difficult to confirm whether a model is correctly specified. To overcome this problem, existing works have devised statistical tests for model specification. These works define a correctly specified model in regression as a model with zero conditional mean of the error term over the training data only. Extending the definition in conventional statistical tests, we define a correctly specified model as a model with zero conditional mean of the error term over any distribution of the explanatory variable. This definition is a natural consequence of the orthogonality of the explanatory variable and the error term. If a model does not satisfy this condition, the model might lack robustness under distribution shift. The proposed method enables us to reject a misspecified model under our definition. By applying the proposed method, we can obtain a model that predicts the label for the unlabeled test data well without losing the interpretability of the model. In experiments, we show how the proposed method works for synthetic and real-world datasets.
|
statistics
|
Co$_{2}$FeSi/GaAs(111)B hybrid structures are grown by molecular-beam epitaxy and characterized by transmission electron microscopy (TEM) and x-ray diffraction. The Co$_{2}$FeSi films grow in an island growth mode at substrate temperatures $T_{S}$ between $T_{S}$~=~100$\thinspace^{\circ}$C and 425$\thinspace^{\circ}$C. The structures have a stable interface up to $T_{S}=275~^{\circ}$C. The films contain fully ordered $L2_{1}$ and partially ordered $B2$ phases. The spatial distribution of long-range order in Co$_{2}$FeSi is characterized using a comparison of TEM images taken with superlattice reflections and the corresponding fundamental reflections. The spatial inhomogeneities of long-range order can be explained by local non-stoichiometry due to lateral segregation or stress relaxation without formation of extended defects.
|
condensed matter
|
We consider the indirect detection of dark matter that is captured in the Sun and subsequently annihilates to long-lived dark mediators. If these mediators escape the Sun before decaying, they can produce striking gamma ray signals, either via the decay of the mediators directly to photons, or via bremsstrahlung and hadronization of the mediator decay products. Using recent measurements from the HAWC Observatory, we determine model-independent limits on heavy dark matter that are orders of magnitude more powerful than direct detection experiments, for both spin-dependent and spin-independent scattering. We also consider a well-motivated model in which fermionic dark matter annihilates to dark photons. For such a realistic scenario, the strength of the solar gamma ray constraints is reduced, compared to the idealized case, due to the fact that the dark matter capture cross section and mediator lifetime are related. Nonetheless, solar gamma ray constraints enable us to exclude a previously unconstrained region of dark photon parameter space.
|
high energy physics phenomenology
|
We study decoherence effects on mixing among three generations of neutrinos. We show that in the presence of a non-diagonal dissipation matrix, both Dirac and Majorana neutrinos can violate the $CPT$ symmetry, and the oscillation formulae depend on the parametrization of the mixing matrix. We reveal $CP$ violation in the flavor-preserving transitions for a certain form of the dissipator. In particular, the $CP$ violation affects all the transitions in the case of Majorana neutrinos, unlike Dirac neutrinos, which still preserve the $CP$ symmetry in one of the flavor-preserving transitions. This theoretical result shows that decoherence effects, if they exist for neutrinos, could allow one to determine the neutrino nature and to test fundamental symmetries of physics. Forthcoming long-baseline experiments could allow such an analysis. We relate our study to experiments by using the characteristic parameters and the constraints on the elements of the dissipation matrix from current experiments.
|
high energy physics phenomenology
|
Mitigation of radio frequency interference (RFI) is essential to deliver science-ready radio interferometric data to astronomers. In this paper, using dual polarized radio interferometers, we propose to use the polarization information of post-correlation interference signals to detect and mitigate them. We use the directional statistics of the polarized signals as the detection criteria and formulate a distributed, wideband spectrum sensing problem. Using consensus optimization, we solve this in an online manner, working with mini-batches of data. We present extensive results based on simulations to demonstrate the feasibility of our method.
|
astrophysics
|
We revisit the construction in four-dimensional gauged $Spin(4)$ supergravity of the holographic duals to topologically twisted three-dimensional $\mathcal{N}=4$ field theories. Our focus in this paper is to highlight some subtleties related to preserving supersymmetry in AdS/CFT, namely the inclusion of finite counterterms and the necessity of a Legendre transformation to find the dual to the field theory generating functional. Studying the geometry of these supergravity solutions, we conclude that the gravitational free energy is indeed independent of the boundary metric, and that it vanishes for any smooth solution.
|
high energy physics theory
|
We discuss the importance of Fe$^{23+}$ in determining the line intensities of the Fe XXV K$\alpha$ complex in an optically thick cloud, and investigate the prediction of Liedahl (2005) on Resonance Auger Destruction (RAD) with CLOUDY. Although initially motivated by the Perseus cluster, our calculations are extended to the wide range of column densities encountered in astronomy. A Fe XXV line photon can change/lose its identity upon absorption by three-electron iron as a result of "line interlocking". This may lead to the autoionization of the absorbing ion, ultimately destroying the Fe XXV K$\alpha$ photon by RAD. Out of the four members in the Fe XXV K$\alpha$ complex, a significant fraction of the x line photons is absorbed by Fe$^{23+}$ and destroyed, causing the x line intensity to decrease. For example, at a hydrogen column density of 10$^{25}$ cm$^{-2}$, $\sim$ 32\% of x photons are destroyed due to RAD while w is mostly unaffected. The line intensity of y is slightly ($\leq$2\%) reduced. z is not directly affected by RAD, but the contrasting behavior between z and x line intensities points towards the possible conversion of a tiny fraction ($\sim$ 2\%) of x photons into z photons. The change in line intensities due to Electron Scattering Escape (ESE) off fast thermal electrons is also discussed.
|
astrophysics
|
Civilian Unmanned Aerial Vehicles (UAVs) are becoming more accessible for domestic use. Currently, UAV manufacturer DJI dominates the market, and their drones have been used for a wide range of applications. Model lines such as the Phantom can be applied for autonomous navigation where Global Positioning System (GPS) signals are not reliable, with the aid of Simultaneous Localization and Mapping (SLAM), such as monocular Visual SLAM. In this work, we propose an open-source framework that bridges different systems, such as Linux, the Robot Operating System (ROS), Android, and UAVs, in which the gimbal camera recording can be streamed to a remote server, supporting the implementation of an autopilot. Finally, we present experimental results showing the performance of the video streaming and validating the framework.
|
electrical engineering and systems science
|
The direct measurement protocol allows reconstructing specific elements of the density matrix of a quantum state without using quantum state tomography. However, the direct measurement protocols to date are primarily based on weak or strong measurements with ancillary pointers, which interact with the investigated system to extract information about the specified elements. Here we present a new direct measurement protocol based on a phase-shifting technique which does not need ancillary pointers. In this protocol, at most six different projective measurements suffice to determine any specific element of an unknown quantum density matrix. A concrete quantum circuit to implement the phase-shifting measurement protocol for multi-qubit states is provided, where the circuit is composed of just single-qubit gates and two multi-qubit controlled-phase gates. The protocol is also extended to the continuous-variable case for directly measuring the Wigner function. Furthermore, we show that the protocol has the advantage of reducing measurement and device complexity in the task of measuring the complete density matrix compared to quantum state tomography in some quantum experiments. Our method provides an efficient way to characterize arbitrary quantum systems, which may be used to predict various properties of a quantum system and find applications in quantum information processing.
|
quantum physics
|
Circuit quantization links a physical circuit to its corresponding quantum Hamiltonian. The standard quantization procedure generally assumes any external magnetic flux to be static. Time dependence naturally arises, however, when flux is modulated or when flux noise is considered. In this case, application of the existing quantization procedure can lead to inconsistencies. To resolve these, we generalize circuit quantization to incorporate time-dependent external flux.
|
quantum physics
|
Conformational change of a DNA molecule is frequently observed in multiple biological processes and has been modelled using a chain of strongly coupled oscillators with a nonlinear bistable potential. While the mechanism and properties of conformational change in the model have been investigated and several reduced order models developed, the conformational dynamics as a function of the length of the oscillator chain is relatively less clear. To address this, we used a modified Lindstedt-Poincare method and numerical computations. We calculate a perturbation expansion of the frequency of the model's nonzero modes, finding that approximating these modes with their unperturbed dynamics, as in a previous reduced order model, may not hold when the length of the DNA model increases. We investigate the conformational change to local perturbation in models of varying lengths, finding that for chosen input and parameters, there are two regions of DNA length in the model, first where the minimum energy required to undergo the conformational change increases with DNA length; and second, where it is almost independent of the length of the DNA model. We analyze the conformational change in these models by adding randomness to the local perturbation, finding that the tendency of the system to remain in a stable conformation against random perturbation decreases with an increase in the DNA length. These results should help to understand the role of the length of a DNA molecule in influencing its conformational dynamics.
|
physics
|
We study the critical $O(3)$ model using the numerical conformal bootstrap. In particular, we use a recently developed cutting-surface algorithm to efficiently map out the allowed space of CFT data from correlators involving the leading $O(3)$ singlet $s$, vector $\phi$, and rank-2 symmetric tensor $t$. We determine their scaling dimensions to be $(\Delta_{s}, \Delta_{\phi}, \Delta_{t}) = (0.518942(51), 1.59489(59), 1.20954(23))$, and also bound various OPE coefficients. We additionally introduce a new ``tip-finding'' algorithm to compute an upper bound on the leading rank-4 symmetric tensor $t_4$, which we find to be relevant with $\Delta_{t_4} < 2.99056$. The conformal bootstrap thus provides a numerical proof that systems described by the critical $O(3)$ model, such as classical Heisenberg ferromagnets at the Curie transition, are unstable to cubic anisotropy.
|
high energy physics theory
|
This work concerns the numerical approximation of a multicomponent compressible Euler system for a fluid mixture in multiple space dimensions on unstructured meshes with a high-order discontinuous Galerkin spectral element method (DGSEM). We first derive an entropy stable (ES) and robust (i.e., that preserves the positivity of the partial densities and internal energy) three-point finite volume scheme using relaxation-based approximate Riemann solvers from Bouchut [Nonlinear stability of finite volume methods for hyperbolic conservation laws and well-balanced schemes for sources, Birkhauser] and Coquel and Perthame [SINUM, 35, 1998]. Then, we consider the DGSEM based on collocation of quadrature and interpolation points, which relies on the framework introduced by Fisher and Carpenter [JCP, 252, 2013] and Gassner [SISC, 35, 2013]. We replace the physical fluxes in the integrals over discretization elements by entropy conservative numerical fluxes [Tadmor, MCOM, 49, 1987], while ES numerical fluxes are used at element interfaces. We thus derive a two-point numerical flux satisfying Tadmor's entropy conservation condition and use the numerical flux from the three-point scheme as the ES flux. Time discretization is performed with a strong-stability-preserving Runge-Kutta scheme. We then derive conditions on the numerical parameters to guarantee a semi-discrete entropy inequality as well as positivity of the cell averages of the partial densities and internal energy of the fully discrete DGSEM at any approximation order. The latter results allow the use of existing limiters in order to restore positivity of nodal values within elements. The scheme also exactly resolves stationary material interfaces. Numerical experiments in one and two space dimensions on flows with discontinuous solutions support the conclusions of our analysis and highlight the stability, robustness and high resolution of the scheme.
|
mathematics
|
The Petz recovery channel plays an important role in quantum information science as an operation that approximately reverses the effect of a quantum channel. The pretty good measurement is a special case of the Petz recovery channel, and it allows for near-optimal state discrimination. A hurdle to the experimental realization of these vaunted theoretical tools is the lack of a systematic and efficient method to implement them. This paper sets out to rectify this lack: using the recently developed tools of quantum singular value transformation and oblivious amplitude amplification, we provide a quantum algorithm to implement the Petz recovery channel when given the ability to perform the channel that one wishes to reverse. Moreover, we prove that our quantum algorithm's usage of the channel implementation cannot be improved by more than a quadratic factor. Our quantum algorithm also provides a procedure to perform pretty good measurements when given multiple copies of the states that one is trying to distinguish.
|
quantum physics
|
In this work we show the step-by-step calculations needed to quantify the contribution of a three-loop order diagram with dihedral symmetry to the radiative corrections to the pressure in SU(2) thermal Yang-Mills theory in the deconfining phase. We surveyed past developments and performed computations for separate channel combinations, defined by Mandelstam variables which are constrained by two 4-vertices. An analytically integrable approximation for high-temperature conditions was found to verify the relevance of the corrections for this diagram. A numerical analysis with Monte Carlo methods was carried out to check the validity of this approximation by comparing it with the full integral. A Dyson-Schwinger resummation had to be performed to all dihedral loop orders in order to control the temperature dependence found.
|
high energy physics theory
|
Fully turbulent flows are characterized by intermittent formation of very localized and intense velocity gradients. These gradients can be orders of magnitude larger than their typical value and lead to many unique properties of turbulence. Using direct numerical simulations of the Navier-Stokes equations with unprecedented small-scale resolution, we characterize such extreme events over a significant range of turbulence intensities, parameterized by the Taylor-scale Reynolds number ($R_\lambda$). Remarkably, we find the strongest velocity gradients to empirically scale as $\tau_K^{-1} R_\lambda^{\beta}$, with $\beta \approx 0.775 \pm 0.025$, where $\tau_K$ is the Kolmogorov time scale (with its inverse, $\tau_K^{-1}$, being the {r.m.s.} of velocity gradient fluctuations). Additionally, we observe velocity increments across very small distances $r \le \eta$, where $\eta$ is the Kolmogorov length scale, to be as large as the {r.m.s.} of the velocity fluctuations. Both observations suggest that the smallest length scale in the flow behaves as $\eta R_\lambda^{-\alpha}$, with $\alpha = \beta - \frac{1}{2}$, which is at odds with predictions from existing phenomenological theories. We find that extreme gradients are arranged in vortex tubes, such that strain conditioned on vorticity grows on average slower than vorticity, approximately as a power law with an exponent $\gamma < 1$, which weakly increases with $R_\lambda$. Using scaling arguments, we get $\beta=(2-\gamma)^{-1}$, which suggests that $\beta$ would also slowly increase with $R_\lambda$. We conjecture that approaching the limit of infinite $R_\lambda$, the flow is overall smooth, with intense velocity gradients over scale $ \eta R_\lambda^{-1/2}$, corresponding to $\beta = 1$.
|
physics
|
We study the $O(N)^3$ supersymmetric quantum field theory of a scalar superfield $\Phi_{abc}$ with a tetrahedral interaction. In the large $N$ limit the theory is dominated by the melonic diagrams. We solve the corresponding Dyson-Schwinger equations in continuous dimensions below $3$. For sufficiently large $N$ we find an IR stable fixed point and compute the $3-\epsilon$ expansion up to the second order of perturbation theory, which is in agreement with the solution of the DS equations. We also describe the $1+\epsilon$ expansion of the model and discuss the possibility of adding the Chern-Simons action to gauge the supersymmetric model.
|
high energy physics theory
|
This paper studies distributionally robust optimization (DRO) when the ambiguity set is given by moments for the distributions. The objective and constraints are given by polynomials in decision variables. We reformulate the DRO with equivalent moment conic constraints. Under some general assumptions, we prove the DRO is equivalent to a linear optimization problem with moment and psd polynomial cones. A moment-SOS relaxation method is proposed to solve it. Its asymptotic and finite convergence are shown under certain assumptions. Numerical examples are presented to show how to solve DRO problems.
|
mathematics
|
Skyrmions and antiskyrmions are topologically protected spin structures with opposite topological charge. Particularly in coexisting phases, these two types of magnetic quasi-particles may show fascinating physics and potential for spintronic devices. While skyrmions are observed in a wide range of materials, until now antiskyrmions were exclusive to materials with D2d symmetry. In this work, we show first and second-order antiskyrmions stabilized by magnetic dipole-dipole interaction in Fe/Gd-based multilayers. We modify the magnetic properties of the multilayers by Ir insertion layers. Using Lorentz transmission electron microscopy imaging, we observe coexisting antiskyrmions, Bloch skyrmions, and type-2 bubbles and determine the range of material properties and magnetic fields where the different spin objects form and dissipate. We perform micromagnetic simulations to obtain more insight into the studied system and conclude that the reduction of saturation magnetization and uniaxial anisotropy leads to the existence of this zoo of different spin objects and that they are primarily stabilized by dipolar interaction.
|
condensed matter
|
We introduce and make publicly available an entity linking dataset from Reddit that contains 17,316 linked entities, each annotated by three human annotators and then grouped into Gold, Silver, and Bronze to indicate inter-annotator agreement. We analyze the different errors and disagreements made by annotators and suggest three types of corrections to the raw data. Finally, we tested existing entity linking models that are trained and tuned on text from non-social media datasets. We find that, although these existing entity linking models perform very well on their original datasets, they perform poorly on this social media dataset. We also show that the majority of these errors can be attributed to poor performance on the mention detection subtask. These results indicate the need for better entity linking models that can be applied to the enormous amount of social media text.
|
computer science
|
We consider a multiple-input multiple-output (MIMO) relaying broadcast channel in downlink cellular networks, where the base station and the relay stations are both equipped with multiple antennas, and each user terminal has only a single antenna. In practical scenarios, channel estimation is imperfect at the receivers. Aiming at maximizing the SINR at each user, we develop two robust linear beamforming schemes, for the single-relay case and the multi-relay case respectively. The two proposed schemes are based on singular value decomposition (SVD), minimum mean square error (MMSE) and regularized zero-forcing (RZF). Simulation results show that the proposed schemes outperform the conventional schemes with imperfect channel estimation.
|
electrical engineering and systems science
|
I address the question whether the wave function in quantum theory exists as a real (ontic) quantity or not. For this purpose, I discuss the essentials of the quantum formalism and emphasize the central role of the superposition principle. I then explain the measurement problem and discuss the process of decoherence. Finally, I address the special features that the quantization of gravity brings into the game. From all of this I conclude that the wave function really exists, that is, it is a real (ontic) feature of Nature.
|
quantum physics
|
We show that a very simple solution to the strong CP problem naturally leads to Dirac neutrinos. Small effective neutrino masses emerge from a type I Dirac seesaw mechanism. Neutrino mass limits probe the axion parameters in regions currently inaccessible to conventional searches.
|
high energy physics phenomenology
|
We briefly recall the procedure for computing the Ward identities in the presence of a regulator which violates the symmetry being considered. We compute the first non-trivial correction to the supersymmetry Ward identity of the Wess-Zumino model in the presence of background supergravity using dimensional regularisation. We find that the result can be removed using a finite local counterterm, and so there is no supersymmetry anomaly.
|
high energy physics theory
|
While positron emission tomography (PET) imaging has been widely used in the diagnosis of a number of diseases, its acquisition process is costly and involves radiation exposure to patients. In contrast, magnetic resonance imaging (MRI) is a safer imaging modality that does not involve the patient's exposure to radiation. Therefore, a need exists for efficient and automated PET image generation from MRI data. In this paper, we propose a new frequency-aware attention U-net for generating synthetic PET images. Specifically, we incorporate an attention mechanism into the different U-net layers responsible for estimating the low/high frequency scales of the image. Our frequency-aware attention U-net computes the attention scores for feature maps in low/high frequency layers and uses them to help the model focus more on the most important regions, leading to more realistic output images. Experimental results on 30 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the good performance of the proposed model in PET image synthesis, achieving superior performance, both qualitative and quantitative, over the current state of the art.
|
electrical engineering and systems science
|
A very popular theory circulating among non-scientific communities claims that the massive deployment of 5G base stations over the territory, a.k.a. 5G densification, always triggers an uncontrolled and exponential increase of human exposure to Radio Frequency "Pollution" (RFP). To address this concern in a way that can be understood by the layman, in this work we develop a very simple model to compute the RFP, based on a set of worst-case and conservative assumptions. We then provide closed-form expressions to evaluate the RFP variation in a pair of candidate 5G deployments subject to different densification levels. Results, obtained over a wide set of representative 5G scenarios, dispel the myth: 5G densification triggers an RFP decrease when the radiated power from the 5G base stations is adjusted to ensure a minimum sensitivity at the cell edge. We then analyze the conditions under which the RFP may increase when the network is densified (e.g., when the radiated power does not scale with the cell size), proving that the amount of RFP remains controlled. Finally, the results obtained by simulation confirm the outcomes of the RFP model.
|
electrical engineering and systems science
|
In this paper, we introduce a novel framework that can learn to make visual predictions about the motion of a robotic agent from raw video frames. Our proposed motion prediction network (PROM-Net) can learn in a completely unsupervised manner and efficiently predict up to 10 frames into the future. Moreover, unlike other motion prediction models, it is lightweight, and once trained it can be easily implemented on mobile platforms that have very limited computing capabilities. We have created a new robotic data set comprising LEGO Mindstorms robots moving along various trajectories in three different environments under different lighting conditions for testing and training the network. Finally, we introduce a framework that uses the predicted frames from the network as input to a model predictive controller for motion planning in unknown dynamic environments with moving obstacles.
|
computer science
|
Upsilon(1S) decay to Xi_cc + anything is studied. It is shown that the branching ratio can be as significant as that of Upsilon(1S) decay to J/Psi + anything. The non-relativistic heavy quark effective theory framework is employed for the calculation of the decay width. Measurements of the production of Xi_cc, and of the likely production characteristics of the partonic state with four charm quarks, at BELLE2 are suggested.
|
high energy physics phenomenology
|
Domain adaptation addresses the common problem in which the target distribution generating our test data drifts from the source (training) distribution. While domain adaptation is impossible absent assumptions, strict conditions, e.g. covariate or label shift, enable principled algorithms. Recently proposed domain-adversarial approaches consist of aligning source and target encodings, often motivating this approach as minimizing two (of three) terms in a theoretical bound on the target error. Unfortunately, this minimization can cause arbitrary increases in the third term; e.g., these approaches can break down under shifting label distributions. We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms. Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets.
|
computer science
|
Reasoning about Bell nonlocality from the correlations observed in post-selected data is always a matter of concern. This is because conditioning on the outcomes is a source of non-causal correlations, known as selection bias, raising doubts about whether the conclusion concerns the actual causal process or is just an effect of processing the data. Yet, even in the idealised case without detection inefficiencies, post-selection is an integral part of every experimental design, not least because it is a part of the entanglement generation process itself. In this paper we discuss a broad class of scenarios with post-selection on multiple spatially distributed outcomes. A simple criterion is worked out, called the all-but-one principle, showing when the conclusions about nonlocality from breaking Bell inequalities with post-selected data remain in force. The generality of this result, attained by adopting the high-level diagrammatic tools of causal inference, provides safe grounds for systematic reasoning based on the standard form of multipartite Bell inequalities in a wide array of entanglement generation schemes, without worrying about the dangers of selection bias.
|
quantum physics
|
We show that the magnetic response of atomically thin materials with Dirac spectrum and spin-orbit interactions can exhibit strong dependence on electron-electron interactions. While graphene itself has a very small spin-orbit coupling, various two-dimensional (2D) compounds "beyond graphene" are good candidates to exhibit the strong interplay between spin-orbit and Coulomb interactions. Materials in this class include dichalcogenides (such as MoS$_2$ and WSe$_2$), silicene, germanene, as well as 2D topological insulators described by the Kane-Mele model. We present a unified theory for their in-plane magnetic field response leading to "anomalous", i.e. electron interaction dependent transition moments. Our predictions can be potentially used to construct unique magnetic probes with high sensitivity to electron correlations.
|
condensed matter
|
Imagined speech is spotlighted as a new trend in the brain-machine interface field due to its application as an intuitive communication tool. However, previous studies have shown low classification performance, so its use in real life is not yet feasible. In addition, no suitable method to analyze it has been found. Recently, deep learning algorithms have been applied to this paradigm. However, due to the small amount of data, the increase in classification performance is limited. To tackle these issues, in this study we propose an end-to-end framework using a Siamese neural network encoder, which learns discriminant features by considering the distance between classes. The imagined words (e.g., arriba (up), abajo (down), derecha (right), izquierda (left), adelante (forward), and atr\'as (backward)) were classified using the raw electroencephalography (EEG) signals. We obtained a 6-class classification accuracy of 31.40% for imagined speech, which significantly outperformed other methods. This was possible because we used a Siamese neural network, which increases the distance between dissimilar samples while decreasing the distance between similar samples. In this regard, our method can learn discriminant features from a small dataset. The proposed framework would help to increase the classification performance of imagined speech for small amounts of data and to implement an intuitive communication system.
|
electrical engineering and systems science
|
The physics of systems that cannot be described by a Hermitian Hamiltonian has been attracting a great deal of attention in recent years, motivated by their nontrivial responses and by a plethora of applications for sensing, lasing, energy transfer/harvesting, topology and quantum networks. Electromagnetics is an inherently non-Hermitian research area because all materials are lossy, loss and gain distributions can be controlled with various mechanisms, and the underlying systems are open to radiation. Therefore, the recent developments in non-Hermitian physics offer exciting opportunities for a broad range of basic research and engineering applications relevant to the antennas and propagation community. In this work, we offer a tutorial geared at introducing the unusual electromagnetic phenomena emerging in non-Hermitian systems, with particular emphasis on a sub-class of these systems that obey parity-time (PT) symmetry. We discuss the basic concepts behind this topic and explore their implications for various phenomena. We first discuss the basic features of P, T and PT operators applied to electromagnetic and quantum mechanical phenomena. We then discuss the exotic response of PT-symmetric electromagnetic structures and their opportunities, with particular attention to singularities, known as exceptional points, emerging in these systems, and their unusual scattering response.
|
physics
|
We develop effective field theoretical descriptions of spin systems in the presence of symmetry-breaking effects: the magnetic field, single-ion anisotropy, and the Dzyaloshinskii-Moriya interaction. Starting from the lattice description of spin systems, we show that the symmetry-breaking terms corresponding to the above effects can be incorporated into the effective field theory as a combination of a background (or spurious) $SO(3)$ gauge field and a scalar field in the symmetric tensor representation, which are eventually fixed at their physical values. We use the effective field theory to investigate mode spectra of inhomogeneous ground states, focusing on one-dimensionally inhomogeneous states such as helical and spiral states. Although the helical and spiral ground states share the common feature of supporting gapless Nambu-Goldstone modes associated with translational symmetry breaking, they have qualitatively different dispersion relations: isotropic in the helical phase but anisotropic in the spiral phase. We also discuss the magnon production induced by an inhomogeneous magnetic field, and find a formula akin to the Schwinger formula. Our formula for the magnon production gives a finite rate for antiferromagnets and a vanishing rate for ferromagnets, whereas that for ferrimagnets interpolates between the two cases.
|
condensed matter
|
This article proposes a new class of Real Elliptically Skewed (RESK) distributions and associated clustering algorithms that allow for integrating robustness and skewness into a single unified cluster analysis framework. Non-symmetrically distributed and heavy-tailed data clusters have been reported in a variety of real-world applications. Robustness is essential because a few outlying observations can severely obscure the cluster structure. The RESK distributions are a generalization of the Real Elliptically Symmetric (RES) distributions. To estimate the cluster parameters and memberships, we derive an expectation maximization (EM) algorithm for arbitrary RESK distributions. Special attention is given to a new robust skew-Huber M-estimator, which is also the maximum likelihood estimator (MLE) for the skew-Huber distribution that belongs to the RESK class. Numerical experiments on simulated and real-world data confirm the usefulness of the proposed methods for skewed and heavy-tailed data sets.
|
electrical engineering and systems science
|
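The EM algorithm the RESK abstract derives alternates between computing posterior cluster memberships (E-step) and re-estimating the cluster parameters (M-step). As a hedged illustration, the following NumPy sketch shows that alternation for the simplest special case, a two-component Gaussian mixture in one dimension; the paper's robust skew-Huber M-estimation step for general RESK distributions is not reproduced here, and all numerical values are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from two well-separated clusters
x = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)])

# Initial guesses for means, scales, and mixing weights
mu = np.array([-1.0, 1.0])
sigma = np.array([2.0, 2.0])
pi = np.array([0.5, 0.5])

def gauss(x, m, s):
    """Gaussian density, the symmetric light-tailed member of the RESK family."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior membership probabilities (responsibilities)
    resp = pi[None, :] * np.stack([gauss(x, mu[k], sigma[k]) for k in range(2)], axis=1)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted updates of mixing weights, means, and scales
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk)

mu.sort()   # order the recovered cluster centers
```

For a skewed or heavy-tailed RESK component, only the density in `gauss` and the M-step weighting change; the E/M structure stays the same.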
A pedagogical introduction to low-energy effective field theories. In some of them, heavy particles are "integrated out" (a typical example is the Heisenberg-Euler EFT); in others, heavy particles remain but some of their degrees of freedom are "integrated out" (Bloch-Nordsieck EFT). A large part of these lectures is, technically, in the framework of QED. QCD examples, namely decoupling of heavy flavors and HQET, are discussed only briefly. However, effective field theories of QCD are very similar to the QED case; there are just some minor technical complications: more diagrams, color factors, etc. The method of regions provides an alternative view of low-energy effective theories; it is also briefly introduced.
|
high energy physics phenomenology
|
The design, fabrication, operation, and performance of a helium-3/4 dilution refrigerator and superconducting magnet system for holding a frozen-spin polarized hydrogen deuteride target in the Jefferson Laboratory CLAS detector during photon beam running is reported. The device operates both vertically (for target loading) and horizontally (for target bombardment). The device proves capable of maintaining a base temperature of 50 mK and a holding field of 1 Tesla for extended periods. These characteristics enabled multi-month polarization lifetimes for frozen-spin HD targets having proton polarization of up to 50% and deuteron polarization of up to 27%.
|
physics
|
Superparticle models with $OSp(N|2)$ supersymmetry group are studied. We first consider the $N=4$ case and construct the models with $\kappa$-symmetry on the coset spaces of the $OSp(4|2)$ supergroup. In addition, within the canonical formalism we present an $OSp(4|2)$ superparticle model with semi-dynamical angular variables. For generic $N$ we construct a superparticle model on $AdS_2\times S^{N-1}$ with the reduced $\kappa$-symmetry. It is demonstrated that the Hamiltonian of this model has the same structure as the one for the $N=4$ case because additional fermions contribute to the second-class constraints only.
|
high energy physics theory
|
We investigated the material parameters of several single-layer cuprates, including those with fluorinated buffer layers, with the aim of identifying possible high-temperature superconductors. To evaluate the material parameters, we use Wannierization techniques and the constrained random phase approximation. The obtained single-band Hubbard models are studied using the fluctuation-exchange approximation. Comparison among several cuprates reveals previously unknown high-$T_{\text{c}}$ superconductor candidates. In, Ga, Al, and Cd compounds in particular show the potential to exhibit higher-$T_{\text{c}}$ superconductivity than Hg1201.
|
condensed matter
|
Attractor black holes in type II string compactifications on $K3 \times T^2$ are in correspondence with equivalence classes of binary quadratic forms. The discriminant of the quadratic form governs the black hole entropy, and the count of attractor black holes at a given entropy is given by a class number. Here, we show this tantalizing relationship between attractors and arithmetic can be generalized to a rich family, connecting black holes in supergravity and string models with analogous equivalence classes of more general forms under the action of arithmetic groups. Many of the physical theories involved have played an earlier role in the study of "magical" supergravities, while their mathematical counterparts are directly related to geometry-of-numbers examples in the work of Bhargava et al. This paper is dedicated to the memory of Peter Freund. The last section is devoted to some of M.G.'s personal reminiscences of Peter Freund.
|
high energy physics theory
|
Random matrix theory has proven very successful in the understanding of the spectra of chaotic systems. Depending on symmetry with respect to time reversal and the presence or absence of a spin 1/2 there are three ensembles, the Gaussian orthogonal (GOE), Gaussian unitary (GUE), and Gaussian symplectic (GSE) one. With a further particle-antiparticle symmetry the chiral variants of these ensembles, the chiral orthogonal, unitary, and symplectic ensembles (the BDI, AIII, and CII in Cartan's notation) appear. A microwave study of the chiral ensembles is presented using a linear chain of evanescently coupled dielectric cylindrical resonators. In all cases the predicted repulsion behavior between positive and negative eigenvalues for energies close to zero could be verified.
|
condensed matter
|
We consider a nonlinearly coupled electromechanical system, and develop a quantitative theory for two-phonon cooling. In the presence of two-phonon cooling, the mechanical Hilbert space is effectively reduced to its ground and first excited states, thus forming a mechanical qubit. This allows for performing quantum operations at the level of individual mechanical phonons and preparing nonclassical mechanical states with negative Wigner functions. We propose a scheme for performing arbitrary Bloch sphere rotations, and derive the fidelity in the specific case of a $\pi$-pulse. We characterise detrimental processes that reduce the coherence in the system, and demonstrate that our scheme can be implemented in state-of-the-art electromechanical devices.
|
quantum physics
|
We study the magnetic and superconducting proximity effects in a semiconducting nanowire (NW) attached to superconducting leads and a ferromagnetic insulator (FI). We show that a sizable equilibrium spin polarization arises in the NW due to the interplay between the superconducting correlations and the exchange field in the FI. The resulting magnetization has a nonlocal contribution that spreads in the NW over the superconducting coherence length and is opposite in sign to the local spin polarization induced by the magnetic proximity effect in the normal state. For a Josephson-junction setup, we show that the nonlocal magnetization can be controlled by the superconducting phase bias across the junction. Our findings are relevant for the implementation of Majorana bound states in state-of-the-art hybrid structures.
|
condensed matter
|
Distributed statistical inference has recently attracted immense attention. The asymptotic efficiency of the maximum likelihood estimator (MLE), the one-step MLE, and the aggregated estimating equation estimator are established for generalized linear models under the "large $n$, diverging $p_n$" framework, where the dimension of the covariates $p_n$ grows to infinity at a polynomial rate $o(n^\alpha)$ for some $0<\alpha<1$. Then a novel method is proposed to obtain an asymptotically efficient estimator for large-scale distributed data by two rounds of communication. In this novel method, the assumption on the number of servers is more relaxed and thus practical for real-world applications. Simulations and a case study demonstrate the satisfactory finite-sample performance of the proposed estimators.
|
statistics
|
In completely generic four-dimensional gauge-Yukawa theories, the renormalization group $ \beta $-functions are known to the 3-2-2 loop order in gauge, Yukawa, and quartic couplings, respectively. It does, however, remain difficult to apply these results to specific models without the use of dedicated computer tools. We describe a procedure for extracting $ \beta $-functions using the general results and introduce RGBeta, a dedicated Mathematica package for extracting the $ \overline{\text{MS}} $ $ \beta $-functions in broad classes of models. The package and example notebooks are available from the GitHub repository at https://github.com/aethomsen/RGBeta .
|
high energy physics phenomenology
|
We study the mechanisms and evolutionary phases of bar formation in n-body simulations of a stellar disc and dark matter halo system using harmonic basis function expansion analysis to characterize the dynamical mechanisms in bar evolution. We correlate orbit families with phases of bar evolution by using empirical orthogonal functions that act as a spatial filter and form the gravitational potential basis. In both models we find evidence for three phases in evolution with unique harmonic signatures. We recover known analytic results, such as bar slowdown owing to angular momentum transfer. We also find new dynamical mechanisms for bar evolution: a steady-state equilibrium configuration and harmonic interaction resulting in harmonic mode locking, both of which may be observable. Additionally, we find that ellipse fitting may severely overestimate measurements of bar length by a factor of two relative to the measurements based on orbits that comprise the true backbone supporting the bar feature. The bias will lead to overestimates of both bar mass and bar pattern speed, affecting inferences about the evolution of bars in the real universe, such as the fraction of bars with fast pattern speeds. We propose a direct observational technique to compute the radial extent of trapped orbits and determine a dynamical length for the bar.
|
astrophysics
|
Photonic bound states in the continuum (BICs) are special localized and non-decaying states of a photonic system with a frequency embedded into the spectrum of scattered states. The simplest photonic structure displaying a single BIC is provided by two waveguides side-coupled to a common waveguide lattice, where the BIC is protected by symmetry. Here we consider such a simple photonic structure and show that, breaking mirror symmetry and allowing for non-nearest neighbor couplings, a doublet of quasi-BIC states can be sustained, enabling weakly-damped embedded Rabi oscillations of photons between the waveguides.
|
physics
|
A new class of higher-curvature modifications of $D(\geq 4)$-dimensional Einstein gravity has been recently identified. Densities belonging to this "Generalized quasi-topological" class (GQTGs) are characterized by possessing non-hairy generalizations of the Schwarzschild black hole satisfying $g_{tt}g_{rr}=-1$ and by having second-order equations of motion when linearized around maximally symmetric backgrounds. GQTGs for which the equation of the metric function $f(r)\equiv -g_{tt}$ is algebraic are called "Quasi-topological" and only exist for $D\geq 5$. In this paper we prove that GQTG and Quasi-topological densities exist in general dimensions and at arbitrarily high curvature orders. We present recursive formulas which allow for the systematic construction of $n$-th order densities of both types from lower order ones, as well as explicit expressions valid at any order. We also obtain the equation satisfied by $f(r)$ for general $D$ and $n$. Our results here tie up the remaining loose end in the proof presented in arXiv:1906.00987 that every gravitational effective action constructed from arbitrary contractions of the metric and the Riemann tensor is equivalent, through a metric redefinition, to some GQTG.
|
high energy physics theory
|
The Weak Gravity Conjecture (WGC) demands the existence of superextremal particles in any consistent quantum theory of gravity. The standard lore is that these particles are introduced to ensure that extremal black holes are either unstable or marginally stable, but it is not clear what is wrong if this doesn't happen. This note shows that, for a generic Einstein quantum theory of gravity in AdS, exact stability of extremal black branes is in tension with rigorously proven quantum information theorems about entanglement entropy. Avoiding the contradiction leads to a nonperturbative version of the WGC, which reduces to the usual statement at weak coupling. The argument is general, and it does not rely on either supersymmetry or a particular UV completion, assuming only the validity of Einsteinian gravity, effective field theory, and holography. The pathology is related to the development of an infinite throat in the near-horizon region of the extremal solutions, which suggests a connection to the ER=EPR proposal.
|
high energy physics theory
|
A spatial point process can be characterized by an intensity function which predicts the number of events that occur across space. In this paper, we develop a method to infer predictive intensity intervals by learning a spatial model using a regularized criterion. We prove that the proposed method exhibits out-of-sample prediction performance guarantees which, unlike standard estimators, are valid even when the spatial model is misspecified. The method is demonstrated using synthetic as well as real spatial data.
|
statistics
|
Electron solid phases of matter are revealed by characteristic vibrational resonances. Sufficiently large magnetic fields can overcome the effects of disorder, leading to a weakly pinned collective mode called the magnetophonon. Consequently, in this regime it is possible to develop a tightly constrained hydrodynamic theory of pinned magnetophonons. The behavior of the magnetophonon resonance across thermal and quantum melting transitions has been experimentally characterized in two-dimensional electron systems. Applying our theory to these transitions we explain several key features of the data: (i) violation of the Fukuyama-Lee sum rule as the transition is approached is directly tied to the non-Lorentzian form taken by the resonance and (ii) the non-Lorentzian shape is caused by characteristic dissipative channels that become especially important close to melting: proliferating dislocations and uncondensed charge carriers.
|
condensed matter
|
We obtain some new results on products of large and small sets in the Heisenberg group as well as in the affine group over the prime field. Also, we derive an application of these growth results to Freiman's isomorphism in nonabelian groups.
|
mathematics
|
Understanding relaxation of supercooled fluids is a major challenge and confining such systems can lead to bewildering behaviour. Here we exploit an optically confined colloidal model system in which we use reduced pressure as a control parameter. The dynamics of the system are ``Arrhenius'' at low and moderate pressure, but at higher pressures relaxation is faster than expected. We associate this faster relaxation with a decrease in density adjacent to the confining boundary due to local ordering in the system enabled by the flexible wall.
|
condensed matter
|
Optimizing the search for the best possible action, depending on factors such as the state of the environment and the system goal, has been a major area of study in computer systems. In any search algorithm, examining every known possibility leads to the construction of the whole state search space, an approach popularly called the minimax algorithm. This can lead to impractical time complexities that are unsuitable for real-time search. One practical way to reduce computation time is Alpha-Beta pruning: instead of searching the whole state space, unnecessary branches are pruned, which reduces the running time by a significant amount. This paper focuses on various possible implementations of Alpha-Beta pruning and gives insight into which algorithms are suitable for parallelism. Various studies have been conducted on how to make Alpha-Beta pruning faster. Parallelizing Alpha-Beta pruning for GPU-specific architectures such as mesh (CUDA) or for the shared-memory model (OpenMP) helps reduce computation time. This paper compares sequential and several parallel forms of Alpha-Beta pruning and their respective efficiency, using the game of chess as an application.
|
computer science
|
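The sequential baseline that the abstract above compares against can be sketched in a few lines. This toy Python version is not from the paper (which targets CUDA and OpenMP parallelizations of chess search); it only shows where the cut-offs occur that make Alpha-Beta cheaper than plain minimax. The tree encoding and leaf values are invented for illustration.

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning over an explicit game tree.

    `children(node)` lists successor states; `value(node)` scores leaves.
    A branch is abandoned as soon as alpha >= beta, since the opponent
    would never allow play to reach it.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:      # beta cut-off
                break
        return best
    else:
        best = math.inf
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, value))
            beta = min(beta, best)
            if alpha >= beta:      # alpha cut-off
                break
        return best

# Tiny two-ply tree encoded as dicts; leaves carry their static evaluation.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, -math.inf, math.inf, True,
                 lambda n: tree.get(n, []), lambda n: leaf.get(n, 0))
```

On this tree the leaf `b2` is never evaluated: after `b1` returns 2, the minimizing node's beta (2) falls below the root's alpha (3) and the branch is cut. The parallel variants studied in the paper differ mainly in how such cut-off information is shared between workers.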
In this paper, we develop the considerations of S. M. Min\v{c}i\'c [14, 15] about curvature tensors and pseudotensors for a non-symmetric affine connection space. We examine how many kinds of covariant derivatives suffice for complete research in the field of non-symmetric affine connection spaces, and we interpret this result. We then determine how many curvature tensors and pseudotensors of a non-symmetric affine connection space are linearly independent, and examine whether all curvature tensors can be expressed as linear combinations of the pseudotensors. At the end of this paper, we consider possible applications of these results in physics.
|
mathematics
|
We describe eclingo, a solver for epistemic logic programs under Gelfond 1991 semantics built upon the Answer Set Programming system clingo. The input language of eclingo uses the syntax extension capabilities of clingo to define subjective literals that, as usual in epistemic logic programs, allow for checking the truth of a regular literal in all or in some of the answer sets of a program. The eclingo solving process follows a guess and check strategy. It first generates potential truth values for subjective literals and, in a second step, it checks the obtained result with respect to the cautious and brave consequences of the program. This process is implemented using the multi-shot functionalities of clingo. We have also implemented some optimisations, aiming at reducing the search space and, therefore, increasing eclingo's efficiency in some scenarios. Finally, we compare the efficiency of eclingo with two state-of-the-art solvers for epistemic logic programs on a pair of benchmark scenarios and show that eclingo generally outperforms their obtained results. Under consideration for acceptance in TPLP.
|
computer science
|
We propose the use of Agent Based Models (ABMs) inside a reinforcement learning framework in order to better understand the relationship between automated decision making tools, fairness-inspired statistical constraints, and the social phenomena giving rise to discrimination towards sensitive groups. There have been many instances of discrimination occurring due to the application of algorithmic tools by public and private institutions. Until recently, these practices have mostly gone unchecked. Given the large-scale transformation these new technologies elicit, a joint effort of social sciences and machine learning researchers is necessary. Much of the research has been done on determining statistical properties of such algorithms and the data they are trained on. We aim to complement that approach by studying the social dynamics in which these algorithms are implemented. We show how bias can be accumulated and reinforced through automated decision making, and the possibility of finding a fairness-inducing policy. We focus on the case of recidivism risk assessment by considering simplified models of arrest. We find that if we limit our attention to what is observed and manipulated by these algorithmic tools, we may deem some blatantly unfair practices fair, illustrating the advantage of analyzing this otherwise elusive property with a system-wide model. We expect that the introduction of agent-based simulation techniques will strengthen collaboration with social scientists, leading to a better understanding of the social systems affected by technology and, hopefully, to concrete policy proposals that can be presented to policymakers for a true systemic transformation.
|
computer science
|
All-optical binary convolution with a photonic spiking vertical-cavity surface-emitting laser (VCSEL) neuron is proposed and demonstrated experimentally for the first time. Optical inputs, extracted from digital images and temporally encoded using rectangular pulses, are injected in the VCSEL neuron which delivers the convolution result in the number of fast (<100 ps long) spikes fired. Experimental and numerical results show that binary convolution is achieved successfully with a single spiking VCSEL neuron and that all-optical binary convolution can be used to calculate image gradient magnitudes to detect edge features and separate vertical and horizontal components in source images. We also show that this all-optical spiking binary convolution system is robust to noise and can operate with high-resolution images. Additionally, the proposed system offers important advantages such as ultrafast speed, high energy efficiency and simple hardware implementation, highlighting the potentials of spiking photonic VCSEL neurons for high-speed neuromorphic image processing systems and future photonic spiking convolutional neural networks.
|
physics
|
In the "ballistic" regime, the transport across a normal metal (N)/superconductor (S) point-contact is dominated by a quantum process called Andreev reflection. Andreev reflection causes an enhancement of the conductance below the superconducting energy gap, and the ratio of the low-bias and the high-bias conductance cannot be greater than 2 when the superconductor is conventional in nature. In this regime, the features associated with Andreev reflection also provide energy and momentum-resolved spectroscopic information about the superconducting phase. Here we theoretically consider various types of N/S point contacts, away from the ballistic regime, and show that even when the superconductor under investigation is a simple conventional superconductor, depending on the shape, size and anatomy of the point contacts, a wide variety of spectral features may appear in the conductance spectra. Such features may misleadingly mimic theoretically expected signatures of exotic physical phenomena like Klein tunneling in topological superconductors, Andreev bound states in unconventional superconductors, multiband superconductivity and Majorana zero modes.
|
condensed matter
|
The recently introduced topological quantum chemistry (TQC) framework has provided a description of universal topological properties of all possible band insulators in all space groups based on crystalline unitary symmetries and time reversal. While this formalism filled the gap between the mathematical classification and the practical diagnosis of topological materials, an obvious limitation is that it only applies to weakly interacting systems, which can be described within band theory. It is an open question to what extent this formalism can be generalized to correlated systems, which can exhibit symmetry-protected topological phases that are not adiabatically connected to any band insulator. In this work we address the many facets of this question by considering the specific example of a Hubbard diamond chain. This model features a Mott insulator, a trivial insulating phase and an obstructed atomic limit phase. Here we discuss the nature of the Mott insulator and determine the phase diagram and topology of the interacting model with infinite density matrix renormalization group calculations, variational Monte Carlo simulations and many-body topological invariants. We then proceed by considering a generalization of the TQC formalism to Green's functions, combined with the concept of the topological Hamiltonian, to identify the topological nature of the phases, using cluster perturbation theory to calculate the Green's functions. The results are benchmarked against the phase diagram determined above, and we discuss the applicability and limitations of the approach and its possible extensions.
|
condensed matter
|
In this paper we consider an ergodic diffusion process with jumps whose drift coefficient depends on an unknown parameter $\theta$. We suppose that the process is discretely observed at the instants $(t^n_i)_{i=0,\dots,n}$ with $\Delta_n = \sup_{i=0,\dots,n-1} (t^n_{i+1} - t^n_i) \rightarrow 0$. We introduce an estimator of $\theta$, based on a contrast function, which is efficient without requiring any conditions on the rate at which $\Delta_n \rightarrow 0$, and where we allow the observed process to have non-summable jumps. This extends earlier results where the condition $n\Delta_n^3 \rightarrow 0$ was needed (see [10], [24]) and where the process was supposed to have summable jumps. Moreover, in the case of a finite jump activity, we propose explicit approximations of the contrast function, such that the efficient estimation of $\theta$ is feasible under the condition that $n\Delta_n^k \rightarrow 0$, where $k > 0$ can be arbitrarily large. This extends the results obtained by Kessler [15] in the case of continuous processes. Keywords: L\'evy-driven SDE, efficient drift estimation, high frequency data, ergodic properties, thresholding methods.
|
mathematics
|
We present a detailed spectral analysis of the black hole candidate MAXI J1836-194. The source was caught in the intermediate state during its 2011 outburst by Suzaku and RXTE. We jointly fit the X-ray data from these two missions using the relxill model to study the reflection component, and a steep inner emissivity profile indicating a compact corona as the primary source is required in order to achieve a good fit. In addition, a reflection model with a lamp-post configuration (relxilllp), which is normally invoked to explain the steep emissivity profile, gives a worse fit and is excluded at 99% confidence level compared to relxill. We also explore the effect of the ionization gradient on the emissivity profile by fitting the data with two relativistic reflection components, and it is found that the inner emissivity flattens. These results may indicate that the ionization state of the disc is not constant. All the models above require a supersolar iron abundance higher than 4.5. However, we find that the high-density version of reflionx can describe the same spectra even with solar iron abundance well. A moderate rotating black hole (a* = 0.84-0.94) is consistently obtained by our models, which is in agreement with previously reported values.
|
astrophysics
|
FCNC processes offer important tools to test the Standard Model (SM) and to search for possible new physics. In this work, we investigate the $s\to d\nu\bar{\nu}$ rare hyperon decays in the SM and beyond. We find that in the SM the branching ratios for these rare hyperon decays range from $10^{-14}$ to $10^{-11}$. When all the errors in the form factors are included, we find that the final branching fractions for most decay modes have an uncertainty of about $5\%$ to $10\%$. After taking into account the contributions from new physics, in the generalized SUSY extension of the SM and the minimal 331 model, the decay widths for these channels can be enhanced by a factor of $2 \sim 7$.
|
high energy physics phenomenology
|
Smart meters (SMs) can pose privacy threats for consumers, an issue that has received significant attention in recent years. This paper studies the impact of Side Information (SI) on the performance of distortion-based real-time privacy-preserving algorithms for SMs. In particular, we consider a deep adversarial learning framework, in which the desired releaser (a recurrent neural network) is trained by fighting against an adversary network until convergence. To define the loss functions, two different approaches are considered: the Causal Adversarial Learning (CAL) and the Directed Information (DI)-based learning. The main difference between these approaches is in how the privacy term is measured during the training process. On the one hand, the releaser in the CAL method, by getting supervision from the actual values of the private variables and feedback from the adversary performance, tries to minimize the adversary log-likelihood. On the other hand, the releaser in the DI approach completely relies on the feedback received from the adversary and is optimized to maximize its uncertainty. The performance of these two algorithms is evaluated empirically using real-world SMs data, considering an attacker with access to SI (e.g., the day of the week) that tries to infer the occupancy status from the released SMs data. The results show that, although they perform similarly when the attacker does not exploit the SI, in general, the CAL method is less sensitive to the inclusion of SI. However, in both cases, privacy levels are significantly affected, particularly when multiple sources of SI are included.
|
electrical engineering and systems science
|
We study the vanishing dissipation limit of the three-dimensional (3D) compressible Navier-Stokes-Fourier equations to the corresponding 3D full Euler equations. Our results are twofold. First, we prove that the 3D compressible Navier-Stokes-Fourier equations admit a family of smooth solutions that converge to the planar rarefaction wave solution of the 3D compressible Euler equations with arbitrary strength. Second, we obtain a uniform convergence rate in terms of the viscosity and heat-conductivity coefficients. For this multi-dimensional problem, we first need to introduce the hyperbolic wave to recover the physical dissipations of the inviscid rarefaction wave profile as in our previous work [29] on the two-dimensional (2D) case. However, due to the 3D setting that makes the analysis significantly more challenging than the 2D problem, the hyperbolic scaled variables for the space and time could not be used to normalize the dissipation coefficients as in the 2D case. Instead, the analysis of the 3D case is carried out in the original non-scaled variables, and consequently the dissipation terms are more singular compared with the 2D scaled case. Novel ideas and techniques are developed to establish the uniform estimates. In particular, more accurate {\it a priori} assumptions with respect to the dissipation coefficients are crucially needed for the stability analysis, and some new observations on the cancellations of the physical structures for the flux terms are essentially used to justify the 3D limit. Moreover, we find that the decay rate with respect to the dissipation coefficients is determined by the nonlinear flux terms in the original variables for the 3D limit in this paper, but fully determined by the error terms in the scaled variables for the 2D case in [29].
|
mathematics
|
We present a data-driven analysis of the resonant S-wave $\pi\pi \to \pi\pi$ and $\pi K \to \pi K$ reactions using the partial-wave dispersion relation. The contributions from the left-hand cuts are accounted for using the Taylor expansion in a suitably constructed conformal variable. The fits are performed to experimental and lattice data as well as Roy analyses. For the $\pi\pi$ scattering we present both a single- and coupled-channel analysis by additionally including the $K\bar{K}$ channel. For the latter the central result is the Omn\`es matrix, which is consistent with the most recent Roy and Roy-Steiner results on $\pi\pi \to \pi\pi$ and $\pi\pi \to K\bar{K}$, respectively. By analytic continuation to the complex plane, we find poles associated with the lightest scalar resonances $\sigma/f_0(500)$, $f_0(980)$, and $\kappa/K_0^*(700)$ at the physical pion mass and, in the case of $\sigma/f_0(500)$ and $\kappa/K_0^*(700)$, also at unphysical pion mass values.
|
high energy physics phenomenology
|
For a relatively large class of well-behaved absorbing (or killed) finite Markov chains, we give detailed quantitative estimates regarding the behavior of the chain before it is absorbed (or killed). Typical examples are random walks on box-like finite subsets of the square lattice $\mathbb Z^d$ absorbed (or killed) at the boundary. The analysis is based on Poincar\'e, Nash, and Harnack inequalities, moderate growth, and on the notions of John and inner-uniform domains.
|
mathematics
|
In this paper, a new class of Finsler metrics, which includes the $(\alpha,\beta)$-metrics, is introduced. These metrics are defined by a Riemannian metric and two 1-forms $\beta=b_i(x)y^i$ and $\gamma= \gamma_i(x)y^i$. This class generalizes the $(\alpha,\beta)$-metrics, although its members are not always $(\alpha,\beta)$-metrics. We find a necessary and sufficient condition for such a metric to be locally projectively flat, and then we prove the conditions under which it is of Douglas type.
|
mathematics
|
Considering 3D heterogeneous anisotropic media, in Part III of this study we obtain the finite-element solution for the stationary ray path between two fixed endpoints. It involves computation of the global (all-node) traveltime gradient vector and Hessian matrix, with respect to the nodal location and direction components. The global traveltime Hessian of the resolved stationary ray also plays an important role in obtaining the dynamic characteristics along the ray. It is a band matrix that includes the spatial, directional and mixed second derivatives of the traveltime at all nodes and nodal pairs. In Parts V, VI and VII of this study we explicitly use the global traveltime Hessian for solving the dynamic ray tracing equation, to obtain the geometric spreading at each point along the ray. Moreover, the dynamic ray tracing makes it possible to identify/classify caustics along the ray. In this part (Part IV) we propose an original two-stage approach for computing the relative geometric spreading of the entire ray path without explicitly performing the dynamic ray tracing. The first stage consists of an efficient algorithm for reducing the already computed global traveltime Hessian into an endpoint spatial traveltime Hessian. The second stage involves application of a known workflow for computing the geometric spreading using the endpoint traveltime Hessian. Note that the method proposed in this part (Part IV) does not deliver information about caustics, nor the geometric spreading at intermediate points of the stationary ray path. Through the examples presented in Part VII, we demonstrate the accuracy of the proposed method on a set of isotropic and anisotropic cases by comparing the geometric spreading computed with the traveltime Hessian reduction against that computed from the dynamic ray tracing.
|
physics
|
Significant progress in many classes of materials could be made with the availability of experimentally-derived large datasets composed of atomic identities and three-dimensional coordinates. Methods for visualizing the local atomic structure, such as atom probe tomography (APT), which routinely generate datasets comprising millions of atoms, are an important step in realizing this goal. However, state-of-the-art APT instruments generate noisy and sparse datasets that provide information about elemental type, but obscure atomic structures, thus limiting their subsequent value for materials discovery. The application of a materials fingerprinting process, a machine learning algorithm coupled with topological data analysis, provides an avenue by which previously inaccessible structural information can be extracted from an APT dataset. As a proof of concept, the materials fingerprint is applied to high-entropy alloy APT datasets containing body-centered cubic (BCC) and face-centered cubic (FCC) crystal structures. A local atomic configuration centered on an arbitrary atom is assigned a topological descriptor, with which it can be characterized as a BCC or FCC lattice with near perfect accuracy, despite the inherent noise in the dataset. This successful identification of a fingerprint is a crucial first step in the development of algorithms which can extract more nuanced information, such as chemical ordering, from existing datasets of complex materials.
|
condensed matter
|
In this work, we apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors. As a novel contribution, we adopted a resampling white-box approach to advance towards a statistical understanding of uncertainties intrinsic to CNNs in GW data analysis. Resampling is performed by repeated $k$-fold cross-validation experiments, and for a white-box approach, the behavior of CNNs is mathematically described in detail. Through a Morlet wavelet transform, strain time series are converted to time-frequency images, which in turn are reduced before generating input datasets. Moreover, to reproduce more realistic experimental conditions, we worked only with data of non-Gaussian noise and hardware injections, removing the freedom to set signal-to-noise ratio (SNR) values in GW templates by hand. After hyperparameter adjustments, we found that resampling smooths the stochasticity of mini-batch stochastic gradient descent by reducing mean accuracy perturbations by a factor of $3.6$. CNNs were quite precise in detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited for noise reduction than for the generation of GW triggers. However, applying a post-analysis, we found that for GW signals of SNR $\geq 21.80$ with H1 data and SNR $\geq 26.80$ with L1 data, CNNs could remain tentative alternatives for detecting GW signals. Besides, with receiver operating characteristic curves we found that CNNs show much better performance than Naive Bayes and Support Vector Machine models and, at a significance level of $5\%$, we estimated that the predictions of CNNs are significantly different from those of a random classifier. Finally, we elucidated that the performance of CNNs is highly class dependent because of the distribution of the probabilistic scores output by the softmax layer.
|
astrophysics
|