text | label
---|---
We introduce a general class of combinatorial objects, called \emph{multi-complexes}, which simultaneously generalizes graphs, multigraphs, hypergraphs, and simplicial and delta complexes. We introduce a natural algebra of multi-complexes, defined as the algebra with formal basis $\mathcal{C}$ consisting of all isomorphism types of multi-complexes, with multiplication given by disjoint union. This is a Hopf algebra, with an operation encoding the disassembly information for such objects, and it extends the Hopf algebra of graphs. In our main result, we explicitly describe the structure of this Hopf algebra of multi-complexes $H$. We find an explicit basis $\mathcal{B}$ of the space of primitives, which is of combinatorial relevance: each multi-complex is a polynomial with non-negative integer coefficients in the elements of $\mathcal{B}$, and each $b\in\mathcal{B}$ is a polynomial with integer coefficients in $\mathcal{C}$. Using this, we find a cancellation- and grouping-free formula for the antipode. The coefficients appearing in all these polynomials are, up to sign, numbers counting multiplicities of sub-multi-complexes in a multi-complex. We also explicitly illustrate how our results specialize to the graph Hopf algebra, and observe how they specialize to results in all of the particular cases mentioned above. We also investigate applications of these results to the graph reconstruction conjectures, and rederive some results in the literature on these questions.
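For orientation, in the graph Hopf algebra that this construction extends, the product is disjoint union and the coproduct disassembles a graph over complementary vertex subsets; a standard form (stated here only as background, not as the paper's multi-complex coproduct) is \begin{align*} \Delta(G)=\sum_{S\subseteq V(G)} G|_{S}\otimes G|_{V(G)\setminus S}, \end{align*} with counit $\epsilon(G)=1$ if $G$ is empty and $0$ otherwise; the primitives referred to above are the elements $x$ satisfying $\Delta(x)=x\otimes 1+1\otimes x$.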
|
mathematics
|
A novel numerical technique has been proposed to solve a two-phase tumour growth model in one spatial dimension without needing to account for the boundary dynamics explicitly. Equivalence to the standard definition of a weak solution is proved. The method is tested against equations with analytically known solutions to illustrate its advantages over existing techniques. The tumour growth model is solved using the new procedure and shown to be consistent with results available in the literature.
|
mathematics
|
We present a general approach for obtaining generalized transport equations with fractional derivatives by using the Liouville equation with fractional derivatives for a system of classical particles together with Zubarev's nonequilibrium statistical operator (NSO) method within Gibbs statistics. New non-Markovian diffusion equations for ions in a spatially heterogeneous environment with fractal structure are obtained, as well as a generalized Cattaneo-Maxwell diffusion equation that takes space-time nonlocality into account. Dispersion relations are found for the Cattaneo-Maxwell diffusion equation with space-time nonlocality in fractional derivatives. The frequency spectrum and the phase and group velocities are calculated. It is shown that the solution has a wave behaviour with discontinuities, which are also manifested in the behaviour of the phase velocity.
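For orientation, the integer-order Cattaneo-Maxwell diffusion equation that such fractional models generalize reads, in a standard form (background only; the paper's equations replace the time and space derivatives by fractional ones), \begin{align*} \tau\,\frac{\partial^{2} n}{\partial t^{2}}+\frac{\partial n}{\partial t}=D\,\nabla^{2} n, \end{align*} where $\tau$ is the relaxation time and $D$ the diffusion coefficient; it interpolates between wave-like transport for $t\ll\tau$ and ordinary diffusion for $t\gg\tau$, which is the origin of the wave behaviour with finite propagation speed noted above.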
|
condensed matter
|
Electron velocity distribution functions in the solar wind, according to standard models, consist of four components, of which three are symmetric (the core, the halo, and the superhalo) and one is a magnetic-field-aligned, beam-like population referred to as the strahl. We analysed in-situ measurements provided by the two Helios spacecraft to study the behaviour of the last of these, the strahl electron population, in the inner Solar system between 0.3 and 1 au. The strahl is characterised by a pitch-angle width (PAW) that depends on electron energy and evolves with radial distance. We find different behaviour of the strahl electrons for solar wind separated into types by the core electron parallel beta value ($\beta_{ec\parallel}$). For the low-$\beta_{ec\parallel}$ solar wind the strahl component is more pronounced, and the variation of PAW is electron-energy dependent. At low energies a slight focusing over distance is observed, and the strahl PAW measured at 0.34 au agrees with the width predicted by a collisionless focusing model. The broadening observed for higher-energy strahl electrons during expansion can be described by an exponential relation, which points toward an energy-dependent scattering mechanism. In the high-$\beta_{ec\parallel}$ solar wind the strahl appears broader, consistent with the high-$\beta_{ec\parallel}$ plasma being more unstable with respect to kinetic instabilities. Finally, we extrapolate our observations to a distance of 0.16 au, predicting the strahl PAWs in the low-$\beta_{ec\parallel}$ solar wind to be $\sim 29^\circ$ for all energies, and in the high-$\beta_{ec\parallel}$ solar wind somewhat broader, ranging between $37^\circ$ and $65^\circ$.
|
physics
|
The early detection of anomalous events in time series data is essential in many application domains. In this paper we deal with critical health events, which represent a significant cause of mortality in hospital intensive care units. The timely prediction of these events is crucial for mitigating their consequences and improving healthcare. One of the most common approaches to early anomaly detection problems is to use standard classification methods. In this paper we propose a novel method that uses a layered learning architecture to address these tasks. One key contribution of our work is the idea of pre-conditional events, which denote arbitrary but computable relaxed versions of the event of interest. We leverage this idea to break the original problem into two hierarchical layers, which we hypothesize are easier to solve. The results suggest that the proposed approach leads to better performance relative to state-of-the-art approaches for critical health episode prediction.
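As an illustration of the layered architecture described above, the following is a minimal sketch (illustrative names and models; `y_pre` denotes labels of the relaxed pre-conditional event, and this is not the paper's exact pipeline):

```python
# Layer 1 learns the relaxed "pre-conditional" event; layer 2 learns the
# target event using the layer-1 score as an additional feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class LayeredLearner:
    def __init__(self):
        self.layer1 = RandomForestClassifier(n_estimators=200)
        self.layer2 = RandomForestClassifier(n_estimators=200)

    def fit(self, X, y_pre, y_event):
        self.layer1.fit(X, y_pre)                      # relaxed event
        s1 = self.layer1.predict_proba(X)[:, [1]]      # layer-1 score
        self.layer2.fit(np.hstack([X, s1]), y_event)   # target event
        return self

    def predict_proba(self, X):
        s1 = self.layer1.predict_proba(X)[:, [1]]
        return self.layer2.predict_proba(np.hstack([X, s1]))[:, 1]
```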
|
statistics
|
Besides describing tunneling in static potential landscapes, the Wentzel-Kramers-Brillouin (WKB) approach is a powerful nonperturbative approximation tool for studying particle creation due to time-dependent background fields, such as cosmological particle production or the Sauter-Schwinger effect, i.e., electron-positron pair creation in a strong electric field. However, our understanding of particle creation processes in background fields depending on both space and time is rather incomplete. In order to venture in this direction, we propose a generalization of the WKB method to truly spacetime-dependent fields and apply it to the case of a spacetime-dependent mass.
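As background, the purely time-dependent benchmark that such a generalization must reproduce is the Sauter-Schwinger pair-creation probability in a constant electric field $E$, whose leading nonperturbative exponential is the well-known \begin{align*} P\sim\exp\!\left(-\frac{\pi m^{2}c^{3}}{q E\hbar}\right), \end{align*} a dependence on the field strength that cannot be obtained at any finite order of perturbation theory.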
|
high energy physics theory
|
In this paper we introduce JuliaSim, a high-performance programming environment designed to blend traditional modeling and simulation with machine learning. JuliaSim can build accelerated surrogates from component-based models, such as those conforming to the FMI standard, using continuous-time echo state networks (CTESN). The foundation of this environment, ModelingToolkit.jl, is an acausal modeling language which can compose the trained surrogates as components within its staged compilation process. As a complementary component, we present the JuliaSim model library, a standard library with differential-algebraic equations and pre-trained surrogates, which can be composed using the modeling system for design, optimization, and control. We demonstrate the effectiveness of the surrogate-accelerated modeling and simulation approach on HVAC dynamics by showing that the CTESN surrogates accurately capture the dynamics of an HVAC cycle at less than 4\% error while accelerating its simulation by 340x. We illustrate the use of surrogate acceleration in the design process via global optimization of simulation parameters using the embedded surrogate, yielding a speedup of two orders of magnitude in finding the optimum. We showcase the surrogate deployed in a co-simulation loop, as a drop-in replacement for one of the coupled FMUs, allowing engineers to effectively explore the design space of a coupled system. Together this demonstrates a workflow for automating the integration of machine learning techniques into traditional modeling and simulation processes.
|
computer science
|
Metal nanostructures are key elements in nano-optics owing to their strong resonant interaction with light through local plasmonic charge oscillations. Their ability to shape light at the nanoscale has made them important across a multitude of areas, including biosensing, energy conversion and ultrathin flat metaoptics. Further avenues are foreseen for dynamic nanoantennas, ranging from tuneable metalenses for miniaturized medical devices to adaptable windows that control radiation flows in and out of buildings. However, enabling nano-optical antennas to be dynamically controllable remains highly challenging, particularly so for traditional metals with fixed permittivity. Here we present state-of-the-art conductive polymers as a new class of organic plasmonic materials for redox-tuneable nano-optics. Through experiments and simulations, we show that nanodisks of highly conductive polymers can provide clear optical extinction peaks via excitation of dipolar localised surface plasmon resonances. Resonance frequencies redshift with increasing nanodisk aspect ratio, in agreement with analytical calculations based on dipolar polarizability theory. We furthermore demonstrate complete switching of the optical response of the organic nanoantennas by chemical tuning of the polymer's redox state, which effectively modulates the material permittivity between plasmonic and non-plasmonic regimes. Our results thereby show that conductive polymer nanostructures can act as redox-tuneable plasmonic nanoantennas, based on bipolaronic charge carriers rather than electrons as in conventional metals. Future directions may investigate different polymers and geometries to further widen the plasmonic spectral range (here around 0.8 to 3.6 {\mu}m) as well as different ways of tuning.
|
physics
|
The Hamiltonian Monte Carlo (HMC) method allows sampling from continuous densities. Favorable scaling with dimension has led to wide adoption of HMC by the statistics community. Modern auto-differentiating software should enable more widespread usage in Bayesian inverse problems. This paper analyzes the two major difficulties encountered when using HMC for inverse problems: poor conditioning and multi-modality. Novel results on preconditioning and on replica exchange Monte Carlo parameter selection are presented in the context of spectroscopy. The recommendations are analyzed rigorously in the Gaussian case, and shown to generalize to a fusion plasma reconstruction.
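For concreteness, a single HMC transition consists of a leapfrog trajectory followed by a Metropolis correction; a minimal sketch, assuming `log_prob` and `grad_log_prob` are supplied by the user:

```python
# One HMC transition: leapfrog integration of Hamiltonian dynamics,
# then accept/reject on the total energy. Illustrative only.
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=None):
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)                  # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(x_new)   # half momentum step
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new                    # full position step
        p_new += step_size * grad_log_prob(x_new)     # full momentum step
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)   # final half step
    # Metropolis correction on H = -log_prob + kinetic energy
    h_old = -log_prob(x) + 0.5 * p @ p
    h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
    return x_new if rng.random() < np.exp(h_old - h_new) else x

# Example: sampling a standard Gaussian
samples, x = [], np.zeros(2)
for _ in range(1000):
    x = hmc_step(x, lambda z: -0.5 * z @ z, lambda z: -z)
    samples.append(x)
```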
|
statistics
|
We consider reinforcement learning with performance evaluated by a dynamic risk measure. We construct a projected risk-averse dynamic programming equation and study its properties. Then we propose risk-averse counterparts of the methods of temporal differences and we prove their convergence with probability one. We also perform an empirical study on a complex transportation problem.
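For reference, the risk-neutral temporal-difference update that such methods build on is the standard TD(0) rule \begin{align*} V(s_{t})\leftarrow V(s_{t})+\alpha_{t}\big(r_{t}+\gamma V(s_{t+1})-V(s_{t})\big); \end{align*} roughly speaking, the risk-averse counterparts replace the expectation implicit in this update with a dynamic risk measure while keeping the same incremental structure.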
|
mathematics
|
We discuss low-frequency g modes excited by resonant couplings with weakly unstable oscillatory convective modes in the rotating convective core of early-type main-sequence stars. Our non-adiabatic pulsation analyses, including the effect of the Coriolis force, for $2\,M_\odot$ main-sequence models show that if the convective core rotates slightly faster than the surrounding radiative layers, g modes in the radiative envelope are excited by resonance coupling. The frequency of the excited g mode in the inertial frame is close to $|m\Omega_{\rm c}|$, with $m$ and $\Omega_{\rm c}$ being the azimuthal order of the g mode and the rotation frequency of the convective core, respectively. These g-mode frequencies are consistent with those of photometric rotational modulations and harmonics observed in many early-type main-sequence stars.
|
astrophysics
|
Among tissue imaging modalities, photo-acoustic tomography (PAT) has received increasing attention in the recent past because it offers high contrast, high penetrability, and the capability of retrieving high resolution. The reconstruction method used in PAT plays a crucial role in its applicability, and PAT finds particularly wide applicability when a model-based regularized reconstruction method is used. A crucial factor that determines the quality of reconstruction in such methods is the choice of regularization weight. Unfortunately, an appropriately tuned value of the regularization weight varies significantly with the noise level, as well as with the high-resolution contents of the image, in a way that has not been well understood. There have been attempts to determine the optimal regularization weight from the measured data in the context of elementary and general-purpose regularizations. In this paper, we develop a method for semi-automated tuning of the regularization weight in the context of a modern type of regularization that was specifically designed for PAT image reconstruction. As a first step, we introduce a relative smoothness constraint with a parameter; this parameter computationally maps into the actual regularization weight, but its tuning does not vary significantly with the noise level or with the high-resolution contents of the image. Next, we construct an algorithm that integrates the task of determining this mapping with obtaining the reconstruction. Finally, we demonstrate experimentally that we can run this algorithm with a nominal value of the relative smoothness parameter -- a value independent of the noise level and the structure of the underlying image -- to obtain good-quality reconstructions.
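In generic terms, such model-based methods compute (a standard variational form, given only as background; the paper's specific regularizer and constraint differ) \begin{align*} \hat{x}=\arg\min_{x}\;\|Ax-y\|_{2}^{2}+\lambda\,R(x), \end{align*} where $A$ is the forward PAT operator, $y$ the measured data, $R$ the regularizer, and $\lambda$ the regularization weight whose tuning is at issue above.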
|
electrical engineering and systems science
|
Autonomous vehicle (AV) technology is rapidly becoming a reality on U.S. roads, offering the promise of improvements in traffic management, safety, and the comfort and efficiency of vehicular travel. With this increasing popularity and ubiquitous deployment, resilience has become a critical requirement for public acceptance and adoption. Recent studies into the resilience of AVs have shown that though the AV systems are improving over time, they have not reached human levels of automation. Prior work in this area has studied the safety and resilience of individual components of the AV system (e.g., testing of neural networks powering the perception function). However, methods for holistic end-to-end resilience assessment of AV systems are still non-existent.
|
electrical engineering and systems science
|
We explore the possible physical origin of the correlation between radio-wave and very-high-energy neutrino emission in active galactic nuclei (AGN), suggested by recently reported evidence for a correlation between neutrino arrival directions and the positions of the brightest radio-loud AGN. We show that such a correlation is expected if both the synchrotron-emitting electrons and the neutrinos originate from decays of charged pions produced in proton-proton interactions in a parsec-scale relativistic jet propagating through the circum-nuclear medium of the AGN.
|
astrophysics
|
Metasurfaces are subwavelength-structured artificial media that can shape and localize electromagnetic waves in unique ways. The inverse design of these devices is a non-convex optimization problem in a high dimensional space, making global optimization a major challenge. We present a new type of population-based global optimization algorithm for metasurfaces that is enabled by the training of a generative neural network. The loss function used for backpropagation depends on the generated pattern layouts, their efficiencies, and efficiency gradients, which are calculated by the adjoint variables method using forward and adjoint electromagnetic simulations. We observe that the distribution of devices generated by the network continuously shifts towards high-performance design space regions over the course of optimization. Upon training completion, the best generated devices have efficiencies comparable to or exceeding the best devices designed using standard topology optimization. Our proposed global optimization algorithm can be applied generally to other gradient-based optimization problems in optics, mechanics and electronics.
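A conceptual sketch of such a generator-training loop follows (illustrative; `simulate`, the exponential loss weighting and all hyperparameters are stand-ins for the adjoint electromagnetic machinery described above, not the paper's exact algorithm):

```python
# Generator-based global optimization sketch (PyTorch). `simulate` stands
# in for forward/adjoint EM solves; it returns eff: (batch,) efficiencies
# and grad: (batch, n_pixels) adjoint gradients for flattened layouts.
import torch

def train_generator(generator, simulate, latent_dim, n_steps=1000,
                    batch=32, sigma=0.5):
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    for _ in range(n_steps):
        z = torch.randn(batch, latent_dim)
        patterns = generator(z)                    # candidate device layouts
        eff, grad = simulate(patterns.detach())    # external EM solver
        # Push each pattern along its adjoint gradient, weighting
        # high-efficiency devices exponentially more strongly.
        weights = torch.exp(eff / sigma).unsqueeze(1)
        loss = -(weights * grad * patterns).mean()
        opt.zero_grad(); loss.backward(); opt.step()
```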
|
physics
|
In mixture modeling and clustering applications, the number of components is often not known. The stick-breaking model is an appealing construction that assumes infinitely many components, while shrinking most of the redundant weights to near zero. However, it has been discovered that such shrinkage is unsatisfactory: even when the component distribution is correctly specified, small and spurious weights will appear and give an inconsistent estimate of the cluster number. In this article, we propose a simple solution that gains stronger control over the redundant weights -- when breaking each stick into two pieces, we adjust the length of the second piece by multiplying it by a quasi-Bernoulli random variable, supported at one and at a positive constant close to zero. This substantially increases the chance of shrinking {\em all} the redundant weights to almost zero, leading to a consistent estimator of the cluster number; at the same time, it avoids the singularity due to assigning an exactly zero weight, and maintains a support in the infinite-dimensional space. As a stick-breaking model, its posterior computation can be carried out efficiently via the classic blocked Gibbs sampler, allowing straightforward extension to non-Gaussian components. Compared to existing methods, our model demonstrates superior performance in simulations and a data application, showing a substantial reduction in the number of clusters.
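A minimal sketch of one natural reading of this construction (truncated to `K` sticks; `epsilon`, `p` and the Beta parameters are assumed illustrative choices, and the normalization details may differ from the paper's exact specification):

```python
# Quasi-Bernoulli stick-breaking weights: the multiplier s equals 1 with
# probability p and a small constant epsilon otherwise, shrinking all
# subsequent (redundant) sticks to near zero when epsilon is drawn.
import numpy as np

def qb_stick_breaking(K=50, alpha=1.0, epsilon=1e-2, p=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = np.empty(K)
    remaining = 1.0
    for k in range(K):
        v = rng.beta(1.0, alpha)                   # usual stick-breaking ratio
        w[k] = remaining * v                       # first piece becomes a weight
        s = 1.0 if rng.random() < p else epsilon   # quasi-Bernoulli multiplier
        remaining *= (1.0 - v) * s                 # second piece is shrunk by s
    return w / w.sum()                             # renormalize after truncation

print(qb_stick_breaking()[:10])
```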
|
statistics
|
The dynamical arrest of attractive colloidal particles into out-of-equilibrium structures, known as gelation, is central to biophysics, materials science, nanotechnology, and food and cosmetic applications, but a complete understanding is lacking. In particular, for intermediate particle density and attraction, the structure formation process remains unclear. Here, we show that the gelation of short-range attractive particles is governed by a nonequilibrium percolation process. We combine experiments on critical Casimir colloidal suspensions, numerical simulations, and analytical modeling with a master kinetic equation to show that cluster sizes and correlation lengths diverge with exponents 1.6 and 0.8, respectively, consistent with percolation theory, while detailed balance in the particle attachment and detachment processes is broken. Cluster masses exhibit power-law distributions with exponents -3/2 and -5/2 before and after percolation, as predicted by solutions of the master kinetic equation. These results, revealing a nonequilibrium continuous phase transition, unify structural arrest and yielding into related frameworks.
|
condensed matter
|
We apply the Effective Field Theory of Large-Scale Structure (EFTofLSS) to analyze cosmological models with clustering quintessence, which allows us to consistently describe the parameter region in which the quintessence equation of state $w < - 1$. First, we extend the description of biased tracers in redshift space to the presence of clustering quintessence, and compute the one-loop power spectrum. We solve the EFTofLSS equations using the exact time dependence, which is relevant for obtaining unbiased constraints. Then, fitting the full shape of BOSS pre-reconstruction power spectrum measurements, the BOSS post-reconstruction BAO measurements, BAO measurements from 6DF/MGS and eBOSS, the Supernovae from Pantheon, and a prior from BBN, we bound the clustering quintessence equation of state parameter to $w=-1.011_{-0.048}^{+0.053}$ at $68\%$ C.L. Further combining with Planck, we obtain $w=-1.028_{-0.030}^{+0.037}$ at $68\%$ C.L. We also obtain constraints on smooth quintessence, in the physical regime $w \geq -1$: combining all datasets, we get $-1\leq w < - 0.979$ at $68\%$ C.L. These results strongly support a cosmological constant.
|
astrophysics
|
We consider a team of mobile autonomous agents whose aim is to cover a given set of targets. Each agent aims to determine a target to select and physically reach by the final time, in coordination with other agents, given the locations of the targets. Agents are unaware of which targets other agents intend to cover. Each agent can control its mobility and to whom it sends information. We assume communication happens over a wireless channel that is subject to failures. Given this setup, we propose a decentralized algorithm based on the distributed fictitious play algorithm, in which agents reason about the selections and locations of other agents to decide which target to select, whether to communicate or not, with whom to communicate, and where to move. Specifically, the communication actions of the agents are learning-aware, and their mobility actions are sensitive to the communication success probability. We show that the decentralized algorithm guarantees that agents will cover their targets in finite time. Numerical experiments show that mobility control for communication and learning-aware voluntary communication protocols reduce the number of communication attempts in comparison to a benchmark distributed algorithm that relies on sustained communication.
|
electrical engineering and systems science
|
In this paper, we study a first Dirichlet eigenfunction of the weighted $p$-Laplacian on a bounded domain in a complete weighted Riemannian manifold. By constructing gradient estimates for a first eigenfunction, we obtain some relationships between weighted $p$-Laplacian first eigenvalues. As an immediate application, we also obtain some eigenvalue comparison results between the first Dirichlet eigenvalue of the weighted Laplacian, the first clamped plate eigenvalue and the first buckling eigenvalue.
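For orientation, one common convention (conventions vary; this is an assumption, not necessarily the paper's normalization) defines the weighted $p$-Laplacian on a smooth metric measure space $(M,g,e^{-\phi}\,dv)$ by \begin{align*} \Delta_{p,\phi}u=e^{\phi}\,\mathrm{div}\!\left(e^{-\phi}|\nabla u|^{p-2}\nabla u\right), \end{align*} with first Dirichlet eigenvalue on a bounded domain $\Omega$ characterized variationally by \begin{align*} \lambda_{1,p}(\Omega)=\inf\left\{\frac{\int_{\Omega}|\nabla u|^{p}e^{-\phi}\,dv}{\int_{\Omega}|u|^{p}e^{-\phi}\,dv}\;:\;u\in W_{0}^{1,p}(\Omega)\setminus\{0\}\right\}.\end{align*}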
|
mathematics
|
The paper investigates the linear growth of the mixing zone during polymer slug injection into a water reservoir. The velocities of the slug front and of the boundaries of the mixing zone are analyzed as key parameters. Using two different numerical methods (finite volumes and finite elements), the impact of the slug size, reservoir dimensions, Peclet number, and viscosity curve shape on the corresponding velocities is examined. Although the solution is realized by two different computational schemes, the simulation results coincide to sufficient accuracy. The numerically obtained velocities are compared with theoretical estimates within the transverse flow equilibrium approximation and the Koval model. Based on this comparison, recommendations are presented on the use of specific analytical methods for estimating the growth rate of the mixing zone depending on the characteristics of the polymer.
|
physics
|
The determination of the CKM element $V_{cb}$ from inclusive semileptonic $b\to c \ell \bar\nu$ decays has reached a high precision thanks to a combination of theoretical and experimental efforts. Aiming towards even higher precision, we discuss two processes that contaminate the inclusive $V_{cb}$ determination: the $b\to u$ background and the contribution of the tauonic mode $b\to c(\tau \to \mu\nu\bar{\nu})\bar{\nu}$. Both of these contributions are dealt with on the experimental side, using Monte-Carlo methods and momentum cuts. However, these contributions can be calculated with high precision within the Heavy-Quark Expansion. In this note, we calculate the theoretical predictions for these two processes. The $b\to u$ results are compared with generator-level Monte-Carlo results used at Belle and Belle II. We find good agreement between theory and Monte-Carlo for the lepton energy moments, but less so for the hadronic mass moments. Based on our results, the uncertainties due to these background processes can essentially be eliminated by properly including them in the analyses.
|
high energy physics phenomenology
|
Half a century after its discovery, the Josephson junction has become the most important nonlinear quantum electronic component at our disposal. It has helped reshape the SI system around quantum effects and is used in scores of quantum devices. By itself, the use of Josephson junctions in the volt metrology seems to imply an exquisite understanding of the component in every aspect. Yet, surprisingly, there have been long-standing subtle issues regarding the modeling of the interaction of a junction with its electromagnetic environment. Here, we find that a Josephson junction connected to a resistor does not become insulating beyond a given value of the resistance due to a dissipative quantum phase transition, as is commonly believed. Our work clarifies how this key quantum component behaves in the presence of a dissipative environment and provides a comprehensive and consistent picture, notably regarding the treatment of its phase.
|
condensed matter
|
By a combined experimental and theoretical approach, we investigate normal-state thermoelectric transport in MgB2 as a probe of selective disorder and doping in the sigma and pi bands. We calculate the temperature-dependent diffusive Seebeck coefficient Sdiff(T) with the Boltzmann equation solved in the relaxation-time approximation, taking into account the scattering with phonons and impurities, the effect of renormalization, and the effect of doping in a rigid-band approximation. We show that selective disorder has a sizeable effect on the Sdiff magnitude, as it tunes the relative contributions of the sigma and pi bands. Disorder also affects the Sdiff temperature dependence, eventually yielding a linear Sdiff(T) behavior in the dirty limit. We also show that band filling has opposite effects on S, depending on which band dominates transport. In parallel, we carry out Seebeck effect measurements on neutron-irradiated Mg11B2, and on two series of doped samples, Mg1-xAlxB2 and Mg(B1-xCx)2. From a comparison of calculated Sdiff(T) and experimental S(T) curves, we demonstrate that diffusive and phonon-drag terms give comparable contributions in clean samples, but the phonon-drag term is progressively suppressed with increasing disorder. In C- and Al-doped samples we observe very different experimental behaviors in terms of sign, magnitude and temperature dependence. Indeed, notwithstanding the similar electron doping introduced by both substitutions, C or Al doping yields disorder which mainly affects either the sigma or the pi bands, respectively. With the help of our ab-initio approach, we are able to disentangle the various effects and prove that the Seebeck coefficient is a very sensitive probe of the kind of disorder.
|
condensed matter
|
In this article we consider the simultaneous recovery of bulk and boundary potentials in (degenerate) elliptic equations modelling (degenerate) conducting media with inaccessible boundaries. This connects local and nonlocal Calder\'on type problems. We prove two main results on this type of problem: On the one hand, we derive simultaneous bulk and boundary Runge approximation results. Building on these, we deduce uniqueness for localized bulk and boundary potentials. On the other hand, we construct a family of CGO solutions associated with the corresponding equations. These allow us to deduce uniqueness results for arbitrary bounded, not necessarily localized, bulk and boundary potentials. The CGO solutions are constructed by duality to a new Carleman estimate.
|
mathematics
|
The phase III BNT162b2 mRNA COVID-19 vaccine trial is based on a Bayesian design and analysis, and the main evidence of vaccine efficacy is presented in terms of Bayesian statistics. Confusion and mistakes have arisen in the presentation of the Bayesian results. Some key statistics, such as Bayesian credible intervals, are mislabeled and stated as confidence intervals. Posterior probabilities of the vaccine efficacy are not reported as the main results. We illustrate the main differences in the reporting of Bayesian analysis results for a clinical trial and provide four recommendations. We argue that statistical evidence from a Bayesian trial, when presented properly, is easier to interpret and directly addresses the main clinical questions, thereby better supporting regulatory decision making. We also recommend using the abbreviation "BI" for Bayesian credible intervals, to differentiate them from "CI", which stands for confidence interval.
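To make the Bayesian quantities concrete, here is a minimal beta-binomial sketch. The Beta(0.700102, 1) prior on $\theta=(1-\mathrm{VE})/(2-\mathrm{VE})$ follows the trial protocol's published choice, but the case counts below are hypothetical placeholders, not trial data:

```python
# Posterior for vaccine efficacy (VE) under a beta-binomial model.
# theta = (1 - VE) / (2 - VE) is the probability that a case falls in the
# vaccine arm under 1:1 randomization. Counts below are hypothetical.
from scipy import stats

a0, b0 = 0.700102, 1.0                   # Beta prior on theta
cases_vaccine, cases_placebo = 8, 86     # hypothetical case split

a, b = a0 + cases_vaccine, b0 + cases_placebo   # conjugate update
theta_cut = (1 - 0.30) / (2 - 0.30)             # VE > 30%  <=>  theta < cut
print("P(VE > 30% | data) =", stats.beta.cdf(theta_cut, a, b))

# 95% Bayesian credible interval (BI) for VE, from theta quantiles
lo, hi = stats.beta.ppf([0.025, 0.975], a, b)
ve_hi, ve_lo = (1 - 2 * lo) / (1 - lo), (1 - 2 * hi) / (1 - hi)
print("95% BI for VE: [%.3f, %.3f]" % (ve_lo, ve_hi))
```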
|
statistics
|
We provide statistical guarantees for Bayesian variational boosting by proposing a novel small bandwidth Gaussian mixture variational family. We employ a functional version of Frank-Wolfe optimization as our variational algorithm and study frequentist properties of the iterative boosting updates. Comparisons are drawn to the recent literature on boosting, describing how the choice of the variational family and the discrepancy measure affect both convergence and finite-sample statistical properties of the optimization routine. Specifically, we first demonstrate stochastic boundedness of the boosting iterates with respect to the data generating distribution. We next integrate this within our algorithm to provide an explicit convergence rate, ending with a result on the required number of boosting updates.
|
statistics
|
Preserving entanglement is a crucial dynamical process for entanglement-based quantum computation and quantum-information processes, such as one-way quantum computing and quantum key distribution. However, how to quantify the ability of an experimental process to preserve two-qubit entanglement in experimentally feasible ways is not well understood. Accordingly, herein, we propose a method for quantitatively characterizing the ability of a process to preserve entanglement, referred to henceforth as entanglement preservability. A fidelity benchmark is additionally derived for identifying the ability of a process to preserve entanglement. It is shown that the proposed method and benchmark are experimentally feasible and require only local measurements on single qubits and preparations of separable states. Moreover, they are applicable to all physical processes that can be described using the general theory of quantum operations, e.g., qubit dynamics in photonic and superconducting systems. The results are of significant interest for applications in quantum-information processing in which entanglement preservation is required.
|
quantum physics
|
We present a precise analysis to test hypothetical models involving sterile neutrinos beyond the standard flat-$\Lambda$CDM cosmology with the CMB observations from the $Planck$ mission and BAO measurements. This analysis shows that adding the locally measured Hubble parameter $H_{0} = 73.00\pm1.75$ km s$^{-1}$ Mpc$^{-1}$ to the data removes the need for the informative physical $m_{sterile}^{thermal}$ prior in CMB constraints on $m_{\nu,sterile}^{eff}$. Under the constraints from the data containing the locally measured $H_{0}$ we obtain an upper limit of $m_{\nu,sterile}^{eff} < 0.306$ eV on the massive sterile neutrino, and an upper limit of $\Sigma m_{\nu} < 0.214$ eV for the three degenerate massive neutrinos (95 per cent confidence level). We also obtain the value $\sigma_{8} = 0.81^{+0.05}_{-0.06}$ (95 per cent confidence level), which is compatible with the constraints from $Planck$ 2015 CMB data at the 1$\sigma$ level. We find that introducing the parameter $m_{\nu,sterile}^{eff}$ into the cosmological model reduces the $\sigma_{8}$ value and moves it closer to the value obtained for this parameter from the KiDS-450 analysis. Our results show that the locally measured Hubble parameter can tighten the constraints on $\sigma_{8}$.
|
astrophysics
|
The Sachdev-Ye-Kitaev (SYK) model, a theory of N Majorana fermions with q-body interactions, becomes in the large q limit a conformally-broken Liouville field theory. Taking this limit preserves many interesting properties of the model, yet makes the theory as a whole much more tractable. Accordingly, we produce novel expressions for the two- and four-point correlators at arbitrary temperature and find the surprising result that they take a universal closed form. We note that these expressions correctly match onto and interpolate between previously-obtained low-energy results and simple high-energy perturbative checks. We find that the time-ordered four-point correlators are always determined by finite-temperature OPEs into the identity and Hamiltonian, while the out-of-time-order four-point correlators remain nontrivial and always scramble. This had only been established in the conformal limit, so finding that it holds for large q at all temperatures/couplings is a nontrivial result. Finally, we determine the system's thermalization and scrambling rates and find that they always agree, regardless of temperature. This adds to the increasing body of evidence that there exist simple structures in the internal dynamics at large N, such as those formed by SYK's epidemic operator growth.
|
high energy physics theory
|
Given a quadruple of finite index subfactors we explicitly compute the Pimsner-Popa probabilistic constant for the pair of intermediate subfactors and relate it with the corresponding Connes-St\o rmer relative entropy between them. This generalizes an old result of Pimsner and Popa.
|
mathematics
|
This work proposes a scheme that allows learning complex multi-agent behaviors in a sample-efficient manner, applied to 2v2 soccer. The problem is formulated as a Markov game and solved using deep reinforcement learning. We propose a basic multi-agent extension of TD3 for learning the policy of each player in a decentralized manner. To ease learning, the task of 2v2 soccer is divided into three stages: 1v0, 1v1 and 2v2. The process of learning in the multi-agent stages (1v1 and 2v2) uses agents trained in a previous stage as fixed opponents. In addition, we propose experience sharing, a method that shares the experience of a fixed opponent, trained in a previous stage, with the agent currently learning, and a form of frame-skipping, to raise performance significantly. Our results show that high-quality soccer play can be obtained with our approach in just under 40M interactions. A summarized video of the resulting game play can be found at https://youtu.be/f25l1j1U9RM .
|
computer science
|
Accounting for sex and gender characteristics is a complex, structural challenge in social science research. While other methodology papers consider issues surrounding appropriate measurement, we consider how gender and sex impact adjustments for non-response patterns in sampling and survey estimates. We consider the problem of survey adjustment arising from the recent push toward measuring sex or gender as a non-binary construct. This is challenging not only in that response categories differ between sex and gender measurement, but also in that both of these attributes are potentially multidimensional. In this manuscript we reflect on similarities to measuring race/ethnicity before considering the ethical and statistical implications of the options available to us. We do not conclude with a single best recommendation but rather an awareness of the complexity of the issues surrounding this challenge and the benefits and weaknesses of different approaches.
|
statistics
|
Given a finite dimensional pure state transformation restricted by entanglement assisted local operations and classical communication (ELOCC), we derive minimum and maximum bounds on the entanglement of an ancillary catalyst that allows that transformation. These bounds are non-trivial even when the Schmidt number of both the original and ancillary states becomes large. We identify a lower bound for the dimension of a catalyst allowing a particular ELOCC transformation. Along with these bounds, we present further constraints on ELOCC transformations by identifying restrictions on the Schmidt coefficients of the target state. In addition, an example showing the existence of qubit ELOCC transformations with multiple ranges of potential ancillary states is provided. This example reveals some additional difficulty in finding strict bounds on ELOCC transformations, even in the qubit case. Finally, a comparison of the bounds in this paper with previously discovered bounds is presented.
|
quantum physics
|
Significant and persistent trajectory-to-trajectory variance is commonly observed in particle tracking experiments, and it has become a major challenge for the analysis of experimental data. In this theoretical paper, we investigate ergodicity recovery behavior, which helps to clarify the origin and the convergence of trajectory-to-trajectory fluctuations in various heterogeneous disordered media. The concepts of self-averaging and ergodicity are revisited in the context of trajectory analysis. The slow ergodicity recovery and the non-Gaussian diffusion in annealed disordered media are shown to be consequences of the central limit theorem in different situations. A strange ergodicity recovery behavior is reported in the quenched disordered case, which arises from a localization mechanism. The first-passage approach is introduced to the ergodicity analysis for this case, for which the central limit theorem can be employed and ergodicity is recovered on the length scale of the diffusivity correlation.
|
condensed matter
|
(abridged) Even though the HARPS spectrograph has been operational for more than 15 years and provides among the most precise Doppler measurements, improvements are still possible. One known problem, for instance, is the not fully regular block-stitching of the CCDs, which introduces, in some cases, one-year-period parasitic signals in the measured radial velocity. The aim is to improve the wavelength calibration of HARPS to push its planet-detection capabilities further. The properties of the CCD stitching-induced pixel-size anomalies are determined with LED flat-field frames, and then a physical, gap-corrected map of the CCDs is used in the fitting model of the spectral orders. We also use a new thorium line list, based on much higher-accuracy measurements than the list used up to now. We derive new wavelength solutions for the 15 years of HARPS data, both before and after the 2015 fibre upgrade. We demonstrate that we correct the gap anomalies by computing the wavelength solutions of laser frequency comb exposures, both with and without taking the gap correction into account. By comparing the rms scatter of the most stable stars in the HARPS sample, we show that we globally decrease the radial-velocity dispersion of the data, especially for the data acquired after the change of fibres. Finally, the comparative analysis of several individual systems shows that we manage to attenuate the periodogram power at one year in most cases. The analysis of the RVs derived from individual stellar lines also shows that we correct the stitching-induced RV variation. This improved calibration of the HARPS spectrograph allows one to go deeper in the search for low-amplitude radial-velocity signals. It will be further improved by combining the thorium calibration spectra with laser frequency comb and Fabry-Perot calibration spectra, not only for HARPS but notably also for HARPS-N and ESPRESSO.
|
astrophysics
|
Given a solution to 4D Einstein gravity with an isometry direction, it is known that the equations of motion are identical to those of a 3D $\sigma$-model with target space geometry $SU(1,1)/U(1)$. Thus, any transformation by $SU(1, 1) \cong SL(2,\mathbb{R})$ is a symmetry for the action and allows one to generate new solutions in 4D. Here we clarify and extend recent work on electromagnetic (EM) duality in the context of the classical double copy. In particular, for pure gravity, we identify an explicit map between the Maxwell field of the single copy and the scalars in the target space, allowing us to identify the $U(1) \subset SL(2, \mathbb{R})$ symmetry dual to EM duality in the single copy. Moreover, we extend the analysis to Einstein-Maxwell theory, where we highlight the role of Ehlers-Harrison transformations and, for spherically symmetric charged black hole solutions, we interpret the equations of motion as a truncation of the putative single copy for Einstein-Yang-Mills theory.
|
high energy physics theory
|
A quantum computer, i.e. one utilizing the resources of quantum physics, superposition of states and entanglement, could furnish an exponential gain in computing time. A simulation using such resources is called a quantum simulation. The advantage of quantum simulations over classical ones is well established at the theoretical, i.e. software, level. Their practical benefit requires implementation on quantum hardware. A universal quantum computer (see below) has not seen the light of day yet, but efforts in this direction are both growing and diverse. Quantum simulation has already been illustrated by numerous experimental proofs of principle, thanks to small-size, specific-task quantum computers or simulators. Quantum walks are particularly well-studied quantum-simulation schemes, being elementary building blocks for conceiving any quantum algorithm, i.e. for achieving so-called universal quantum computation. The present thesis is a further step towards the simulation of quantum field theories based on discrete-time quantum walks (DTQWs). Indeed, it is shown, in certain cases, how DTQWs can simulate, in the continuum, the action of Yang-Mills gauge fields on fermionic matter, and the retroaction of the latter on the gauge-field dynamics. The suggested schemes preserve gauge invariance on the spacetime lattice, i.e. not only in the continuum. In the (1+2)-dimensional Abelian case, consistent lattice equivalents to both Maxwell's equations and the current conservation are suggested. In the (1+1)-dimensional non-Abelian case, a lattice version of the non-Abelian field strength is suggested. Moreover, it is shown how this DTQW-based fermionic matter can be coupled to relativistic gravitational fields of the continuum, i.e. to curved spacetimes, in 1+2 dimensions.
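As background, a generic one-dimensional DTQW step takes the standard form \begin{align*} \Psi_{t+1}=\hat{S}\,(\hat{C}\otimes\mathbb{1})\,\Psi_{t}, \end{align*} where $\hat{C}$ is a unitary coin operator acting on the internal space and $\hat{S}$ shifts the internal components by one lattice site in opposite directions; in gauge-coupled schemes of the kind discussed here, gauge fields typically enter through position- and time-dependent phases in the coin (a generic statement, not the thesis's specific construction).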
|
quantum physics
|
In many studies, dimension reduction methods are used to profile participant characteristics. For example, nutrition epidemiologists often use latent class models to characterize dietary patterns. One challenge with such approaches is understanding subtle variations in patterns across subpopulations. Robust Profile Clustering (RPC) provides a dual, flexible clustering model in which participants may cluster at two levels: (1) globally, where participants are clustered according to behaviors shared across the overall population, and (2) locally, where individual behaviors can deviate and cluster within subpopulations. We link clusters to a health outcome using a joint model. This model is used to derive dietary patterns in the United States and to evaluate the case proportion of orofacial clefts. Using dietary consumption data from the 1997-2009 National Birth Defects Prevention Study, a population-based case-control study, we determine how maternal dietary profiles are associated with orofacial clefts among offspring. Results indicated that mothers who consumed a high proportion of fruits and vegetables compared to meats, such as chicken and beef, had lower odds of delivering a child with an orofacial cleft defect.
|
statistics
|
Standard Model Neutrino Effective Field Theory (SMNEFT) is an effective theory with Standard Model (SM) gauge-invariant operators constructed only from SM and right-handed neutrino fields. For the full set of dimension-six SMNEFT operators, we present the gauge coupling terms of the one-loop anomalous dimension matrix for renormalization group evolution (RGE) of the Wilson coefficients between a new physics scale and the electroweak scale. We find that the SMNEFT operators can be divided into five subsets which are closed under RGE. Our results apply for both Dirac and Majorana neutrinos. We also discuss the operator mixing pattern numerically and comment on some interesting phenomenological implications.
|
high energy physics phenomenology
|
We present an updated version of the Kimball & Ivezic (2008) radio catalog using updated radio and optical data, and new low-frequency radio data. The catalog, containing millions of radio sources, was created by consolidating large-area radio and optical surveys GB6 (6cm), FIRST (20cm), NVSS (20cm), WENSS (92cm), VLSSr (4m), and SDSS DR9 (optical). The region where all surveys overlap covers 3269 square degrees in the North Galactic Cap, and contains >160,000 20-cm sources, with about 12,000 detected in all five radio surveys and over one-third detected optically. Combining parameters from the sky surveys allows easy and efficient classification by radio and optical morphology and radio spectral index. The catalog is available at http://www.aoc.nrao.edu/~akimball/radiocat.shtml .
|
astrophysics
|
We consider 1/4 BPS black hole solutions of ${\cal N}=2$ gauged supergravity in $AdS_4$. The near-horizon geometry is $AdS_2 \times S^2$ and supersymmetry is enhanced. In the first part of the paper we choose a moment map which allows the embedding of this supergravity solution into a sugra theory with a hypermultiplet. We then perform the s-wave reduction of this theory at the horizon and determine the dilaton multiplet, which couples to both metric and gravitino fluctuations. In the second part we work with Euclidean axial $\mathcal{N}=(2,2)$ JT supergravity and show how to add gauged matter in the form of covariantly twisted chiral and anti-chiral multiplets. We demonstrate how to reduce the on-shell action to boundary superspace. We compare both theories and calculate the four-point function by integrating out gravitons, gravitini and photons in the s-wave setting, and by use of the Super-Schwarzian modes in the JT theory.
|
high energy physics theory
|
We present various 4d $\mathcal{N}=1$ theories enjoying IR global symmetry enhancement. The models we consider have the $USp(2n)$ gauge group, 8 fundamental, one antisymmetric chirals and various numbers of gauge singlets. By suitably turning on superpotential deformations involving the singlets which break part of the UV symmetry we flow to SCFTs with $E_6$, $SO(10)$, $SO(9)$, $SO(8)$ and $F_4$ IR global symmetry. We explain these patterns of symmetry enhancement following two arguments due to Razamat, Sela and Zafrir. The first one involves the study of the relations satisfied by marginal operators, while the second one relies on the existence of self-duality frames.
|
high energy physics theory
|
In this paper, we show that the Carath\'{e}odory function $\varphi_{\scriptscriptstyle {Ne}}(z)=1+z-z^3/3$ maps the open unit disk $\mathbb{D}$ onto the interior of the nephroid, a $2$-cusped kidney-shaped curve, \begin{align*} \left((u-1)^2+v^2-\frac{4}{9}\right)^3-\frac{4 v^2}{3}=0, \end{align*} and introduce new Ma-Minda type function classes $\mathcal{S}^*_{Ne}$ and $\mathcal{C}_{Ne}$ associated with it. Apart from studying the characteristic properties of the region bounded by this nephroid, the structural formulas, extremal functions, growth and distortion results, inclusion results, coefficient bounds and Fekete-Szeg\"{o} problems are discussed for the classes $\mathcal{S}^*_{Ne}$ and $\mathcal{C}_{Ne}$. Moreover, for $\beta\in\mathbb{R}$ and some analytic function $p(z)$ satisfying $p(0)=1$, we prove certain subordination implications of the first order differential subordination $1+\beta\frac{zp'(z)}{p^j(z)}\prec\varphi_{\scriptscriptstyle {Ne}}(z),\,j=0,1,2,$ and obtain sufficient conditions for some geometrically defined function classes available in the literature.
|
mathematics
|
The Chern-Simons topological quantum computer is a device that can be effectively described by the Chern-Simons topological quantum field theory and used for quantum computations. Quantum qudit gates of this quantum computer are represented by sequences of quantum $\mathcal{R}$-matrices. Their dimension and explicit form depend on the parameters of the Chern-Simons theory: the level $k$, the gauge group $SU(N)$, and the representation, which is chosen to be the symmetric representation $[r]$. In this paper, we examine the universality of such a quantum computer. We prove that for sufficiently large $k$ it is universal, and that the minimum allowed value of $k$ depends on the remaining parameters $r$ and $N$.
|
high energy physics theory
|
As machine learning models become more accurate, they typically become more complex and uninterpretable by humans. The black-box character of these models holds back their acceptance in practice, especially in high-risk domains where the consequences of failure could be catastrophic, such as health-care or defense. Providing understandable and useful explanations behind ML models or predictions can increase the trust of the user. Example-based reasoning, which entails leveraging previous experience with analogous tasks to make a decision, is a well-known strategy for problem solving and justification. This work presents a new explanation extraction method called LEAFAGE, for a prediction made by any black-box ML model. The explanation consists of the visualization of similar examples from the training set and the importance of each feature. Moreover, these explanations are contrastive, which aims to take the expectations of the user into account. LEAFAGE is evaluated in terms of fidelity to the underlying black-box model and usefulness to the user. The results show that LEAFAGE performs overall better than the current state-of-the-art method LIME in terms of fidelity, on ML models with non-linear decision boundaries. A user study was conducted which focused on revealing the differences between example-based and feature-importance-based explanations. It showed that example-based explanations performed significantly better than feature-importance-based explanations, in terms of perceived transparency, information sufficiency, competence and confidence. Counter-intuitively, when the gained knowledge of the participants was tested, it was found that they learned less about the black-box model after seeing a feature-importance-based explanation than after seeing no explanation at all. The participants also found the feature-importance-based explanation vague and hard to generalize to other instances.
|
computer science
|
Features in predictive models are not exchangeable, yet common supervised models treat them as such. Here we study ridge regression when the analyst can partition the features into $K$ groups based on external side-information. For example, in high-throughput biology, features may represent gene expression, protein abundance or clinical data and so each feature group represents a distinct modality. The analyst's goal is to choose optimal regularization parameters $\lambda = (\lambda_1, \dotsc, \lambda_K)$ -- one for each group. In this work, we study the impact of $\lambda$ on the predictive risk of group-regularized ridge regression by deriving limiting risk formulae under a high-dimensional random effects model with $p\asymp n$ as $n \to \infty$. Furthermore, we propose a data-driven method for choosing $\lambda$ that attains the optimal asymptotic risk: The key idea is to interpret the residual noise variance $\sigma^2$, as a regularization parameter to be chosen through cross-validation. An empirical Bayes construction maps the one-dimensional parameter $\sigma$ to the $K$-dimensional vector of regularization parameters, i.e., $\sigma \mapsto \widehat{\lambda}(\sigma)$. Beyond its theoretical optimality, the proposed method is practical and runs as fast as cross-validated ridge regression without feature groups ($K=1$).
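As a minimal concrete instance, group-regularized ridge with one penalty per feature group has the closed form solved below (illustrative data and groups; the paper's $\sigma \mapsto \widehat{\lambda}(\sigma)$ empirical Bayes calibration is not reproduced here):

```python
# Group-regularized ridge: min ||y - Xb||^2 + sum_k lam_k ||b_k||^2,
# with one regularization parameter per feature group.
import numpy as np

def group_ridge(X, y, groups, lams):
    """groups: integer array mapping feature j to its group k;
    lams: one regularization parameter per group."""
    penalty = np.diag([lams[g] for g in groups])
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
beta = np.array([1.0, 1.0, 1.0, 0.1, 0.1, 0.1])
y = X @ beta + rng.standard_normal(100)
groups = np.array([0, 0, 0, 1, 1, 1])        # two feature modalities
print(group_ridge(X, y, groups, lams=[0.1, 10.0]))
```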
|
statistics
|
Let $(\lambda, v)$ be a known real eigenpair of a square real matrix $A$. In this paper it is shown how to locate the other eigenvalues of $A$ in terms of the components of $v$. The obtained region is a union of Gershgorin discs of the second type, recently introduced by the authors in a previous paper. Two cases are considered, depending on whether or not some of the components of $v$ are equal to zero. Upper bounds are obtained, in two different ways, for the largest eigenvalue in absolute value of $A$ other than $\lambda$. Detailed examples are provided. Although nonnegative irreducible matrices are somewhat emphasized, the main results in this paper are valid for any square real matrix.
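For context, the classical (first-type) Gershgorin theorem that such results refine states that every eigenvalue of $A=(a_{ij})$ lies in the union of discs \begin{align*} \bigcup_{i=1}^{n}\left\{z\in\mathbb{C}\;:\;|z-a_{ii}|\le\sum_{j\neq i}|a_{ij}|\right\}; \end{align*} the second-type discs mentioned above sharpen this picture by exploiting the known eigenvector $v$.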
|
mathematics
|
We present a structured overview of adaptation algorithms for neural network-based speech recognition, considering both hybrid hidden Markov model / neural network systems and end-to-end neural network systems, with a focus on speaker adaptation, domain adaptation, and accent adaptation. The overview characterizes adaptation algorithms as based on embeddings, model parameter adaptation, or data augmentation. We present a meta-analysis of the performance of speech recognition adaptation algorithms, based on relative error rate reductions as reported in the literature.
|
electrical engineering and systems science
|
Motivated by applications to wireless communications, this paper addresses the propagation of waves transmitted by ambient noise sources and interacting with metamaterials. We discuss a generalized Helmholtz-Kirchhoff identity that is valid in dispersive media and we characterize the statistical properties of the empirical cross spectral density of the wave field. We can then introduce and analyze an original communication scheme between two passive arrays that uses only ambient noise illumination. The passive transmitter array does not transmit anything but it is a tunable metamaterial surface that can modulate its dispersive properties and encode a message in the modulation. The passive receiver array made of two receivers that are half-a-wavelength apart from each other can decode the message from the empirical cross spectral density of the wave field.
|
electrical engineering and systems science
|
Batch normalization (BN) is an effective method to accelerate model training and improve the generalization performance of neural networks. In this paper, we propose an improved batch normalization technique called attentive batch normalization (ABN) in Long Short-Term Memory (LSTM) based acoustic modeling for automatic speech recognition (ASR). In the proposed method, an auxiliary network is used to dynamically generate the scaling and shifting parameters in batch normalization, and attention mechanisms are introduced to improve their regularization performance. Furthermore, two schemes, frame-level and utterance-level ABN, are investigated. We evaluate our proposed methods on Mandarin and Uyghur ASR tasks, respectively. The experimental results show that the proposed ABN greatly improves the performance of batch normalization in terms of transcription accuracy for both languages.
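A minimal sketch of the utterance-level idea, as we read it (illustrative PyTorch reconstruction; the layer sizes and the mean-pooled summary statistic are assumptions, not the paper's exact architecture):

```python
# "Attentive" batch normalization sketch: an auxiliary network predicts the
# BN scale (gamma) and shift (beta) from an utterance-level input summary.
import torch
import torch.nn as nn

class AttentiveBatchNorm(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.bn = nn.BatchNorm1d(dim, affine=False)   # normalize only
        self.aux = nn.Sequential(                     # predicts gamma and beta
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * dim))

    def forward(self, x):                             # x: (batch, time, dim)
        summary = x.mean(dim=1)                       # utterance-level summary
        gamma, beta = self.aux(summary).chunk(2, dim=-1)
        xn = self.bn(x.transpose(1, 2)).transpose(1, 2)
        return (1 + gamma).unsqueeze(1) * xn + beta.unsqueeze(1)
```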
|
electrical engineering and systems science
|
The effect of gravitational fluctuations on the quantum effective potential for scalar fields is a key ingredient for predictions of the mass of the Higgs boson, understanding the gauge hierarchy problem and a possible explanation of an---asymptotically---vanishing cosmological constant. We find that the quartic self-interaction of the Higgs scalar field is an irrelevant coupling at the asymptotically safe ultraviolet fixed point of quantum gravity. This renders the ratio between the masses of the Higgs boson and top quark predictable. If the flow of couplings below the Planck scale is approximated by the Standard Model, this prediction is consistent with the observed value. The quadratic term in the Higgs potential is irrelevant if the strength of gravity at short distances exceeds a bound that is determined here as a function of the particle content. In this event, a tiny value of the ratio between the Fermi scale and the Planck scale is predicted.
|
high energy physics theory
|
The contact resistance limits the down-scaling and operating range of OFETs. With monolayer (1L) organic crystals and non-destructive metal/semiconductor interfaces, an intrinsic mobility of 12.5 cm^2 V^-1 s^-1 and an Ohmic contact resistance of 40 ohm-cm were achieved. The on/off ratio was maintained at 10^3 even at a small VDS of -0.1 mV. A high current density of 4.2 uA/um was achieved with the 1L crystal as the active layer. At such high current density, velocity saturation and channel self-heating effects are observed in OFETs for the first time. In addition to low contact resistance and high-resolution lithography, we suggest that thermal management of high-mobility OFETs will be the next major challenge in achieving high-speed, densely integrated flexible electronics.
|
physics
|
We discuss some properties of Cohen and random reals. We show that they belong to any definable partition regular family, and hence they satisfy most "largeness" properties studied in Ramsey theory. We determine their position in Mahler's classification of the reals and, using it, we obtain some information about Liouville numbers. We also show that they are wild in the sense of o-minimality, i.e., they define the set of integers.
|
mathematics
|
Background and aim: Most of the mixed reality models used in surgical telepresence suffer from discrepancies in the boundary area and spatial-temporal inconsistency due to the illumination variation in the video frames. The aim of this work is to propose a new solution that helps produce the composite video by merging the augmented video of the surgery site and the virtual hand of the remote expert surgeon. The purpose of the proposed solution is to decrease the processing time and enhance the accuracy of the merged video by decreasing the overlay and visualization errors and removing occlusion and artefacts. Methodology: The proposed system enhances the mean value cloning algorithm, which helps maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm includes 3D mean value coordinates and an improved mean value interpolant in the image cloning process, which helps reduce the sawtooth, smudging and discolouration artefacts around the blending region. Results: Compared to the state-of-the-art solution, the accuracy in terms of overlay error of the proposed solution is improved from 1.01 mm to 0.80 mm, whereas the accuracy in terms of visualization error is improved from 98.8% to 99.4%. The processing time is reduced from 0.211 seconds to 0.173 seconds. Conclusion: Our solution helps make the object of interest consistent with the light intensity of the target image by adding the space distance, which helps maintain the spatial consistency in the final merged video.
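For intuition, here is a sketch of classical 2D mean value coordinates, the building block behind mean-value seamless cloning (the paper's contribution concerns a 3D/refined variant; the polygon boundary, boundary intensity differences, and query point below are made up):

```python
import numpy as np

def mvc_weights(boundary, x):
    """Mean value coordinates of an interior point x with respect to the
    vertices of a closed polygon (Floater-style weights)."""
    d = boundary - x
    r = np.linalg.norm(d, axis=1)
    ang = np.arctan2(d[:, 1], d[:, 0])
    a = np.diff(np.concatenate([ang, ang[:1]]))  # angle subtended by each edge
    a = (a + np.pi) % (2 * np.pi) - np.pi        # wrap to (-pi, pi]
    t = np.tan(a / 2.0)
    w = (np.roll(t, 1) + t) / r                  # w_i = (tan(a_{i-1}/2)+tan(a_i/2))/r_i
    return w / w.sum()

# cloning: diffuse the source/target boundary mismatch into the interior
boundary = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
diff_on_boundary = np.array([1.0, 2.0, 1.5, 0.5])   # target - source intensities
x = np.array([4.0, 6.0])
correction = mvc_weights(boundary, x) @ diff_on_boundary
print(correction)  # smooth membrane value added to the cloned pixel at x
```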
|
physics
|
We study the quantum ground state phases of the disordered Bose-Hubbard model with attractive interactions. We map the phase diagram through exact diagonalization. At strong disorder, all the bosons localize into the vicinity of a single site, contrary to the Bose glass behavior of the repulsive model. At weak disorder, depending on hopping, the ground state is either superfluid or a W state, an entangled superposition of states where all the bosons occupy a single site. We show that the disorder stability of the W phase diminishes exponentially as the total boson number increases.
|
quantum physics
|
An efficient machine-learning-based method combined with a conventional local optimization technique has been proposed for exploring local energy minima of interstitial species in a crystal. In the proposed method, an effective initial point for local optimization is sampled at each iteration from a given feasible set in the search space. The effective initial point is here defined as the grid point that most likely converges to a new local energy minimum by local optimization and/or is located in the vicinity of the boundaries between energy basins. Specifically, every grid point in the feasible set is classified by the predicted label indicating the local energy minimum that the grid point converges to. The classifier is created and updated at every iteration using the already-known information on the local optimizations at the earlier iterations, which is based on the support vector machine (SVM). The SVM classifier uses our original kernel function designed as reflecting the symmetries of both host crystal and interstitial species. The most distant unobserved point on the classification boundaries from the observed points is sampled as the next initial point for local optimization. The proposed method is applied to three model cases, i.e., the six-hump camelback function, a proton in strontium zirconate with the orthorhombic perovskite structure, and a water molecule in lanthanum sulfate with the monoclinic structure, to demonstrate the high performance of the proposed method.
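A compact toy version of this loop on the six-hump camelback function named in the abstract (an RBF kernel stands in for the paper's symmetry-adapted kernel, basins are labelled by rounding the converged minimum, and the grid and iteration count are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def camelback(v):                     # six-hump camelback test function
    x, y = v
    return (4 - 2.1 * x**2 + x**4 / 3) * x**2 + x * y + (-4 + 4 * y**2) * y**2

gx, gy = np.linspace(-2, 2, 41), np.linspace(-1, 1, 21)
grid = np.stack(np.meshgrid(gx, gy), -1).reshape(-1, 2)   # feasible set
rng = np.random.default_rng(0)
X, labels = [grid[rng.integers(len(grid))]], []
for _ in range(25):
    xmin = minimize(camelback, X[-1]).x                   # local optimization
    labels.append(str(np.round(xmin, 1)))                 # basin = rounded minimum
    if len(set(labels)) < 2:                              # SVC needs >= 2 classes
        X.append(grid[rng.integers(len(grid))]); continue
    clf = SVC(kernel="rbf", gamma=2.0).fit(np.array(X), labels)
    lab = clf.predict(grid).reshape(21, 41)
    b = np.zeros_like(lab, dtype=bool)                    # basin-boundary grid points
    b[:-1] |= lab[:-1] != lab[1:]
    b[:, :-1] |= lab[:, :-1] != lab[:, 1:]
    cand = grid[b.reshape(-1)]
    if len(cand) == 0:
        X.append(grid[rng.integers(len(grid))]); continue
    X.append(cand[cdist(cand, np.array(X)).min(axis=1).argmax()])  # most distant
print(sorted(set(labels)))                                # distinct minima found
```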
|
physics
|
We classify four-dimensional shrinking Ricci solitons satisfying $Sec \geq \frac{1}{24} R$, where $Sec$ and $R$ denote the sectional and the scalar curvature, respectively. They are isometric to either $\mathbb{R}^{4}$ (and quotients), $\mathbb{S}^{4}$, $\mathbb{RP}^{4}$ or $\mathbb{CP}^{2}$ with their standard metrics.
|
mathematics
|
We study the dynamics of a soliton-impurity system modeled in terms of a binary Bose-Einstein condensate. This is achieved by `switching off' one of the two self-interaction scattering lengths, giving a two component system where the second component is trapped entirely by the presence of the first component. It is shown that this system possesses rich dynamics, including the identification of unusual `weak' dimers that appear close to the zero inter-component scattering length. It is further found that this system supports quasi-stable trimers in regimes where the equivalent single-component gas does not, which is attributed to the presence of the impurity atoms which can dynamically tunnel between the solitons, and maintain the required phase differences that support the trimer state.
|
condensed matter
|
Consider a large-scale cellular network in which base stations (BSs) serve massive Internet-of-Things (IoT) devices. Since IoT devices are powered by a capacity-limited battery, how to prolong their working lifetime is a paramount problem for the success of cellular IoT systems. This paper proposes how to use BSs to manage the active and dormant operating modes of the IoT devices via downlink signaling in an energy-efficient fashion and how the IoT devices perform energy-efficient uplink power control to improve their uplink coverage. We first investigate the fundamental statistical properties of an activation signaling process induced by BSs that would like to activate the devices in their cells, which helps to derive the neat expressions of the true, false and total activation probabilities that reveal joint downlink power control and BS coordination is an effective means to significantly improve the activation performance. We then propose an energy-efficient uplink power control for IoT devices which is shown to save power and ameliorate the uplink coverage probability at the same time. We also propose an energy-efficient downlink power control and BS coordination scheme, which is shown to remarkably improve the activation and uplink coverage performances at the same time.
|
electrical engineering and systems science
|
MRI is an inherently slow process, which leads to long scan times for high-resolution imaging. The speed of acquisition can be increased by ignoring parts of the data (undersampling). Consequently, this leads to degradation of image quality, such as loss of resolution or introduction of image artefacts. This work aims to reconstruct highly undersampled Cartesian or radial MR acquisitions with better resolution and with little to no artefact compared to conventional techniques like compressed sensing. In recent times, deep learning has emerged as a very important area of research and has shown immense potential in solving inverse problems, e.g. MR image reconstruction. In this paper, a deep learning based MR image reconstruction framework is proposed, which includes a modified regularised version of ResNet as the network backbone to remove artefacts from the undersampled image, followed by data consistency steps that fuse the network output with the data already available from the undersampled k-space in order to further improve reconstruction quality. The performance of this framework for various undersampling patterns has also been tested, and it has been observed that the framework is robust to various sampling patterns, even when mixed together while training, and results in very high quality reconstruction in terms of high SSIM (highest being 0.990$\pm$0.006 for an acceleration factor of 3.5) when compared with the fully sampled reconstruction. It has been shown that the proposed framework can successfully reconstruct even for an acceleration factor of 20 for Cartesian (0.968$\pm$0.005) and 17 for radially (0.962$\pm$0.012) sampled data. Furthermore, it has been shown that the framework preserves brain pathology during reconstruction while being trained on healthy subjects.
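A sketch of the data consistency step in its simplest "hard replacement" form (the paper's fusion may be weighted and learned; the mask, image content, and sizes below are placeholders):

```python
import numpy as np

def data_consistency(x_net, k_sampled, mask):
    """Overwrite network-predicted k-space values with the measured ones
    wherever the undersampling mask is True (hard data consistency)."""
    k_net = np.fft.fft2(x_net)               # network output image -> k-space
    k_dc = np.where(mask, k_sampled, k_net)  # keep the acquired samples
    return np.fft.ifft2(k_dc)                # back to image space

img = np.random.rand(128, 128)               # stand-in for a network output
mask = np.random.rand(128, 128) < 0.3        # ~3.3x random undersampling
k_meas = np.fft.fft2(np.random.rand(128, 128)) * mask
recon = data_consistency(img, k_meas, mask)
print(recon.shape, recon.dtype)              # (128, 128) complex128
```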
|
electrical engineering and systems science
|
This paper presents a system identification technique for systems whose output is asymptotically periodic under constant inputs. The model used for system identification is a discrete-time Lur'e model consisting of asymptotically stable linear dynamics, a time delay, a washout filter, and a static nonlinear feedback mapping. For all sufficiently large scalings of the loop transfer function, these components cause divergence under small signal levels and decay under large signal amplitudes, thus producing an asymptotically oscillatory output. A bias-generation mechanism is used to provide a bias in the oscillation. The contribution of the paper is a least-squares technique that estimates the coefficients of the linear model as well as the parameterization of the continuous, piecewise-linear feedback mapping.
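A stripped-down sketch of why a single least-squares solve suffices: with fixed breakpoints, the piecewise-linear feedback expands in hinge basis functions, so the regressor is linear in all unknowns. The washout filter, time delay, and bias-generation mechanism of the actual model are omitted here, and the data and knots are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=300)               # stand-in for the measured output
u = rng.normal(size=300)               # stand-in for the (constant-ish) input
knots = np.linspace(-2, 2, 5)          # fixed breakpoints of the PWL mapping

rows = []
for k in range(1, 299):
    hinge = np.maximum(0.0, y[k] - knots)   # PWL basis, linear in parameters
    rows.append(np.concatenate([[y[k], y[k - 1], u[k]], hinge]))
Phi, target = np.array(rows), y[2:300]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print(theta.shape)                     # AR/X coefficients + PWL slopes in one solve
```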
|
electrical engineering and systems science
|
The modified rigorous coupled-wave analysis technique is developed to describe the optical characteristics of plasmonic structures with a grating-gated delta-thin conductive channel in the far- and near-field zones of electromagnetic waves. The technique was applied to the analysis of the resonant properties of AlGaN/GaN heterostructures combined with a deeply-subwavelength metallic grating which facilitates the excitation of two-dimensional plasmons in the THz frequency range. The convergence of the calculations at frequencies near the plasmon resonances is discussed. The impact of the grating's parameters, including the filling factor and the thickness of the grating, on the resonant absorption of the structure was investigated in detail. The spatial distributions of the electromagnetic field in the near-field zone were used for the evaluation of the total absorption of the plasmonic structures, separating the contributions of the grating-gated two-dimensional electron gas and the grating coupler.
|
condensed matter
|
Interpretation and improvement of deep neural networks relies on better understanding of their underlying mechanisms. In particular, gradients of classes or concepts with respect to the input features (e.g., pixels in images) are often used as importance scores or estimators, which are visualized in saliency maps. Thus, a family of saliency methods provide an intuitive way to identify input features with substantial influences on classifications or latent concepts. Several modifications to conventional saliency maps, such as Rectified Gradients and Layer-wise Relevance Propagation (LRP), have been introduced to allegedly denoise and improve interpretability. While visually coherent in certain cases, Rectified Gradients and other modified saliency maps introduce a strong input bias (e.g., brightness in the RGB space) because of inappropriate uses of the input features. We demonstrate that dark areas of an input image are not highlighted by a saliency map using Rectified Gradients, even if they are relevant for the class or concept. Even in scaled images, the input bias exists around an artificial point in the color spectrum. Our modification, which simply eliminates the multiplication with input features, removes this bias. This showcases how a visual criterion may not align with the true explainability of deep learning models.
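A minimal sketch of both the bias and the fix proposed here, i.e. dropping the multiplication with the input: at an all-dark (zero) input, a "gradient x input" map vanishes identically, while the plain gradient does not (toy model; sizes arbitrary):

```python
import torch

def saliency(model, x, class_idx, multiply_input=False):
    """Plain gradient saliency; multiply_input=True reintroduces the
    input-feature product that biases maps toward bright pixels."""
    x = x.clone().requires_grad_(True)
    model(x)[0, class_idx].backward()
    g = x.grad.detach()
    return g * x.detach() if multiply_input else g

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
x = torch.rand(1, 3, 8, 8)
dark = saliency(model, 0 * x, class_idx=3, multiply_input=True)
print(dark.abs().max())  # exactly 0: dark pixels can never be highlighted
```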
|
computer science
|
The detonation of a helium shell on top of a carbon-oxygen white dwarf has been argued as a potential explosion mechanism for type Ia supernovae (SNe~Ia). The ash produced during helium shell burning can lead to light curves and spectra that are inconsistent with normal SNe~Ia, but may be viable for some objects showing a light curve bump within the days following explosion. We present a series of radiative transfer models designed to mimic predictions from double detonation explosion models. We consider a range of core and shell masses, and systematically explore multiple post-explosion compositions for the helium shell. We find that a variety of luminosities and timescales for early light curve bumps result from those models with shells containing $^{56}$Ni, $^{52}$Fe, or $^{48}$Cr. Comparing our models to SNe~Ia with light curve bumps, we find that these models can reproduce the shapes of almost all of the bumps observed, but only those objects with red colours around maximum light ($B-V \gtrsim 1$) are well matched throughout their evolution. Consistent with previous works, we also show that those models in which the shell does not contain iron-group elements provide good agreement with normal SNe~Ia of different luminosities from shortly after explosion up to maximum light. While our models do not amount to positive evidence in favour of the double detonation scenario, we show that provided the helium shell ash does not contain iron-group elements, it may be viable for a wide range of normal SNe~Ia.
|
astrophysics
|
We derive the chiral effective Lagrangian for excited heavy-light mesons from QCD under proper approximations. We focus on the chiral partners with $j_l^P=\frac{3}{2}^+$ and $j_l^P=\frac{3}{2}^-$, which amount to the ($1^+,2^+$) and ($1^-,2^-$) states, respectively. The low energy constants, including the masses of the chiral partners, are calculated. The calculated spectrum for the excited mesons is found to be roughly consistent with experimental data. In addition, our results indicate that the quantum numbers of $B_J(5970)$ can be identified as $1^-$ or $2^-$.
|
high energy physics phenomenology
|
Although the Higgs boson has been discovered, its self-couplings are poorly constrained. This leaves the nature of the Higgs boson undetermined. Motivated by different Higgs potential scenarios other than the Landau-Ginzburg type in the standard model, we systematically organize various new physics scenarios -- elementary Higgs, Nambu-Goldstone Higgs, Coleman-Weinberg Higgs, and Tadpole-induced Higgs, etc. We find that double-Higgs production at the 27 TeV high energy LHC can be used to discriminate different Higgs potential scenarios, while it is necessary to use triple-Higgs production at a future 100 TeV proton-proton collider to fully determine the shape of the Higgs potential.
|
high energy physics phenomenology
|
IoT devices, equipped with embedded actuators and sensors, provide custom automation in the form of IoT apps. IoT apps subscribe to events and upon receipt, transmit actuation commands which trigger a set of actuators. Events and actuation commands follow paths in the IoT ecosystem such as sensor-to-edge, edge-to-cloud, and cloud-to-actuator, with different network and processing delays between these connections. Significant delays may occur especially when an IoT system cloud interacts with other clouds. Due to this variation in delays, the cloud may receive events in an incorrect order, and in turn, devices may receive and actuate misordered commands. In this paper, we first study eight major IoT platforms and show that they do not make strong guarantees on event orderings to address these issues. We then analyze the end-to-end interactions among IoT components, from the creation of an event to the invocation of a command. From this, we identify and formalize the root causes of misorderings in events and commands leading to undesired states. We deploy 23 apps in a simulated smart home containing 35 IoT devices to evaluate the misordering problem. Our experiments demonstrate a high number of misordered events and commands that occur through different interaction paths. Through this effort, we reveal the root and extent of the misordering problem and guide future work to ensure correct ordering in IoT systems.
|
computer science
|
With the recent advent of a sound mathematical theory for extreme events in dynamical systems, new ways of analyzing a system's inherent properties have become available: studying only the probabilities of extremely close Poincar\'{e} recurrences, we can infer the underlying attractor's local dimensionality -- a quantity which is closely linked to the predictability of individual configurations, as well as the information gained from observing them. This study examines possible ways of estimating local and global attractor dimensions, identifies potential pitfalls and discusses conceivable applications. The Portable University Model of the Atmosphere (PUMA) serves as a test subject of intermediate complexity between simple mathematical toys and truly realistic atmospheric data-sets. It is demonstrated that the introduction of a simple, analytical estimator can streamline the procedure and allows for additional tests of the agreement between theoretical expectation and observed data. We furthermore show how the newly gained knowledge about local dimensions can complement classical techniques like principal component analysis and may assist in separating meaningful patterns from mathematical artifacts.
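One simple analytical estimator in this spirit: close Poincaré recurrences make $-\log(\text{distance})$ exceed a high threshold, and those exceedances are approximately exponential with mean $1/d$, giving the local dimension $d$ as an inverse mean excess. The toy below uses an i.i.d. Gaussian cloud instead of PUMA output, and the quantile threshold is an assumption:

```python
import numpy as np

def local_dimension(traj, point, q=0.98):
    """Local dimension at `point` from extreme value theory of recurrences:
    exceedances of -log(distance) over a high quantile are ~Exp(d)."""
    dist = np.linalg.norm(traj - point, axis=1)
    g = -np.log(dist[dist > 0])            # close recurrences -> large g
    u = np.quantile(g, q)                  # high threshold
    return 1.0 / np.mean(g[g > u] - u)     # inverse mean excess

rng = np.random.default_rng(1)
traj = rng.normal(size=(200_000, 3))       # toy "attractor" filling 3 dimensions
print(local_dimension(traj, traj[0]))      # ~3 for a genuinely 3D cloud
```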
|
physics
|
In this article, we extend a result of L. Loomis and W. Rudin regarding the boundary behavior of positive harmonic functions on the upper half space $\mathbb{R}_+^{n+1}$. We show that similar results remain valid for more general approximate identities. We apply this result to prove a result regarding the boundary behavior of nonnegative eigenfunctions of the Laplace-Beltrami operator on real hyperbolic space $\mathbb H^n$. We shall also prove a generalization of a result regarding the large time behavior of solutions of the heat equation proved in \cite{Re}. We use this result to prove a result regarding the asymptotic behavior of certain eigenfunctions of the Laplace-Beltrami operator on real hyperbolic space $\mathbb H^n$.
|
mathematics
|
In this paper we consider the Navier-Stokes-Korteweg equations for a viscous compressible fluid with capillarity effects in three space dimensions. We prove global existence of finite energy weak solutions for large initial data. Contrary to previous results regarding this system, vacuum regions are allowed in the definition of weak solutions and no additional damping terms are considered. The convergence of the approximating solutions is obtained by introducing suitable truncations in the momentum equations of the velocity field and the mass density at different scales and use only the a priori bounds obtained by the energy and the BD entropy. Moreover, the approximating solutions enjoy only a limited amount of regularity, and the derivation of the truncations of the velocity and the density is performed by a suitable regularization procedure.
|
mathematics
|
Skyrmions, spin spirals, and other chiral magnetization structures developing in materials with intrinsic Dzyaloshinsky-Moriya Interaction display unique properties that have been the subject of intense research in thin-film geometries. Here we study the formation of three-dimensional chiral magnetization structures in FeGe nanospheres by means of micromagnetic finite-element simulations. In spite of the deep sub-micron particle size, we find a surprisingly large number of distinct equilibrium states, namely, helical, meron, skyrmion, chiral-bobber and quasi-saturation state. The distribution of these states is summarized in a phase diagram displaying the ground state as a function of the external field and particle radius. This unusual multiplicity of possible magnetization states in individual nanoparticles could be a useful feature for multi-state memory devices. We also show that the magneto-dipolar interaction is almost negligible in these systems, which suggests that the particles could be arranged at high density without experiencing unwanted coupling.
|
condensed matter
|
Vector addition systems with states (VASS) are widely used for the formal verification of concurrent systems. Given their tremendous computational complexity, practical approaches have relied on techniques such as reachability relaxations, e.g., allowing for negative intermediate counter values. It is natural to question their feasibility for VASS enriched with primitives that typically translate into undecidability. Spurred by this concern, we pinpoint the complexity of integer relaxations with respect to arbitrary classes of affine operations. More specifically, we provide a trichotomy on the complexity of integer reachability in VASS extended with affine operations (affine VASS). Namely, we show that it is NP-complete for VASS with resets, PSPACE-complete for VASS with (pseudo-)transfers and VASS with (pseudo-)copies, and undecidable for any other class. We further present a dichotomy for standard reachability in affine VASS: it is decidable for VASS with permutations, and undecidable for any other class. This yields a complete and unified complexity landscape of reachability in affine VASS. We also consider the reachability problem parameterized by a fixed affine VASS, rather than a class, and we show that the complexity landscape is arbitrary in this setting.
|
computer science
|
Fix a field $k$ of characteristic zero. If $a_1, ..., a_n$ ($n>2$) are positive integers, the integral domain $B = k[X_1, ..., X_n] / ( X_1^{a_1} + ... + X_n^{a_n} )$ is called a Pham-Brieskorn ring. It is conjectured that if $a_i > 1$ for all $i$ and $a_i=2$ for at most one $i$, then $B$ is rigid. (A ring $B$ is said to be rigid if the only locally nilpotent derivation $D: B \to B$ is the zero derivation.) We give partial results towards the conjecture.
|
mathematics
|
We develop a computational approach to locate the source of a steady-state gradient of diffusing particles from the fluxes through narrow windows distributed either on the boundary of a three dimensional half-space or on a sphere. This approach is based on solving the mixed boundary stationary diffusion equation with the Neumann-Green's function and matched asymptotics. We compute the probability fluxes and develop a highly efficient analytical-Brownian numerical scheme. This scheme accelerates the simulation time by avoiding the explicit computation of Brownian trajectories in the infinite domain. Our derived analytical formulas agree with the results obtained from the fast numerical simulation scheme. Using the analytical representation of the particle fluxes, we show how to reconstruct the location of the point source. Furthermore, we investigate the uncertainty in the source reconstruction due to additive fluctuations present in the fluxes. We also study the influence of various window configurations (cluster vs uniform distributions) on recovering the source position.
|
condensed matter
|
We initiate the study of Regge theory in a bottom-up holographic model for QCD in the Veneziano limit, where the backreaction of the quarks on the gluon dynamics is included. We determine the parameters of the model by carrying out a precise fit to the meson spectrum in QCD. The spectrum for spin-one and pseudoscalar mesons is well reproduced. We then generalise the model to include higher spin fields in the bulk, dual to the Pomeron and meson Regge trajectories at the boundary. With this setting, we fit the masses of the mesons with spins $J=2$, $3$, and $4$, as well as the experimental data for the total cross-sections $\sigma(\gamma \gamma \to X)$, $\sigma(\gamma p \to X)$ and $\sigma(p p \to X)$. For the cross sections we obtain a $\chi^2_{\mathrm{d.o.f.}}$ of $0.74$ for a total of 199 experimental points.
|
high energy physics phenomenology
|
In this work we investigate the matrix elements of the energy-momentum tensor for massless on-shell states in four-dimensional unitary, local, and Poincar\'e covariant quantum field theories. We demonstrate that these matrix elements can be parametrised in terms of covariant multipoles of the Lorentz generators, and that this gives rise to a form factor decomposition in which the helicity dependence of the states is factorised. Using this decomposition we go on to explore some of the consequences for conformal field theories, deriving the explicit analytic conditions imposed by conformal symmetry, and using examples to illustrate that they uniquely fix the form of the matrix elements. We also provide new insights into the constraints imposed by the existence of massless particles, showing in particular that massless free theories are necessarily conformal.
|
high energy physics theory
|
The system of Type PDL ($\tau$PDL) is an extension of Propositional Dynamic Logic (PDL) whose main goal is to provide a formal basis for reasoning about types of actions (modeled by their preconditions and effects) and agent capabilities. The system has two equivalent interpretations, namely the standard relational semantics and the type semantics, where process terms are interpreted as types, i.e. sets of binary relations. Its satisfiability problem is decidable: a NExpTime decision procedure was given based on a filtration argument, and it was suggested that the satisfiability problem for $\tau$PDL should be solvable in deterministic, single exponential time. In this paper, we address the complexity of the satisfiability problem of $\tau$PDL. We present a deterministic tableau-based satisfiability algorithm and prove that it is sound and complete and that it runs in ExpTime. Additionally, the algorithm detects satisfiability as early as possible, by restricting or-branching whenever possible.
|
computer science
|
The present work reviews the various aspects of the extension of the large-$N_c$ approach, as proposed and developed by 't Hooft, to the case of tetraquark states, which are a category of the general class of exotic states, also called multiquark states, whose internal valence-quark structure does not match that of ordinary hadrons, and which have received, in recent years, many experimental confirmations. The primary question of describing, or probing, on theoretical grounds, multiquark states is first examined. The signature of such states inside Feynman diagrams, in relation with their singularities, is highlighted. The main mechanisms of formation of tetraquark states, provided by the diquark model and the molecular scheme, are considered together with their specific implications. The properties of tetraquark states at large $N_c$ are analyzed through the Feynman diagrams that describe two-meson scattering amplitudes. It turns out that, in that limit, the possible formation of tetraquark states is mainly due to the mutual interactions of their internal mesonic clusters. These essentially arise from the quark-rearrangement, or quark-interchange, mechanism. In coupled-channel meson-meson scattering amplitudes, one may expect the occurrence of two independent tetraquark states, each having privileged couplings with the two mesons of their dominant channel. The question of the energy balance of various schemes in the static limit is also analyzed. The clarification of the mechanisms that are at work in the formation of tetraquarks is the main outcome of the large-$N_c$ approach to this problem.
|
high energy physics phenomenology
|
The relativistic formalism of Green's functions is discussed in QCD and QED, where the relativistic Green's functions are constructed using the Schwinger proper time formalism and the Fock-Feynman-Schwinger method. As a result, a simple and exact method is found for relativistic systems where the interaction can be written in a time-independent form. In this case one can write the relativistic Green's function as a one-dimensional integral of the corresponding nonrelativistic Green's function. The explicit example of a charge in a constant magnetic field is discussed in detail, and exact agreement with the Schwinger relativistic form is demonstrated. A similar analysis is performed for the relativistic Coulomb problem, supporting the accuracy of the proposed relativistic formalism.
|
high energy physics phenomenology
|
We suggest a new class of tests for searching for lepton flavor non-universality (LFNU) using ratio observables and based on correlations among the underlying LFNU new physics (NP) effects in several (seemingly independent) di-lepton and single lepton + jet(s) processes. This is demonstrated by studying the effects generated by LFNU 4-Fermi interactions involving 3rd generation quarks. We find that the sensitivity to the scale ($\Lambda$) of the LFNU 4-Fermi operators significantly improves when the correlations among the various di-lepton +jets and single-lepton + jets processes are used, reaching $\Lambda \sim {\cal O}(10)$~TeV at the HL-LHC.
|
high energy physics phenomenology
|
We study two practically important cases of model based clustering using Gaussian Mixture Models: (1) when there is misspecification and (2) on high dimensional data, in the light of recent advances in Gradient Descent (GD) based optimization using Automatic Differentiation (AD). Our simulation studies show that EM has better clustering performance, measured by Adjusted Rand Index, compared to GD in cases of misspecification, whereas on high dimensional data GD outperforms EM. We observe that both with EM and GD there are many solutions with high likelihood but poor cluster interpretation. To address this problem we design a new penalty term for the likelihood based on the Kullback Leibler divergence between pairs of fitted components. Closed form expressions for the gradients of this penalized likelihood are difficult to derive but AD can be done effortlessly, illustrating the advantage of AD-based optimization. Extensions of this penalty for high dimensional data and for model selection are discussed. Numerical experiments on synthetic and real datasets demonstrate the efficacy of clustering using the proposed penalized likelihood approach.
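A hypothetical form of such a pairwise-KL penalty, written so that automatic differentiation supplies the gradients effortlessly (PyTorch stands in for the AD framework, and the exact penalty used in the paper may differ):

```python
import torch

def kl_gauss(mu0, cov0, mu1, cov1):
    """Closed-form KL(N0 || N1) between multivariate Gaussians."""
    k = mu0.numel()
    inv1 = torch.linalg.inv(cov1)
    d = mu1 - mu0
    return 0.5 * (torch.trace(inv1 @ cov0) + d @ inv1 @ d - k
                  + torch.logdet(cov1) - torch.logdet(cov0))

def kl_penalty(mus, covs, lam=1.0):
    """Penalize overlapping components: large when some pairwise KL is small."""
    pen = 0.0
    for i in range(len(mus)):
        for j in range(len(mus)):
            if i != j:
                pen = pen + torch.exp(-kl_gauss(mus[i], covs[i], mus[j], covs[j]))
    return lam * pen

mus = [torch.zeros(2, requires_grad=True), torch.ones(2, requires_grad=True)]
covs = [torch.eye(2), torch.eye(2)]
kl_penalty(mus, covs).backward()   # AD: no hand-derived gradient needed
print(mus[0].grad)
```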
|
statistics
|
Analysis of three-way data is becoming ever more prevalent in the literature, especially in the area of clustering and classification. Real data, including real three-way data, are often contaminated by potential outlying observations. Their detection, as well as the development of robust models insensitive to their presence, is particularly important for this type of data because of the practical issues concerning their effective visualization. Herein, the contaminated matrix variate normal distribution is discussed and then utilized in the mixture model paradigm for clustering. One key advantage of the proposed model is the ability to automatically detect potential outlying matrices by computing their \textit{a posteriori} probability to be a "good" or "bad" point. Such detection is currently unavailable using existing matrix variate methods. An expectation conditional maximization algorithm is used for parameter estimation, and both simulated and real data are used for illustration.
|
statistics
|
This paper concerns the dynamics of a layer of incompressible viscous fluid lying above a vertically oscillating rigid plane and with an upper boundary given by a free surface. We consider the problem with gravity and surface tension for horizontally periodic flows. This problem gives rise to flat but vertically oscillating equilibrium solutions, and the main thrust of this paper is to study the asymptotic stability of these equilibria in certain parameter regimes. We prove that both with and without surface tension there exists a parameter regime in which sufficiently small perturbations of the equilibrium at time $t = 0$ give rise to global-in-time solutions that decay to equilibrium at an identified quantitative rate.
|
mathematics
|
Motivated by the theory of relativistic hydrodynamic fluctuations we make use of the Green-Kubo formula to compute the electrical conductivity and the (second-order) relaxation time of the electric current of an interacting hadron gas. We use the recently developed transport code SMASH to numerically solve the coupled set of Boltzmann equations implementing realistic hadronic interactions. In particular, we explore the role of the resonance lifetimes in the determination of the electrical relaxation time. As opposed to a previous calculation of the shear viscosity we observe that the presence of resonances with lifetimes of the order of the mean-free time does not appreciably affect the relaxation of the electric current fluctuations. We compare our results to other approaches describing similar systems, and provide the value of the electrical conductivity and the relaxation time for a hadron gas at temperatures between T=60 MeV and T=150 MeV.
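For reference, one common form of the Green-Kubo relation invoked here, in natural units with $k_B=1$ (normalization conventions for the volume and the current vary between references):
$$\sigma_{\mathrm{el}}=\frac{V}{T}\int_0^{\infty}dt\,\langle j^{x}(t)\,j^{x}(0)\rangle_{\mathrm{eq}},$$
and with an exponential ansatz $\langle j^{x}(t)\,j^{x}(0)\rangle\propto e^{-t/\tau_{\mathrm{el}}}$ the time integral collapses to $\sigma_{\mathrm{el}}=(V/T)\,\tau_{\mathrm{el}}\,\langle (j^{x})^{2}\rangle_{\mathrm{eq}}$, which is one way a relaxation time of the electric current pairs with the conductivity.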
|
high energy physics phenomenology
|
An electoral quick count is a statistical procedure whose main objective is to obtain a relatively small but representative sample of all the polling stations in a certain election, and to measure the uncertainty about the final result before the total count of votes. A stratified sampling design is commonly preferred to reduce estimation variability. The present work shows that dependence among strata and among candidates should be taken into consideration for statistical inferences therein, and a copula based model is proposed and applied to Mexico's 2006, 2012, and 2018 presidential elections data.
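A toy combined ratio estimator over strata (the station counts, sample sizes, and vote counts are invented; note that it treats strata as independent, which is precisely the assumption the copula-based model relaxes):

```python
import numpy as np

rng = np.random.default_rng(0)
N_h = np.array([3000, 5000, 2000])         # polling stations per stratum (made up)
n_h = np.array([30, 50, 20])               # stations sampled per stratum
cand = [rng.poisson(120, n) for n in n_h]  # candidate votes in sampled stations
tot = [rng.poisson(400, n) for n in n_h]   # total votes in sampled stations

# combined ratio estimator: expand stratum means by N_h, then take the ratio
num = np.sum(N_h * np.array([np.mean(c) for c in cand]))
den = np.sum(N_h * np.array([np.mean(t) for t in tot]))
print(f"estimated final vote share: {num / den:.4f}")
```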
|
statistics
|
Organic materials are known to feature long spin-diffusion times, originating in a generally small spin-orbit coupling observed in these systems. From that perspective, chiral molecules acting as efficient spin selectors pose a puzzle that has attracted much attention in recent years. Here we revisit the physical origins of chiral-induced spin selectivity (CISS) and propose a simple analytic minimal model to describe it. The model treats a chiral molecule as an anisotropic wire with molecular dipole moments aligned arbitrarily with respect to the wire's axes, and is therefore quite general. Importantly, it shows that a helical structure of the molecule is not necessary to observe CISS, and other chiral non-helical molecules can also be considered potential candidates for the CISS effect. We also show that the suggested simple model captures the main characteristics of CISS observed in experiment, without the need for the additional constraints employed in previous studies. The results pave the way for understanding other related physical phenomena where the CISS effect plays an essential role.
|
condensed matter
|
Ultra Fast Outflows (UFOs) are an established feature in X-ray spectra of AGNs. According to the standard picture, they are launched at accretion disc scales with relativistic velocities, up to 0.3-0.4 c. Their high kinetic power is enough to induce an efficient feedback on galactic-scale, possibly contributing to the co-evolution between the central supermassive black hole (SMBH) and the host galaxy. It is therefore of paramount importance to fully understand the UFO physics, in particular the forces driving their acceleration and the relation with the accretion flow they originate from. In this paper we investigate the impact of special relativity effects on the radiative pressure exerted onto the outflow. The radiation received by the wind decreases for increasing outflow velocity v, implying that the standard Eddington limit argument has to be corrected according to v. Due to the limited ability of the radiation to counteract the SMBH gravity, we expect to find lower typical velocities with respect to the non-relativistic scenario. We integrate the relativistic-corrected outflow equation of motion for a realistic set of starting conditions. We concentrate on UFO typical values of ionisation, column density and launching radius. We explore a one-dimensional, spherical geometry and a 3D setting with a rotating thin accretion disc. We find that the inclusion of relativistic effects leads to sizeable differences in the wind dynamics and that v is reduced up to 50% with respect to the non-relativistic treatment. We compare our results with a sample of UFO from the literature, and we find that the relativistic-corrected velocities are systematically lower than the reported ones, indicating the need for an additional mechanism, such as magnetic driving, to explain the highest velocity components. These conclusions, derived for AGN winds, have a general applicability.
|
astrophysics
|
Texturing is a fundamental process in computer graphics. Texture is leveraged to enhance the visualization outcome for a 3D scene. In many cases a texture image cannot cover a large 3D model surface because of its small resolution. Conventional techniques like repeating, mirror repeating or clamp to edge do not yield visually acceptable results. Deep learning based texture synthesis has proven to be very effective in such cases. All deep texture synthesis methods trying to create larger resolution textures are limited in terms of GPU memory resources. In this paper, we propose a novel approach to example-based texture synthesis by using a robust deep learning process for creating tiles of arbitrary resolutions that resemble the structural components of an input texture. In this manner, our method is firstly much less memory limited owing to the fact that a new texture tile of small size is synthesized and merged with the original texture and secondly can easily produce missing parts of a large texture.
|
computer science
|
In the context of the recently measured non-leptonic decays $B_{d}\to K^{*0}\bar{K}^{*0}$ and $B_{s}\to K^{*0}\bar{K}^{*0}$ we analyse the anatomy of the $L_{VV}$ observable that compares the longitudinal components of $B_s \to VV$ and $B_d \to VV$ decays. This observable is cleaner than the longitudinal polarisation fraction as it is afflicted only at subleading order in a $1/m_b$ expansion by the theoretical uncertainties arising in the transverse components entering the polarisation fraction. Focusing on the particular case of $B_{d}\to K^{*0}\bar{K}^{*0}$ and $B_{s}\to K^{*0}\bar{K}^{*0}$, we discuss the main sources of hadronic uncertainty in the SM. We find for the SM prediction $L_{K^*\bar{K}^*}=19.5^{+9.3}_{-6.8}$, which implies a $2.6\sigma$ tension with respect to the most recent data, pointing to a deficit in the $b \to s$ transition of the non-leptonic decay versus the corresponding $b \to d$ transition. We discuss possible New Physics explanations for this deviation, first at the level of the Weak Effective Theory and we identify that the two Wilson coefficients ${\cal C}_{4}$ and ${\cal C}_{8g}$ can play a central role in explaining this anomaly. Finally, we briefly explore two different simplified New Physics models which can explain the anomaly through a contribution either in ${\cal C}_4$ (Kaluza-Klein gluon) or in ${\cal C}_{8g}$, with a significant amount of fine tuning, but possible connections to the $b \to s \ell \ell$ anomalies.
|
high energy physics phenomenology
|
Voice controlled devices and services have become very popular in the consumer IoT. Cloud-based speech analysis services extract information from voice inputs using speech recognition techniques. Services providers can thus build very accurate profiles of users' demographic categories, personal preferences, emotional states, etc., and may therefore significantly compromise their privacy. To address this problem, we have developed a privacy-preserving intermediate layer between users and cloud services to sanitize voice input directly at edge devices. We use CycleGAN-based speech conversion to remove sensitive information from raw voice input signals before regenerating neutralized signals for forwarding. We implement and evaluate our emotion filtering approach using a relatively cheap Raspberry Pi 4, and show that performance accuracy is not compromised at the edge. In fact, signals generated at the edge differ only slightly (~0.16%) from cloud-based approaches for speech recognition. Experimental evaluation of generated signals show that identification of the emotional state of a speaker can be reduced by ~91%.
|
electrical engineering and systems science
|
Number-non-conserving terms in quadratic bosonic Hamiltonians can induce unwanted dynamical instabilities. By exploiting the pseudo-Hermitian structure built in to these Hamiltonians, we show that as long as dynamical stability holds, one may always construct a non-trivial dual (unitarily equivalent) number-conserving quadratic bosonic Hamiltonian. We exemplify this construction for a gapped harmonic chain and a bosonic analogue to Kitaev's Majorana chain. Our duality may be used to identify local number-conserving models that approximate stable bosonic Hamiltonians without the need for parametric amplification and to implement non-Hermitian $\mathcal{P}\mathcal{T}$-symmetric dynamics in non-dissipative number-conserving bosonic systems. Implications for computing topological invariants are addressed.
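In standard notation, the Hamiltonians in question take the quadratic form
$$\widehat{H}=\sum_{ij}A_{ij}\,a_i^{\dagger}a_j+\tfrac{1}{2}\sum_{ij}\left(B_{ij}\,a_i^{\dagger}a_j^{\dagger}+B_{ij}^{*}\,a_j a_i\right),\qquad A=A^{\dagger},\quad B=B^{T},$$
and the Heisenberg dynamics is generated by the non-Hermitian (pseudo-Hermitian) Bogoliubov matrix $\begin{pmatrix} A & B\\ -B^{*} & -A^{*}\end{pmatrix}$; dynamical stability amounts to this matrix being diagonalizable with a real spectrum, in which case a Bogoliubov rotation produces the number-conserving dual (a textbook-level summary consistent with, though not spelled out in, the abstract).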
|
quantum physics
|
We introduce Bayesian probability theory to investigate uncertainty propagation based on meta-models. We approach the problem from the perspective of data analysis, with a given (however almost-arbitrary) input probability distribution and a given "training" set of computer simulations. While proven mathematically to be the unique consistent probability calculus, the subject of this paper is not to demonstrate beauty but usefulness. We explicitly list all propositions and lay open the general structure of any uncertainty propagation based on meta-models. The former allows rigorous treatment at any stage, while the latter allows us to quantify the interaction of the surrogate uncertainties with the usual parameter uncertainties. Additionally, we show a simple way to implicitly include spatio-temporal correlations. We then apply the framework jointly to a family of generalized linear meta-models that implicitly includes Polynomial Chaos Expansions as a special case. While we assume a Gaussian surrogate uncertainty, we do not assume that the scale of the surrogate uncertainty is known, i.e. we end up with a Student-t distribution. We arrive at semi-analytic formulas for the surrogate uncertainties and the uncertainty propagation.
|
statistics
|
Deep learning methods for enhancing dark images learn a mapping from input images to output images with pre-determined discrete exposure levels. Often, at inference time the input and optimal output exposure levels of the given image are different from the seen ones during training. As a result the enhanced image might suffer from visual distortions, such as low contrast or dark areas. We address this issue by introducing a deep learning model that can continuously generalize at inference time to unseen exposure levels without the need to retrain the model. To this end, we introduce a dataset of 1500 raw images captured in both outdoor and indoor scenes, with five different exposure levels and various camera parameters. Using the dataset, we develop a model for extreme low-light imaging that can continuously tune the input or output exposure level of the image to an unseen one. We investigate the properties of our model and validate its performance, showing promising results.
|
electrical engineering and systems science
|
We have built regions in a sky map where the arrival of 120 EeV ultrahigh energy cosmic rays (UHECR) directionally correlated with the latest astrophysical neutrino tracks observed at IceCube, taken as point sources, should be expected. In order to calculate these arrival directions we have considered contributions to the cosmic-ray deflections from both the galactic and the extragalactic magnetic fields, and a UHECR composition compatible with current expectations. We have used the Jansson-Farrar JF12 model for the Galactic magnetic field and an extragalactic magnetic field strength of 1 nG with a coherence length of 1 Mpc. We observe that the regions outside of the Galactic plane are more strongly correlated with the neutrino tracks than those adjacent to or in it, with the former regions being good candidates to search for excesses, or anisotropies, in the UHECR flux. Additionally, we have focused, as an example, on the region of 150 EeV UHECR arrival directions correlated with the IceCube event 37 located at $(l,b)=(-137.1^\circ,65.8^\circ)$ in the Northern Hemisphere, far away from the Galactic plane, obtaining an angular size $\sim 5^\circ$, being $\sim 3^\circ$ for 200 EeV and $\sim 8^\circ$ for 120 EeV. The results presented in this paper constitute a useful sky-map guide for UHECR excess searches.
|
astrophysics
|
We consider KdV currents in a quantum field theory obtained by deforming a two-dimensional conformal field theory on a cylinder via the irrelevant operator $T{\bar T}$. In this paper we determine the modular properties of their one-point functions. We find that the one-point functions factorize into two components, each with a definite modular weight. We also obtain a general differential equation that the generalized torus partition sum satisfies.
|
high energy physics theory
|
We discuss the violation of quark-flavor symmetry at high temperatures, induced by nonperturbative thermal loop corrections and the axial anomaly, based on a three-flavor linear-sigma model including an axial-anomaly induced flavor-breaking term. We employ a nonperturbative analysis following the Cornwall-Jackiw-Tomboulis formalism, and show that the model undergoes a chiral crossover with a pseudo-critical temperature, consistently with lattice observations. We find the following features regarding the flavor breaking eminent around and above the pseudo-critical temperature: i) the up- and down-quark condensates drop faster than the strange-quark condensate toward criticality, but still keep a nonzero value even far above the critical temperature; ii) the introduced anomaly-related flavor-breaking effect acts as a catalyzer toward chiral restoration, and reduces the amount of flavor breaking in the up, down and strange quark condensates; iii) a dramatic deformation of the meson flavor-mixing structure is observed, in which the anomaly-induced flavor breaking is found to be almost irrelevant; iv) the meson spectroscopy gets corrected by the net nonperturbative flavor-breaking effects, where the scalar meson mass hierarchy (inverse mass hierarchy) is significantly altered by the presence of the anomaly-related flavor breaking; v) the topological susceptibility receives a significant contribution from the surviving strange quark condensate, which cannot be dictated by chiral perturbation theory, and deviates from the dilute instanton gas prediction; there the anomaly-induced flavor breaking plays the role of destructive interference for the net flavor violation; vi) the U(1)_A breaking is enhanced by the strange quark condensate, which may account for the tension in the effective U(1)_A restoration observed on lattices with two flavors and 2+1 flavors near the chiral limit.
|
high energy physics phenomenology
|
We studied the effects of the absolute neutrino mass scale in the scotogenic radiative seesaw model. From a scan over the parameter space of this model, a linear relation between the absolute neutrino mass and the dark sector-Higgs coupling $\lambda_5= 3.1\times10^{-9}\ m_{\nu_e}/$eV has been established. With the projected sensitivity of the KATRIN experiment nearing cosmologically favored values, a neutrino mass measurement would fix the value of $\lambda_5$. Subsequent correlations between the DM mass and the Yukawa coupling between DM and the SM leptons can probe the fermion DM parameter space, when lepton flavor violation constraints are also considered. The results are independent of the neutrino mass hierarchy and the CP phase.
|
high energy physics phenomenology
|
We performed an intensive accretion disk reverberation mapping campaign on the high accretion rate active galactic nucleus Mrk 142 in early 2019. Mrk 142 was monitored with the Neil Gehrels Swift Observatory for 4 months in X-rays and 6 UV/optical filters. Ground-based photometric monitoring was obtained from the Las Cumbres Observatory, Liverpool Telescope and Dan Zowada Memorial Observatory in ugriz filters and the Yunnan Astronomical Observatory in V. Mrk 142 was highly variable throughout, displaying correlated variability across all wavelengths. We measure significant time lags between the different wavelength light curves, finding that through the UV and optical the wavelength-dependent lags, $\tau(\lambda)$, generally follow the relation $\tau(\lambda) \propto \lambda^{4/3}$, as expected for the $T\propto R^{-3/4}$ profile of a steady-state optically-thick, geometrically-thin accretion disk, though they can also be fit by $\tau(\lambda) \propto \lambda^{2}$, as expected for a slim disk. The exceptions are the u and U bands, where an excess lag is observed, as has been seen in other AGN and attributed to continuum emission arising in the broad-line region. Furthermore, we perform a flux-flux analysis to separate the constant and variable components of the spectral energy distribution, finding that the flux dependence of the variable component is consistent with the $f_\nu\propto\nu^{1/3}$ spectrum expected for a geometrically-thin accretion disk. Moreover, the X-ray to UV lag is significantly offset from an extrapolation of the UV/optical trend, with the X-rays showing a poorer correlation with the UV than the UV does with the optical. The magnitude of the UV/optical lags is consistent with a highly super-Eddington accretion rate.
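A quick sketch of fitting the wavelength-dependent lags to the thin-disk form (the filter wavelengths below are approximate pivot wavelengths and the lag values are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

lam = np.array([1928.0, 2246.0, 3465.0, 4392.0, 5468.0, 6504.0])  # Angstrom, approx.
tau = np.array([0.30, 0.45, 0.80, 1.05, 1.45, 1.80])              # lags in days (toy)

def disk_lag(lam, tau0, lam0=1928.0, beta=4.0 / 3.0):
    """tau(lambda) = tau0 * ((lambda/lam0)^beta - 1); thin disk has beta = 4/3."""
    return tau0 * ((lam / lam0) ** beta - 1.0)

p, _ = curve_fit(disk_lag, lam, tau, p0=[1.0])   # lam0, beta held at defaults
print(f"tau0 = {p[0]:.2f} d")  # refit with beta=2 to compare the slim-disk case
```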
|
astrophysics
|
Optical lattice clock systems with ultra-cold strontium-88 atoms have been used to demonstrate superradiant lasing and magnetic field-controlled optical transmission. We explain these phenomena theoretically with a rigorous model for three-level atoms coupled to a single cavity mode. We identify a class of dark atom-light dressed states which become accessible due to mixing with bright dressed states in the presence of a magnetic field. We predict that these states, under moderate incoherent pumping, lead to lasing with a linewidth of only tens of Hz, orders of magnitude smaller than the cavity linewidth and the atomic incoherent decay and pumping rates.
|
quantum physics
|