text | label
---|---
The NOvA long-baseline neutrino experiment uses a pair of large, segmented, liquid-scintillator calorimeters to study neutrino oscillations using GeV-scale neutrinos from the Fermilab NuMI beam. These detectors are also sensitive to the flux of neutrinos emitted during a core-collapse supernova through inverse beta decay interactions on carbon at energies of $\mathcal{O}(10~\text{MeV})$. This signature provides a means to study the dominant mode of energy release for a core-collapse supernova occurring in our galaxy. We describe the data-driven software trigger system developed and employed by the NOvA experiment to identify and record neutrino data from nearby galactic supernovae. This technique has been used by NOvA to self-trigger on potential core-collapse supernovae in our galaxy, with an estimated sensitivity reaching out to a distance of 10~kpc while achieving a detection efficiency of 23\% to 49\% for supernovae from progenitor stars with masses of 9.6~M$_\odot$ and 27~M$_\odot$, respectively. | physics |
Reflecting the fundamental interactions of polarized light with magnetic matter, magneto-optical effects have been known for more than a century. The emergence of these phenomena is commonly attributed to the interplay between exchange splitting and spin-orbit coupling in the electronic structure of magnets. Using theoretical arguments, we demonstrate that topological magneto-optical effects can arise in noncoplanar antiferromagnets due to the finite scalar spin chirality, without any reference to exchange splitting or spin-orbit coupling. We propose spectral integrals of certain magneto-optical quantities that uncover the unique topological nature of the discovered effect. We also find that the Kerr and Faraday rotation angles can be quantized in insulating topological antiferromagnets in the low-frequency limit, owing to nontrivial global properties that manifest in quantum topological magneto-optical effects. Although the predicted topological and quantum topological magneto-optical effects are fundamentally distinct from conventional light-matter interactions, they can be measured by readily available experimental techniques. | condensed matter |
We compute the frequency-dependent shear and bulk viscosity spectral functions of an interacting Fermi gas in a quantum virial expansion up to second order in the fugacity parameter $z=e^{\beta \mu}$, which is small at high temperatures. Calculations are carried out using a diagrammatic finite-temperature field-theoretic framework, in which the analytic continuation from Matsubara to real frequencies is carried out in closed analytic form. Besides a possible zero-frequency Drude peak, our results for the spectral functions show a broad continuous spectrum at all frequencies with an additional bound-state contribution for frequencies larger than the dimer-breaking energy. Our results are consistent with various sum rules and universal high-frequency tails. In the low-frequency limit, the shear viscosity spectral function is recast as a collision integral, which reproduces known results for the static shear viscosity from kinetic theory. Our findings for the static bulk viscosity of a Fermi gas near unitarity, however, show a nonanalytic dependence on the scattering length, at variance with kinetic theory. | condensed matter |
Single-particle electron cryomicroscopy (cryo-EM) is an increasingly popular technique for elucidating the three-dimensional structure of proteins and other biologically significant complexes at near-atomic resolution. It is an imaging method that does not require crystallization and can capture molecules in their native states. In single-particle cryo-EM, the three-dimensional molecular structure needs to be determined from many noisy two-dimensional tomographic projections of individual molecules, whose orientations and positions are unknown. The high level of noise and the unknown pose parameters are two key elements that make reconstruction a challenging computational problem. Even more challenging is the inference of structural variability and flexible motions when the individual molecules being imaged are in different conformational states. This review discusses computational methods for structure determination by single-particle cryo-EM and their guiding principles from statistical inference, machine learning, and signal processing that also play a significant role in many other data science applications. | physics |
As dataset sizes increase, data analysis tasks in high performance computing (HPC) are increasingly dependent on sophisticated dataflows and out-of-core methods for efficient system utilization. In addition, as HPC systems grow, memory access and data sharing are becoming performance bottlenecks. Cloud computing employs a data processing paradigm typically built on a loosely connected group of low-cost computing nodes without relying upon shared storage and/or memory. Apache Spark is a popular engine for large-scale data analysis in the cloud, which we have successfully deployed via job submission scripts on production clusters. In this paper, we describe common parallel analysis dataflows for both Message Passing Interface (MPI) and cloud based applications. We developed an effective benchmark to measure the performance characteristics of these tasks using both types of systems, specifically comparing MPI/C-based analyses with Spark. The benchmark is a data processing pipeline representative of a typical analytics framework implemented using map-reduce. In the case of Spark, we also consider whether language plays a role by writing tests using both Python and Scala, a language built on the Java Virtual Machine (JVM). We include performance results from two large systems at Argonne National Laboratory including Theta, a Cray XC40 supercomputer on which our experiments run with 65,536 cores (1024 nodes with 64 cores each). The results of our experiments are discussed in the context of their applicability to future HPC architectures. Beyond understanding performance, our work demonstrates that technologies such as Spark, while typically aimed at multi-tenant cloud-based environments, show promise for data analysis needs in a traditional clustering/supercomputing environment. | computer science |
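As a concrete illustration of the Spark side of the row above, here is a minimal PySpark sketch of a map-reduce style statistics pipeline of the kind such benchmarks exercise. The input path and the parsed field are hypothetical placeholders; this is not the paper's actual benchmark code.

```python
# Illustrative sketch only: a minimal map-reduce style analytics pipeline of
# the kind the benchmark represents. The path and field index are invented.
# Requires a PySpark installation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("analytics-sketch").getOrCreate()
sc = spark.sparkContext

# Map phase: parse one numeric value per CSV line.
records = sc.textFile("hdfs:///data/records.csv")            # hypothetical path
values = records.map(lambda line: float(line.split(",")[1]))

# Reduce phase: global aggregates computed across the executors.
n = values.count()
mean = values.sum() / n
var = values.map(lambda v: (v - mean) ** 2).sum() / n        # second pass

print(f"n={n} mean={mean:.4f} var={var:.4f}")
spark.stop()
```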
As an effective tool for two-dimensional data analysis, two-dimensional canonical correlation analysis (2DCCA) is not only capable of preserving the intrinsic structural information of original two-dimensional (2D) data, but also reduces the computational complexity effectively. However, due to its unsupervised nature, 2DCCA is incapable of extracting sufficiently discriminative representations, resulting in unsatisfactory performance. In this letter, we propose a complete discriminative tensor representation learning (CDTRL) method based on linear correlation analysis for analyzing 2D signals (e.g., images). We show that the introduction of the complete discriminative tensor representation strategy provides an effective vehicle for revealing and extracting the discriminant representations across 2D data sets, leading to improved results. Experimental results show that the proposed CDTRL outperforms state-of-the-art methods on the evaluated data sets. | computer science |
We investigate the impact of a nonzero neutrino mass splitting and of elastic neutrino-nucleon collisions on fast neutrino oscillations. Our calculations confirm that a small neutrino mass splitting and the neutrino mass hierarchy have very little effect on fast oscillation waves. We also demonstrate explicitly that fast oscillations remain largely unaffected on time/distance scales that are much smaller than the neutrino mean free path but are damped on larger scales. This damping originates from both the direct modification of the dispersion relation of the oscillation waves in the neutrino medium and the flattening of the neutrino angular distributions over time. Our work suggests that fast neutrino oscillation waves produced near the neutrino sphere can propagate essentially unimpeded, which may have ramifications for various aspects of supernova physics. | high energy physics phenomenology |
The thermal properties of materials are critically important to various technologies and are increasingly the target of materials design efforts. However, it is only relatively recent advances in first-principles computational techniques that have enabled researchers to explore the microscopic mechanisms of thermal properties, such as thermal expansion. We use the Gr\"{u}neisen theory of thermal expansion in combination with density functional calculations and the quasiharmonic approximation to uncover mechanisms of thermal expansion in PbTiO$_3$ thin-films in terms of elastic and vibrational contributions to the free energy. Surprisingly, we find that although the structural parameters of PbTiO$_3$ thin-films evolve with temperature as if they are dominated by linear elasticity, PbTiO$_3$ thin-films are strongly anharmonic, with large changes in the elastic constants and Gr\"{u}neisen parameters with both misfit strain and temperature. We show that a fortuitous near-cancellation between different types of anharmonicity gives rise to this behavior. Our results illustrate the importance of high-order phonon-strain anharmonicity in determining the temperature-dependent structural parameters of PbTiO$_3$ thin-films, and highlight the complex manner in which thermal expansion, misfit strain and elastic and vibrational properties are intertwined. | condensed matter |
We give a concise version of a recently proposed concept of fractionalization of an order parameter, thus generating a constraint through a fictitious gauge field. We argue that this new line of approach is key to explaining the longstanding mystery of the pseudo-gap phase in cuprate superconductors. For example, the fractionalization of a finite-momentum, charge-two state living on lattice bonds -- also called a Pair Density Wave -- into a particle-particle pair and a particle-hole pair leads to the opening of a gap in the fermionic spectrum. It induces "phase-locking" between the particle-particle and particle-hole pairs. We describe the formation of the Fermi arcs in the spectrum and give an account of recent Raman spectroscopy results from a minimal microscopic model. We relate the "phase-locking" to intriguing STM experimental observations. | condensed matter |
Aggregated Relational Data, known as ARD, capture information about a social network by asking about the number of connections between a person and a group with a particular characteristic, rather than asking about connections between each pair of individuals directly. Breza et al. (Forthcoming) and McCormick and Zheng (2015) relate ARD questions, consisting of survey items of the form "How many people with characteristic X do you know?", to parametric statistical models for complete graphs. In this paper, we propose criteria for consistent estimation of individual- and graph-level statistics from ARD. | statistics |
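For readers unfamiliar with how ARD items are turned into estimates, the sketch below implements the classical network scale-up degree estimator. It is a standard textbook construction used as a concrete instance of ARD, not the estimators proposed in the papers cited above, and all numbers are invented.

```python
# Minimal sketch of the classical network scale-up estimator for ARD
# (Killworth et al.). Illustrative only; all numbers are made up.
import numpy as np

N = 1_000_000                                 # total population size (assumed known)
group_sizes = np.array([5000, 12000, 800])    # known sizes N_k of the probe groups
# y[i, k] = respondent i's answer to "How many people in group k do you know?"
y = np.array([[3, 8, 0],
              [1, 5, 1]])

# Estimated personal network size (degree) of each respondent:
# d_i = N * (sum_k y_ik) / (sum_k N_k)
degrees = N * y.sum(axis=1) / group_sizes.sum()
print(degrees)   # e.g. [617.98, 393.26]
```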
Experimental validation and control of quantum traits for an open quantum system are important for any quantum information purpose. We consider a traveling atom qubit as a quantum memory with adjustable velocity inside a leaky cavity, adopting a quantum witness as a figure of merit for quantumness assessment. We show that this model constitutes an inherent physical instance where the quantum witness does not work properly if not suitably optimized. We then supply the optimal intermediate blind measurements which make the quantum witness a faithful tester of quantum coherence. We thus find that larger velocities protect quantumness against noise, leading to lifetime extension of hybrid qubit-photon entanglement and to higher phase estimation precision. Control of qubit motion thus reveals itself as a quantum enhancer. | quantum physics |
In this note we explore a fully unsupervised deep-learning framework for simulating non-linear structural equation models from observational training data. The main contribution of this note is an architecture for applying moment-matching loss functions to the edges of a causal Bayesian graph, resulting in a generative conditional-moment-matching graph-neural-network. This framework thus enables automated sampling of latent space conditional probability distributions for various graphical interventions, and is capable of generating out-of-sample interventional probabilities that are often faithful to the ground truth distributions well beyond the range contained in the training set. These methods could in principle be used in conjunction with any existing autoencoder that produces a latent space representation containing causal graph structures. | statistics |
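A kernel moment-matching (MMD) loss is one standard way to realize the "moment-matching loss functions" mentioned in the row above. The sketch below is a generic RBF-kernel MMD in PyTorch; it is only an assumption about the flavor of loss involved, not the note's exact per-edge objective.

```python
# Generic RBF-kernel maximum mean discrepancy (MMD) loss; a standard
# moment-matching criterion, shown here for illustration only.
import torch

def mmd_rbf(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased MMD^2 estimate between sample sets x [n, d] and y [m, d]."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

x = torch.randn(256, 2)
y = torch.randn(256, 2) + 1.0
print(mmd_rbf(x, y))   # clearly positive: the two samples differ in mean
```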
We consider unconstrained formulation of the higher spin gauge theory in anti de Sitter (AdS) spacetime, given by actions [1, 2], and provide on-shell supersymmetry transformations for the $\mathcal{N}=1$ unconstrained massless higher spin supermultiplet in four-dimensional AdS$_4$. Such an irreducible supermultiplet $(\Phi_1, \Phi_2 \,; \Psi_1,\Psi_2)$ contains a pair of bosonic fields, with opposite parity, which are generating functions (infinite collection of totally symmetric real tensor fields of all integer rank $s= 0, 1, \ldots, \infty$), as well as two fermionic fields, which have opposite signs of the AdS radius, that are spinorial generating functions (infinite tower of totally symmetric Majorana spinor-tensor fields of all half-integer spin $s={\scriptstyle \frac{1}{2}}, {\scriptstyle \frac{3}{2}}, \ldots, \infty$). | high energy physics theory |
We investigate the dynamics of quantum correlations (QC) under the effects of reservoir memory, as a resource for quantum information and computation tasks. Quantum correlations of two-qubit systems are used for implementing quantum teleportation successfully, and for investigating how teleportation fidelity, violation of the Bell-CHSH inequality, quantum steering and entanglement are connected with each other under the influence of noisy environments. Both Markovian and non-Markovian channels are considered, and it is shown that the decay and revival of correlations follow the hierarchy of quantum correlations in the state space. The noise tolerance of quantum correlations is checked for different types of unital and non-unital quantum channels, with and without memory. The quantum speed limit time $(\tau_{QSL})$ is investigated from the perspective of memory of quantum noise, and the corresponding dynamics is used to analyze the evolution of quantum correlations. We establish the connection between information backflow, quantum speed limit time and dynamics of quantum correlations for non-Markovian quantum channels. | quantum physics |
Barium stars are thought to result from binary evolution in systems wide enough to allow the more massive component to reach the asymptotic giant branch and eventually become a CO white dwarf. While Ba stars were initially known only among giant or subgiant stars, some were subsequently discovered also on the main sequence (and known as dwarf Ba stars). We provide here the orbital parameters of three dwarf Ba stars, completing the sample of 27 orbits published recently by Escorza et al. with these three southern targets. We show that these new orbital parameters are consistent with those of other dwarf Ba stars. | astrophysics |
The INTERSPEECH 2021 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the COVID-19 Cough and COVID-19 Speech Sub-Challenges, a binary classification on COVID-19 infection has to be made based on coughing sounds and speech; in the Escalation Sub-Challenge, a three-way assessment of the level of escalation in a dialogue is featured; and in the Primates Sub-Challenge, four species vs. background need to be classified. We describe the Sub-Challenges, baseline feature extraction, and classifiers based on the 'usual' ComParE and BoAW features as well as deep unsupervised representation learning using the AuDeep toolkit, and deep feature extraction from pre-trained CNNs using the Deep Spectrum toolkit; in addition, we add deep end-to-end sequential modelling and, in part, linguistic analysis. | electrical engineering and systems science |
The bulk-boundary correspondence is a defining feature of topological states of matter. However, for quantum magnets such as spin liquids or topological magnon insulators a direct observation of topological surface states has proven challenging because of the charge-neutral character of the excitations. Here we propose spin-polarized scanning tunneling microscopy as a spin-sensitive local probe to provide direct information about charge neutral topological edge states. We show how their signatures, imprinted in the local structure factor, can be extracted by specifically employing the strengths of existing technologies. As our main example, we determine the dynamical spin correlations of the Kitaev honeycomb model with open boundaries. We show that by contrasting conductance measurements of bulk and edge locations, one can extract direct signatures of the existence of fractionalized excitations and non-trivial topology. The broad applicability of this approach is corroborated by a second example of a kagome topological magnon insulator. | condensed matter |
Here, necessary optimality conditions for the optimistic bilevel programming problem are obtained in Asplund spaces. We also derive necessary optimality conditions in finite-dimensional spaces by assuming differentiability of the given functions. | mathematics |
Recently, the calculation of the holographic free energy for the mass-deformed ABJM model (mABJM) with ${\cal N}=2$ supersymmetry and $SU(3)\times U(1)$ global symmetry was tackled by Bobev et al. in arXiv:1812.01026. We solve the associated BPS equations, requiring IR regularity, using a perturbative method recently proposed by one of us in arXiv:1902.00418. In particular, we provide an analytic proof of a crucial conjecture made in arXiv:1812.01026 on the basis of numerical solutions: that the R-charge values of the three chiral multiplets in mABJM should be independent of the IR values of a hypermultiplet scalar, which is holographically dual to the superpotential mass term. | high energy physics theory |
A challenge when dealing with survival analysis data is accounting for a cure fraction, meaning that some subjects will never experience the event of interest. Mixture cure models have been frequently used to estimate both the probability of being cured and the time to event for the susceptible subjects, by usually assuming a parametric (logistic) form of the incidence. We propose a new estimation procedure for a parametric cure rate that relies on a preliminary smooth estimator and is independent of the model assumed for the latency. We investigate the theoretical properties of the estimators and show through simulations that, in the logistic/Cox model, presmoothing leads to more accurate results compared to the maximum likelihood estimator. To illustrate the practical use, we apply the new estimation procedure to two studies of melanoma survival data. | statistics |
We study $p$-adic families of cohomological automorphic forms for ${\mathrm{GL}}(2)$ over imaginary quadratic fields and prove that families interpolating a Zariski-dense set of classical cuspidal automorphic forms only occur under very restrictive conditions. We show how to computationally determine when this is not the case and establish concrete examples of families interpolating only finitely many Bianchi modular forms. | mathematics |
The EPR argument held that quantum mechanics is an incomplete description of reality. So far, the Heisenberg uncertainty principle and its extensions have all taken the form of inequalities, which provide at best approximate estimations; a precise estimation, and environmental variables, have never appeared in any such formula, and the indeterminism of observable quantities remains constrained by the uncertainty principle. The question arises whether there might be some deeper reality hidden beneath quantum mechanics, to be described by a more fundamental theory that can always predict the outcome of each measurement with certainty; this paper attempts to answer this question using the QCPB. As a result of the QCPB theory, we geometrically propose an equality, called the quantum geomertainty relation (QGR), that modifies the uncertainty relation on the basis of the fundamental QCPB theory and gives a complete description of reality predicting the outcome of each measurement with certainty; the uncertainty relation is then merely a derivation from this quantum geometric certainty equality. The QGR deals with measurement as a quantum equality on different manifolds equipped with various mathematical or physical structures. Accordingly, the environment joins the physical process: by taking the environment variable into consideration as a geometric structure function in the QCPB, the environment problem for measurements is naturally resolved. We demonstrate that an entanglement term exists between the observable and the environment; the QCPB thereby explains how the environment affects the measurement and causes unavoidable influences. Conversely, we state that quantum mechanics is assuredly incomplete. The QCPB is thus a new route toward such a complete description of reality. | quantum physics |
We determine the local equivalence class of the Seiberg-Witten Floer stable homotopy type of a spin rational homology 3-sphere $Y$ embedded into a spin rational homology $S^{1} \times S^{3}$ with a positive scalar curvature metric so that $Y$ generates the third homology. The main tool of the proof is a relative Bauer-Furuta-type invariant on a periodic-end 4-manifold. As a consequence, we give obstructions to positive scalar curvature metrics on spin rational homology $S^{1} \times S^{3}$, typically described as the coincidence of various Fr{\o}yshov-type invariants. This coincidence also yields alternative proofs of two known obstructions by Jianfeng Lin and by the authors for the same class of 4-manifolds. | mathematics |
Supernova remnants (SNRs) are observable for about 6-15x10^4 years before they fade into the Galactic interstellar medium. With a Galactic supernova rate of approximately two per century, we can expect to have of the order of 1200 SNRs in our Galaxy. However, only about 300 of them are known to date, with the majority having been discovered in Galactic plane radio surveys. Given that these SNRs represent the brightest tail of the distribution and are mostly located close to the plane, they are not representative of the complete sample. Here we report findings from the search for new SNRs in the eROSITA all-sky survey data, which led to the detection of one of the largest SNRs discovered at wavelengths other than radio: G249.5+24.5. This source is located at a relatively high Galactic latitude, where SNRs are not usually expected to be found. The remnant, 'Hoinga', has a diameter of about 4.4 degrees and shows a circular morphology with diffuse X-ray emission filling almost the entire remnant. Spectral analysis of the remnant emission reveals that an APEC spectrum from collisionally ionised diffuse gas and a plane-parallel shock plasma model with non-equilibrium ionisation are both able to provide an adequate description of the data, suggesting a gas temperature of the order of kT = 0.1 keV and an absorbing column density of N_H=3.6 x 10^20 cm^-2. Subsequent searches for a radio counterpart of the Hoinga remnant identified its radio emission in archival data from the Continuum HI Parkes All-Sky Survey (CHIPASS) and the 408-MHz `Haslam' all-sky survey. The radio spectral index alpha=-0.69 +- 0.08 obtained from these data definitively confirms the SNR nature of Hoinga. From its size and X-ray and radio spectral properties we conclude that Hoinga is a middle-aged Vela-like SNR located at a distance of about twice that of the Vela SNR, i.e. at ~500 pc. | astrophysics |
In this study, we present a new module built for users interested in a programming language similar to BUGS to fit a Bayesian model based on the piecewise exponential (PE) distribution. The module is an extension to the open-source program JAGS, by which a Gibbs sampler can be applied without requiring the derivation of complete conditionals and the subsequent implementation of strategies to draw samples from unknown distributions. The PE distribution is widely used in the fields of survival analysis and reliability. Currently, it can only be implemented in JAGS through methods that indirectly specify the likelihood based on Poisson or Bernoulli probabilities. Our module provides a more straightforward implementation and is thus more attractive to researchers aiming to spend more time exploring the results from the Bayesian inference rather than implementing the Markov Chain Monte Carlo (MCMC) algorithm. For those interested in extending JAGS, this work can be seen as a tutorial including important information not well investigated or organized in other materials. Here, we describe how to use the module taking advantage of the interface between R and JAGS. A short simulation study is developed to ensure that the module behaves well, and a real illustration, involving two PE models, exhibits a context where the module can be used in practice. | statistics |
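The indirect "Poisson trick" mentioned in the row above can be made concrete in a few lines. The sketch below, with an invented time grid and hazard rates, checks numerically for one subject that the direct piecewise-exponential log-likelihood matches the Poisson formulation once a hazard-independent offset is removed; it is a generic illustration, not the module's code.

```python
# Numerical check of the "Poisson trick" for the piecewise exponential (PE)
# likelihood, for a single subject. Grid and hazards are invented.
import numpy as np
from scipy.stats import poisson

cuts = np.array([0.0, 1.0, 2.5, np.inf])  # interval boundaries a_0 < a_1 < ...
lam = np.array([0.2, 0.5, 0.8])           # constant hazard within each interval
t, delta = 1.7, 1                         # observed time and event indicator

expo = np.clip(t - cuts[:-1], 0.0, np.diff(cuts))  # time at risk per interval
j = np.searchsorted(cuts, t, side="right") - 1     # interval containing t

# Direct PE log-likelihood: delta*log(lam_j) - sum_j lam_j * expo_j
ll_direct = delta * np.log(lam[j]) - np.sum(lam * expo)

# Poisson trick: per-interval event counts d_j ~ Poisson(lam_j * expo_j);
# subtracting the hazard-independent offset sum_j d_j*log(expo_j) recovers
# the direct log-likelihood exactly.
d = np.zeros_like(lam)
d[j] = delta
m = expo > 0
ll_poisson = poisson.logpmf(d[m], lam[m] * expo[m]).sum() \
    - np.sum(d[m] * np.log(expo[m]))

print(np.isclose(ll_direct, ll_poisson))  # True
```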
IceCube is a cubic-kilometer Cherenkov detector located in the deep ice at the geographic South Pole. The dominant event yield in the deep ice detector consists of penetrating atmospheric muons produced in cosmic ray air showers with energies above several hundred GeV. In addition, the surface array, IceTop, deployed above the IceCube deep ice detector, measures the electromagnetic signal and low-energy muons of the air shower. Hence, IceCube and IceTop yield unique opportunities to study cosmic rays with large statistics in great detail. We discuss the latest results of air shower measurements from IceCube and IceTop, including the energy spectrum of cosmic rays from 250 TeV up to the EeV range. We will also report a measurement of the cosmic ray mass composition in the PeV to EeV range and show recent results from searches for PeV gamma ray sources in the Southern Hemisphere. In addition, results from a full-sky analysis of the cosmic ray anisotropy, using combined data from the IceCube and HAWC observatories, will be reported. Finally, we will present a measurement of the density of muons in the GeV range and discuss its consistency with predictions from hadronic interaction models. | astrophysics |
We study properties of the Fuss-Catalan distributions $\mu(p,r)$, $p\geq1$, $0<r\leq p$: free infinite divisibility, free self-decomposability, free regularity and unimodality. We show that the Fuss-Catalan distribution $\mu(p,r)$ is freely self-decomposable if and only if $1 \leq p=r \leq 2$. | mathematics |
Moduli spaces - finite-dimensional, collective coordinate manifolds - for kinks and antikinks in $\phi^4$ theory and sine-Gordon theory are reconsidered. The field theory Lagrangian restricted to moduli space defines a reduced Lagrangian, combining a potential with a kinetic term that can be interpreted as a Riemannian metric on moduli space. Moduli spaces should be metrically complete, or have an infinite potential on their boundary. Examples are constructed for both kink-antikink and kink-antikink-kink configurations. The naive position coordinates of the kinks and antikinks sometimes need to be extended from real to imaginary values, although the field remains real. The previously discussed null-vector problem for the shape modes of $\phi^4$ kinks is resolved by a better coordinate choice. In sine-Gordon theory, moduli spaces can be constructed using exact solutions at the critical energy separating scattering and breather (or wobble) solutions; here, energy conservation relates the metric and potential. The reduced dynamics on these moduli spaces accurately reproduces properties of the exact solutions over a range of energies. | high energy physics theory |
We investigate the high energy behavior of the SU(N) chiral Gross-Neveu model in 1 + 1 dimensions. The model is integrable and matrix elements of several local operators (form factors) are known exactly. The form factors show rapidity space clustering, which means factorization, if a group of rapidities is shifted to infinity. We analyze this phenomenon for the SU(N) model. For several operators the factorization formulas are presented explicitly. | high energy physics theory |
We consider the problem of 3D shape reconstruction from multi-modal data, given uncertain calibration parameters. Typically, 3D data modalities can be in diverse forms such as sparse point sets, volumetric slices, 2D photos and so on. To jointly process these data modalities, we exploit a parametric level set method that utilizes ellipsoidal radial basis functions. This method not only allows us to analytically and compactly represent the object, it also confers on us the ability to overcome calibration-related noise that originates from inaccurate acquisition parameters. This essentially implicit regularization leads to a highly robust and scalable reconstruction, surpassing other traditional methods. In our results we first demonstrate the ability of the method to compactly represent complex objects. We then show that our reconstruction method is robust both to a small number of measurements and to noise in the acquisition parameters. Finally, we demonstrate our reconstruction abilities from diverse modalities such as volume slices obtained from liquid displacement (similar to CT scans and X-rays), and visual measurements obtained from shape silhouettes. | computer science |
$\tt DsixTools$ is a Mathematica package for the handling of the Standard Model Effective Field Theory (SMEFT) and the Low-energy Effective Field Theory (LEFT) with operators up to dimension six, both at the algebraic and numerical level. $\tt DsixTools$ contains a visually accessible and operationally convenient repository of all operators and parameters of the SMEFT and the LEFT. This repository also provides information concerning symmetry categories and number of degrees of freedom, and routines that allow to implement this information on global expressions (such as decay amplitudes and cross-sections). $\tt DsixTools$ also performs weak basis transformations, and implements the full one-loop Renormalization Group Evolution in both EFTs (with SM beta functions up to five loops in QCD), and the full one-loop SMEFT-LEFT matching at the electroweak scale. | high energy physics phenomenology |
We explore the stability of the phase separation phenomenon in few-fermion spin-$1/2$ systems confined in a double-well potential. It is shown that within the SU(2) symmetric case, where the total spin is conserved, the phase separation cannot be fully stabilized. An interaction regime characterized by metastable phase separation emerges for intermediate interactions which is inherently related with ferromagnetic spin-spin correlations emanating within each of the wells. The breaking of the SU(2) symmetry crucially affects the stability properties of the system as the phase separated state can be stabilized even for weak magnetic potential gradients. Our results imply an intricate relation between the phenomena of phase separation and ferromagnetism that lies beyond the view of the Stoner instability. | condensed matter |
In the present paper the author evaluates the path integral of a charged anisotropic harmonic oscillator (HO) in crossed electric and magnetic fields by two alternative methods. Both methods enable a rather formal calculation and circumvent some mathematically delicate issues such as the occurrence of an infinite normalization constant and ambiguities in path integral calculations when magnetic fields are present. The \emph{$1^{\text{st}}$} method uses complex Fourier series and a regularization scheme via the Riemann $\zeta$-function. The \emph{$2^{\text{nd}}$} method evaluates the path integral by transforming the Lagrangian to a uniformly rotating system. The latter method uses the fact that the Lorentz and Coriolis forces have the same functional form. Both forces cancel each other within the rotating system, given that it rotates with the Larmor frequency $\omega_{L}$. This fact simplifies considerably the calculation of the path integral. | quantum physics |
This work presents a physics-based compact model for SiC power MOSFETs that accurately describes the I-V characteristics up to large voltages and currents. Charge-based formulations accounting for the different physics of SiC power MOSFETs are presented. The formulations account for the effect of the large SiC/SiO$_2$ interface trap density characteristic of SiC MOSFETs and its dependence on temperature. The modeling of the interface charge density is found to be necessary to describe the electrostatics of SiC power MOSFETs when operating simultaneously in the high-current and high-voltage regions. The proposed compact model accurately fits the measurement data extracted from a 160 m$\Omega$, 1200 V SiC power MOSFET in the complete I-V plane, from drain voltage $V_d$ = 5 mV up to 800 V and over current ranges from a few mA to 30 A. | electrical engineering and systems science |
In any other circumstance, it might make sense to define the extent of the terrain (Data Science) first, and then locate and describe the landmarks (Principles). But this data revolution we are experiencing defies a cadastral survey. Areas are continually being annexed into Data Science. For example, biometrics was traditionally statistics for agriculture in all its forms but now, in Data Science, it means the study of characteristics that can be used to identify an individual. Examples of non-intrusive measurements include height, weight, fingerprints, retina scan, voice, photograph/video (facial landmarks and facial expressions), and gait. A multivariate analysis of such data would be a complex project for a statistician, but a software engineer might appear to have no trouble with it at all. In any applied-statistics project, the statistician worries about uncertainty and quantifies it by modelling data as realisations generated from a probability space. Another approach to uncertainty quantification is to find similar data sets, and then use the variability of results between these data sets to capture the uncertainty. Both approaches allow 'error bars' to be put on estimates obtained from the original data set, although the interpretations are different. A third approach, which concentrates on giving a single answer and gives up on uncertainty quantification, could be considered Data Engineering, although it has staked a claim in the Data Science terrain. This article presents a few (actually nine) statistical principles for data scientists that have helped me, and continue to help me, when I work on complex interdisciplinary projects. | statistics |
We discuss the decay of a two-level system into an engineered reservoir of coupled harmonic oscillators in the single-excitation manifold and propose its optical simulation with a homogeneous chain of coupled waveguides where individual elements couple to an external waveguide. We use two approaches to study the decay of the optical analogue for the probability amplitude of the two-level system being in the excited state. A Born approximation allows us to provide analytic closed-form amplitudes valid for small propagation distances. A Fourier-Laplace approach allows us to estimate an effective decay rate valid for long propagation distances. In general, our two analytic approximations match our numerical simulations using coupled mode theory and show non-Markovian decay into the engineered reservoir. In particular, we focus on two examples that provide enhancement or suppression of the decay rate using flat-top or Gaussian coupling distributions. | physics |
Although experiments can offer some fingerprints of the atomic structure of glasses (coordination numbers, pair distribution function, etc.), atomistic simulations are often required to directly access the structure itself (i.e., the positions of the atoms). On the one hand, molecular dynamics (MD) simulations can be used to generate a glass by quenching a liquid - but MD simulations remain plagued by extremely high cooling rates. On the other hand, reverse Monte Carlo (RMC) modeling bypasses the melt-quenching route - but RMC often yields non-unique glass structures. Here, we adopt the force-enhanced atomic refinement (FEAR) method to overcome these limitations and decipher the atomic structure of a sodium silicate glass. We show that FEAR offers an unprecedented description of the atomic structure of sodium silicate. The FEAR-generated glass structure simultaneously exhibits (i) enhanced agreement with experimental neutron diffraction data and (ii) higher energetic stability as compared to those generated by MD or RMC. This result allows us to reveal new insights into the atomic structure of sodium silicate glasses. Specifically, we show that sodium silicate glasses exhibit a more ordered medium-range order structure than previously suggested by MD simulations. These results pave the way toward an increased ability to accurately describe the atomic structure of glasses. | condensed matter |
Phantom dark energy can produce amplified cosmic acceleration at late times, thus increasing the value of $H_0$ favored by CMB data and releasing the tension with local measurements of $H_0$. We show that the best fit value of $H_0$ in the context of the CMB power spectrum is degenerate with a constant equation of state parameter $w$, in accordance with the approximate effective linear equation $H_0 + 30.93\; w - 36.47 = 0$ ($H_0$ in $km \; sec^{-1} \; Mpc^{-1}$). This equation is derived by assuming that both $\Omega_{0 \rm m}h^2$ and $d_A=\int_0^{z_{rec}}\frac{dz}{H(z)}$ remain constant (for invariant CMB spectrum) and equal to their best fit Planck/$\Lambda$CDM values as $H_0$, $\Omega_{0 \rm m}$ and $w$ vary. For $w=-1$, this linear degeneracy equation leads to the best fit $H_0=67.4 \; km \; sec^{-1} \; Mpc^{-1}$ as expected. For $w=-1.22$ the corresponding predicted CMB best fit Hubble constant is $H_0=74 \; km \; sec^{-1} \; Mpc^{-1}$ which is identical with the value obtained by local distance ladder measurements while the best fit matter density parameter is predicted to decrease since $\Omega_{0 \rm m}h^2$ is fixed. We verify the above $H_0-w$ degeneracy equation by fitting a $w$CDM model with fixed values of $w$ to the Planck TT spectrum showing also that the quality of fit ($\chi^2$) is similar to that of $\Lambda$CDM. However, when including SnIa, BAO or growth data the quality of fit becomes worse than $\Lambda$CDM when $w< -1$. Finally, we generalize the $H_0-w(z)$ degeneracy equation for $w(z)=w_0+w_1\; z/(1+z)$ and identify analytically the full $w_0-w_1$ parameter region that leads to a best fit $H_0=74\; km \; sec^{-1} \; Mpc^{-1}$ in the context of the Planck CMB spectrum. This exploitation of $H_0-w(z)$ degeneracy can lead to immediate identification of all parameter values of a given $w(z)$ parametrization that can potentially resolve the $H_0$ tension. | astrophysics |
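The quoted degeneracy direction can be checked directly from the numbers stated in the row above; the toy snippet below simply evaluates the given linear relation.

```python
# Evaluate the stated CMB degeneracy line H0 + 30.93*w - 36.47 = 0
# (H0 in km/s/Mpc); only numbers quoted in the text are used.
def h0_from_w(w: float) -> float:
    return 36.47 - 30.93 * w

print(h0_from_w(-1.00))  # 67.40 -> the Planck/LCDM best fit
print(h0_from_w(-1.22))  # 74.20 -> matches the local distance-ladder value
```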
The electroencephalogram (EEG) provides a non-invasive, minimally restrictive, and relatively low cost measure of mesoscale brain dynamics with high temporal resolution. Although signals recorded in parallel by multiple, near-adjacent EEG scalp electrode channels are highly correlated and combine signals from many different sources, biological and non-biological, independent component analysis (ICA) has been shown to isolate the various source generator processes underlying those recordings. Independent components (ICs) found by ICA decomposition can be manually inspected, selected, and interpreted, but doing so requires both time and practice, as ICs have no particular order or intrinsic interpretation and therefore require further study of their properties. Alternatively, sufficiently accurate automated IC classifiers can be used to classify ICs into broad source categories, speeding the analysis of EEG studies with many subjects and enabling the use of ICA decomposition in near-real-time applications. While many such classifiers have been proposed recently, this work presents the ICLabel project comprised of (1) an IC dataset containing spatiotemporal measures for over 200,000 ICs from more than 6,000 EEG recordings, (2) a website for collecting crowdsourced IC labels and educating EEG researchers and practitioners about IC interpretation, and (3) the automated ICLabel classifier. The classifier improves upon existing methods in two ways: by improving the accuracy of the computed label estimates and by enhancing its computational efficiency. The ICLabel classifier outperforms or performs comparably to the previous best publicly available method for all measured IC categories while computing those labels ten times faster than that classifier, as shown in a rigorous comparison against all other publicly available EEG IC classifiers. | electrical engineering and systems science |
In this paper, we analyze the downlink coverage probability and rate of an aerial user in vertical HetNets (VHetNets) comprising aerial base stations (aerial-BSs) and terrestrial-BSs. The locations of terrestrial-BSs are modeled as an infinite 2-D Poisson Point Process (PPP), while the locations of aerial-BSs are modeled as a finite 2-D Binomial Point Process (BPP). Our cellular-to-air (C2A) channel model incorporates line-of-sight (LoS) and non-LoS transmissions between terrestrial-BSs and a typical aerial user, while we assume LoS transmissions for all aerial links. We assume that the aerial user is associated with an aerial-BS or terrestrial-BS that provides the strongest average received power. Using stochastic geometry, we derive exact and approximate expressions of the coverage probability and rate in terms of interference power's Laplace transform. The expressions are simplified assuming only LoS transmissions for the C2A channels. This enables easy-to-compute equations with good accuracy at elevated aerial user heights. We find that aerial users hovering at low altitudes tend to connect to aerial-BSs in denser terrestrial environments. Employing directive beamforming at aerial-BSs guarantees an acceptable performance at the aerial user by reducing interference signals received from the aerial-BSs. In denser terrestrial networks, the performance at the aerial user degrades substantially despite beamforming. | electrical engineering and systems science |
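To make the stochastic-geometry machinery concrete, here is a heavily simplified Monte Carlo sketch of downlink coverage probability: a single terrestrial PPP tier, power-law path loss, Rayleigh fading, and nearest-BS association. It omits the aerial BPP tier, the LoS/NLoS C2A channel, and beamforming from the paper, and all parameter values are invented.

```python
# Monte Carlo sketch of SIR coverage probability for a single-tier PPP network
# observed at the origin. Simplified far beyond the paper's VHetNet model.
import numpy as np

rng = np.random.default_rng(0)
lam, alpha, theta = 1e-5, 3.5, 1.0   # BS density [1/m^2], path-loss exp., SIR thr.
R, trials = 5000.0, 2000             # simulation disk radius [m], # of runs

cov = 0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)     # PPP: Poisson number of BSs in disk
    if n == 0:
        continue
    r = R * np.sqrt(rng.random(n))          # radii uniform over the disk area
    h = rng.exponential(size=n)             # Rayleigh fading power gains
    p = h * r ** (-alpha)                   # received powers at the origin
    k = np.argmin(r)                        # nearest-BS association
    sir = p[k] / (p.sum() - p[k])
    cov += sir > theta

print("coverage probability ~", cov / trials)
```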
We present a bi-waveguide paradigm composed of joined Positive-Index-Material (PIM)/Negative-Index-Material (NIM) slabs, demonstrating ultra-slow light propagation stemming from the competing propagation disposition in the PIM and NIM regions. We report for the first time a mesoscopic extended electromagnetic (EM) enhancement covering regions of the order of the free space wavelength, enabled by the slow-light mode in our system. Our dynamic numerical results are consistent with our developed theoretical model, predicting an EM energy accumulation reminiscent of a charging capacitor. Our analysis reveals that spatial compression is not a requirement for EM enhancement in slow-light systems and stresses the merits of high coupling efficiency, strong temporal compression, monomodality and modal index bandwidth -- all present in our proposed paradigm. Furthermore, we show that the heterostructure waveguide mode is an extraordinary entity with a unique energy velocity that is opposite to the Poynting vector in one of the participant waveguides. We believe these results will inspire new slow-light platforms relevant to the collective harvesting of strong light-matter interactions. | physics |
We study cluster algebras for some all-loop Feynman integrals, including box-ladder, penta-box-ladder, and (seven-point) double-penta-ladder integrals. In addition to the well-known box ladder whose symbol alphabet is $D_2\simeq A_1^2$, we show that penta-box ladder has an alphabet of $D_3\simeq A_3$ and provide strong evidence that the alphabet of double-penta ladder can be identified with a $D_4$ cluster algebra. We relate the symbol letters to the ${\bf u}$ variables of cluster configuration space, which provide a gauge-invariant description of the cluster algebra, and we find various sub-algebras associated with limits of the integrals. We comment on constraints similar to extended-Steinmann relations or cluster adjacency conditions on cluster function spaces. Our study of the symbol and alphabet is based on the recently proposed Wilson-loop ${\rm d}\log$ representation, which allows us to predict higher-loop alphabet recursively; by applying such recursions to six-dimensional hexagon integrals, we also find $D_5$ and $D_6$ cluster functions for the two-mass-easy and three-mass-easy case, respectively. | high energy physics theory |
The ResNet and its variants have achieved remarkable successes in various computer vision tasks. Despite its success in making gradients flow through building blocks, the simple shortcut connection mechanism limits the ability to re-explore new, potentially complementary features due to the additive function. To address this issue, in this paper, we propose to introduce a regulator module as a memory mechanism to extract complementary features, which are further fed to the ResNet. In particular, the regulator module is composed of convolutional RNNs (e.g., Convolutional LSTMs or Convolutional GRUs), which are shown to be good at extracting spatio-temporal information. We name the new regulated networks RegNet. The regulator module can be easily implemented and appended to any ResNet architecture. We also apply the regulator module to improve the Squeeze-and-Excitation ResNet to show the generalization ability of our method. Experimental results on three image classification datasets have demonstrated the promising performance of the proposed architecture compared with the standard ResNet, SE-ResNet, and other state-of-the-art architectures. | electrical engineering and systems science |
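The regulator idea can be sketched in a few lines of PyTorch: a convolutional GRU cell carries a memory state across blocks, and its hidden state is concatenated with each residual block's input. This is only an illustration of the mechanism under assumed channel sizes, not the authors' RegNet implementation.

```python
# Sketch of a ConvGRU "regulator" feeding a residual block. Channel sizes
# are arbitrary; illustrative of the mechanism only.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde

class RegulatedBlock(nn.Module):
    def __init__(self, ch, hid_ch):
        super().__init__()
        self.gru = ConvGRUCell(ch, hid_ch)
        self.body = nn.Sequential(
            nn.Conv2d(ch + hid_ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x, h):
        h = self.gru(x, h)                        # regulator updates its memory
        out = self.body(torch.cat([x, h], 1))     # block sees input + memory
        return torch.relu(out + x), h             # residual shortcut preserved

x = torch.randn(2, 64, 32, 32)
h = torch.zeros(2, 16, 32, 32)
y, h = RegulatedBlock(64, 16)(x, h)
print(y.shape)   # torch.Size([2, 64, 32, 32])
```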
We study a multi-body asset-guarding game in missile defense where teams of interceptor missiles collaborate to defend a non-maneuvering asset against a group of threat missiles. We approach the problem in two steps. We first formulate an assignment problem where we optimally assign subsets of collaborating interceptors to each threat so that all threats are intercepted as far away from the asset as possible. We assume that each interceptor is controlled by a collaborative guidance law derived from linear quadratic dynamic games. Our results include a 6-DOF simulation of a 5-interceptor versus 3-threat missile engagement where each agent is modeled as a missile airframe controlled by an autopilot. Despite the assumption of linear dynamics in our collaborative guidance law and the unmodeled dynamics in the simulation environment (e.g., varying density and gravity), we show that the simulated trajectories match well with those predicted by our approach. Furthermore, we show that a more agile threat, with greater speed and acceleration, can be intercepted by inferior interceptors when they collaborate. We believe the concepts introduced in this paper may be applied in asymmetric missile defense scenarios, including defense against advanced cruise missiles and hypersonic vehicles. | electrical engineering and systems science |
A series of recent papers has used a parsing algorithm due to Shen et al. (2018) to recover phrase-structure trees based on proxies for "syntactic depth." These proxy depths are obtained from the representations learned by recurrent language models augmented with mechanisms that encourage the (unsupervised) discovery of hierarchical structure latent in natural language sentences. Using the same parser, we show that proxies derived from a conventional LSTM language model produce trees comparably well to the specialized architectures used in previous work. However, we also provide a detailed analysis of the parsing algorithm, showing (1) that it is incomplete---that is, it can recover only a fraction of possible trees---and (2) that it has a marked bias for right-branching structures which results in inflated performance in right-branching languages like English. Our analysis shows that evaluating with biased parsing algorithms can inflate the apparent structural competence of language models. | computer science |
In this paper, we propose an MCMC algorithm based on elliptical slice sampling with the purpose of improving sampling efficiency. During sampling, a mixture distribution is fitted periodically to previous samples. The components of the mixture distribution are called regional pseudo-priors because each component serves as the pseudo-prior for a subregion of the sampling space. Expectation-maximization, variational inference and stochastic approximation algorithms are used to estimate the parameters. Meanwhile, parallel computing is used to relieve the burden of computation. Ergodicity of the proposed algorithm is proven mathematically. Experimental results on one synthetic and two real-world datasets show that the proposed algorithm has the following advantages: with the same starting points, the proposed algorithm can find more distant modes; the proposed algorithm has lower rejection rates; when doing Bayesian inference for uni-modal posterior distributions, the proposed algorithm gives more accurate estimations; and when doing Bayesian inference for multi-modal posterior distributions, the proposed algorithm finds the different modes well, and the estimated means of the mixture distribution provide additional information on the location of the modes. | statistics |
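For reference, the base kernel such a sampler builds on is elliptical slice sampling (Murray, Adams and MacKay, 2010). The sketch below implements one standard ESS transition for a zero-mean Gaussian prior; the regional pseudo-prior mixture and the parallelization described above are not shown.

```python
# One standard elliptical slice sampling (ESS) transition; the base kernel
# only, not the paper's regional pseudo-prior algorithm.
import numpy as np

def ess_step(f, log_lik, chol_cov, rng):
    """ESS update for prior N(0, Sigma), with Sigma = chol_cov @ chol_cov.T."""
    nu = chol_cov @ rng.standard_normal(f.size)   # auxiliary draw on the ellipse
    log_y = log_lik(f) + np.log(rng.random())     # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta           # initial angle bracket
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new                          # ESS never rejects outright
        if theta < 0.0:                           # shrink the bracket toward 0
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy use: N(0, I) prior with a Gaussian likelihood centred at (2, 2);
# the posterior is N((1, 1), 0.5*I).
rng = np.random.default_rng(1)
f = np.zeros(2)
for _ in range(1000):
    f = ess_step(f, lambda x: -0.5 * np.sum((x - 2.0) ** 2), np.eye(2), rng)
print(f)   # a posterior draw, roughly near (1, 1)
```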
In this paper we first derive the mathematical expressions for lower-bound spectral efficiency (SE) calculations under zero-forcing (ZF) and minimum mean square error (MMSE) precoding. Secondly, we calculate the simulated SE with three algorithms for ZF and MMSE precoding. We compare the simulated and theoretical results and find that the theoretical results are 1 to 1.5 bits lower than the simulated values, which confirms that the theoretical expressions are indeed lower bounds. To achieve the maximum spectral efficiency in downlink massive MIMO systems, we assume perfect CSI and ZF or MMSE precoding. We also consider a channel with both small- and large-scale fading (SSF and LSF), making the model close to a practical one. We investigate the effect of different SNRs, numbers of base-station antennas (M) and cell radii (R) on the spectral efficiency, for both simulated and theoretical results. We also evaluate the SE performance of each algorithm and precoding scheme for different configurations. From the results we observe that Algorithm 1 and ZF outperform the other algorithms and MMSE, respectively. Our investigation shows that the LSF parameters are the most dominant factor affecting SE in massive MIMO systems. | electrical engineering and systems science |
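As a toy illustration of the two precoders being compared, the snippet below computes the sum spectral efficiency for ZF and regularized (MMSE-type) precoding in a single cell with perfect CSI and equal power allocation. The setup and all parameters are illustrative assumptions, not the paper's simulation.

```python
# Toy single-cell downlink: ZF vs. MMSE (regularized ZF) precoding sum SE.
# Perfect CSI, i.i.d. Rayleigh channel, total power constraint; illustrative.
import numpy as np

rng = np.random.default_rng(0)
M, K, snr = 64, 8, 10.0 ** (10 / 10)     # BS antennas, users, SNR = 10 dB
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def sum_se(W):
    W = W / np.linalg.norm(W)            # normalize total transmit power
    G = H @ W                            # effective K x K channel
    sig = np.abs(np.diag(G)) ** 2
    interf = np.sum(np.abs(G) ** 2, axis=1) - sig
    return np.sum(np.log2(1 + snr * sig / (1 + snr * interf)))

W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)                  # zero-forcing
W_mmse = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K / snr) * np.eye(K))
print(f"ZF   sum SE: {sum_se(W_zf):.2f} bit/s/Hz")
print(f"MMSE sum SE: {sum_se(W_mmse):.2f} bit/s/Hz")
```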
We report on n-type degenerate doping in MOVPE-grown $\beta$-(Al$_{0.26}$Ga$_{0.74}$)$_2$O$_3$ epitaxial thin films and modulation doping in a $\beta$-(Al$_{0.26}$Ga$_{0.74}$)$_2$O$_3$/$\beta$-Ga$_2$O$_3$ heterostructure. The alloy composition is confirmed using HRXRD measurements. The carrier concentration in the thin films is proportional to the silane molar flow. Room-temperature Hall measurements showed a high carrier concentration of 6x10$^{18}$-7.3x10$^{19}$ cm$^{-3}$ with a corresponding electron mobility of 53-27 cm$^2$/V$\cdot$s in uniformly doped $\beta$-(Al$_{0.26}$Ga$_{0.74}$)$_2$O$_3$ layers. Modulation doping is used to realize a total electron sheet charge of 2.3x10$^{12}$ cm$^{-2}$ in a $\beta$-(Al$_{0.26}$Ga$_{0.74}$)$_2$O$_3$/$\beta$-Ga$_2$O$_3$ heterostructure using a uniformly doped $\beta$-(Al$_{0.26}$Ga$_{0.74}$)$_2$O$_3$ barrier layer and a thin spacer layer. | condensed matter |
The deep inferior epigastric artery perforator (DIEAP) flap is the most common free flap used for breast reconstruction after a mastectomy. It makes use of the skin and fat of the lower abdomen to build a new breast mound either at the same time as the mastectomy or in a second surgery. This operation requires preoperative imaging studies to evaluate the branches - the perforators - that irrigate the tissue that will be used to reconstruct the breast mound. These branches will support tissue viability after the microsurgical ligation of the inferior epigastric vessels to the receptor vessels in the thorax. Usually through a Computed Tomography Angiography (CTA), each perforator, its diameter and its direction are manually identified by the imaging team, who will subsequently draw a map for the identification of the best vascular support for the reconstruction. In the current work we propose a semi-automatic methodology that aims at reducing the time and subjectivity inherent to the manual annotation. In 21 CTAs from patients proposed for breast reconstruction with DIEAP flaps, the subcutaneous region of each perforator was extracted by means of a tracking procedure, whereas the intramuscular portion was detected through a minimum cost approach. Both were subsequently compared with the radiologist's manual annotation. Results showed that the semi-automatic procedure was able to correctly detect the course of the DIEAPs with minimal error (average errors of 0.64 mm and 0.50 mm for the extraction of the subcutaneous and intramuscular paths, respectively). This objective methodology is a promising tool for the automatic detection of perforators in CTA and can contribute to sparing human resources and reducing subjectivity in the aforementioned task. | electrical engineering and systems science |
Artificial intelligence techniques are already popular and important in the legal domain. We extract legal indicators from judicial judgments to decrease the asymmetry of information in the legal system and the access-to-justice gap. We use NLP methods to extract entities and data of interest from judgments and to construct networks of lawyers and judgments. We propose metrics to rank lawyers based on their experience, their win/loss ratio and their importance in the network of lawyers. We also perform community detection in the network of judgments and propose metrics to represent the difficulty of cases, capitalising on community features. | computer science |
The Cox regression model is a commonly used model in survival analysis. In public health studies, clinical data are often collected from medical service providers of different locations. There are large geographical variations in the covariate effects on survival rates from particular diseases. In this paper, we focus on the variable selection issue for the Cox regression model with spatially varying coefficients. We propose a Bayesian hierarchical model which incorporates a horseshoe prior for sparsity and a point mass mixture prior to determine whether a regression coefficient is spatially varying. An efficient two-stage computational method is used for posterior inference and variable selection. It essentially applies the existing method for maximizing the partial likelihood for the Cox model by site independently first, and then applying an MCMC algorithm for variable selection based on results of the first stage. Extensive simulation studies are carried out to examine the empirical performance of the proposed method. Finally, we apply the proposed methodology to analyzing a real data set on respiratory cancer in Louisiana from the SEER program. | statistics |
Numerical simulation of bubble dynamics and cavitation is challenging; even the seemingly simple problem of a collapsing spherical bubble is difficult to compute accurately with a general, three-dimensional, compressible, multicomponent flow solver. Difficulties arise due to both the physical model and the numerical method chosen for its solution. We consider the 5-equation model of Allaire et al. [1], the 5-equation model of Kapila et al. [2], and the 6-equation model of Saurel et al. [3] as candidate approaches for spherical bubble dynamics, and both MUSCL and WENO interface-capturing methods are implemented and compared. We demonstrate the inadequacy of the traditional 5-equation model of Allaire et al. [1] for spherical bubble collapse problems and explain the corresponding advantages of the augmented model of Kapila et al. [2] for representing this phenomenon. Quantitative comparisons between the augmented 5-equation and 6-equation models for three-dimensional bubble collapse problems demonstrate the versatility of pressure-disequilibrium models. Lastly, the performance of the pressure-disequilibrium model in representing a three-dimensional spherical bubble collapse for different bubble interior/exterior pressure ratios is evaluated for different numerical methods. Pathologies associated with each factor and their origins are identified and discussed. | physics |
Einstein Telescope, a future third-generation gravitational wave detector, is expected to have a broadband sensitivity increased by a factor of approximately 10 with respect to the advanced detectors, while also extending the low-frequency sensitivity of ground-based gravitational wave interferometers below 10 Hz. While gravitational wave observations using a network of detectors permit a direct and independent measurement of the sky position, polarization and distance to the source, we analyze here the capabilities of the Einstein Telescope as a standalone instrument. The redshift and the system chirp mass are degenerate in gravitational wave observations with a single detector, so it is usually assumed that the source redshift is obtained from the electromagnetic counterparts. We analyze the current design of the Einstein Telescope, consisting of three overlapping interferometers arranged in an equilateral configuration with arm-opening angles of 60 degrees, perform a joint analysis of coalescing binary black hole detection with the three ET-D interferometers in the triangular configuration, and show that such an analysis can constrain their luminosity distances and chirp masses with an accuracy down to 20\%. | astrophysics
We consider the task of approximating the ground state energy of two-local quantum Hamiltonians on bounded-degree graphs. Most existing algorithms optimize the energy over the set of product states. Here we describe a family of shallow quantum circuits that can be used to improve the approximation ratio achieved by a given product state. The algorithm takes as input an $n$-qubit product state $|v\rangle$ with mean energy $e_0=\langle v|H|v\rangle$ and variance $\mathrm{Var}=\langle v|(H-e_0)^2|v\rangle$, and outputs a state with an energy that is lower than $e_0$ by an amount proportional to $\mathrm{Var}^2/n$. In a typical case, we have $\mathrm{Var}=\Omega(n)$ and the energy improvement is proportional to the number of edges in the graph. When applied to an initial random product state, we recover and generalize the performance guarantees of known algorithms for bounded-occurrence classical constraint satisfaction problems. We extend our results to $k$-local Hamiltonians and entangled initial states. | quantum physics |
In this paper, we introduce notions of $\alpha$-planes in the $5$D complex Heisenberg group and the twistor space as the moduli space of all $\alpha$-planes. This allows us to define an anti-self-dual (ASD) connection as a connection flat over all $\alpha$-planes. This geometric approach enables us to establish a Penrose-Ward correspondence between ASD connections over the $5$D complex Heisenberg group and a class of holomorphic vector bundles on the twistor space. By the Atiyah-Ward ansatz, we also construct a family of ASD connections on the $5$D complex Heisenberg group. When restricted to the $5$D real Heisenberg group, the flat model of $5$D contact manifolds, an ASD connection satisfies the horizontal part of the contact instanton equation introduced by physicists. | mathematics
One ABR algorithm implemented on Puffer is BOLA-BASIC, the simplest variant of BOLA. BOLA finds wide use in industry, notably in the MPEG-DASH reference player used as the basis for video players at Akamai, BBC, Orange, and CBS. The overall goal of BOLA is to maximize each encoded chunk's video quality while minimizing rebuffering. To measure video quality, Puffer uses the structural similarity metric SSIM, whereas BOLA and other ABR algorithms like BBA, MPC, and Pensieve are more commonly implemented using bitrate (or a variant of bitrate). While bitrate is frequently used, BOLA allows the video provider to define its own proxy of video quality as the algorithm's "utility" function. However, using SSIM as utility proved surprisingly complex for BOLA-BASIC, despite the algorithm's simplicity. Given the rising popularity of SSIM and related quality metrics, we anticipate that a growing number of Puffer-like systems will face similar challenges. We hope developers of such systems find our experiences informative as they implement algorithms designed with bitrate-based utility in mind. | computer science |
Activity and self-generated motion are fundamental features observed in many living and non-living systems. Given that inter-particle adhesive forces are known to regulate particle dynamics, we investigate how adhesion strength controls boundary growth and roughness in an active particle aggregate. Using particle-based simulations incorporating both activity (birth, death and growth) and systematic physical interactions (elasticity and adhesion), we establish that inter-particle adhesion strength ($f^{ad}$) controls the surface roughness of a densely packed three-dimensional (3D) active particle aggregate expanding into a highly viscous medium. We discover that the surface roughness of a 3D active particle aggregate increases in proportion to the inter-particle adhesion strength, $f^{ad}$. We show that asymmetry between the radial and tangential active particle mean squared displacements (MSD) suppresses 3D surface roughness at lower adhesion strengths. By analyzing the statistical properties of particle displacements at the aggregate periphery, we determine that the 3D surface roughness is driven by the movement of active particles towards the core at high inter-particle adhesion strengths. Our results elucidate the physics controlling the expansion of adhesive 3D active particle collectives into a highly viscous medium, with implications for understanding stochastic interface growth in active matter systems characterized by self-generated particle flux. | condensed matter
In cavity optomechanics, nonlinear interactions between an optical field and a mechanical resonator mode enable a variety of unique effects in classical and quantum measurement and information processing. Here, we describe nonlinear optomechanical coupling in the membrane-in-the-middle (MIM) setup in a way that allows direct comparison to the intrinsic optomechanical nonlinearity in a standard, single-cavity optomechanical system. We find that the enhancement of nonlinear optomechanical coupling in the MIM system as predicted by Ludwig et al. arXiv:1202.0532 is limited to the degree of sideband resolution of the system. Moreover, we show that the selectivity of the MIM system of nonlinear over linear transduction has the same limit as in a single cavity system. These findings put constraints on the experiments in which it is advantageous to use a MIM system. We discuss dynamical backaction effects in this system and find that these effects per cavity photon are exactly as strong as in a single cavity system, while allowing for reduction of the required input power. We propose using the nonlinear enhancement and reduced input power in realistic MIM systems towards parametric squeezing and heralding of phonon pairs, and evaluate the limits to the magnitude of both effects. | physics |
Prognostic models in survival analysis are aimed at understanding the relationship between patients' covariates and the distribution of survival time. Traditionally, semi-parametric models, such as the Cox model, have been assumed. These often rely on strong proportionality assumptions on the hazard that might be violated in practice. Moreover, they often do not include covariate information updated over time. We propose a new flexible method for survival prediction: DeepHazard, a neural network for time-varying risks. Our approach is tailored for a wide range of continuous hazard forms, with the only restriction of being additive in time. A flexible implementation, allowing different optimization methods along with any norm penalty, is developed. Numerical examples illustrate that our approach outperforms existing state-of-the-art methodology in terms of predictive capability, evaluated through the C-index metric. The same is revealed on popular real datasets such as METABRIC, GBSG, and ACTG. | statistics
In this paper, we present a sequential sampling-based algorithm for the two-stage distributionally robust linear programming (2-DRLP) models. The 2-DRLP models are defined over a general class of ambiguity sets with discrete or continuous probability distributions. The algorithm is a distributionally robust version of the well-known stochastic decomposition algorithm of Higle and Sen (Math. of OR 16(3), 650-669, 1991) for a two-stage stochastic linear program. We refer to the algorithm as the distributionally robust stochastic decomposition (DRSD) method. The key features of the algorithm are (1) that it works with data-driven approximations of ambiguity sets that are constructed using samples of increasing size, and (2) that it efficiently constructs approximations of the worst-case expectation function by solving only two second-stage subproblems in every iteration. We identify conditions under which the ambiguity set approximations converge to the true ambiguity sets and show that the DRSD method asymptotically identifies an optimal solution, with probability one. We also computationally evaluate the performance of the DRSD method for solving distributionally robust versions of instances considered in the stochastic programming literature. The numerical results corroborate the analytical behavior of the DRSD method and illustrate its computational advantage over an external sampling-based decomposition approach (the distributionally robust L-shaped method). | mathematics
Stochastic Neighbor Embedding (SNE) is a manifold learning and dimensionality reduction method with a probabilistic approach. In SNE, every point is considered to be a neighbor of all other points with some probability, and the method attempts to preserve these probabilities in the embedding space. SNE assumes a Gaussian distribution for the probabilities in both the input and embedding spaces, whereas t-SNE uses the Student-t and Gaussian distributions in these spaces, respectively. In this tutorial and survey paper, we explain SNE, symmetric SNE, t-SNE (or Cauchy-SNE), and t-SNE with general degrees of freedom. We also cover the out-of-sample extension and acceleration for these methods. Some simulations to visualize the embeddings are also provided. | statistics
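A minimal usage sketch of t-SNE, one of the methods surveyed above, using scikit-learn; the dataset and hyperparameters are arbitrary examples, not taken from the paper.

```python
# Hedged sketch: embed the digits dataset with t-SNE (Student-t in the
# embedding space, Gaussian in the input space).
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
embedding = TSNE(n_components=2, perplexity=30.0,
                 random_state=0).fit_transform(X)
print(embedding.shape)  # (1797, 2)
```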
Recent work of Qi et al. arXiv:2004.11240v7 proposes a set of axioms for tensor rank functions. The current paper presents examples showing that their axioms allow rank functions to have some undesirable properties, and a stronger set of axioms is suggested that eliminates these properties. Two questions raised by Qi et al. involving the submax rank function are also answered. | mathematics |
By using the Debye screened potential a generalized version of Newton's Shell Theorem is developed and analytical equations are derived to calculate i) the potential of a charged sphere surrounded by electrolyte, ii) the potential of two concentric charged spheres surrounded by electrolyte, and iii) the membrane potential of a charged lipid vesicle surrounded by electrolyte with high ion concentration. By numerical integration the potential of a lipid vesicle is calculated at any electrolyte concentration. | condensed matter |
We use recently derived Ward identities and lattice data for the light- and strange-quark condensates to reconstruct the scalar and pseudoscalar susceptibilities ($\chi_S^\kappa$, $\chi_P^K$) in the isospin 1/2 channel. We show that $\chi_S^\kappa$ develops a maximum above the QCD chiral transition, after which it degenerates with $\chi_P^K$. This result provides an alternative signal for $O(4)\times U_A(1)$ restoration that can be explored in lattice simulations and highlights the role of strangeness, which, regulated by the strange-quark condensate, helps to reconcile the current tension among lattice results regarding $U_A(1)$ restoration. We show that the peak structure of $\chi_S^\kappa$ can be described when it is saturated with the $K_0^*(700)$ (or $\kappa$) meson, the dominant lowest-energy state in the isospin 1/2 scalar channel, which in turn can be computed in Unitarized Chiral Perturbation Theory at finite temperature. This reveals the importance of thermal interactions and makes it possible to examine the dependence of $\chi_S^\kappa$ on the light- and strange-quark masses. A consistent picture emerges, controlled by the $m_l/m_s$ ratio, that allows studying the behavior of the $O(4)\times U_A(1)$ transition in the chiral and SU(3) limits. | high energy physics phenomenology
We consider the MSSM with right-chiral neutrino superfields with Majorana masses, where the lightest right-sneutrino-dominated scalar constitutes non-thermal dark matter (DM). The $\Delta\,L=2$ masses are subject to severe constraints coming from the freeze-in relic density of such DM candidates as well as from sterile neutrino $\textit{freeze-in}$. In addition, big-bang nucleosynthesis and the $\textit{freeze-out}$ of the next-to-lightest superparticle shrink the viable parameter space of such a scenario. We examine various $\Delta\,L=2$ mass terms for the different families and find that $\Delta\,L=2$ masses are difficult to reconcile with a right-sneutrino DM, unless there is either (a) a hierarchy of about 3 orders of magnitude among the various supersymmetry-breaking mass parameters, or (b) strong cancellation between the higgsino mass and the trilinear supersymmetry-breaking mass parameter for sneutrinos. | high energy physics phenomenology
The Transiting Exoplanet Survey Satellite (\textit{TESS}) mission was designed to perform an all-sky search for planets around bright and nearby stars. Here we report the discovery of two sub-Neptunes orbiting TOI 1062 (TIC 299799658), a V=10.25 G9V star observed in TESS Sectors 1, 13, 27 & 28. We use precise radial velocity observations from HARPS to confirm and characterize these two planets. TOI 1062b has a radius of 2.265^{+0.095}_{-0.091} Re, a mass of 11.8 +/- 1.4 Me, and an orbital period of 4.115050 +/- 0.000007 days. The second planet is not transiting, has a minimum mass of 7.4 +/- 1.6 Me, and is near the 2:1 mean motion resonance with the innermost planet, with an orbital period of 8.13^{+0.02}_{-0.01} days. We performed a dynamical analysis to explore the proximity of the system to this resonance and to attempt to further constrain the orbital parameters. The transiting planet has a mean density of 5.58^{+1.00}_{-0.89} g cm^-3, and an analysis of its internal structure reveals that it is expected to have a small volatile envelope accounting for 0.35% of the mass at maximum. The star's brightness and the proximity of the inner planet to the "radius gap" make it an interesting candidate for transmission spectroscopy, which could further constrain the composition and internal structure of TOI 1062b. | astrophysics
The ability to image materials at the microscale from long-wavelength wave data is a major challenge in the geophysical, engineering and medical fields. Here, we present a framework to constrain microstructure geometry and properties from long-scale waves. To realistically quantify microstructures we use two-point statistics, from which we derive scale-dependent effective wave properties - wavespeed and attenuation - using strong-contrast expansions (SCE) for (visco)elastic wavefields. By evaluating various two-point correlation functions we observe that both the effective wavespeed and attenuation of long-scale waves predominantly depend on volume fraction and phase properties, and that especially attenuation at small scales is highly sensitive to the geometry of microstructure heterogeneity (e.g. geometric hyperuniformity) due to incoherent interference of sub-wavelength multiple scattering. Our goal is to infer microstructure properties from observed effective wave parameters. To this end, we use the supervised machine learning method of Random Forests (RF) to construct a Bayesian inference approach. We can accurately resolve two-point correlation functions sampled from various microstructural configurations, including a bead pack, Berea sandstone and Ketton limestone samples. Importantly, we show that inversion of small-scale-induced effective elastic waves yields the best results, particularly compared to single-wave-mode (e.g., acoustic only) information. Additionally, we show that the retrieval of microscale medium contrasts is more difficult - as it is highly ill-posed - and can only be achieved with specific a priori knowledge. Our results are promising for many applications, such as earthquake hazard monitoring, non-destructive testing, imaging fluid flow in porous media, quantifying tissue properties in medical ultrasound, or designing materials with tailor-made wave properties. | physics
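To illustrate the inversion idea above, here is a hedged sketch that learns a mapping from effective wave properties back to microstructure descriptors with a Random Forest. The forward model below is a made-up toy, not the paper's SCE physics, and the descriptors (volume fraction, correlation length) are assumed for illustration.

```python
# Hedged sketch: Random-Forest inversion of toy effective wave properties.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
volume_fraction = rng.uniform(0.05, 0.45, size=n)
correlation_length = rng.uniform(0.1, 2.0, size=n)

# Toy "effective properties": wavespeed and attenuation with noise.
wavespeed = 3.0 - 1.5 * volume_fraction + 0.02 * rng.normal(size=n)
attenuation = (0.5 * volume_fraction * correlation_length
               + 0.01 * rng.normal(size=n))

X = np.column_stack([wavespeed, attenuation])       # observed properties
Y = np.column_stack([volume_fraction, correlation_length])  # targets
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
print(f"R^2 on held-out microstructure parameters: {rf.score(X_te, Y_te):.2f}")
```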
A key step in medical diagnosis is giving the patient a universally recognized label (e.g. Appendicitis) which essentially assigns the patient to a class (or classes) of patients with similar body failures. However, two patients having the same disease label(s) with high probability may still have differences in their feature manifestation patterns, implying differences in the required treatments. Additionally, in many cases, the labels of the primary diagnoses leave some findings unexplained. Medical diagnosis is only partially about probability calculations for label X or Y. Diagnosis is not complete until the patient's overall situation is clinically understood to the level that enables the best therapeutic decisions. Most machine learning models are data-centric models, and evidence so far suggests they can reach expert-level performance in the disease labeling phase. Nonetheless, like any other mathematical technique, they have their limitations and applicability scope. Primarily, data-centric algorithms are knowledge-blind and lack the anatomy and physiology knowledge that physicians leverage to achieve complete diagnosis. This article advocates complementing them with intelligence to overcome their inherent limitations as knowledge-blind algorithms. Machines can learn many things from data, but data is not the only source that machines can learn from. Historic patient data only tells us what the possible manifestations of a certain body failure are. Anatomy and physiology knowledge tell us how the body works and fails. Both are needed for complete diagnosis. The proposed Double Deep Learning approach, along with the initiative for a Medical Wikipedia for Smart Machines, leads to AI diagnostic support solutions for complete diagnosis, beyond the limited data-only labeling solutions we see today. AI for medicine will forever be limited until it also integrates anatomy and physiology. | computer science
Topological photonics provides an ideal platform for demonstrating novel band topology concepts, which are also promising for robust waveguiding, communication and computation applications. However, many challenges, such as the extremely large device footprint and achieving functionality at short wavelengths, remain to be solved; these are required to make practical and useful devices that can also couple to electronic excitations in many important organic and inorganic semiconductors. In this letter, we report an experimental realization of Z_2 photonic topological insulators with topological edge state energies spanning the visible wavelength range, including the sub-500 nm regime. The photonic structures are based on deformed hexagonal lattices with preserved six-fold rotational symmetry patterned on suspended SiNx membranes. The experimentally measured energy-momentum dispersion of the topological lattices directly shows the topological band inversion through the swapping of the brightness of the bulk energy bands, and also the helical edge states when the measurement is taken near the topological interface. The robust topological transport of the helical edge modes in real space is demonstrated by successfully guiding circularly polarized light beams unidirectionally through sharp kinks without major signal loss. This work paves the way for small-footprint photonic topological devices working in the short wavelength range that can also be utilized to couple to excitons for unconventional light-matter interactions at the nanoscale. | physics
We predict a set of unusual quantum acoustic phenomena resulting from sound-matter interactions in a fully tunable solid-state platform, in which an array of solid-state spins in diamond is coupled to quantized acoustic waves in a one-dimensional (1D) optomechanical crystal. We find that, by a spatially varying laser drive that introduces a position-dependent phase in the optomechanical interaction, the mechanical band structure can be tuned in situ, consequently leading to unconventional quantum sound-matter interactions. We show that quasi-chiral sound-matter interactions can occur, with tunable ranges from bidirectional to quasi-unidirectional, when the spins are resonant with the bands. When the solid-state spins' frequency lies within the acoustic band gap, we demonstrate the emergence of an exotic polariton bound state, which can mediate long-range, tunable, odd-neighbor and complex spin-spin interactions. This work expands the present exploration of quantum phononics and can have wide applications in quantum simulation and quantum information processing. | quantum physics
We describe the implementation of kinetic solvers in 1d2v phase space using adaptive Cartesian mesh. Spherical coordinates in velocity space are used to simplify the Lorentz and Fokker-Planck collisional operators. The key capabilities of the new solvers are illustrated for electron elastic scattering, acceleration, continuous energy loss in collisions, and ionization processes in weakly ionized plasma. We have also implemented a two-stream approach to reduce the computational cost of studies of gas breakdown dynamics in the presence of runaway electrons. The benefits and limitations of the non-split-phase-space method for kinetic solvers with adaptive mesh in phase space are discussed. | physics
Future vehicles will have rich computing resources to support autonomous driving and be connected by wireless technologies. Vehicular fog networks (VeFN) have thus emerged to enable computing resource sharing via computation task offloading, providing wide range of fog applications. However, the high mobility of vehicles makes it hard to guarantee the delay that accounts for both communication and computation throughout the whole task offloading procedure. In this article, we first review the state-of-the-art of task offloading in VeFN, and argue that mobility is not only an obstacle for timely computing in VeFN, but can also benefit the delay performance. We then identify machine learning and coded computing as key enabling technologies to address and exploit mobility in VeFN. Case studies are provided to illustrate how to adapt learning algorithms to fit for the dynamic environment in VeFN, and how to exploit the mobility with opportunistic computation offloading and task replication. | computer science |
We employ numerical simulations to study active transistor-like switches made from two-dimensional (2D) granular crystals containing two types of grains with the same size, but different masses. We tune the mass contrast and arrangement of the grains to maximize the width of the frequency band gap in the device. The input signal is applied to a single grain on one side of the device, and the output signal is measured from another grain on the other side of the device. Changing the size of one or many grains tunes the pressure, which controls the vibrational response of the device. Switching between the on and off states is achieved using two mechanisms: 1) pressure-induced switching where the interparticle contact network is the same in the on and off states, and 2) switching through contact breaking. In general, the performance of the acoustic switch, as captured by the gain ratio and switching time between the on and off states, is better for pressure-induced switching. We show that in these acoustic switches the gain ratio between the on and off states can be larger than $10^4$ and the switching time (multiplied by the driving frequency) is comparable to that obtained recently for sonic crystals and less than that for photonic transistor-like switches. Since the self-assembly of grains with different masses into 2D granular crystals is challenging, we describe simulations of circular grains with small circular knobs placed symmetrically around the perimeter mixed with circular grains without knobs. Using umbrella sampling techniques, we show that devices with grains with $3$ knobs most efficiently form the hexagonal crystals that yield the largest band gap. | condensed matter |
An extension of the two Higgs doublet model including inverse seesaw neutrinos and neutral Higgs bosons was constructed based on the $A_4$ symmetry in order to explain the recent neutrino oscillation data. This model can distinguish the two well-known normal and inverted ordering schemes of the neutrino data once both the effective masses $m_{\beta}$ in tritium beta decays and $\langle m\rangle$ in the neutrinoless double beta decay are observed. The lepton flavor violating decays of the charged leptons $e_b\rightarrow e_a\gamma$ and $\mu\rightarrow3e$, the Standard-Model-like Higgs boson decays $h\rightarrow e_be_a$, and the $\mu$-$e$ conversions in some nuclei arise from loop corrections. The experimental data on the branching ratios Br$(\mu\rightarrow e\gamma, 3e)$ predict that the upper bounds of Br$(\tau \rightarrow \mu\gamma,e\gamma)$ and Br$(h\rightarrow e_{a}e_b)$ are much smaller than the planned experimental sensitivities. In contrast, the $\mu$-$e$ conversions are promising signals for experiments. | high energy physics phenomenology
We give explicit expressions for the Fourier coefficients of Eisenstein series twisted by Dirichlet characters and modular symbols on $\Gamma_0(N)$ in the case where $N$ is prime and equal to the conductor of the Dirichlet character. We obtain these expressions by computing the spectral decomposition of automorphic functions closely related to these Eisenstein series. As an application, we then evaluate certain sums of modular symbols in a way which parallels past work of Goldfeld, O'Sullivan, Petridis, and Risager. In one case we find less cancellation in this sum than would be predicted by the common phenomenon of ``square root cancellation'', while in another case we find more cancellation. | mathematics |
In the existing literature on the risk evaluation of electricity-gas integrated energy systems (EGIES), the impacts of gas leakage in pipelines are ignored. This paper presents a method to incorporate the failure modes of gas pipeline leakage in EGIES risk evaluation. A Markov state transition model of a gas pipeline with a multi-state and multi-mode transition process, and a bi-level Monte Carlo sampling method for this model, are developed. A network model of EGIES based on stochastic topology changes, which considers the pipeline leakage failure modes, is also presented. Risk indices for EGIES based on load shedding, including those specifically for gas leakage risks, are proposed as well. An EGIES with a modified RBTS and a 7-node gas system was used to demonstrate an application of the proposed method and models. The results indicate that pipeline leakage failures have significant impacts on the risk of EGIES. Ignoring pipeline leakage failures in the risk evaluation of EGIES will result in a considerable underestimation of system risk and most likely misleading conclusions in system planning. | electrical engineering and systems science
In this paper we explore the possibility of observable gravitational waves as a manifestation of the QCD axion dynamics. In particular, we focus on dynamical axion models which solve the strong CP problem, and include the confinement of a QCD-like gauge group at the TeV scale. We study the resulting chiral symmetry breaking phase transition for models with $N_F=3$ and $N_F=4$ light flavors using the linear sigma model. This model describes the scalar meson spectrum and its interactions, with the diagonal field $\varphi$ as the order parameter. We find that the amplitude of the gravitational wave spectrum depends on the mass of the dynamical axion $\eta'$ via the ratio $m_{\eta'}/m_\varphi$. The resulting spectra may be observed at future mid-range gravitational wave experiments such as AION/MAGIS, DECIGO, and BBO. Moreover, the TeV states can be searched for at colliders and their quantum numbers characterized, providing a unique connection between axion physics, gravitational waves and collider searches. | high energy physics phenomenology |
New constructions in group homology allow us to manufacture high-dimensional manifolds with controlled simplicial volume. We prove that for every dimension bigger than 3 the set of simplicial volumes of orientable closed connected manifolds is dense in $\mathbb{R}_{\geq 0}$. In dimension 4 we prove that every non-negative rational number is the simplicial volume of some orientable closed connected 4-manifold. Our group theoretic results relate stable commutator length to the $l^1$-semi-norm of certain singular homology classes in degree 2. The output of these results is translated into manifold constructions using cross-products and Thom realisation. | mathematics |
The synthesis of a metasurface exhibiting a specific set of desired scattering properties is a time-consuming and resource-demanding process, which conventionally relies on many cycles of full-wave simulations. It requires an experienced designer to choose the number of the metallic layers, the scatterer shapes and dimensions, and the type and the thickness of the separating substrates. Here, we propose a generative machine learning (ML)-based approach to solve this one-to-many mapping and automate the inverse design of dual- and triple-layer metasurfaces. Using this approach, it is possible to solve multiobjective optimization problems by synthesizing thin structures composed of potentially brand-new scatterer designs, in cases where the inter-layer coupling between the layers is non-negligible and synthesis by traditional methods becomes cumbersome. Various examples to provide specific magnitude and phase responses of $x$- and $y$-polarized scattering coefficients across a frequency range as well as mask-based responses for different metasurface applications are presented to verify the practicality of the proposed method. | electrical engineering and systems science |
Recently, the BESIII Collaboration reported a new measurement of the $\eta_c \rho$ decay mode of $Z_c^{(\prime)}$, which motivated us to study the inner structure of $Z_c^{(\prime)}$ by investigating the hidden charm decays of these two $Z_c$ states. We consider the triangle loop mechanism contribution in the hidden charm decays of $Z_c^{(\prime)}$. Our estimations indicate that the triangle loop mechanism plays an important role in the decays of the $Z_c^{(\prime)}$, and our results are in agreement with the experimental observations in a reasonable parameter range. Furthermore, we point out that the $Z_c^{(\prime)}$ can be interpreted as hadronic molecules, while the tetraquark scenario is less favored. | high energy physics phenomenology
Many algorithms for score-based Bayesian network structure learning (BNSL), in particular exact ones, take as input a collection of potentially optimal parent sets for each variable in the data. Constructing such collections naively is computationally intensive since the number of parent sets grows exponentially with the number of variables. Thus, pruning techniques are not only desirable but essential. While good pruning rules exist for the Bayesian Information Criterion (BIC), current results for the Bayesian Dirichlet equivalent uniform (BDeu) score reduce the search space very modestly, hampering the use of the (often preferred) BDeu. We derive new non-trivial theoretical upper bounds for the BDeu score that considerably improve on the state-of-the-art. Since the new bounds are mathematically proven to be tighter than previous ones and at little extra computational cost, they are a promising addition to BNSL methods. | statistics |
Here, we focus on the data analysis of the growth of the epidemic spread of Covid-19 in countries where different containment policies were activated. It is known that the growth of the pandemic spread at its threshold is exponential, but it is not known how to quantify the success of different containment policies. We identify that a successful approach gives an arrested-phase regime following Ostwald growth, where, over the course of time, one phase transforms into another metastable phase with a similar free energy, as observed in oxygen interstitial diffusion in quantum complex matter and in the crystallization of proteins. We introduce the s factor, which provides a quantitative measure of the efficiency and speed of the adopted containment policy and is very helpful not only for monitoring the Covid-19 pandemic spread but also for other countries choosing the best containment policy. The results show that a policy based on joint confinement, targeted tests, and tracking of positive cases is the most rapid pandemic containment policy; in fact, we found values of 9, 5, and 31 for the success s factor for China, South Korea, and Italy, respectively, where the lowest s factor indicates the best containment policy. | physics
Computer-aided surgical systems commonly use preoperative CT scans for intraoperative navigation when performing pelvic osteotomies. These systems have the potential to improve the safety and accuracy of pelvic osteotomies; however, exposing the patient to radiation is a significant drawback. In order to reduce radiation exposure, we propose a new smooth extrapolation method leveraging a partial pelvis CT and a statistical shape model (SSM) of the full pelvis in order to estimate a patient's complete pelvis. An SSM of normal, complete, female pelvis anatomy was created and evaluated from 42 subjects. A leave-one-out test was performed to characterise the inherent generalisation capability of the SSM. An additional leave-one-out test was conducted to measure the performance of the smooth extrapolation method and an existing "cut-and-paste" extrapolation method. Unknown anatomy was simulated by keeping the axial slices of the patient's acetabulum intact and varying the amount of the superior iliac crest retained, from 0% to 15% of the total pelvis extent. The smooth technique showed an average improvement over the cut-and-paste method of 1.31 mm and 3.61 mm in RMS and maximum surface error, respectively. With 5% of the iliac crest retained, the smoothly estimated surface had an RMS surface error of 2.21 mm, an improvement of 1.25 mm over retaining none of the iliac crest. This anatomical estimation method creates the possibility of a patient and surgeon benefiting from the use of a CAS system while simultaneously reducing the patient's radiation exposure. | computer science
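The core of the SSM approach above is completing a partial shape from a statistical model. Here is a minimal sketch of the standard PCA-based variant: fit model coefficients to the observed (partial) landmarks, then reconstruct the full shape. The synthetic data, landmark counts, and number of modes are illustrative assumptions, not the paper's model or smooth extrapolation method.

```python
# Hedged sketch: PCA-based shape completion from partial observations.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_points = 42, 300           # landmarks per (toy) pelvis
mean_true = rng.normal(size=n_points)
modes = np.linalg.qr(rng.normal(size=(n_points, 5)))[0]  # 5 orthonormal modes
shapes = mean_true + rng.normal(size=(n_subjects, 5)) @ modes.T

# Build the SSM: mean shape plus principal components.
mu = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
P = Vt[:5].T                              # (n_points, 5) retained modes

# Partial observation: only the first 60% of landmarks are "imaged".
observed = np.arange(int(0.6 * n_points))
new_shape = mu + P @ np.array([1.0, -0.5, 0.3, 0.0, 0.2])
b, *_ = np.linalg.lstsq(P[observed], new_shape[observed] - mu[observed],
                        rcond=None)
completed = mu + P @ b                    # estimate of the full pelvis
print(f"max reconstruction error: {np.abs(completed - new_shape).max():.2e}")
```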
This paper presents new methodology for computationally efficient kernel density estimation. It is shown that a large class of kernels allows for exact evaluation of the density estimates using simple recursions. The same methodology can be used to compute density derivative estimates exactly. Given an ordered sample, the computational complexity is linear in the sample size. Combining the proposed methodology with existing approximation methods results in extremely fast density estimation. Extensive experimentation documents the effectiveness and efficiency of this approach compared with the existing state-of-the-art. | statistics
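The linear-time recursion idea can be made concrete for one member of the kernel class: the Laplace kernel K(u) = exp(-|u|/h)/(2h). The paper's exact kernel class and recursions may differ; this is only a worked instance of the same principle.

```python
# Hedged sketch: exact O(n) KDE at the sample points for the Laplace kernel,
# using forward/backward cumulative recursions over the sorted sample.
import numpy as np

def laplace_kde_at_sample_points(x, h):
    """Exact KDE values at the (sorted) sample points in O(n)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    gaps = np.exp(-np.diff(x) / h)       # e^{-(x_{i+1}-x_i)/h}
    left = np.zeros(n)                   # sum_{j<i} e^{-(x_i-x_j)/h}
    right = np.zeros(n)                  # sum_{j>i} e^{-(x_j-x_i)/h}
    for i in range(1, n):
        left[i] = gaps[i - 1] * (left[i - 1] + 1.0)
    for i in range(n - 2, -1, -1):
        right[i] = gaps[i] * (right[i + 1] + 1.0)
    return (left + right + 1.0) / (2.0 * h * n)

x = np.random.default_rng(3).normal(size=10_000)
f = laplace_kde_at_sample_points(x, h=0.2)

# Brute-force check on a small subsample to confirm exactness.
xs = np.sort(x)
brute = (np.exp(-np.abs(xs[:100, None] - xs[None, :]) / 0.2).sum(axis=1)
         / (2 * 0.2 * x.size))
assert np.allclose(f[:100], brute)
```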
The sphere partition function of Calabi-Yau gauged linear sigma models (GLSMs) has been shown to compute the exact Kaehler potential of the Kaehler moduli space of a Calabi-Yau. We propose a universal expression for the sphere partition function evaluated in hybrid phases of Calabi-Yau GLSMs that are fibrations of Landau-Ginzburg orbifolds over some base manifold. Special cases include Calabi-Yau complete intersections in toric ambient spaces and Landau-Ginzburg orbifolds. The key ingredients that enter the expression are Givental's I/J-functions, the Gamma class and further data associated to the hybrid model. We test the proposal for one- and two-parameter abelian GLSMs, making connections, where possible, to known results from mirror symmetry and FJRW theory. | high energy physics theory |
In solid state physics, the Gr\"{u}neisen parameter (GP), originally introduced in the study of the effect of changing the volume of a crystal lattice on its vibrational frequency, has been widely used to investigate the characteristic energy scales of systems with respect to changes of external potentials. On the other hand, the GP has been little investigated in strongly interacting quantum gas systems. Here we report general results on the origin of the GP, a new identity, and caloric effects in quantum gases of ultracold atoms. We prove that the symmetry of dilute quantum gas systems leads to a simple identity among three different types of GPs, quantifying caloric effects induced respectively by variations of volume, magnetic field and interaction. Using exact Bethe ansatz solutions, we present a rigorous study of these different GPs and of quantum refrigeration in one-dimensional Bose and Fermi gases. Based on the exact equations of state of these systems, we obtain analytic results for the singular behaviour of the GPs and the caloric effects at quantum criticality. We also predict the existence of a lowest temperature for cooling near a quantum phase transition. It turns out that interaction ramp-up and -down in quantum gases provides a promising protocol of quantum refrigeration in addition to the usual adiabatic demagnetization cooling in solid-state materials. | condensed matter
I study a one-matrix model of a real symmetric matrix with a potential which is a sum of two logarithmic functions and a harmonic one. This two-logarithm matrix model is the absolute square norm of a toy wave function which is obtained by replacing the tensor argument of the wave function of the canonical tensor model (CTM) with a matrix. I discuss a symmetry enhancement phenomenon in this matrix model and show that symmetries and dimensions of emergent spaces are stable only in a phase which exists exclusively for the positive cosmological constant case in the sense of CTM. This would imply the importance of the positivity of the cosmological constant in the emergence phenomena in CTM. | high energy physics theory |
We evaluate two different methods for the integration of prediction uncertainty into diagnostic image classifiers to increase patient safety in deep learning. In the first method, Monte Carlo sampling is applied with dropout at test time to get a posterior distribution of the class labels (Bayesian ResNet). The second method extends ResNet to a probabilistic approach by predicting the parameters of the posterior distribution and sampling the final result from it (Variational ResNet). The variance of the posterior is used as a metric for uncertainty. Both methods are trained on a data set of optical coherence tomography scans showing four different retinal conditions. Our results show that cases in which the classifier predicts incorrectly correlate with a higher uncertainty. The mean uncertainty of incorrectly diagnosed cases was between 4.6 and 8.1 times higher than the mean uncertainty of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is anticipated to increase patient safety. | electrical engineering and systems science
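A minimal sketch of the first method (Monte Carlo dropout at test time), using a toy classifier in place of a ResNet on OCT scans; the architecture, sample count, and uncertainty score are illustrative assumptions.

```python
# Hedged sketch: MC dropout - keep dropout active at inference and average
# class probabilities over repeated stochastic forward passes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(),
    nn.Dropout(p=0.5),                    # kept active at test time
    nn.Linear(128, 4),                    # four retinal conditions
)

def mc_predict(model, x, n_samples=50):
    """Mean class probabilities and per-class variance over dropout samples."""
    model.train()                         # enables dropout during inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)

x = torch.randn(8, 1, 32, 32)             # a dummy batch of scans
mean, var = mc_predict(model, x)
uncertainty = var.max(dim=-1).values      # crude per-case uncertainty score
print(mean.shape, uncertainty)
```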
Constructing an ontology for quantum theory is challenging, in part due to unavoidable measurement back-action. The Aharonov-Albert-Vaidman weak measurement formalism provides a method to predict measurement results (weak values) in a regime where back-action is negligible. The weak value appears analogous to a classical conditional mean and in fact, has a number of features that further suggest it may be interpreted as being related to some underlying ontological model. However, the ontology appears bizarre since the weak values are complex and unbounded. Here, we study weak values in the context of a recent quantum optical experiment involving two-photon interactions. The results of the experiment are reinterpreted within a 'stochastic optics' model of light. The model is based on standard (Maxwell) electromagnetic theory, supplemented by stochastic fluctuations of the electromagnetic fields. We show that the conditional means of the electric field intensities correspond to the experimentally observed weak values. This is a provocative result, as it suggests that at least within this experiment, the weak value gives us information about the average of an ontological quantity (the intensity). We study the breakdown of the stochastic optics model, which occurs outside the experimentally probed regime, and in particular in the limit where the weak value predicts 'anomalous' results. | quantum physics |
Gate-induced modulation of the spin-orbit interaction (SOI) in a 1.5 nm-thick Pd thin film grown on a ferrimagnetic insulator was investigated. Efficient charge accumulation by ionic gating enables a substantial upshift of the Fermi level of the Pd film, which was corroborated by the suppression of the resistivity in the Pd. Electromotive forces arising from the inverse spin Hall effect in Pd under spin pumping were substantially modulated by the gating, as a consequence of the modulation of the spin Hall conductivity of Pd, as in an ultrathin Pt film. The same experiment using a thin Cu film, whose band structure is largely different from those of Pd and Pt and whose SOI is quite small, provides further results supporting our claim. The results obtained help in developing a holistic understanding of the gate-tunable SOI in solids and confirm a previous explanation of the significant modulation of the spin Hall conductivity in an ultrathin Pt film by gating. | condensed matter
In this paper, we consider the network slicing problem which attempts to map multiple customized virtual network requests (also called services) to a common shared network infrastructure and allocate network resources to meet diverse service requirements, and propose an efficient two-stage algorithm for solving this NP-hard problem. In the first stage, the proposed algorithm uses an iterative linear programming (LP) rounding procedure to place the virtual network functions of all services into cloud nodes while taking traffic routing of all services into consideration; in the second stage, the proposed algorithm uses an iterative LP refinement procedure to obtain a solution for traffic routing of all services with their end-to-end delay constraints being satisfied. Compared with the existing algorithms which either have an exponential complexity or return a low-quality solution, our proposed algorithm achieves a better trade-off between solution quality and computational complexity. In particular, the worst-case complexity of our proposed algorithm is polynomial, which makes it suitable for solving large-scale problems. Numerical results demonstrate the effectiveness and efficiency of our proposed algorithm. | computer science |
Scientific computing workflows generate enormous amounts of distributed data that is short-lived, yet critical for job completion time. This class of data is called intermediate data. A common way to achieve high data availability is to replicate data. However, the increasing scale of intermediate data generated in modern scientific applications demands new storage techniques to improve storage efficiency. Erasure codes, as an alternative, can use less storage space while maintaining similar data availability. In this paper, we adopt erasure codes for storing intermediate data and compare their performance with replication. We also use the metric of Mean Time To Data Loss (MTTDL) to estimate the lifetime of intermediate data. We propose an algorithm to proactively relocate data redundancy from vulnerable machines to reliable ones to improve data availability, at the cost of some extra network overhead. Furthermore, we propose an algorithm to assign redundancy units of data physically close to each other on the network to reduce the network bandwidth required for reconstructing data when it is being accessed. | computer science
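A back-of-the-envelope comparison of MTTDL for replication versus an erasure code can be done with a standard independent-failure Markov-chain approximation; the formula, failure/repair rates, and code parameters below are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: approximate MTTDL of n-way replication vs an (n, k) code.
import math

MTTF = 1.0e5   # mean time to failure of one machine, hours (assumed)
MTTR = 10.0    # mean time to repair, hours (assumed)

def mttdl(n, k):
    """Data is lost once more than n - k units fail before repair completes."""
    r = n - k                              # tolerable simultaneous failures
    return MTTF ** (r + 1) / (math.perm(n, r + 1) * MTTR ** r)

rep = mttdl(3, 1)   # 3-way replication: 3x storage, tolerates 2 failures
ec = mttdl(9, 6)    # (9, 6) erasure code: 1.5x storage, tolerates 3 failures
print(f"replication MTTDL: {rep:.2e} h, erasure-code MTTDL: {ec:.2e} h")
```

Under these toy numbers the (9, 6) code achieves a longer MTTDL than 3-way replication while storing only half as much redundancy, which is the qualitative trade-off the abstract describes.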
The emergence of autonomous vehicles is expected to revolutionize road transportation in the near future. Although large-scale numerical simulations and small-scale experiments have shown promising results, a comprehensive theoretical understanding of how to smooth traffic flow via autonomous vehicles is lacking. In this paper, from a control-theoretic perspective, we establish analytical results on the controllability, stabilizability, and reachability of a mixed traffic system consisting of human-driven vehicles and autonomous vehicles on a ring road. We show that the mixed traffic system is not completely controllable, but is stabilizable, indicating that autonomous vehicles can not only suppress unstable traffic waves but also guide the traffic flow to a higher speed. Accordingly, we establish the maximum traffic speed achievable via control of autonomous vehicles. Numerical results show that the traffic speed can be increased by over 6% when only 5% of vehicles are autonomous. We also design an optimal control strategy for autonomous vehicles to actively dampen undesirable perturbations. These theoretical findings validate the high potential of autonomous vehicles to smooth traffic flow. | mathematics
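The controllability analysis mentioned above can be illustrated generically with the Kalman rank condition for a linear system x' = Ax + Bu; the matrices below are a toy example, not the paper's linearized ring-road model.

```python
# Hedged sketch: Kalman rank test for controllability of a small LTI system.
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ...] up to A^{n-1} B."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])   # a single controlled input

rank = np.linalg.matrix_rank(controllability_matrix(A, B))
if rank == A.shape[0]:
    print(f"rank {rank}: completely controllable")
else:
    print(f"rank {rank} < {A.shape[0]}: not completely controllable; "
          "check stabilizability of the uncontrollable modes")
```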
We discuss selected aspects of classical relativistic scalar field theories with nonzero chemical potential. First, we offer a review of classical field theory at nonzero density within the Lagrangian formalism. The aspects covered include the question of equivalence of descriptions of finite-density states using a chemical potential or time-dependent field configurations, the choice of Hamiltonian whose minimization yields the finite-density equilibrium state, and the issue of breaking of Lorentz invariance. Second, we demonstrate how the low-energy effective field theory for Nambu-Goldstone (NG) modes arising from the spontaneous breakdown of global internal symmetries can be worked out explicitly by integrating out the heavy (Higgs) fields. This makes it possible to analyze the spectrum of NG modes and their interactions without having to deal with mixing of NG and Higgs fields, ubiquitous in the linear-sigma-model description of spontaneous symmetry breaking. | high energy physics theory |
We employ the curvature expansion of the quantum effective action for gravity-matter systems to construct graviton-mediated scattering amplitudes for non-minimally coupled scalar fields in a Minkowski background. By design, the formalism parameterises all quantum corrections to these processes and is manifestly gauge-invariant. The conditions resulting from UV-finiteness, unitarity, and causality are analysed in detail and it is shown by explicit construction that the quantum effective action provides sufficient room to meet these structural requirements without introducing non-localities or higher-spin degrees of freedom. Our framework provides a bottom-up approach to all quantum gravity programs seeking for the quantisation of gravity within the framework of quantum field theory. Its scope is illustrated by specific examples, including effective field theory, Stelle gravity, infinite derivative gravity, and Asymptotic Safety. | high energy physics theory |
We study the effects of electronic correlations on fragile topology using dynamical mean-field theory. Fragile topological insulators (FTIs) present an obstruction to the formation of exponentially localized Wannier functions, but this obstruction can be removed by adding certain trivial degrees of freedom. For the same reason, FTIs do not host a symmetry-protected flow of edge states between bulk bands in cylindrical boundary conditions, but they are expected to have a spectral flow between the fragile bands and other bands under certain twisted boundary conditions. We here analyze commonly observed effects of strong correlations, such as the Mott-insulator transition and magnetism, on a known model hosting fragile topology. We show that in the nonmagnetic case, fragile topology, along with the twisted boundary states, is stable with interactions below a critical interaction strength. Above this interaction strength, a transition to the Mott insulating phase occurs, and the twisted boundary states disappear. Furthermore, by applying a homogeneous magnetic field, the fragile topology is destroyed. However, we show that a magnetic field can induce a topological phase transition which converts a fragile topological insulator to a Chern insulator. Finally, we study ferromagnetic solutions of the fragile topological model. | condensed matter
A new prior is proposed for learning representations of high-level concepts of the kind we manipulate with language. This prior can be combined with other priors in order to help disentangle abstract factors from each other. It is inspired by cognitive neuroscience theories of consciousness, seen as a bottleneck through which just a few elements, after having been selected by attention from a broader pool, are then broadcast and condition further processing, both in perception and decision-making. The set of recently selected elements one becomes aware of is seen as forming a low-dimensional conscious state. This conscious state combines the few concepts constituting a conscious thought, i.e., what one is immediately conscious of at a particular moment. We claim that this architectural and information-processing constraint corresponds to assumptions about the joint distribution between high-level concepts. To the extent that these assumptions are generally true (and the form of natural language seems consistent with them), they can form a useful prior for representation learning. A low-dimensional thought or conscious state is analogous to a sentence: it involves only a few variables and yet can make a statement with very high probability of being true. This is consistent with a joint distribution (over high-level concepts) which has the form of a sparse factor graph, i.e., where the dependencies captured by each factor of the factor graph involve only very few variables while creating a strong dip in the overall energy function. The consciousness prior also makes it natural to map conscious states to natural language utterances, or to express classical AI knowledge in a form similar to facts and rules, albeit capturing uncertainty as well as efficient search mechanisms implemented by attention mechanisms. | computer science
We investigate a resonant leptogenesis scenario with quasi-degenerate right-handed neutrinos which have TeV-scale masses. In particular, we consider the case where two right-handed neutrinos are responsible for leptogenesis and the seesaw mechanism for the active neutrino masses, and assume that the CP violation occurs only in the mixing matrix of the active neutrinos. In this case the sign of the baryon asymmetry depends on the Dirac and Majorana CP phases as well as the mixing angle of the right-handed neutrinos. It is shown how the yield of the baryon asymmetry correlates with these parameters. In addition, we find that the effective neutrino mass in the neutrinoless double beta decay receives an additional constraint in order to account for the observed baryon asymmetry, depending on the masses and mixing angle of the right-handed neutrinos. | high energy physics phenomenology
Predicting the direction of assets has been an active area of study and a difficult task. Machine learning models have been used to build robust models for the above task. Ensemble methods are one of them, showing results better than a single supervised method. In this paper, we have used generative and discriminative classifiers to create the stack, particularly 3 generative and 6 discriminative classifiers, optimized over a one-layer neural network, to model the direction of the price of cryptocurrencies. The features used are technical indicators, including but not limited to trend, momentum, volume and volatility indicators; sentiment analysis has also been used to gain useful insight, combined with the above features. For cross-validation, purged walk-forward cross-validation has been used. In terms of accuracy, we have done a comparative analysis of the performance of the ensemble method with stacking and the ensemble method with blending. We have also developed a methodology for combined feature importance for the stacked model. Important indicators are also identified based on feature importance. | statistics
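A minimal sketch of a generative + discriminative stack with a neural meta-learner; the base models (fewer than the paper's 3 + 6), synthetic data, and scikit-learn's internal cross-validation (which is not the purged walk-forward scheme described above) are illustrative assumptions.

```python
# Hedged sketch: stacking generative and discriminative classifiers with a
# one-hidden-layer neural network as the meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("gnb", GaussianNB()),                          # generative
        ("logreg", LogisticRegression(max_iter=1000)),  # discriminative
        ("gbm", GradientBoostingClassifier()),          # discriminative
    ],
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
)
stack.fit(X[:800], y[:800])
print(f"held-out accuracy: {stack.score(X[800:], y[800:]):.3f}")
```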
Networked dynamic systems are often abstracted as directed graphs, where the observed system processes form the vertex set and directed edges are used to represent non-zero transfer functions. Recovering the exact underlying graph structure of such a networked dynamic system, given only observational data, is a challenging task. Under relatively mild well-posedness assumptions on the network dynamics, there are state-of-the-art methods which can guarantee the absence of false positives. However, in this article we prove that under the same well-posedness assumptions, there are instances of networks for which any method is susceptible to inferring false negative edges or false positive edges. Borrowing terminology from the theory of graphical models, we say those systems are unfaithful to their networks. We formalize a variant of faithfulness for dynamic systems, called Granger-faithfulness, and for a large class of dynamic networks, we show that Granger-unfaithful systems constitute a Lebesgue zero-measure set. For the same class of networks, under the Granger-faithfulness assumption, we provide an algorithm that reconstructs the network topology with guarantees for no false positive and no false negative edges in its output. We augment the topology reconstruction algorithm with orientation rules for some of the inferred edges, and we prove the rules are consistent under the Granger-faithfulness assumption. | electrical engineering and systems science