text | label |
---|---|
We used a thermodynamic integration scheme, which is specifically designed for disordered systems, to compute the interfacial free energy of the solid-liquid interface in the hard-sphere model. We separated the bulk contribution to the total free energy from the interface contribution, performed a finite-size scaling analysis and obtained for the (100)-interface $\gamma=0.591(11)k_{B}T\sigma^{-2}$. | physics |
We construct a new network by superposition of hexahedra; the resulting networks are scale-free, highly sparse, disassortative, and maximal planar graphs. The network degree distribution, clustering coefficient and degree correlation are computed separately using the iterative method, and these characteristics are found to be very rich. The method of network characteristic analysis can be applied to some actual systems, so as to study the complexity of real network systems under the framework of complex network theory. | physics |
Deep reinforcement learning (RL) has achieved breakthrough results on many tasks, but agents often fail to generalize beyond the environment they were trained in. As a result, deep RL algorithms that promote generalization are receiving increasing attention. However, works in this area use a wide variety of tasks and experimental setups for evaluation. The literature lacks a controlled assessment of the merits of different generalization schemes. Our aim is to catalyze community-wide progress on generalization in deep RL. To this end, we present a benchmark and experimental protocol, and conduct a systematic empirical study. Our framework contains a diverse set of environments, our methodology covers both in-distribution and out-of-distribution generalization, and our evaluation includes deep RL algorithms that specifically tackle generalization. Our key finding is that `vanilla' deep RL algorithms generalize better than specialized schemes that were proposed specifically to tackle generalization. | computer science |
Conformal field theory has turned out to be a powerful tool to derive interesting lattice models with analytical ground states. Here, we investigate a class of critical, one-dimensional lattice models of fermions and hardcore bosons related to the Laughlin states. The Hamiltonians of the exact models involve interactions over long distances that are difficult to realize experimentally. This motivates us to study the properties of models with the same type of interactions, but now only between nearest and possibly next-nearest neighbor sites. Based on computations of wavefunction overlaps, entanglement entropies, and two-site correlation functions for systems of up to 32 sites, we find that the ground state is close to the ground state of the exact model. There is also a high overlap per site between the lowest excited states for the local and the exact models, although the energies of the low-lying excited states are modified to some extent for the system sizes considered. We also briefly discuss possibilities for realizing the local models in ultracold atoms in optical lattices. | quantum physics |
Having so far only indirect evidence for the existence of Dark Matter, a plethora of experiments aims at direct detection of Dark Matter through the scattering of Dark Matter particles off atomic nuclei. For the correct interpretation and identification of the underlying nature of the Dark Matter constituents, higher-order corrections to the cross section of Dark Matter-nucleon scattering are important, in particular in models where the tree-level cross section is negligibly small. In this work we revisit the electroweak corrections to the dark matter-nucleon scattering cross section in a model with a pseudo Nambu-Goldstone boson as the Dark Matter candidate. Two calculations that already exist in the literature apply different approaches and arrive at different final results for the cross section in some regions of the parameter space, leading us to redo the calculation and analyse the two approaches to clarify the situation. We furthermore update the experimental constraints and examine the regions of the parameter space where the cross section is above the neutrino floor but which can only be probed in the far future. | high energy physics phenomenology |
The transfer matrix is a powerful technique that can be applied to statistical mechanics systems, for example in the calculation of the entropy of the ice model. One interesting way to study such systems is to map them onto a 3-color problem. In this paper, we explicitly build the transfer matrix for the 3-color problem in order to calculate the number of possible configurations for finite systems with free, periodic in one direction, and toroidal (periodic in both directions) boundary conditions. | condensed matter |
A simple model for the explanation of the muon anomalous magnetic moment was proposed by the present authors within the context of the minimal supersymmetric standard model [1607.05705, 1608.06618]: "Higgs-anomaly mediation". In the setup, squarks, sleptons, and gauginos are massless at tree-level, but the Higgs doublets get large negative soft supersymmetry (SUSY) breaking masses squared $m_{H_u}^2 \simeq m_{H_d}^2 < 0$ at a certain energy scale, $M_{\rm inp}$. The sfermion masses are radiatively generated by anomaly mediation and Higgs-loop effects, and gaugino masses are solely determined by anomaly mediation. Consequently, the smuons and bino are light enough to explain the muon $g-2$ anomaly while the third generation sfermions are heavy enough to explain the observed Higgs boson mass. The scenario avoids the SUSY flavor problem as well as various cosmological problems, and is consistent with the radiative electroweak symmetry breaking. In this paper, we show that, although the muon $g-2$ explanation in the originally proposed Higgs-anomaly mediation with $M_{\rm inp}\sim 10^{16}\,$GeV is slightly disfavored by the latest LHC data, the muon $g-2$ can still be explained at the $1\sigma$ level when Higgs mediation becomes important at the intermediate scale, $M_{\rm inp} \sim 10^{12}\,$GeV. The scenario predicts light SUSY particles that can be fully covered by the LHC and future collider experiments. We also provide a simple realization of $m_{H_u}^2 \simeq m_{H_d}^2 < 0$ at the intermediate scale. | high energy physics phenomenology |
The neutrino oscillation patterns can be modified by neutrino interactions with external environments, including electromagnetic fields, which can influence neutrinos in the case that neutrinos have nonzero electromagnetic properties [1]. The phenomenon of neutrino oscillations can proceed only in the case of the coherent superposition of neutrino mass states. An external environment can modify the neutrino evolution in such a way that the conditions for the coherent superposition of neutrino mass states are violated. Such a violation is called quantum decoherence of neutrino states and leads to the suppression of flavor neutrino oscillations [2, 3]. Note that the neutrino decoherence that appears due to the wave packet separation of different mass states is usually not related to quantum neutrino decoherence; this effect is not considered below. We consider the neutrino quantum decoherence due to neutrino radiative decay in the presence of an electron medium and a radiation field. The corresponding damping of neutrino oscillations is calculated. | high energy physics phenomenology |
The null controllability of the heat equation has been known for decades [19,23,30]. The finite time stabilizability of the one dimensional heat equation was proved by Coron--Nguy\^en [13], while the same question for high dimensional spaces remained widely open. Inspired by Coron--Tr\'elat [14], we find explicit stationary feedback laws that quantitatively exponentially stabilize the heat equation with decay rate $\lambda$ and $Ce^{C\sqrt{\lambda}}$ estimates, where Lebeau--Robbiano's spectral inequality [30] is naturally used. Then a piecewise controlling argument leads to null controllability with optimal cost $Ce^{C/T}$, as well as finite time stabilization. | mathematics |
Odd-frequency superconductivity is an exotic phase of matter in which Cooper pairing between electrons is entirely dynamical in nature. Majorana zero modes exhibit pure odd-frequency superconducting correlations due to their specific properties. Thus, by tunnel-coupling an array of Majorana zero modes to a spin-polarized wire, it is in principle possible to engineer a bulk one-dimensional odd-frequency spinless $s$-wave superconductor. We here point out that each tunnel coupling element, being dependent on a large number of material-specific parameters, is generically complex with sample variability in both its magnitude and phase. Using this, we demonstrate that, upon averaging over phase-disorder, the induced superconducting, including odd-frequency, correlations in the spin-polarized wire are significantly suppressed. We perform both a rigorous analytical evaluation of the disorder-averaged $T$-matrix in the wire and numerical calculations based on a tight-binding model, and find that the anomalous, i.e. superconducting, part of the $T$-matrix is highly suppressed with phase disorder. We also demonstrate that this suppression is concurrent with the filling of the single-particle excitation gap by smearing the near-zero frequency peaks, due to formation of bound states that satisfy phase-matching conditions between spatially separated Majorana zero modes. Our results convey important constraints on the parameter control needed in practical realizations of Majorana zero mode structures and suggest that the achievement of a bulk 1D odd-$\omega$ superconductivity from MZMs demands full control of the system parameters. | condensed matter |
Using the currently most representative sample of 477 late-type galaxies within 11 Mpc of the Milky Way with measured star-formation rates ($SFR$s) from the far ultraviolet ($FUV$) and H$\alpha$ emission line fluxes, we select galaxies with the extreme ratios: $SFR(H\alpha)/SFR(FUV) > 2$ and $SFR(H\alpha)/SFR(FUV) < 1/20$. Each subsample amounts to $\sim5$\% of the total number and consists of dwarf galaxies with the stellar masses $M^*/M_{\odot} = (5.5 - 9.5)$~dex. In spite of a huge difference in their $SFR(H\alpha)$ activity on a scale of $\sim10$~Myr, the temporarily "excited" and temporarily "quiescent" galaxies follow one and the same relation between $SFR(FUV)$ and $M^*$ on a scale of $\sim100$~Myr. Their average specific star-formation rate $\log[SFR(FUV)/M^*] = -10.1\pm0.1$ (yr$^{-1}$) coincides with the Hubble parameter $\log(H_0)= -10.14$ (yr$^{-1}$). On a scale of $t \sim10$~Myr, variations of $SFR$ have a moderate flash amplitude of less than 1 order above the main-sequence and a fading amplitude to 2 orders below the average level. In general, both temporarily excited and temporarily quiescent galaxies have almost similar gas fractions as normal main-sequence galaxies, being able to maintain the current rate of star-formation on another Hubble time scale. Ranging the galaxies according to the density contrast produced by the nearest massive neighbor exhibits only a low average excess of $SFR$ caused by tidal interactions. | astrophysics |
We want to study how the velocity segregation and the radial profile of the velocity dispersion depend on the prominence of the brightest cluster galaxies (BCGs). We divide a sample of 102 clusters and groups of galaxies into four bins of magnitude gap between the two brightest cluster members. We then compute the velocity segregation in bins of absolute and relative magnitudes. Moreover, for each bin of magnitude gap we compute the radial profile of the velocity dispersion. When using absolute magnitudes, the segregation in velocity is limited to the two brightest bins and no significant difference is found for different magnitude gaps. However, when we use relative magnitudes, a trend appears in the brightest bin: the larger the magnitude gap, the larger the velocity segregation. We also show that this trend is mainly due to the presence, in the brightest bin, of satellite galaxies in systems with small magnitude gaps: in fact, if we study separately central galaxies and satellites, this trend is mitigated and central galaxies are more segregated than satellites for any magnitude gap. A similar result is found in the radial velocity dispersion profiles: a trend is visible in central regions (where the BCGs dominate) but, if we analyse the profile using satellites alone, the trend disappears. In the latter case, the shape of the velocity dispersion profile in the centre of systems with different magnitude gaps shows three types of behaviour: systems with the smallest magnitude gaps have an almost flat profile from the centre to the external regions; systems with the largest magnitude gaps show a monotonic growth from the low values of the central part to the flat ones in the external regions; finally, systems with $1.0 < \Delta m_{12} \le 1.5$ show a profile that peaks in the centres and then decreases towards the external regions. We suggest that two mechanisms could be respons.... | astrophysics |
We extend the branch point twist field approach for the calculation of entanglement entropies to time-dependent problems in 1+1-dimensional massive quantum field theories. We focus on the simplest example: a mass quench in the Ising field theory from initial mass $m_0$ to final mass $m$. The main analytical results are obtained from a perturbative expansion of the twist field one-point function in the post-quench quasi-particle basis. The expected linear growth of the R\'enyi entropies at large times $mt\gg 1$ emerges from a perturbative calculation at second order. We also show that the R\'enyi and von Neumann entropies, in infinite volume, contain subleading oscillatory contributions of frequency $2m$ and amplitude proportional to $(mt)^{-3/2}$. The oscillatory terms are correctly predicted by an alternative perturbation series, in the pre-quench quasi-particle basis, which we also discuss. A comparison to lattice numerical calculations carried out on an Ising chain in the scaling limit shows very good agreement with the quantum field theory predictions. We also find evidence of clustering of twist field correlators which implies that the entanglement entropies are proportional to the number of subsystem boundary points. | high energy physics theory |
The emergence of voice-assistant devices ushers in delightful user experiences not just on the smart home front, but also in diverse educational environments from classrooms to personalized-learning/tutoring. However, the use of voice as an interaction modality could also result in exposure of the user's identity and hinder the broader adoption of voice interfaces; this is especially important in environments where children are present and their voice privacy needs to be protected. To this end, building on state-of-the-art techniques proposed in the literature, we design and evaluate a practical and efficient framework for voice privacy at the source. The approach combines speaker identification (SID) and speech conversion methods to randomly disguise the identity of users right on the device that records the speech, while ensuring that the transformed utterances of users can still be successfully transcribed by Automatic Speech Recognition (ASR) solutions. We evaluate the ASR performance of the conversion in terms of word error rate and show the promise of this framework in preserving the content of the input speech. | electrical engineering and systems science |
Accurate, fast, and reliable parameter estimation is crucial for modeling, control, and optimization of solar photovoltaic (PV) systems. In this paper, we focus on the two most widely used benchmark datasets and try to answer (i) whether the global minimum in terms of root mean square error (RMSE) has already been reached; and (ii) whether a significantly simpler metaheuristic, in contrast to currently sophisticated ones, is capable of identifying PV parameters with comparable performance, e.g., attaining the same RMSE. We address the former using an interval analysis based branch and bound algorithm and certify the global minimum rigorously for the single diode model (SDM) as well as locating a fairly tight upper bound for the double diode model (DDM) on both datasets. These obtained values will serve as useful references for metaheuristic methods, since none of them can guarantee or recognize the global minimum even if they have literally discovered it. However, this algorithm is excessively slow and unsuitable for time-sensitive applications (despite the great insights on RMSE that it yields). Regarding the second question, extensive examination and comparison reveal that, perhaps surprisingly, a classic and remarkably simple differential evolution (DE) algorithm can consistently achieve the certified global minimum for the SDM and obtain the best known result for the DDM on both datasets. Thanks to its extreme simplicity, the DE algorithm takes only a fraction of the running time required by other contemporary metaheuristics and is thus preferable in real-time scenarios. This unusual (and certainly notable) finding also indicates that the employment of increasingly complicated metaheuristics might possibly be somewhat overkill for regular PV parameter estimation. Finally, we discuss the implications of these results and suggest promising directions for future development. | electrical engineering and systems science |
We compare numerically the performance of reversible and non-reversible Markov Chain Monte Carlo algorithms for high dimensional oil reservoir problems; because of the nature of the problem at hand, the target measures from which we sample are supported on bounded domains. We compare two strategies to deal with bounded domains, namely reflecting proposals off the boundary and rejecting them when they fall outside of the domain. We observe that for complex high dimensional problems reflection mechanisms outperform rejection approaches and that the advantage of introducing non-reversibility in the Markov Chain employed for sampling is more and more visible as the dimension of the parameter space increases. | statistics |
Planar cell polarity (PCP), the coherent in-plane polarization of a tissue on multicellular length scales, provides directional information that guides a multitude of developmental processes at cellular and tissue levels. While it is manifest that cells utilize both intracellular and intercellular mechanisms, how the two produce the collective polarization remains an active area of investigation. We study the role of intracellular interactions in the large-scale spatial coherence of cell polarities and in the emergence of tissue-wide polarization. We demonstrate that nonlocal cytoplasmic interactions are necessary and sufficient for the robust long-range polarization, and are essential to the faithful detection of weak directional signals. In the presence of nonlocal interactions, signatures of geometrical information in tissue polarity become manifest. We investigate the deleterious effects of geometric disorder, and determine conditions on the cytoplasmic interactions that guarantee the stability of polarization. These conditions get progressively more stringent upon increasing the geometric disorder. Another situation where the role of geometrical information might be evident is elongated tissues. Strikingly, our model recapitulates an observed influence of tissue elongation on the orientation of polarity. Finally, we introduce three classes of mutants: lack of membrane proteins, cytoplasmic proteins, and local geometrical irregularities. We adopt core-PCP as a model pathway, and interpret the model parameters accordingly, through comparing the in silico and in vivo phenotypes. This comparison helps us shed light on the roles of the cytoplasmic proteins in cell-cell communication, and make predictions regarding the cooperation of cytoplasmic and membrane proteins in long-range polarization. | physics |
It was recently shown that in warped compactifications based on a Klebanov-Strassler throat there is a light complex structure field, governing the size of the throat and the redshift at its tip. We show that after uplift of the cosmological constant by an anti-D3 brane at the tip of the throat, the contribution to supersymmetry breaking coming from the new light field is large. We work out the mass scales, in particular the condition for this field to be heavier than the K\"ahler modulus. We check that, for the range of parameters relevant for the destabilization, we find agreement with the de Sitter swampland conjecture. Adding matter fields on distant branes, we discuss the effects on supersymmetry breaking in the observable sector. A hierarchically small scale of supersymmetry breaking translates generically into large values of localized D3 charges in the manifold. | high energy physics theory |
We initiate a study of non-supersymmetric Born-Infeld electrodynamics in 4d at the quantum level. Explicit all-multiplicity expressions are calculated for the purely rational one-loop amplitudes in the self-dual ($++\ldots+$) and next-to-self-dual ($-+\ldots+$) helicity sectors. Using a supersymmetric decomposition, $d$-dimensional unitarity cuts of the integrand factorize into tree-amplitudes in a 4d model of Born-Infeld photons coupled to a massive complex scalar. The two-scalar tree-amplitudes needed to construct the Born-Infeld integrand are computed using two complementary approaches: (1) as a double-copy of Yang-Mills coupled to a massive adjoint scalar with a dimensionally reduced form of Chiral Perturbation Theory, and (2) by imposing consistency with low-energy theorems under a reduction from 4d to 3d and T-duality. The Born-Infeld integrand is integrated in $d=4-2\epsilon$ dimensions at order $\mathcal{O}(\epsilon^0)$ using the dimension-shifting formalism. We comment on the implications for electromagnetic duality in quantum Born-Infeld theory. | high energy physics theory |
In applications of climate information, coarse-resolution climate projections commonly need to be downscaled to a finer grid. One challenge of this requirement is the modeling of sub-grid variability and the spatial and temporal dependence at the finer scale. Here, a post-processing procedure is proposed for temperature projections that addresses this challenge. The procedure employs statistical bias correction and stochastic downscaling in two steps. In a first step, errors that are related to spatial and temporal features of the first two moments of the temperature distribution at model scale are identified and corrected. Secondly, residual space-time dependence at the finer scale is analyzed using a statistical model, from which realizations are generated and then combined with appropriate climate change signal to form the downscaled projection fields. Using a high-resolution observational gridded data product, the proposed approach is applied in a case study where projections of two regional climate models from the EURO-CORDEX ensemble are bias-corrected and downscaled to a 1x1 km grid in the Trondelag area of Norway. A cross-validation study shows that the proposed procedure generates results that better reflect the marginal distributional properties of the data product and have better consistency in space and time than empirical quantile mapping. | statistics |
This paper presents two approaches to mathematical modelling of a synthetic seismic pulse, and a comparison between them. First, a new analytical model is developed in two-dimensional Cartesian coordinates. Combined with an initial condition of sufficient symmetry, this provides a valuable check for the validity of the numerical method that follows. A particular initial condition is found which allows for a new closed-form solution. A numerical scheme is then presented which combines a spectral (Fourier) representation for displacement components and wave-speed parameters, a fourth order Runge-Kutta integration method, and an absorbing boundary layer. The resulting large system of differential equations is solved in parallel on suitable enhanced performance desktop hardware in a new software implementation. This provides an alternative approach to forward modelling of waves within isotropic media which is efficient, and tailored to rapid and flexible developments in modelling seismic structure, for example, shallow depth environmental applications. Visual comparisons of the analytic solution and the numerical scheme are presented. | physics |
This paper is concerned with the entire solutions of a two-dimensional nonlocal periodic lattice dynamical system. Under a bistable assumption, it is well known that the system has three different types of traveling fronts. The existence of merging-front entire solutions originating from two fronts for the system has been established by Dong, Li \& Zhang [{\it Comm. Pur Appl. Anal.}, {\bf17}(2018), 2517-2545]. Under certain conditions on the wave speeds, and by using auxiliary rational functions with certain properties to construct appropriate super- and subsolutions of the system, we establish two new types of entire solutions which originate from three fronts. | mathematics |
Basis tensor gauge theory is a vierbein analog reformulation of ordinary gauge theories in which the difference of local field degrees of freedom has the interpretation of an object similar to a Wilson line. Here we present a non-Abelian basis tensor gauge theory formalism. Unlike in the Abelian case, the map between the ordinary gauge field and the basis tensor gauge field is nonlinear. To test the formalism, we compute the beta function and the two-point function at the one-loop level in non-Abelian basis tensor gauge theory and show that it reproduces the well-known results from the usual formulation of non-Abelian gauge theory. | high energy physics theory |
The use of asserts in code has received increasing attention in the software engineering community in the past few years, even though it has been a recognized programming construct for many decades. A previous empirical study by Casalnuovo showed that methods containing asserts had fewer defects than those that did not. In this paper, we analyze the test classes of two industrial telecom Java systems to lend support to, or refute that finding. We also analyze the physical position of asserts in methods to determine if there is a relationship between assert placement and method defect-proneness. Finally, we explore the role of test method size and the relationship it has with asserts. In terms of the previous study by Casalnuovo, we found only limited evidence to support the earlier results. We did however find that defective methods with one assert tended to be located at significantly lower levels of the class position-wise than non-defective methods. Finally, method size seemed to correlate strongly with asserts, but surprisingly less so when we excluded methods with just one assert. The work described highlights the need for more studies into this aspect of code, one which has strong links with code comprehension. | computer science |
The hardware overhead associated with microwave control is a major obstacle to scale-up of superconducting quantum computing. An alternative approach involves irradiation of the qubits with trains of Single Flux Quantum (SFQ) pulses, pulses of voltage whose time integral is precisely equal to the superconducting flux quantum. Here we describe the derivation and validation of compact SFQ pulse sequences in which classical bits are clocked to the qubit at a frequency that is roughly a factor 5 higher than the qubit oscillation frequency, allowing for variable pulse-to-pulse timing. The control sequences are constructed by repeated streaming of short subsequence registers that are designed to suppress leakage out of the computational manifold. With a single global clock, high-fidelity (> 99.99%) control of qubits resonating at over 20 distinct frequencies is possible. SFQ pulses can be stored locally and delivered to the qubits via a proximal classical Josephson digital circuit, offering the possibility of a streamlined, low-footprint classical coprocessor for monitoring errors and feeding back to the qubit array. | quantum physics |
Gravitational microlensing is one of the few means of finding primordial black holes (PBHs), if they exist. Recent LIGO detections of 30 Msun black holes have re-invigorated the search for PBHs in the 10-100 Msun mass regime. Unfortunately, individual PBH microlensing events cannot easily be distinguished from stellar lensing events from photometry alone. However, the distribution of microlensing timescales (tE, the Einstein radius crossing time) can be analyzed in a statistical sense using models of the Milky Way with and without PBHs. While previous works have presented both theoretical models and observational constraints for PBHs (e.g. Calcino et al. 2018; Niikura et al. 2019), surprisingly, they rarely show the observed quantity -- the tE distribution -- for different abundances of PBHs relative to the total dark matter mass (fPBH). We present a simple calculation of how the tE distribution changes between models with and without PBHs. | astrophysics |
Spoken language understanding (SLU) datasets, like many other machine learning datasets, usually suffer from the label imbalance problem. Label imbalance usually causes the learned model to replicate similar biases at the output, which raises the issue of unfairness to the minority classes in the dataset. In this work, we approach the fairness problem by maximizing the F-measure instead of accuracy in neural network model training. We propose a differentiable approximation to the F-measure and train the network with this objective using standard backpropagation. We perform experiments on two standard fairness datasets, Adult, and Communities and Crime, and also on speech-to-intent detection on the ATIS dataset and speech-to-image concept classification on the Speech-COCO dataset. In all four of these tasks, F-measure maximization results in improved micro-F1 scores, with absolute improvements of up to 8%, as compared to models trained with the cross-entropy loss function. In the two multi-class SLU tasks, the proposed approach significantly improves class coverage, i.e., the number of classes with positive recall. | electrical engineering and systems science |
An extension of the van der Waals hadron resonance gas (VDWHRG) model which includes in-medium thermal modification of hadron masses, the TVDWHRG model, is considered in this paper. Based on the 2+1 flavor Polyakov Linear Sigma Model (PLSM) and the scaling mass rule for hadrons, we obtain the temperature behavior of all hadron masses for different fixed baryon chemical potentials $\mu_{B}$. We calculate various thermodynamic observables at $\mu_{B}=0$ GeV in the TVDWHRG model. An improved agreement with the lattice data is observed for the TVDWHRG model in the crossover region ($T\sim 0.16-0.19$ GeV) as compared to the VDWHRG and Ideal HRG (IHRG) models. We further discuss the effects of in-medium modification of hadron masses and VDW interactions on the transport coefficients such as the shear viscosity ($\eta$), scaled thermal ($\lambda/T^{2}$) and electrical ($\sigma_{el}/T$) conductivities in the IHRG model at different $\mu_{B}$, by utilizing quasi-particle kinetic theory with the relaxation time approximation. | high energy physics phenomenology |
We investigate the phenomenology of singlet scalar dark matter in a simple $\rm U(1)_{B-L}$ gauge extension of the standard model, made anomaly free with four exotic fermions. The enriched scalar sector and the new gauge boson $Z^\prime$, associated with the $\rm U(1)$ gauge extension, connect the dark sector to the visible sector. We compute the relic density, consistent with the Planck limit, and the $Z^\prime$-mediated dark matter-nucleon cross section, compatible with the PandaX bound. The mass of $Z^\prime$ and the corresponding gauge coupling are constrained from LEP-II and LHC dilepton searches. We also briefly scrutinize the tree level neutrino mass with a dimension-five operator. Furthermore, the resonant leptogenesis phenomenon is discussed with TeV scale exotic fermions to produce the observed baryon asymmetry of the Universe. We also briefly explain the impact of flavor in leptogenesis and project the combined constraints on the Yukawa couplings, consistent with oscillation data and the observed baryon asymmetry. Additionally, we restrict the new gauge parameters by using the existing data on branching ratios of rare $B(\tau)$ decay modes. We see that the constraints from the dark sector are much more stringent than those from the flavor sector. | high energy physics phenomenology |
We consider the problem of estimating and inferring treatment effects in randomized experiments. In practice, stratified randomization, or more generally, covariate-adaptive randomization, is routinely used in the design stage to balance the treatment allocations with respect to a few variables that are most relevant to the outcomes. Then, regression is performed in the analysis stage to adjust the remaining imbalances to yield more efficient treatment effect estimators. Building upon and unifying the recent results obtained for ordinary least squares adjusted estimators under covariate-adaptive randomization, this paper presents a general theory of regression adjustment that allows for arbitrary model misspecification and the presence of a large number of baseline covariates. We exemplify the theory on two Lasso-adjusted treatment effect estimators, both of which are optimal in their respective classes. In addition, nonparametric consistent variance estimators are proposed to facilitate valid inferences, which work irrespective of the specific randomization methods used. The robustness and improved efficiency of the proposed estimators are demonstrated through a simulation study and a clinical trial example. This study sheds light on improving treatment effect estimation efficiency by implementing machine learning methods in covariate-adaptive randomized experiments. | statistics |
The Maxwell-Bloch equations are a valuable tool to model light-matter interaction, where the application examples range from the description of pulse propagation in two-level media to the elaborate simulation of optoelectronic devices, such as the quantum cascade laser (QCL). In this work, we present mbsolve, an open-source solver tool for the Maxwell-Bloch equations. Here, we consider the one-dimensional Maxwell's equations, which are coupled to the Lindblad equation. The resulting generalized Maxwell-Bloch equations are treated without invoking the rotating wave approximation (RWA). Since this full-wave treatment is computationally intensive, we provide a flexible framework to implement different numerical methods and/or parallelization techniques. On this basis, we offer two solver implementations that use OpenMP for parallelization. | quantum physics |
In this paper we present a detailed analysis of the contribution of the Light-by-Light (LbL), Durham and double diffractive processes for diphoton production in ultraperipheral $PbPb$ collisions at the Large Hadron Collider (LHC), High-Energy LHC (HE-LHC) and Future Circular Collider (FCC). The acceptance of the central and forward LHC detectors is taken into account and predictions for the invariant mass, rapidity, transverse momentum and acoplanarity distributions are presented. Our results indicate that the contribution of the Durham and double diffractive processes can be strongly suppressed by the exclusivity cuts, which will allow us to perform a precise analysis of LbL scattering, as well as the search for beyond-Standard-Model physics in this final state. | high energy physics phenomenology |
We have examined the spin polarization of the electron current in a ferromagnetic metal induced by the spin-dependent surface screening at the dielectric-ferromagnetic metal (D-FM) interface. Under an applied ac voltage, the dynamic band splitting driven by the changes in the screening charge at the D-FM interface develops a spin accumulation. The resultant spin accumulation gradient produces a time-dependent spin current. We have derived the dependence of the rate of the spin accumulation on the rate of the screening charge density accumulation within the Stoner band model. The spin-charge dynamics in the system is then modeled by a set of diffusive equations with the contributions from spin-dependent surface screening and spin-dependent conductivity. We show for the MgO-Cu-Co-MgO system that the spin-dependent screening in a thin Co film produces a spin accumulation 7 times higher than that obtained by the spin-dependent conductivity in thick Co films. We propose an experimental approach to validate our numerical predictions and to distinguish between spin accumulation induced by the spin-dependent conductivity and by the spin-dependent surface screening. | condensed matter |
Mercury's magnetosphere is known to be affected by enhanced ram pressures and magnetic fields inside interplanetary coronal mass ejections (ICMEs). Here we report detailed observations of an ICME compressing Mercury's dayside magnetosphere to the surface. A fast CME launched from the Sun on November 29 2013 impacted first MESSENGER, which was orbiting Mercury, on November 30 and later STEREO-A near 1 AU on December 1. Following the ICME impact, MESSENGER remained in the solar wind as the spacecraft traveled inwards and northwards towards Mercury's surface until it reached and passed its closest approach to the planet (at 371 km altitude) without crossing into the magnetosphere. The magnetospheric crossing finally occurred 1 minute before reaching the planet's nightside at 400 km altitude and 84$^\circ$N latitude, indicating the lack of dayside magnetosphere on this orbit. In addition, the peak magnetic field measured by MESSENGER at this time was 40% above the values measured in the orbits just prior to and after the ICME, a consequence of the magnetospheric compression. Using both a proxy method at Mercury and measurements at STEREO-A, we show that the extremely high ram pressure associated with this ICME was more than high enough to collapse Mercury's weak magnetosphere. As a consequence, the ICME plasma likely interacted with Mercury's surface, evidenced by enhanced sodium ions in the exosphere. The collapse of Mercury's dayside magnetosphere has important implications for the habitability of close-in exoplanets around M dwarf stars, as such events may significantly contribute to planetary atmospheric loss in these systems. | physics |
Measuring the cosmic ray flux over timescales comparable to the age of the solar system, $\sim 4.5\,$Gyr, could provide a new window on the history of the Earth, the solar system, and even our galaxy. We present a technique to indirectly measure the rate of cosmic rays as a function of time using the imprints of atmospheric neutrinos in paleo-detectors, natural minerals which record damage tracks from nuclear recoils. Minerals commonly found on Earth are $\lesssim 1\,$Gyr old, providing the ability to look back across cosmic ray history on timescales of the same order as the age of the solar system. Given a collection of differently aged samples dated with reasonable accuracy, this technique is particularly well-suited to measuring historical changes in the cosmic ray flux at Earth and is broadly applicable in astrophysics and geophysics. | high energy physics phenomenology |
We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space. In particular, we consider possibly data dependent subspaces spanned by a random subset of the data, recovering as a special case Nystr\"om approaches for kernel methods. Considering random subspaces naturally leads to computational savings, but the question is whether the corresponding learning accuracy is degraded. These statistical-computational tradeoffs have been recently explored for the least squares loss and self-concordant loss functions, such as the logistic loss. Here, we work to extend these results to convex Lipschitz loss functions, that might not be smooth, such as the hinge loss used in support vector machines. This extension requires developing new proofs, that use different technical tools. Our main results show the existence of different settings, depending on how hard the learning problem is, for which computational efficiency can be improved with no loss in performance. Theoretical results are illustrated with simple numerical experiments. | statistics |
This paper seeks to further explore the distribution of the real roots of random polynomials with non-centered coefficients. We focus on polynomials where the typical values of their coefficients have power growth (or mild decay) and count the average number of real zeros inside local and global intervals. Almost all previous results require coefficients with zero mean, and it is highly non-trivial to extend these results to the general case. Our approach is based on a novel comparison principle, allowing us to reduce the general situation to the mean-zero setting. As applications, we obtain new results for the Kac polynomials, hyperbolic random polynomials, their derivatives, and generalizations of these polynomials. The proof features new logarithmic integrability estimates for random polynomials (both local and global) and fairly sharp estimates for the local number of real zeros. | mathematics |
Existing approaches to modeling the dynamics of brain tumor growth, specifically glioma, employ biologically inspired models of cell diffusion, using image data to estimate the associated parameters. In this work, we propose an alternative approach based on recent advances in probabilistic segmentation and representation learning that implicitly learns growth dynamics directly from data without an underlying explicit model. We present evidence that our approach is able to learn a distribution of plausible future tumor appearances conditioned on past observations of the same tumor. | electrical engineering and systems science |
We give an algebraic formulation based on Clifford algebras and algebraic spinors for quantum information. In this context, logic gates and concepts such as chirality, charge conjugation, parity and time reversal are introduced and explored in connection with states of qubits. Supersymmetry and M-superalgebra are also analysed with our formalism. Specifically we use extensively the algebras $Cl_{3,0}$ and $Cl_{1,3}$ as well as tensor products of Clifford algebras. | quantum physics |
Higher-order tree-level processes in strong laser fields, i.e. cascades, are in general extremely difficult to calculate, but in some regimes the dominant contribution comes from a sequence of first-order processes, i.e. nonlinear Compton scattering and nonlinear Breit-Wheeler pair production. At high intensity the field can be treated as locally constant, which is the basis for standard particle-in-cell codes. However, the locally-constant-field (LCF) approximation and these particle-in-cell codes cannot be used when the intensity is only moderately high, which is a regime that is experimentally relevant. We have shown that one can still use a sequence of first-order processes to estimate higher orders at moderate intensities provided the field is sufficiently long. An important aspect of our new "gluing" approach is the role of the spin/polarization of intermediate particles, which is more nontrivial compared to the LCF regime. | high energy physics phenomenology |
In this article, we theoretically investigate the first- and second-order quantum dissipative phase transitions of a three-mode cavity with a Hubbard interaction. In both types, there is a mean-field limit cycle phase where the local U(1) symmetry and the time-translational symmetry (TTS) of the Liouvillian super-operator are spontaneously broken (SSB). This SSB manifests itself through the appearance of an unconditionally and fully squeezed state at the cavity output, connected to the well-known Goldstone mode. By employing the Wigner function formalism, hence properly including the quantum noise, we show that away from the thermodynamic limit and within the quantum regime, fluctuations notably limit the coherence time of the Goldstone mode due to phase diffusion. Our theoretical predictions suggest that interacting multimode photonic systems are rich, versatile testbeds for investigating the crossovers between the mean-field picture and quantum phase transitions, a problem that can be investigated in various platforms including superconducting circuits, semiconductor microcavities, atomic Rydberg polaritons, and cuprite excitons. | quantum physics |
Measurements of the top quark flavor changing neutral current interactions are among the most important goals of the top quark physics program in present and future collider experiments. These measurements provide direct information on non-standard interactions of the top quark. Within the framework of new physics beyond the Standard Model, these interactions can be defined by an effective Lagrangian. In this study, we have investigated the potential of future $\mu p$ colliders for probing the top quark flavor changing neutral current interactions through the subprocesses $\gamma q \rightarrow t \rightarrow W b$ where $q=u,c$. These subprocesses are produced through the main reaction $\mu p \rightarrow \mu \gamma p \rightarrow \mu W b X $ at the LHC$-\mu p$, the FCC$-\mu p$ and the SPPC-$\mu p$. For the main reaction, the total cross sections have been calculated as a function of the anomalous $ tq\gamma $ couplings. In addition, sensitivities on BR($t \rightarrow q \gamma$) at $95\%$ Confidence Level have been calculated. We find that the best constraints on BR($t \rightarrow q \gamma$) are of the order of 10$^{-7}$, which is four orders of magnitude better than the LHC's experimental results. | high energy physics phenomenology |
We use powder x-ray diffraction to study the effect of pressure on the crystal structure of the honeycomb rhodate Li$_2$RhO$_3$. We observe low-pressure ($P$$<$$P_{c1}$ = 6.5 GPa) and high-pressure ($P$$>$$P_{c2}$ = 14 GPa) regions corresponding to the monoclinic $C2/m$ symmetry, while a phase mixture is observed at intermediate pressures. At $P$$>$$P_{c2}$, the honeycomb structure becomes distorted and features short Rh--Rh bonds forming zigzag chains stretched along the crystallographic $a$ direction. This is in contrast to dimerized patterns observed in triclinic high-pressure polymorphs of $\alpha$-Li$_2$IrO$_3$ and $\alpha$-RuCl$_3$. Density-functional theory calculations at various pressure conditions reveal that the observed rhodium zigzag-chain pattern is not expected under hydrostatic pressure but can be reproduced by assuming anisotropic pressure conditions. | condensed matter |
We present a measurement of the extragalactic background light (EBL) based on a joint likelihood analysis of 32 gamma-ray spectra for 12 blazars in the redshift range z = 0.03 to 0.944, obtained by the MAGIC telescopes and Fermi-LAT. The EBL is the part of the diffuse extragalactic radiation spanning the ultraviolet, visible and infrared bands. Major contributors to the EBL are the light emitted by stars through the history of the universe, and the fraction of it which was absorbed by dust in galaxies and re-emitted at longer wavelengths. The EBL can be studied indirectly through its effect on very-high energy photons that are emitted by cosmic sources and absorbed via photon-photon interactions during their propagation across cosmological distances. We obtain estimates of the EBL density in good agreement with state-of-the-art models of the EBL production and evolution. The 1-sigma upper bounds, including systematic uncertainties, are between 13% and 23% above the nominal EBL density in the models. No anomaly in the expected transparency of the universe to gamma rays is observed in any range of optical depth. We also perform a wavelength-resolved EBL determination, which results in a hint of an excess of EBL in the 0.18 - 0.62 $\mu$m range relative to the studied models, yet compatible with them within systematics. | astrophysics |
Early diagnosis of diabetic retinopathy for treatment of the disease has been failing to reach diabetic people living in rural areas. A shortage of trained ophthalmologists, limited availability of healthcare centers, and the high cost of diagnostic equipment are among the reasons. Although many deep learning-based techniques for automatic diagnosis of diabetic retinopathy have been implemented in the literature, these methods still fail to provide a point-of-care diagnosis. This raises the need for an independent diagnostic tool for diabetic retinopathy that can be used by a non-expert. Recently, the usage of smartphones has been increasing across the world. Automated diagnosis of diabetic retinopathy can be deployed on smartphones in order to provide an instant diagnosis to diabetic people residing in remote areas. In this paper, an Inception-based convolutional neural network and a binary decision tree-based ensemble of classifiers have been proposed and implemented to detect and classify diabetic retinopathy. The proposed method was further imported into a smartphone application for mobile-based classification, which provides an offline and automatic system for diagnosis of diabetic retinopathy. | electrical engineering and systems science |
Directional amplification, in which signals are selectively amplified depending on their propagation direction, has attracted much attention as key resource for applications, including quantum information processing. Recently, several, physically very different, directional amplifiers have been proposed and realized in the lab. In this work, we present a unifying framework based on topology to understand non-reciprocity and directional amplification in driven-dissipative cavity arrays. Specifically, we unveil a one-to-one correspondence between a non-zero topological invariant defined on the spectrum of the dynamic matrix and regimes of directional amplification, in which the end-to-end gain grows exponentially with the number of cavities. We compute analytically the scattering matrix, the gain and reverse gain, showing their explicit dependence on the value of the topological invariant. Parameter regimes achieving directional amplification can be elegantly obtained from a topological `phase diagram', which provides a guiding principle for the design of both phase-preserving and phase-sensitive multimode directional amplifiers. | condensed matter |
The origin of multiple stellar populations in Globular Clusters (GCs) is one of the greatest mysteries of modern stellar astrophysics. N-body simulations suggest that the present-day dynamics of GC stars can constrain the events that occurred at high redshift and led to the formation of multiple populations. Here, we combine multi-band photometry from the Hubble Space Telescope (HST) and ground-based facilities with HST and Gaia Data Release 2 proper motions to investigate the spatial distributions and the motions in the plane of the sky of multiple populations in the type II GCs NGC 5139 ($\omega\,$Centauri) and NGC 6656 (M 22). We first analyzed stellar populations with different metallicities. Fe-poor and Fe-rich stars in M 22 share similar spatial distributions and rotation patterns and exhibit similar isotropic motions. Similarly, the two main populations with different iron abundance in $\omega\,$Centauri share similar ellipticities and rotation patterns. When analyzing different radial regions, we find that the rotation amplitude decreases from the center towards the external regions. Fe-poor and Fe-rich stars of $\omega\,$Centauri are radially anisotropic in the central region and show similar degrees of anisotropy. We also investigate the stellar populations with different light-element abundances and find that their N-rich stars exhibit higher ellipticity than N-poor stars. In $\omega\,$Centauri both stellar groups are radially anisotropic. Interestingly, N-rich, Fe-rich stars exhibit different rotation patterns than N-poor stars with similar metallicities. The stellar populations with different nitrogen abundances in M 22 exhibit similar rotation patterns and isotropic motions. We discuss these findings in the context of the formation of multiple populations. | astrophysics |
In a normal indoor environment, the Raman spectrum is affected by noise that often conceals the spectral peaks, leading to difficulty in spectrum interpretation. This paper proposes a deep learning (DL) based noise reduction technique for Raman spectroscopy. The proposed DL network is developed with several training and test sets of noisy Raman spectra. The proposed technique is applied for denoising, and its performance is compared with different wavelet noise reduction methods. Output signal-to-noise ratio (SNR), root-mean-square error (RMSE) and mean absolute percentage error (MAPE) are the performance evaluation indices. It is shown that the output SNR of the proposed noise reduction technology is 10.24 dB greater than that of the wavelet noise reduction method, while the RMSE and the MAPE are 292.63 and 10.09, which are much better than those of the wavelet noise reduction method. | electrical engineering and systems science |
The hypothesis of limiting fragmentation (LF), recently also called extended longitudinal scaling, is an interesting phenomenon in the high energy multiparticle production process. This paper discusses different regions of phase space and their importance in hadron production, giving special emphasis to the fragmentation region. Although it was conjectured as a universal phenomenon in high energy physics, with the advent of higher center-of-mass energies, it has become prudent to analyse and understand the validity of such a hypothesis in view of the increasing inelastic nucleon-nucleon cross-section ($\sigma_{\rm in}$). In this work, we revisit the phenomenon of limiting fragmentation for nucleus-nucleus (A+A) collisions in the pseudorapidity distribution of charged particles at various energies. We use the energy dependent $\sigma_{\rm in}$ to transform the charged particle pseudorapidity distributions ($dN^{\rm AA}_{ch}/d\eta$) into the differential cross-section per unit pseudorapidity ($d\sigma^{\rm AA}/d\eta$) of charged particles and study the phenomenon of LF. We find that in $d\sigma^{\rm AA}/d\eta$ LF seems to be violated at LHC energies when considering the energy dependent $\sigma_{\rm in}$. We also perform a similar study using A Multi-Phase Transport (AMPT) Model with the string melting scenario and again find that LF is violated at LHC energies. | high energy physics phenomenology |
We introduce a new approach to a linear-circular regression problem that relates multiple linear predictors to a circular response. We follow the modeling approach of the wrapped normal distribution, which describes angular variables and angular distributions, and advance it for linear-circular regression analysis. Some previous works model a circular variable as the projection of a bivariate Gaussian random vector onto the unit circle, and the statistical inference of the resulting model involves complicated sampling steps. The proposed model treats circular responses as the result of the modulo operation on unobserved linear responses. The resulting model is a mixture of multiple linear-linear regression models. We present two EM algorithms for maximum likelihood estimation of the mixture model, one for a parametric model and another for a non-parametric model. The estimation algorithms provide a great trade-off between computation and estimation accuracy, as shown numerically using five examples. The proposed approach was applied to a problem of estimating wind directions that typically exhibit complex patterns with large variation and circularity. | statistics |
X-ray photon detection is important for a wide range of applications. The highest demand, however, comes from medical imaging, which requires cost-effective, high-resolution detectors operating at low photon flux, therefore stimulating the search for novel materials and new approaches. Recently, the hybrid halide perovskite CH3NH3PbI3 (MAPbI3) has attracted considerable attention due to its advantageous optoelectronic properties and low fabrication costs. The presence of heavy atoms, providing a high scattering cross-section for photons, makes this material a perfect candidate for X-ray detection. Despite the already-successful demonstrations of efficiency in detection, its integration into standard microelectronics fabrication processes is still pending. Here, we demonstrate a promising method for building X-ray detector units by 3D aerosol jet printing with a record sensitivity of 2.2 x 10$^{8}$ $\mu$C Gy$_{\rm air}^{-1}$ cm$^{-2}$ when detecting 8 keV photons at dose-rates below 1 Gy/s (detection limit 0.12 Gy/s), a four-fold improvement on the best-in-class devices. An introduction of MAPbI3-based detection into medical imaging would significantly reduce health hazards related to the strongly ionizing X-ray photons. | condensed matter |
We prove that pointwise finite-dimensional S^1 persistence modules over an arbitrary field decompose uniquely, up to isomorphism, into the direct sum of a bar code and finitely-many Jordan cells. These persistence modules have also been called angle-valued or circular persistence modules. We allow either a cyclic order or partial order on S^1 and do not have additional finiteness requirements on the modules. We also show that a pointwise finite-dimensional S^1 persistence module is indecomposable if and only if it is a bar or Jordan cell (a string or a band module, respectively, in representation theory). Along the way we classify the isomorphism classes of such indecomposable modules. | mathematics |
Recent technological advancements have enabled detailed investigation of associations between the molecular architecture and tumor heterogeneity, through multi-source integration of radiological imaging and genomic (radiogenomic) data. In this paper, we integrate and harness radiogenomic data in patients with lower grade gliomas (LGG), a type of brain cancer, in order to develop a regression framework called RADIOHEAD (RADIOgenomic analysis incorporating tumor HEterogeneity in imAging through Densities) to identify radiogenomic associations. Imaging data is represented through voxel intensity probability density functions of tumor sub-regions obtained from multimodal magnetic resonance imaging, and genomic data through molecular signatures in the form of pathway enrichment scores corresponding to their gene expression profiles. Employing a Riemannian-geometric framework for principal component analysis on the set of probability density functions, we map each probability density to a vector of principal component scores, which are then included as predictors in a Bayesian regression model with the pathway enrichment scores as the response. Variable selection compatible with the grouping structure amongst the predictors induced through the tumor sub-regions is carried out under a group spike-and-slab prior. A Bayesian false discovery rate mechanism is then used to infer significant associations based on the posterior distribution of the regression coefficients. Our analyses reveal several pathways relevant to LGG etiology (such as synaptic transmission, nerve impulse and neurotransmitter pathways) to have significant associations with the corresponding imaging-based predictors. | statistics
In this paper we develop a theory of Besov and Triebel--Lizorkin spaces on general noncompact Lie groups endowed with a sub-Riemannian structure. Such spaces are defined by means of hypoelliptic sub-Laplacians with drift, and endowed with a measure whose density with respect to a right Haar measure is a continuous positive character of the group. We prove several equivalent characterizations of their norms, we establish comparison results also involving Sobolev spaces of recent introduction, and investigate their complex interpolation and algebra properties. | mathematics |
Similarity and metric learning provides a principled approach to construct a task-specific similarity from weakly supervised data. However, these methods are subject to the curse of dimensionality: as the number of features grows large, poor generalization is to be expected and training becomes intractable due to high computational and memory costs. In this paper, we propose a similarity learning method that can efficiently deal with high-dimensional sparse data. This is achieved through a parameterization of similarity functions by convex combinations of sparse rank-one matrices, together with the use of a greedy approximate Frank-Wolfe algorithm which provides an efficient way to control the number of active features. We show that the convergence rate of the algorithm, as well as its time and memory complexity, are independent of the data dimension. We further provide a theoretical justification of our modeling choices through an analysis of the generalization error, which depends logarithmically on the sparsity of the solution rather than on the number of features. Our experiments on datasets with up to one million features demonstrate the ability of our approach to generalize well despite the high dimensionality as well as its superiority compared to several competing methods. | statistics |
An important feature of successful supervised machine learning applications is the ability to explain the predictions given by the regression or classification model being used. However, most state-of-the-art models that have good predictive power lead to predictions that are hard to interpret. Thus, several model-agnostic interpreters have been developed recently as a way of explaining black-box classifiers. In practice, using these methods is a slow process because a novel fitting is required for each new testing instance, and several non-trivial choices must be made. We develop NLS (neural local smoother), a method that is complex enough to give good predictions, and yet gives solutions that are easy to interpret without the need to use a separate interpreter. The key idea is to use a neural network that imposes a local linear shape on the output layer. We show that NLS leads to predictive power that is comparable to state-of-the-art machine learning models, and yet is easier to interpret. | statistics
In this work we present symmetry transformations relating bosons to fermions which cannot be represented as a supersymmetric algebra. We present a symmetry transformation relating a complex scalar and a fermion in four dimensions and construct a theory defined by an action that respects the symmetry quantum mechanically. We next invoke gauge symmetry by adding a gauge field and a corresponding fermion and construct two different symmetry transformations with corresponding actions such that the corresponding theories respect the fermion boson symmetry transformations quantum mechanically. Unlike in a supersymmetric theory, the vacuum energy in the above theories could be negative. Phenomenological implications of the theories are open to research. | high energy physics theory |
Using continued fraction expansions of certain polygamma functions as a main tool, we find orthogonal polynomials with respect to the odd-index Bernoulli polynomials $B_{2k+1}(x)$ and the Euler polynomials $E_{2k+\nu}(x)$, for $\nu=0, 1, 2$. In the process we also determine the corresponding Jacobi continued fractions (or J-fractions) and Hankel determinants. In all these cases the Hankel determinants are polynomials in $x$ which factor completely over the rationals. | mathematics |
A novel ultra-long distributed vibration sensing (DVS) system using forward transmission and coherent detection is proposed and experimentally demonstrated. In the proposed scheme, a pair of multi-span optical fibers are deployed for sensing, and a loop-back configuration is used by connecting the two fibers at the far end. The homodyne coherent detection is used to retrieve the phase and state-of-polarization (SOP) fluctuations caused by a vibration while the localization of the vibration is realized by tracking the phase changes along the two fibers. The proposed scheme has the advantage of high signal-to-noise ratio (SNR) and ultra-long sensing range due to the nature of forward transmission and coherent detection. In addition, using forward rather than backward scattering allows detection of high-frequency vibration signals over a long sensing range. More than 50 dB of sensing SNR can be obtained after long-haul transmission. Meanwhile, localization of 400 Hz, 1 kHz and 10 kHz vibrations has been experimentally demonstrated with a spatial resolution of less than 50 m over a total of 1008 km of sensing fiber. The sensing length can be further extended to even trans-oceanic distances using more fiber spans and erbium-doped fiber amplifiers (EDFAs), making it a promising candidate for proactive fault detection and localization in long-haul and ultra-long-haul fiber links. | electrical engineering and systems science
In this note, we study polynomial and rational lemniscates as trajectories of related quadratic differentials. Many classic results can then be proved easily... | mathematics
The IceCube neutrino observatory uses $1\,\mathrm{km}^{3}$ of the natural Antarctic ice near the geographic South Pole as optical detection medium. When charged particles, such as particles produced in neutrino interactions, pass through the ice with relativistic speed, Cherenkov light is emitted. This is detected by IceCube's optical modules and from all these signals a particle signature is reconstructed. A new kind of signature can be detected using light emission from luminescence. This detection channel enables searches for exotic particles (states) which do not emit Cherenkov light and currently cannot be probed by neutrino detectors. Luminescence light is induced by highly ionizing particles passing through matter due to excitation of surrounding atoms. This process is highly dependent on the ice structure, impurities, pressure and temperature which demands an in-situ measurement of the detector medium. For the measurements at IceCube, a $1.7\,\mathrm{km}$ deep hole was used which vertically overlaps with the glacial ice layers found in the IceCube volume over a range of $350\,\mathrm{m}$. The experiment as well as the measurement results are presented. The impact of the results, which enable new kinds of searches for new physics with neutrino telescopes, is discussed. | astrophysics
Can we use deep learning to predict when deep learning works? Our results suggest the affirmative. We created a dataset by training 13,500 neural networks with different architectures, on different variations of spiral datasets, and using different optimization parameters. We used this dataset to train task-independent and architecture-independent generalization gap predictors for those neural networks. We extend Jiang et al. (2018) to also use DNNs and RNNs and show that they outperform the linear model, obtaining $R^2=0.965$. We also show results for architecture-independent, task-independent, and out-of-distribution generalization gap prediction tasks. Both DNNs and RNNs consistently and significantly outperform linear models, with RNNs obtaining $R^2=0.584$. | statistics |
We present a novel approach for the analysis of multivariate case-control georeferenced data using Bayesian inference in the context of disease mapping, where the spatial distribution of different types of cancers is analyzed. Extending other methodology in point pattern analysis, we propose a log-Gaussian Cox process for the point patterns of cases and controls, which accounts for risk factors, such as exposure to pollution sources, and includes a term to measure spatial residual variation. For each disease, its intensity is modeled on a baseline spatial effect (estimated from both controls and cases), a disease-specific spatial term and the effects of covariates that account for risk factors. By fitting these models the effect of the covariates on the set of cases can be assessed, and the residual spatial terms can be easily compared to detect areas of high risk not explained by the covariates. Three different types of effects to model exposure to pollution sources are considered. First of all, a fixed effect on the distance to the source. Next, smooth terms on the distance are used to model non-linear effects by means of a discrete random walk of order one and a Gaussian process in one dimension with a Mat\'ern covariance. Models are fit using the integrated nested Laplace approximation (INLA) so that the spatial terms are approximated using an approach based on solving Stochastic Partial Differential Equations (SPDE). Finally, this new framework is applied to a dataset of three different types of cancer and a set of controls from Alcal\'a de Henares (Madrid, Spain). Covariates available include the distance to several polluting industries and socioeconomic indicators. Our findings point to a possible risk increase due to the proximity to some of these industries. | statistics
The unstable radium nucleus is appealing for probing new physics due to its high mass, octupole deformation and energy level structure. Ion traps, with long hold times and low particle numbers, are excellent for work with radioactive species, such as radium and radium-based molecular ions, where low activity, and hence low total numbers, is desirable. We address the challenges associated with the lack of stable isotopes in a tabletop experiment with a low-activity ($\sim 10 \ \mu\mathrm{Ci}$) source where we laser-cool trapped radium ions. With a laser-cooled radium ion we measured the $7p\ ^2P_{1/2}^o$ state's branching fractions to the ground state, $7s\ ^2S_{1/2}$, and a metastable excited state, $6d\ ^2D_{3/2}$, to be $p=0.9104(7)$ and $0.0896(7)$, respectively. With a nearby tellurium reference line we measured the $7s\ ^2S_{1/2} \rightarrow 7p\ ^2P_{1/2}^o$ transition frequency, 640.09663(6) THz. | physics |
I Zwicky 1 is the prototype optical narrow line Seyfert 1 galaxy. It is also a nearby ($z=0.0611$), luminous QSO, accreting close to the Eddington limit. XMM-Newton observations of I Zw 1 in 2015 reveal the presence of a broad and blueshifted P-Cygni iron K profile, as observed through a blue-shifted absorption trough at 9 keV and a broad excess of emission at 7 keV in the X-ray spectra. The profile can be well fitted with a wide angle accretion disk wind, with an outflow velocity of at least $-0.25c$. In this respect, I Zw 1 may be analogous to the prototype fast wind detected in the QSO PDS 456, while its overall mass outflow rate is scaled down by a factor of 50 due to its lower black hole mass. The mechanical power of the fast wind in I Zw 1 is constrained to within $5-15$% of Eddington, while its momentum rate is of the order unity. Upper limits placed on the energetics of any molecular outflow, from its CO profile measured by IRAM, appear to rule out the presence of a powerful, large scale, energy conserving wind in this AGN. We consider whether I Zw 1 may be similar to a number of other AGN, such as PDS 456, where the large scale galactic outflow is much weaker than what is anticipated from models of energy conserving feedback. | astrophysics
In this paper, the Cauchy problem for the fractional MHD system with the Hall and ion-slip effects is considered. By exploring the structure of the semilinear and quasilinear terms, we prove the global existence of solutions for a class of large initial data. Both the velocity and magnetic fields can be arbitrarily large in $H^3(\mathbb{R}^3)$. | mathematics
We give a precise estimate for the number of lattice points in certain bounded subsets of $\mathbb{R}^{n}$ that involve `hyperbolic spikes' and occur naturally in multiplicative Diophantine approximation. We use Wilkie's o-minimal structure $\mathbb{R}_{\exp}$ and expansions thereof to formulate our counting result in a general setting. We give two different applications of our counting result. The first one establishes nearly sharp upper bounds for sums of reciprocals of fractional parts, and thereby sheds light on a question raised by L\^e and Vaaler, extending previous work of Widmer and of the author. The second application establishes new examples of linear subspaces of Khintchine type thereby refining a theorem by Huang and Liu. For the proof of our counting result we develop a sophisticated partition method which is crucial for further upcoming work on sums of reciprocals of fractional parts over distorted boxes. | mathematics |
Diffraction-based methods have become an invaluable tool for the detailed assessment of residual strain and stress within experimental mechanics. These methods typically measure a component of the average strain within a gauge volume. It is commonplace to treat these measurements as point measurements and to interpolate and extrapolate their values over the region of interest. Such interpolations are not guaranteed to satisfy the physical properties of equilibrium and applied loading conditions. In this paper, we provide a numerically robust algorithm for inferring two dimensional, biaxial strain fields over a region of interest from diffraction-based measurements that satisfy equilibrium and any known loading conditions. By correctly treating the measurements as gauge volume averages rather than point-wise values, the algorithm has better performance when large gauge volumes, and consequently shorter beam-times, are used. This algorithm is demonstrated on simulation and experimental data and compared to natural neighbour interpolation with linear extrapolation, and is shown to provide a more accurate strain field. | physics
Electron correlations play a central role in iron-based superconductors. In these systems, multiple Fe $3d$-orbitals are active in the low-energy physics, and they are not all degenerate. For these reasons, the role of orbital-selective correlations has been an active topic in the study of the iron-based systems. In this paper, we survey the recent developments on the subject. For the normal state, we emphasize the orbital-selective Mott physics that has been extensively studied, especially in the iron chalcogenides, in the case of electron filling $n \sim 6$. In addition, the interplay between orbital selectivity and electronic nematicity is addressed. For the superconducting state, we summarize the initial ideas for orbital-selective pairing, and discuss the recent explosive activities along this direction. We close with some perspectives on several emerging topics. These include the evolution of the orbital-selective correlations, magnetic and nematic orders and superconductivity as the electron filling factor is reduced from $6$ to $5$, as well as the interplay between electron correlations and topological bandstructure in iron-based superconductors. | condensed matter |
A broadband sound absorption attained by a deep-subwavelength structure is of great interest to the noise control community especially for extremely low frequencies (20-100 Hz) in room acoustics. Coupling multiple different resonant unit cells has been an effective strategy to achieve a broadband sound absorption. In this paper, we report on an analytical, numerical and experimental study of a low-frequency broadband (50-63 Hz, one third octave band), high absorption (average absorption coefficient around 93%), near-omnidirectional (0{\deg}-75{\deg}) acoustic metasurface absorber composed of 4 coupled unit cells at a thickness of 15.4 cm (1/45 of the wavelength at 50 Hz). The absorption by such a deep-subwavelength structure occurs due to a strong coupling between unit cells, which is realized by carefully engineering geometric parameters of each unit cell, especially the judicious assignment of lateral size to each unit cell. To further broaden the bandwidth (50-100 Hz, one octave band), a design with 19 unit cells coupled in a supercell is analytically studied to achieve an average absorption coefficient of 85% for a wide angle range (0{\deg}-75{\deg}) at a thickness of 20 cm (1/34 of wavelength at 50 Hz). Two additional degrees of freedom, the lateral size of supercell and the number of unit cells in the supercell, are demonstrated to facilitate such a causally optimal design which is close to the ideally causal optimality. The proposed design methodology may solve the long-standing issue for low frequency absorption in room acoustics. | physics |
The study of DD reactions, especially with polarized reactants, helps to better understand the processes taking place in nuclear astrophysics and fusion reactors. At PNPI Gatchina, Russia, the PolFusion experiment, with crossing of two polarized beams, i.e. a deuteron and a deuterium beam, is able to measure angular distributions of the differential cross section and, therefore, the spin-correlation coefficients with different combinations of the adjustable nuclear polarization of both beams at a center-of-mass energy between 10 and 100 keV. Some improvements and fine-tuning of the polarized ion source have been performed and are presented. The atomic beam source for the jet target has been modified as well. An unpolarized experiment with a 10 keV ion beam and heavy water vapor as a target has been carried out with successful registration of the fusion products. | physics
Lebesgue sampling is based on collecting information depending on the values of the signal. Although interpolation methods for periodic sampling have been a topic of research for a long time, there is a lack of studies on methods capable of taking advantage of the Lebesgue sampling characteristics to reconstruct time series more accurately. Indeed, Lebesgue sampling contains additional information about the shape of the signal in-between two sampled points. Using this information would allow us to generate an interpolated signal closer to the original one; that is to say, the average distance between the interpolated signal and the original signal will be smaller than with other interpolation methods. In this paper, we propose two novel time series interpolation methods specifically designed for Lebesgue sampling, called ZeLiC and ZeChipC. ZeLiC is an algorithm that combines both Zero-order hold interpolation and Linear interpolation to reconstruct time series. ZeChipC follows a similar idea, combining Zero-order hold and PCHIP interpolation. Zero-order hold interpolation is favourable for interpolating abrupt changes, while Linear and PCHIP interpolation are more suitable for smooth transitions. In order to apply one method or the other, we have introduced a new concept called the tolerated region. ZeLiC and ZeChipC include a new functionality to adapt the reconstructed signal to concave/convex regions. The proposed methods have been compared with the state-of-the-art interpolation methods using Lebesgue sampling and have offered higher average performance. Additionally, we have compared the performance of the methods using both Riemann and Lebesgue sampling with an approximately equal number of sampled points. The performance of the combination "Lebesgue sampling with ZeChipC interpolation method" is clearly much better than any other combination. | electrical engineering and systems science
It is known that wormhole geometries can be found by solving the Einstein field equations while tolerating the violation of the null energy condition (NEC). Violation of the NEC is not possible for physical matter distributions; however, it can be achieved by considering distributions of "exotic matter". The main purpose of this work is to find generating functions comprising wormhole-like geometry and to discuss the nature of these generating functions. We have used the approach of Herrera et al. \cite{1} for obtaining generating functions in the background of a wormhole spacetime. Here we adopt two approaches to solving the field equations to find wormhole geometry. In the first method, we assume the redshift function $f(r)$ and the shape function $b(r)$ and solve for the generating functions. In another attempt we assume generating functions and redshift functions and then try to find the shape functions of the wormholes. | physics
We study a model of two scalar fields with a hyperbolic field space and show that it reduces to a single-field Dirac-Born-Infeld (DBI) model in the limit where the field space becomes infinitely curved. We apply the de Sitter swampland conjecture to the two-field model and take the same limit. It is shown that in the limit, all quantities appearing in the swampland conjecture remain well-defined within the single-field DBI model. Based on a consistency argument, we then speculate that the condition derived in this way can be considered as the de Sitter swampland conjecture for a DBI scalar field by its own. The condition differs from those proposed in the literature and only the one in the present paper passes the consistency argument. As a byproduct, we also point out that one of the inequalities in the swampland conjecture for a multi-field model with linear kinetic terms should involve the lowest mass squared for scalar perturbations and that this quantity can be significantly different from the lowest eigenvalue of the Hessian of the potential in the local orthonormal frame if the field space is highly curved. Finally, we propose an extension of the de Sitter swampland conjecture to a more general scalar field with the Lagrangian of the form $P(X,\varphi)$, where $X=-(\partial\varphi)^2/2$. | high energy physics theory |
Results are presented from numerical simulations of the flat-space nonlinear Maxwell-Klein-Gordon equations demonstrating deep inelastic scattering of $m=1$ vortices for a range of Ginzburg-Landau (or Abelian-Higgs) parameters ($\kappa$), impact parameters ($b$), and initial velocities ($v_0$). The threshold ($v_0^*$) of right-angle scattering is explored for head-on ($b=0$) collisions by varying $v_0$. Solutions obey time-scaling laws, $T\propto \alpha\ln(v_0-v_0^*)$, with $\kappa$-dependent scaling exponents, $\alpha$, and have $v_0^*$ that appear not to have the previously reported upper bound. The arbitrarily long-lived static intermediate attractor at criticality ($v_0=v_0^*$) is observed to be the $\kappa$-specific $m=2$ vortex solution. Scattering angles are observed for off-axis ($b\neq 0$) collisions for a wide range of $b$, $v_0$, and $\kappa$. It is shown that for arbitrarily small impact parameters ($b\rightarrow 0$), the unstable $\kappa$-dependent $m=2$ "critical" vortex is an intermediate attractor and decays with a $\kappa$-\emph{independent} scattering angle of $135^{\circ}$, as opposed to either of the well-known values of $180^{\circ}$ or $90^{\circ}$ for $b=0$. | high energy physics phenomenology
This article focuses on numerical issues in maximum likelihood parameter estimation for Gaussian process regression (GPR). It investigates the origin of these numerical issues and provides simple but effective improvement strategies. This work targets a basic problem, but a host of studies, particularly in the literature on Bayesian optimization, rely on off-the-shelf GPR implementations. For the conclusions of these studies to be reliable and reproducible, robust GPR implementations are critical. | statistics
In this paper we present Monte Carlo N-Particle (MCNP) simulations of the system for underwater threat detection using neutron activation analysis developed in the SABAT project. The simulated system is based on a D-T neutron generator emitting 14~MeV neutrons without associated $\alpha$ particle detection and equipped with a LaBr$_3$:Ce scintillation detector offering superior energy resolution and allowing for precise identification of activation $\gamma$ quanta. The performed simulations show that using the neutron activation analysis method with the designed geometry we are able to identify $\gamma$-rays from hydrogen, carbon, sulphur and chlorine originating from mustard gas in a sea water environment. Our results show that the most efficient way of mustard gas detection is to compare the integral peak ratio for Cl and H. | physics |
A significant opportunity for synergy between pure research and asteroid resource research exists. We provide an overview of the state of the art in asteroid resource utilization, and highlight where we can accelerate the closing of knowledge gaps, leading to the utilization of asteroid resources for growing economic productivity in space. | astrophysics |
With respect to spatial overlap, CNN-based segmentation of short axis cardiovascular magnetic resonance (CMR) images has achieved a level of performance consistent with inter-observer variation. However, conventional training procedures frequently depend on pixel-wise loss functions, limiting optimisation with respect to extended or global features. As a result, inferred segmentations can lack spatial coherence, including spurious connected components or holes. Such results are implausible, violating the anticipated topology of image segments, which is frequently known a priori. Addressing this challenge, published work has employed persistent homology, constructing topological loss functions for the evaluation of image segments against an explicit prior. Building a richer description of segmentation topology by considering all possible labels and label pairs, we extend these losses to the task of multi-class segmentation. These topological priors allow us to resolve all topological errors in a subset of 150 examples from the ACDC short axis CMR training data set, without sacrificing overlap performance. | electrical engineering and systems science
We theoretically study the quench dynamics of induced anisotropy of a large-spin magnetic molecule coupled to spin-polarized ferromagnetic leads. The real-time evolution is calculated by means of the time-dependent density-matrix numerical renormalization group method implemented within the matrix product states framework, which takes into account all correlations in a very accurate manner. We determine the system's response to a quench in the spin-dependent coupling to ferromagnetic leads. In particular, we focus on the transient dynamics associated with crossing from the weak to the strong coupling regime, where the Kondo correlations become important. The dynamics is examined by calculating the time-dependent expectation values of the spin-quadrupole moment and the associated spin operators. We identify the relevant time scales describing the quench dynamics and determine the influence of the molecule's effective exchange coupling and the leads' spin polarization on the dynamical behavior of the system. Furthermore, the generalization of our predictions to large values of the molecule's spin is considered. Finally, we analyze the effect of finite temperature and show that it gives rise to a reduction of magnetic anisotropy by strong suppression of the time-dependent spin-quadrupole moment due to thermal fluctuations. | condensed matter
We study the spectral properties of an overoccupied gluonic system far from equilibrium. Using classical Yang-Mills simulations and linear response theory, we determine the statistical and spectral functions. We measure dispersion relations and damping rates of transversally and longitudinally polarized excitations in the gluonic plasma, and also study further structures in the spectral function. | high energy physics phenomenology |
Quantum technology is seeing a remarkable explosion in interest due to a wave of successful commercial technology. As a wider array of engineers and scientists are needed, it is time we rethink quantum educational paradigms. Current approaches often start from classical physics, linear algebra, or differential equations. This chapter advocates for beginning with probability theory. In the approach outlined in this chapter, there is less in the way of explicit axioms of quantum mechanics. Instead the historically problematic measurement axiom is inherited from probability theory where many philosophical debates remain. Although not a typical route in introductory material, this route is nonetheless a standard vantage on quantum mechanics. This chapter outlines an elementary route to arrive at the Schr\"odinger equation by considering allowable transformations of quantum probability functions (density matrices). The central tenet of this chapter is that probability theory provides the best conceptual and mathematical foundations for introducing the quantum sciences. | physics |
Regression of data generated in simulations or experiments has important implications in sensitivity studies, uncertainty analysis, and prediction accuracy. Depending on the nature of the physical model, data points may not be evenly distributed. It is not often practical to choose all points for regression of a model because doing so doesn't always guarantee a better fit. Fitness of the model is highly dependent on the number of data points and the distribution of the data along the curve. In this study, the effect of the number of points selected for regression is investigated and various schemes aimed at processing regression data points are explored. Time series data, i.e., output varying with time, is our prime interest, mainly the temperature profile from an enhanced geothermal system. The objective of the research is to find a better scheme for choosing a fraction of data points from the entire set to obtain a better fit of the model without losing any features or trends in the data. A workflow is provided to summarize the entire protocol of data preprocessing, regression of the mathematical model using training data, model testing, and error analysis. Six different schemes are developed to process data by setting criteria such as equal spacing along axes (X and Y), equal distance between two consecutive points on the curve, constraints on the angle of curvature, etc. As an example of the application of the proposed schemes, 1 to 20% of the data generated from the temperature change of a typical geothermal system is chosen from a total of 9939 points. It is shown that the number of data points, to a degree, has a negligible effect on the fitted model depending on the scheme. The proposed data processing schemes are ranked in terms of R2 and NRMSE values. | physics
We introduce a general framework for monitoring, modelling, and predicting the recruitment to multi-centre clinical trials. The work is motivated by overly optimistic and narrow prediction intervals produced by existing time-homogeneous recruitment models for multi-centre recruitment. We first present two tests for detection of decay in recruitment rates, together with a power study. We then introduce a model based on the inhomogeneous Poisson process with monotonically decaying intensity, motivated by recruitment trends observed in oncology trials. The general form of the model permits adaptation to any parametric curve-shape. A general method for constructing sensible parameter priors is provided and Bayesian model averaging is used for making predictions which account for the uncertainty in both the parameters and the model. The validity of the method and its robustness to misspecification are tested using simulated datasets. The new methodology is then applied to oncology trial data, where we make interim accrual predictions, comparing them to those obtained by existing methods, and indicate where unexpected changes in the accrual pattern occur. | statistics |
We propose a new distribution, called the soft tMVN distribution, which provides a smooth approximation to the truncated multivariate normal (tMVN) distribution with linear constraints. An efficient blocked Gibbs sampler is developed to sample from the soft tMVN distribution in high dimensions. We provide theoretical support to the approximation capability of the soft tMVN and provide further empirical evidence thereof. The soft tMVN distribution can be used to approximate simulations from a multivariate truncated normal distribution with linear constraints, or itself as a prior in shape-constrained problems. | statistics |
We experimentally study a broadband implementation of the atomic frequency comb (AFC) rephasing protocol with a cryogenically cooled Pr$^{3+}$:Y$_2$SiO$_5$ crystal. To allow for storage of broadband pulses, we explore a novel regime where the input photonic bandwidth closely matches the inhomogeneous broadening of the material $(\sim5\,\textrm{GHz})$, thereby significantly exceeding the hyperfine ground and excited state splitting $(\sim10\,\textrm{MHz})$. Through an investigation of different AFC preparation parameters, we measure a maximum efficiency of $10\%$ after a rephasing time of $12.5\,$ns. With a suboptimal AFC, we witness up to 12 rephased temporal modes. | quantum physics |
We employ an effective field theory to study the detectability of sub-GeV dark matter through its interaction with the gapless excitations of superfluid helium-4. In a quantum field theory language, the possible interactions between the dark matter and the superfluid phonon are solely dictated by symmetry. We compute the rate for the emission of one and two phonons, and show that these two observables combined allow for a large exclusion region for the dark matter masses. Our approach allows a direct calculation of the differential distributions, even though it is limited only to the region of softer phonon excitations, where the effective field theory is well defined. The method presented here is easily extendible to different models of dark matter. | high energy physics phenomenology |
The new version of the gedanken experiment proposed by Sorce and Wald has been used to examine the weak cosmic censorship conjecture (WCCC) for black holes at the second-order approximation of the matter fields perturbation. However, only considering the perturbation up to the second-order approximation is incomplete because there is an optimal option such that the existence condition of the event horizon vanishes at second order. For this circumstance, we cannot judge whether the WCCC is satisfied at this order. In our investigation, the $k$th-order perturbation inequality is derived in general. Using the inequalities, we examine the WCCC for nearly extremal Reissner-Nordstr\"{o}m black holes at higher-order approximation. It is shown that the WCCC cannot be violated after the perturbation. From this result, it can be indicated that the WCCC is strictly satisfied at the perturbation level for nearly extremal RN black holes. | high energy physics theory
This short paper presents saturation-based algorithms for homogenization and elimination. These algorithms can compute elimination ideals by using syzygies and an ideal membership test, hence they work with any monomial order, in particular without the use of block-elimination orders. The saturation used is a translation of the geometric fact that the projective closure of an affine scheme has no components in the hyperplane at infinity. | mathematics
We use the scattering matrix formalism to analyze photon blockade in coherently-driven CQED systems with a weak drive. By approximating the weak coherent drive by an input single- and two-photon Fock state, we reduce the computational complexity of the transmission and the two-photon correlation function from exponential to polynomial in the number of emitters. This enables us to easily analyze cavity-based systems containing $\sim$50 quantum emitters with modest computational resources. Using this approach we study the coherence statistics of polaritonic photon blockade while increasing the number of emitters for resonant and detuned multi-emitter CQED systems --- we find that increasing the number of emitters worsens photon blockade in resonant systems, and improves it in detuned systems. We also analyze the impact of inhomogeneous broadening in the emitter frequencies on both polaritonic and subradiant photon blockade through this system. | quantum physics |
In models defined on an inhomogeneous background the propagators depend on two space-time momenta rather than on one momentum as in homogeneous systems. Therefore, the conventional Feynman diagrams contain extra integrations over momenta, which complicate calculations. We propose to express all amplitudes through the Wigner transformed propagators. This approach allows us to reduce the number of integrations. As a price for this, the ordinary products of functions are replaced by Moyal products. The corresponding rules of the diagram technique are formulated using the example of a model with fermions interacting via the exchange of scalar bosons. The extension of these rules to other models is straightforward. This approach may simplify calculations in certain particular cases. The most evident one is the calculation of various non-dissipative currents. | high energy physics phenomenology
In this paper we propose a calculus for expressing algorithms for programming language transformations. We present the type system and operational semantics of the calculus, and we prove that it is type sound. We have implemented our calculus, and we demonstrate its applicability with common examples in programming languages. As our calculus manipulates inference systems, our work can, in principle, be applied to logical systems. | computer science
Simultaneous processing of multiple video sources requires each pixel in a frame from a video source to be processed synchronously with the pixels at the same spatial positions in corresponding frames from the other video sources. However, simultaneous processing is challenging as corresponding frames from different video signals provided by multiple sources have time-varying delay because of the electrical and mechanical restrictions inside the video source hardware that cause deviations in the corresponding frame rates. Researchers overcome the aforementioned challenges either by utilizing ready-made video processing systems or by designing and implementing a custom system tailored to their specific application. These video processing systems lack flexibility in handling different application requirements such as the required number of video sources and outputs, video standards, or frame rates of the input/output videos. In this paper, we present a design for a flexible simultaneous video processing architecture that is suitable for various applications. The proposed architecture is upgradeable to deal with multiple video standards, scalable to process/produce a variable number of input/output videos, and compatible with most video processors. Moreover, we present in detail the analog/digital mixed-signal and power distribution considerations used in designing the proposed architecture. As a case study application of the proposed flexible architecture, we utilized the architecture for a realization of a simultaneous video processing system that performs video fusion from visible and near-infrared video sources in real time. We make available the source files of the hardware design along with the bill of materials (BOM) of the case study to be a reference for researchers who intend to design and implement simultaneous multi-video processing systems. | electrical engineering and systems science
Recent works have revealed that quantum extremal islands can contribute to the fine-grained entropy of black hole radiation reproducing the unitary Page curve. In this paper, we use these results to assess if an observer in de Sitter space can decode information hidden behind their cosmological horizon. By computing the fine-grained entropy of the Gibbons-Hawking radiation in a region where gravity is weak we find that this is possible, but the observer's curiosity comes at a price. At the same time the island appears, which happens much earlier than the Page time, a singularity forms which the observer will eventually hit. We arrive at this conclusion by studying Jackiw-Teitelboim gravity in de Sitter space. We emphasize the role of the observer collecting radiation, breaking the thermal equilibrium studied so far in the literature. By analytically solving for the backreacted geometry we show how an island appears in this out-of-equilibrium state. | high energy physics theory |
Floating Offshore Wind Turbines (FOWTs) operate in the harsh marine environment with limited accessibility and maintainability. Not only are failures more likely to occur than in land-based turbines, but corrective maintenance is also more expensive. In the present study, a mixed model- and signal-based Fault Diagnosis (FD) architecture is developed to detect and isolate critical faults in FOWTs. More specifically, a model-based scheme is developed to detect and isolate the faults associated with the turbine system. It is based on a fault detection and approximation estimator and fault isolation estimators, with time-varying adaptive thresholds to guarantee against false alarms. In addition, a signal-based scheme is established, within the proposed architecture, for detecting and isolating two representative mooring line faults. For the purpose of verification, a 10MW FOWT benchmark is developed and its operating conditions, which contain predefined faults, are simulated by extending the high-fidelity simulator. Based on it, the effectiveness of the proposed architecture is illustrated. In addition, the advantages and limitations are discussed by comparing its fault detection results to those delivered by other approaches. Results show that the proposed architecture has the best performance in detecting and isolating the critical faults in FOWTs under diverse operating conditions. | electrical engineering and systems science
We demonstrate the emergence of an anomalous Hall effect in chiral magnetic textures which is neither proportional to the net magnetization nor to the well-known emergent magnetic field that is responsible for the topological Hall effect. Instead, it appears already at linear order in the gradients of the magnetization texture and exists for one-dimensional magnetic textures such as domain walls and spin spirals. It receives a natural interpretation in the language of Alain Connes' noncommutative geometry. We show that this chiral Hall effect resembles the familiar topological Hall effect in essential properties while its phenomenology is distinctly different. Our findings make the re-interpretation of experimental data necessary, and offer an exciting twist in engineering the electrical transport through magnetic skyrmions. | condensed matter |
As a concrete application of multi-view learning, multi-view classification improves the traditional classification methods significantly by integrating various views optimally. Although most of the previous efforts have demonstrated the superiority of multi-view learning, it can be further improved by comprehensively embedding more powerful cross-view interactive information and a more reliable multi-view fusion strategy. To fulfill this goal, we propose a novel multi-view learning framework to improve multi-view classification in the above-mentioned two aspects. That is, we seamlessly embed various intra-view information, cross-view multi-dimension bilinear interactive information, and a new view ensemble mechanism into a unified framework to make a decision via the optimization. In particular, we train different deep neural networks to learn various intra-view representations, and then dynamically learn multi-dimension bilinear interactive information from different bilinear similarities via the bilinear function between views. After that, we adaptively fuse the representations of multiple views by flexibly tuning the view-weight parameters, which not only avoids the trivial solution for the weights but also provides a new way to select a few discriminative views that are beneficial for making a decision in multi-view classification. Extensive experiments on six publicly available datasets demonstrate the effectiveness of the proposed method. | computer science
Four-wave-mixing-based quantum cascade laser frequency combs (QCL-FC) are a powerful photonic tool, driving a recent revolution in major molecular fingerprint regions, i.e. mid- and far-infrared domains. Their compact and frequency-agile design, together with their high optical power and spectral purity, promise to deliver an all-in-one source for the most challenging spectroscopic applications. Here, we demonstrate a metrological-grade hybrid dual comb spectrometer, combining the advantages of a THz QCL-FC with the accuracy and absolute frequency referencing provided by a free-standing, optically-rectified THz frequency comb. A proof-of-principle application to methanol molecular transitions is presented. The multi-heterodyne molecular spectra retrieved provide state-of-the-art results in line-center determination, achieving the same precision as currently available molecular databases. The devised setup provides a solid platform for a new generation of THz spectrometers, paving the way to more refined and sophisticated systems exploiting full phase control of QCL-FCs, or Doppler-free spectroscopic schemes. | physics |
Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics are more complex and less understood. In this paper, we analyze gradient-based methods with momentum on simple games. We prove that alternating updates are more stable than simultaneous updates. Next, we show both theoretically and empirically that alternating gradient updates with a negative momentum term achieves convergence in a difficult toy adversarial problem, but also on the notoriously difficult to train saturating GANs. | computer science |
Go has long been considered a testbed for artificial intelligence. By introducing certain quantum features, such as superposition and collapse of the wavefunction, we experimentally demonstrate a quantum version of Go using correlated photon pairs entangled in the polarization degree of freedom. The total dimension of the Hilbert space of the generated states grows exponentially as two players take turns to place the stones in time series. As nondeterministic and imperfect-information games are more difficult to solve with present-day technology, we find, excitingly, that the inherent randomness in quantum physics can bring the game a nondeterministic trait, which does not exist in the classical counterpart. Some quantum resources, like coherence or entanglement, can also be encoded to represent the state of quantum stones. Adjusting the quantum resource may vary the average imperfect information (by comparison, classical Go is a perfect-information game) of a single game. We further verify its non-deterministic feature by showing the unpredictability of the time series data obtained from different classes of quantum state. Finally, by comparing quantum Go with a few typical games that are widely studied in artificial intelligence, we find that quantum Go can cover a wide range of game difficulties rather than a single point. Our results establish a paradigm of inventing new games with quantum-enabled difficulties by harnessing inherent quantum features and resources, and provide a versatile platform for the test of new algorithms for both classical and quantum machine learning. | quantum physics