text | label |
---|---|
Context. While the shapes of many observed bow shocks can be reproduced by simple astrosphere models, more elaborate approaches have recently been used to explain differing observable structures. Aims. By placing perturbations of an otherwise homogeneous interstellar medium in front of the astrospheric bow shock of the runaway blue supergiant $\lambda$ Cephei, the observable structure of the model astrosphere is significantly altered, providing insight into the origin of perturbed bow shock images. Methods. Three-dimensional single-fluid magnetohydrodynamic (MHD) models of stationary astrospheres were subjected to various types of perturbations and simulated until stationarity was reached again. As examples, simple perturbations of the available MHD parameters (number density, bulk velocity, temperature, and magnetic field) as well as a more complex perturbation were chosen. Synthetic observations were generated by line-of-sight integration of the model data, producing H$\alpha$, $70\,\mu$m dust emission, and bremsstrahlung maps of the perturbed astrosphere's evolution. Results. The resulting shock structures and observational images differ strongly depending on the type of the injected perturbation and the viewing angles, forming arc-like protrusions or bifurcations of the bow shock structure, as well as rings, arcs, and irregular structures detached from the bow shock. | astrophysics |
We develop a procedure of generalized continuous dynamical decoupling (GCDD) for an ensemble of $d$-level systems (qudits), allowing one to protect the action of an arbitrary multi-qudit gate from general noise. We first present our GCDD procedure for the case of an arbitrary qudit and apply it to the case of a Hadamard gate acting on a qutrit. This is done using a model that, in principle, could be implemented using the three magnetic hyperfine states of the ground energy level of $^{87}\mathrm{Rb}$ and laser beams whose intensities and phases are modulated according to our prescription. We show that this model allows one to generate continuously all the possible SU(3) group operations which are, in general, needed to apply the GCDD procedure. We finally show that our method can be extended to the case of an ensemble of qudits, identical or not. | quantum physics |
Repeatedly alternating the sign of dispersion along a waveguide substantially increases the bandwidth-to-input-peak-power efficiency of supercontinuum generation (SCG). Here, we explore, theoretically and numerically, how to optimize sign-alternating dispersion waveguides for nonlinear pulse compression so that the compression ratio is maximized. Exploring a previously unknown SCG phase effect, we find emergent phase behavior unique to these structures, in which the spectral phase converges to a parabolic profile independent of uncompensated higher-order dispersion. The combination of an easy-to-compress phase spectrum with low input power requirements then makes sign-alternating dispersion a scheme for high-quality nonlinear pulse compression that removes the need for high-powered lasers. We also present a new scheme for the design of practical waveguide segments that can compress SCG pulses to near transform-limited durations, which is integral to the design of these alternated waveguides and, in general, to nonlinear pulse compression experiments. We conclude by showing how compression can be maximized for alternating-dispersion waveguides within the integrated photonics platform, demonstrating compression to two optical cycles. | physics |
We consider the zero-energy deformations of periodic origami sheets with generic crease patterns. Using a mapping from the linear folding motions of such sheets to force-bearing modes, in conjunction with the Maxwell-Calladine index theorem, we derive a relation between the number of linear folding motions and the number of rigid body modes that depends only on the average coordination number of the origami's vertices. This supports the recent result by Tachi, which shows that periodic origami sheets with triangular faces exhibit two-dimensional spaces of rigidly foldable cylindrical configurations. We also find, through analytical calculation and numerical simulation, branching of this configuration space from the flat state due to geometric compatibility constraints that prohibit finite Gaussian curvature. The same counting argument leads to pairing of spatially varying modes at opposite wavenumber in triangulated origami, preventing topological polarization but permitting a family of zero-energy deformations in the bulk that may be used to reconfigure the origami sheet. | condensed matter |
The present paper reports a novel behavior of regular polygons with n sides filled to varying degrees with granular materials. These comprise a set of hollow polygons produced on a 3D printer and a single larger hollow hexagon fabricated with wooden sides and clear Plexiglas faces. Empty or full polygons stop immediately at shallow ramp angles, and roll the full length of the ramp for steep ramp angles, approaching a terminal velocity. This is consistent with results previously reported by other investigators for rolling solid polygons and partially filled cylinders. In contrast, partially filled polygons released at shallow ramp angles accelerate to a "terminal velocity," but then come to an abrupt stop. The distance traveled is reproducible, depends on the ramp angle, the number of sides, and the volume fill ratio, and is minimized when the polygon is filled to 0.4 of its total internal volume. For larger ramp angles, the partially filled polygons again approach a "terminal velocity," which is minimized, again, near the fill ratio of 0.4. A simple model is introduced, but it is successful only in replicating the overall trend in velocity as a function of time for the large-angle cases. | physics |
Energetic neutral atoms (ENA) are an important tool for investigating the structure of the heliosphere. Recently, it was observed that fluxes of ENAs (with energy $\le$ 55 keV) coming from the upwind and downwind regions of the heliosphere are similar in strength. This led the authors of these observations to hypothesize that the heliosphere is bubble-like rather than comet-like, meaning that it has no extended tail. We investigate the directional distribution of the ENA flux for a wide energy range (3--88 keV) including the observations from IBEX (Interstellar Boundary Explorer), INCA (Ion and Neutral Camera, on board Cassini), and HSTOF (High energy Suprathermal Time Of Flight sensor, on board SOHO, the Solar and Heliospheric Observatory). An essential element is the model of pickup ion acceleration at the termination shock (TS) proposed by Zank. We use state-of-the-art models of the global heliosphere, interstellar neutral gas density, and pickup ion distributions. The results, based on the "comet-like" model of the heliosphere, are close in flux magnitude to ENA observations by IBEX, HSTOF and partly by INCA (except for the 5.2-13.5 keV energy channel). We find that the ENA flux from the tail dominates at high energy (in agreement with HSTOF, but not INCA). At low energy, our comet-like model produces similar strengths of the ENA fluxes from the upwind and downwind directions, which therefore removes this similarity as a compelling argument for a bubble-like heliosphere. | astrophysics |
We consider the most general set of integrable deformations extending the $T\bar{T}$ deformation of two-dimensional relativistic QFTs. They are CDD deformations of the theory's factorised S-matrix related to the higher-spin conserved charges. Using a mirror version of the generalised Gibbs ensemble, we write down the finite-volume expectation value of the higher-spin charges, and derive a generalised flow equation that every charge must obey under a generalised $T\bar{T}$ deformation. This also reproduces the known flow equations on the nose. | high energy physics theory |
The persistence of the hierarchy problem points to a violation of effective field theory expectations. A compelling possibility is that this results from a physical breakdown of EFT, which may arise from correlations between ultraviolet (UV) and infrared (IR) physics. To this end, we study noncommutative field theory (NCFT) as a toy model of UV/IR mixing which generates an emergent infrared scale from ultraviolet dynamics. We explore the range of such theories where ultraviolet divergences are transmogrified into infrared scales, focusing particularly on the properties of Yukawa theory, where we identify a new infrared pole accessible in the $s$-channel of the Lorentzian theory. We further investigate the interplay between UV-finiteness and UV/IR mixing by studying properties of the softly-broken noncommutative Wess-Zumino model as soft terms are varied relative to the cutoff. While the Lorentz violation inherent to noncommutative theories may limit their direct application to the hierarchy problem, these toy models provide general lessons to guide the realization of UV/IR mixing in more realistic theories. | high energy physics phenomenology |
We study the parameter estimation problem for a varying index coefficient model in high dimensions. Unlike most existing works, which iteratively estimate the parameters and link functions, we propose, based on the generalized Stein's identity, computationally efficient estimators for the high-dimensional parameters that do not require estimating the link functions. We consider two different setups where we either estimate each sparse parameter vector individually or estimate the parameters simultaneously as a sparse or low-rank matrix. For all these cases, our estimators are shown to achieve optimal statistical rates of convergence (up to logarithmic terms in the low-rank setting). Moreover, throughout our analysis, we only require the covariate to satisfy certain moment conditions, which is significantly weaker than the Gaussian or elliptically symmetric assumptions that are commonly made in the existing literature. Finally, we conduct extensive numerical experiments to corroborate the theoretical results. | statistics |
We study the classical simulation complexity, in both the weak and strong senses, of matchgate (MG) computations supplemented with all combinations of settings involving inclusion of intermediate adaptive or nonadaptive computational basis measurements, product state or magic and general entangled state inputs, and single- or multi-line outputs. We find a striking parallel to known results for Clifford circuits, after some rebranding of resources. We also give bounds on the amount of classical simulation effort required in the case of limited access to intermediate measurements and entangled inputs. In further settings we show that adaptive MG circuits remain classically efficiently simulable if arbitrary two-qubit entangled input states on consecutive lines are allowed, but become quantum universal for three or more lines. If, furthermore, adaptive measurements in non-computational bases are allowed, even with just computational basis inputs, we again obtain quantum universal power. | quantum physics |
We present a new method for constructing a confidence interval for the mean of a bounded random variable from samples of the random variable. We conjecture that the confidence interval has guaranteed coverage, i.e., that it contains the mean with high probability for all distributions on a bounded interval, for all sample sizes, and for all confidence levels. This new method provides confidence intervals that are competitive with those produced using Student's t-statistic, but does not rely on normality assumptions. In particular, its only requirement is that the distribution be bounded on a known finite interval. | mathematics |
This article reviews the theoretical constraints on the scalar potential of a general extension of the Standard Model that encompasses an $SU(3)_c\times SU(3)_L\times U(1)_X$ gauge symmetry. In this respect, boundedness-from-below is analysed to identify the correct criteria for obtaining the physical minima of the Higgs parameter space. Furthermore, perturbativity and unitarity bounds are discussed in light of the exact diagonalisation of the scalar fields. This study provides a framework for fast numerical checks on specific $331$ Model benchmarks that are relevant for future collider searches. | high energy physics phenomenology |
The fractal necklaces in $d$-dimensional Euclidean space introduced in this paper are a class of connected fractal sets generated by so-called NIFSs, for which some basic topological questions are interesting. We give sufficient conditions for fractal necklaces to have no cut points. In particular, we prove that every stable self-similar necklace in the plane has no cut points, whilst the analogous statement for self-affine necklaces is false. | mathematics |
This paper proposes a deep learning based method for colored transparent object matting from a single image. Existing approaches for transparent object matting often require multiple images and long processing times, which greatly hinders their application to real-world transparent objects. The recently proposed TOM-Net can produce a matte for a colorless transparent object from a single image in a single fast feed-forward pass. In this paper, we extend TOM-Net to handle colored transparent objects by modeling the intrinsic color of a transparent object with a color filter. We formulate the problem of colored transparent object matting as simultaneously estimating an object mask, a color filter, and a refractive flow field from a single image, and present a deep learning framework for learning this task. We create a large-scale synthetic dataset for training our network. We also capture a real dataset for evaluation. Experiments on both synthetic and real datasets show promising results, which demonstrate the effectiveness of our method. | computer science |
Recent works on adversarial perturbations show that there is an inherent trade-off between standard test accuracy and adversarial accuracy. Specifically, they show that no classifier can simultaneously be robust to adversarial perturbations and achieve high standard test accuracy. However, this is contrary to the standard notion that, on tasks such as image classification, humans are robust classifiers with a low error rate. In this work, we show that the main reason behind this confusion is the inexact definition of adversarial perturbation used in the literature. To fix this issue, we propose a slight yet important modification to the existing definition of adversarial perturbation. Based on the modified definition, we show that there is no trade-off between adversarial and standard accuracies; there exist classifiers that are robust and achieve high standard accuracy. We further study several properties of this new definition of adversarial risk and its relation to the existing definition. | statistics |
The survey is devoted to the numerical solution of the fractional equation $A^\alpha u=f$, $0 < \alpha <1$, where $A$ is a symmetric positive definite operator corresponding to a second order elliptic boundary value problem in a bounded domain $\Omega$ in $\mathbb R^d$. The fractional power of the operator is a non-local operator and is defined through the spectrum. Due to growing interest in and demand for applications of sub-diffusion models in physics and engineering, in the last decade several numerical approaches have been proposed, studied, and tested. We consider discretizations of the elliptic operator $A$ by using an $N$-dimensional finite element space $V_h$ or finite differences over a uniform mesh with $N$ grid points. The numerical solution of this equation is based on the following three equivalent representations of the solution: (1) the Dunford-Taylor integral formula (or its equivalent Balakrishnan formula); (2) extension to a second order elliptic problem in $\Omega \times (0,\infty)\subset \mathbb R^{d+1}$ (with a local operator) or reformulation as a pseudo-parabolic equation in the cylinder $(x,t) \in \Omega \times (0,1)$; (3) the spectral representation and the best uniform rational approximation (BURA) of $z^\alpha$ on $[0,1]$. Though substantially different in origin and analysis, these methods can all be interpreted as rational approximations of $A^{-\alpha}$. In this paper we present the main ideas of these methods and the corresponding algorithms, discuss their accuracy and computational complexity, and compare their efficiency and robustness. | mathematics |
We apply perturbative QCD to investigate near threshold heavy quarkonium photoproduction at large momentum transfer. From an explicit calculation, we show that the conventional power counting method will be modified and the three quark Fock state with nonzero orbital angular momentum dominates the near threshold production. It carries a power behavior of $1/(-t)^5$ for the differential cross section. We further comment on the impact of our results on the interpretation of the experimental measurements in terms of the gluonic gravitational form factors of the proton. | high energy physics phenomenology |
The in-medium properties of the heavy spin-3/2 $\Sigma_Q^{*}$, $\Xi_Q^{*}$ and $\Omega_Q^{*}$ baryons, with $Q$ being a $b$ or $c$ quark, are investigated. The shifts in some spectroscopic parameters of these particles due to saturated cold nuclear matter are calculated. The variations of those parameters with respect to changes in the density of the cold nuclear medium are studied as well. It is observed that the parameters of the $\Sigma_Q^{*}$ baryons are considerably affected by the nuclear matter compared to the $\Xi_Q^{*}$ and $\Omega_Q^{*}$ particles, which are barely sensitive to the medium. The results obtained may be used in analyses of the data to be provided by in-medium experiments such as PANDA. | high energy physics phenomenology |
The results of the photometric observations of comet C/2009 P1 (Garradd) performed at the 60-cm Zeiss-600 telescope of the Terskol observatory have been analyzed. During the observations, the comet was at heliocentric and geocentric distances of 1.7 and 2.0 AU, respectively. The CCD images of the comet were obtained in the standard narrowband interference filters suggested by the International research program for comet Hale-Bopp and correspondingly designated the 'Hale-Bopp (HB) set'. These filters were designed to isolate the BC ($\lambda$4450/67 {\AA}), GC ($\lambda$5260/56 {\AA}) and RC ($\lambda$7128/58 {\AA}) continua and the emission bands of C2 ($\lambda$5141/118 {\AA}), CN ($\lambda$3870/62 {\AA}), and C3 ($\lambda$4062/62 {\AA}). From the photometric data, the dust production rate of the comet and its color index and color excess were determined. The concentration of C2, CN, and C3 molecules and their production rates along the line of sight were estimated. The obtained results show that the physical parameters of the comet are close to the mean characteristics typical of dynamically new comets. | astrophysics |
We prove better Strichartz type estimates than expected from the (optimal) dispersion we obtained in our earlier work on a 2d convex model. This follows from taking full advantage of the space-time localization of caustics in the parametrix we obtain, despite their number increasing like the inverse square root of the distance from the source to the boundary. As a consequence, we improve known Strichartz estimates for the wave equation. Several improvements on our previous parametrix construction are obtained along the way and are of independent interest for further applications. | mathematics |
We construct the CFT dual of the first law of spherical causal diamonds in three-dimensional AdS spacetime. A spherically symmetric causal diamond in AdS$_3$ is the domain of dependence of a spatial circular disk with vanishing extrinsic curvature. The bulk first law relates the variations of the area of the boundary of the disk, the spatial volume of the disk, the cosmological constant and the matter Hamiltonian. In this paper we specialize to first-order metric variations from pure AdS to the conical defect spacetime, and the bulk first law is derived following a coordinate based approach. The AdS/CFT dictionary connects the area of the boundary of the disk to the differential entropy in CFT$_2$, and assuming the `complexity=volume' conjecture, the volume of the disk is considered to be dual to the complexity of a cutoff CFT. On the CFT side we explicitly compute the differential entropy and holographic complexity for the vacuum state and the excited state dual to conical AdS using the kinematic space formalism. As a result, the boundary dual of the bulk first law relates the first-order variations of differential entropy and complexity to the variation of the scaling dimension of the excited state, which corresponds to the matter Hamiltonian variation in the bulk. We also include the variation of the central charge with associated chemical potential in the boundary first law. Finally, we comment on the boundary dual of the first law for the Wheeler-DeWitt patch of AdS, and we propose an extension of our CFT first law to higher dimensions. | high energy physics theory |
In this paper, a novel low complexity bit and power loading algorithm is formulated for orthogonal frequency division multiplexing (OFDM) systems operating in fading environments and in the presence of unknown interference. The proposed non-iterative algorithm jointly maximizes the throughput and minimizes the transmitted power, while guaranteeing a target bit error rate (BER) per subcarrier. Closed-form expressions are derived for the optimal bit and power distributions per subcarrier. The performance of the proposed algorithm is investigated through extensive simulations. A performance comparison with the algorithm in [1] shows the superiority of the proposed algorithm with reduced computational effort. | electrical engineering and systems science |
The main focus is the generic freeness of local cohomology modules in a graded setting. The present approach is quite nonrestrictive, solely assuming that the ground coefficient ring is Noetherian. Under additional assumptions, such as when the latter is reduced or a domain, the outcome turns out to be stronger. One important application of these considerations is to the specialization of rational maps and of symmetric and Rees powers of a module. | mathematics |
In this paper, we define a notion of containment and avoidance for subsets of $\mathbb{R}^2$. Then we introduce a new, continuous and super-additive extremal function for subsets $P \subseteq \mathbb{R}^2$ called $px(n, P)$, which is the supremum of $\mu_2(S)$ over all open $P$-free subsets $S \subseteq [0, n]^2$, where $\mu_2(S)$ denotes the Lebesgue measure of $S$ in $\mathbb{R}^2$. We show that $px(n, P)$ fully encompasses the Zarankiewicz problem and more generally the 0-1 matrix extremal function $ex(n, M)$ up to a constant factor. More specifically, we define a natural correspondence between finite subsets $P \subseteq \mathbb{R}^2$ and 0-1 matrices $M_P$, and we prove that $px(n, P) = \Theta(ex(n, M_P))$ for all finite subsets $P \subseteq \mathbb{R}^2$, where the constants in the bounds depend only on the distances between the points in $P$. We also discuss bounded infinite subsets $P$ for which $px(n, P)$ grows faster than $ex(n, M)$ for all fixed 0-1 matrices $M$. In particular, we show that $px(n, P) = \Theta(n^{2})$ for any open subset $P \subseteq \mathbb{R}^2$. We prove an even stronger result, that if $Q_P$ is the set of points with rational coordinates in any open subset $P \subseteq \mathbb{R}^2$, then $px(n, Q_P) = \Theta(n^2)$. Finally, we obtain a strengthening of the K\H{o}vari-S\'{o}s-Tur\'{a}n theorem that applies to infinite subsets of $\mathbb{R}^2$. Specifically, for subsets $P_{s, t, c} \subseteq \mathbb{R}^2$ consisting of $t$ horizontal line segments of length $s$ with left endpoints on the same vertical line with consecutive segments a distance of $c$ apart, we prove that $px(n, P_{s, t,c}) = O(s^{\frac{1}{t}}n^{2-\frac{1}{t}})$, where the constant in the bound depends on $t$ and $c$. When $t = 2$, we show that this bound is sharp up to a constant factor that depends on $c$. | mathematics |
Planar silicon pixel sensors with modified n$^+$-implantation shapes based on the IBL pixel sensor were designed in Dortmund. The sensors, with a pixel size of $250\,\mu$m $\times$ $50\,\mu$m, are produced in n$^+$-in-n sensor technology. The charge collection efficiency should improve due to the electric field strength maxima created by the different n$^+$-implantation shapes. Therefore, higher particle detection efficiencies at lower bias voltages could be achieved. The modified pixel designs and the IBL standard design are placed on one sensor to test and compare the designs. The sensor can be read out with the FE-I4 readout chip. At iWoRiD 2018, measurements of sensors irradiated with protons and with neutrons at different facilities were presented and showed inconsistent results. Unintended annealing during irradiation was considered as an explanation for the observed differences in the hit detection efficiency of two neutron-irradiated sensors. This hypothesis is examined and confirmed in this work, which presents first annealing studies of sensors irradiated with neutrons in Ljubljana. | physics |
The Scotogenic model is an economical setup that induces Majorana neutrino masses at the 1-loop level and includes a dark matter candidate. We discuss a generalization of the original Scotogenic model with arbitrary numbers of generations of singlet fermion and inert doublet scalar fields. First, the full form of the light neutrino mass matrix is presented, with some comments on its derivation and with special attention to some particular cases. The behavior of the theory at high energies is explored by solving the Renormalization Group Equations. | high energy physics phenomenology |
We provide a complete pipeline for the detection of patterns of interest in an image. In our approach, the patterns are assumed to be adequately modeled by a known template, and are located at unknown positions and orientations that we aim at retrieving. We propose a continuous-domain additive image model, where the analyzed image is the sum of the patterns to localize and a background with self-similar isotropic power-spectrum. We are then able to compute the optimal filter fulfilling the SNR criterion based on one single template and background pair: it strongly responds to the template while being optimally decoupled from the background model. In addition, we constrain our filter to be steerable, which allows for fast template detection together with orientation estimation. In practice, the implementation requires discretizing a continuous-domain formulation on polar grids, which is performed using quadratic radial B-splines. We demonstrate the practical usefulness of our method on a variety of template approximation and pattern detection experiments. We show that the detection performance drastically improves when we exploit the statistics of the background via its power-spectrum decay, which we refer to as spectral shaping. The proposed scheme outperforms state-of-the-art steerable methods by up to 50% in absolute detection performance. | electrical engineering and systems science |
We report the experimental realization of heralded distribution of single-photon path entanglement at telecommunication wavelengths in a repeater-like architecture. The entanglement is established upon detection of a single photon, originating from one of two spontaneous parametric down conversion photon pair sources, after erasing the photon's which-path information. In order to certify the entanglement, we use an entanglement witness which does not rely on post-selection. We herald entanglement between two locations, separated by a total distance of 2 km of optical fiber, at a rate of 1.6 kHz. This work paves the way towards high-rate and practical quantum repeater architectures. | quantum physics |
Quantum entanglement is an essential ingredient for the absolute security of quantum communication. The generation of continuous-variable entanglement, or two-mode squeezing, between light fields based on the effect of electromagnetically induced transparency (EIT) has been systematically investigated in this work. Here, we propose a new scheme to enhance the degree of entanglement between the probe and coupling fields of coherent-state light by introducing a two-photon detuning in the EIT system. This proposed scheme is more efficient than the conventional one, which utilizes the dephasing rate of ground-state coherence, i.e., the decoherence rate, to produce entanglement or two-mode squeezing and thereby adds far more excess fluctuation or noise to the system. In addition, the maximum degree of entanglement at a given optical depth can be achieved over a wide range of the coupling Rabi frequency and the two-photon detuning, showing that our scheme is robust and flexible. It is also interesting to note that while EIT is an effect in the perturbative limit, i.e., the probe field being much weaker than the coupling field and treated as a perturbation, there exists an optimum ratio of probe to coupling intensities that achieves the maximum entanglement. Our proposed scheme can advance continuous-variable-based quantum technology and may lead to applications in quantum communication utilizing squeezed light. | quantum physics |
We study the phenomenology of simplified $Z^\prime$ models with a global $U(2)^3$ flavour symmetry in the quark sector, broken solely by the Standard Model Yukawa couplings. This flavour symmetry, known as less-minimal flavour violation, protects $\Delta F=2$ processes from dangerously large new physics (NP) effects, and at the same time provides a free complex phase in $b\to s$ transitions, allowing for an explanation of the hints for additional direct CP violation in kaon decays ($\epsilon^\prime/\epsilon$) and in hadronic $B$-decays ($B\to K\pi$ puzzle). Furthermore, once the couplings of the $Z^\prime$ boson to the leptons are included, it is possible to address the intriguing hints for NP (above the 5$\,\sigma$ level) in $b\to s \ell^+\ell^-$ transitions. Taking into account all flavour observables in a global fit, we find that $\epsilon^\prime/\epsilon$, the $B\to K\pi$ puzzle and $b\to s \ell^+\ell^-$ data can be explained simultaneously. Sizeable CP violation in $b\to s \ell^+\ell^-$ observables, in particular $A_8$, is predicted, which can be tested in the near future, and an explanation of the $B\to K\pi$ and $\epsilon^\prime/\epsilon$ puzzles leads to effects in di-jet tails at the LHC that are not far below the current limits. Once $b\to s \ell^+\ell^-$ is included, cancellations in di-muon tails, possibly by a second $Z^\prime$, are required by LHC data. | high energy physics phenomenology |
Convolutional neural networks have achieved remarkable performance in many computer vision tasks. However, CNNs tend to be biased toward low-frequency components: they prioritize capturing low-frequency patterns, which leads them to fail under shifts in the application scenario. Adversarial examples, meanwhile, imply that such models are very sensitive to high-frequency perturbations. In this paper, we introduce a new regularization method that constrains the frequency spectra of the model's filters. Unlike band-limited training, our method allows the valid frequency range to be entangled across different layers rather than continuous, and trains this range end-to-end by backpropagation. We demonstrate the effectiveness of our regularization by (1) defending against adversarial perturbations; (2) reducing the generalization gap across different architectures; and (3) improving generalization ability in transfer learning scenarios without fine-tuning. | computer science |
Quantum computation by non-Abelian Majorana zero modes (MZMs) offers an approach to achieve fault tolerance by encoding quantum information in the non-local charge parity states of semiconductor nanowire networks in the topological superconductor regime. Thus far, experimental studies of MZMs chiefly relied on single electron tunneling measurements, which lead to decoherence of the quantum information stored in the MZM. As a next step towards topological quantum computation, charge parity conserving experiments based on the Josephson effect are required, which can also help exclude suggested non-topological origins of the zero bias conductance anomaly. Here we report the direct measurement of the Josephson radiation frequency in InAs nanowires with epitaxial aluminium shells. For the first time, we observe the $4\pi$-periodic Josephson effect above a magnetic field of $\approx 200\,$mT, consistent with the estimated and measured topological phase transition of similar devices. | condensed matter |
Intrinsic charge trap capacitive non-volatile flash memories take a significant share of the semiconductor electronics market today. It is a challenge to create intrinsic traps in the dielectric layer without high temperature processing steps. While low temperature processed memory devices fabricated from polymers have been demonstrated as an alternative, their performance degrades rapidly after a few cycles of operation. Moreover, conventional memory devices need the support of tunneling and blocking layers, since the memory dielectric or polymer is incapable of preventing memory leakage. The main issue in designing a memory device is to optimize the leakage current and the intrinsic trap density simultaneously. Here we report a tunable flash memory device without tunneling and blocking layers by combining the discovery of high intrinsic charge trap densities ($>$10$^{12}$ cm$^{-2}$) with low leakage currents ($<$10$^{-7}$ A cm$^{-2}$) in solution-derived, inorganic, spin-coated dielectric films which were heated at 200$^\circ$C or below. In addition, the memory storage is tuned systematically up to 96% by controlling the trap density through increasing the heating temperature. | physics |
A well known theorem due to Koksma states that for Lebesgue almost every $x>1$ the sequence $(x^n)_{n=1}^{\infty}$ is uniformly distributed modulo one. In this paper we give sufficient conditions for an analogue of this theorem to hold for self-similar measures. Our approach applies more generally to sequences of the form $(f_{n}(x))_{n=1}^{\infty}$ where $(f_n)_{n=1}^{\infty}$ is a sequence of sufficiently smooth real valued functions satisfying a nonlinearity assumption. As a corollary of our main result, we show that if $C$ is equal to the middle third Cantor set and $t\geq 1$, then with respect to the Cantor-Lebesgue measure on $C+t$ the sequence $(x^n)_{n=1}^{\infty}$ is uniformly distributed for almost every $x$. | mathematics |
An $H^\pm W^\mp Z$ interaction at the tree level is a common feature of new physics models that contain scalar triplets. In this study, we aim to probe the strength of the aforementioned interaction in a model-agnostic fashion at the futuristic 27 TeV proton-proton collider. We assume that the $H^\pm$ couples dominantly to ($W^\pm,Z$) and ($t,b$). We specifically study the processes that involve the $H^\pm W^\mp Z$ vertex at the production level, that is, $p p \to H^\pm j j$ and $p p \to Z H^\pm$. Moreover, we look into both $H^\pm \to W^\pm Z,~t b$ decays for either production process. Our investigations reveal that the $H^\pm j j$ production process has a greater reach compared to $Z H^\pm$. Moreover, the discovery potential of a charged Higgs improves markedly with respect to earlier studies corresponding to lower centre-of-mass energies. Finally, we recast our results in the context of the popular Georgi-Machacek model. | high energy physics phenomenology |
We present a novel algorithm for non-linear instrumental variable (IV) regression, DualIV, which simplifies traditional two-stage methods via a dual formulation. Inspired by problems in stochastic programming, we show that two-stage procedures for non-linear IV regression can be reformulated as a convex-concave saddle-point problem. Our formulation enables us to circumvent the first-stage regression, which is a potential bottleneck in real-world applications. We develop a simple kernel-based algorithm with an analytic solution based on this formulation. Empirical results show that our method is competitive with existing, more complicated algorithms for non-linear instrumental variable regression. | statistics |
Electric fields can transform materials with respect to their structure and properties, enabling various applications ranging from batteries to spintronics. Recently, electrolytic gating, which can generate large electric fields and voltage-driven ion transfer, has been identified as a powerful means to achieve electric-field-controlled phase transformations. The class of transition metal oxides (TMOs) provides many potential candidates that present a strong response under electrolytic gating. However, very few show a reversible structural transformation at room temperature. Here, we report the realization of a digitally synthesized TMO that shows a reversible, electric-field-controlled transformation between distinct crystalline phases at room temperature. In superlattices comprised of alternating one-unit-cell of SrIrO3 and La0.2Sr0.8MnO3, we find a reversible phase transformation with a 7% lattice change and dramatic modulation in chemical, electronic, magnetic and optical properties, mediated by the reversible transfer of oxygen and hydrogen ions. Strikingly, this phase transformation is absent in the constituent oxides, solid solutions and larger period superlattices. Our findings open up a new class of materials for voltage-controlled functionality. | condensed matter |
The fact that no evidence of new physics was found so far by LHC experiments has led some to call for the abandonment of the naturalness criterion. Others, on the contrary, have felt the need to break a lance in its defense by claiming that it should not be dismissed too quickly, but rather only reshaped to fit new needs. In this paper we argue that present pro or contra naturalness debates often miss an important historical point: that naturalness is essentially a hazily defined notion which, in the course of more than four decades, has been steadily, and often not coherently, shaped by its interplay with different branches of model-building in high-energy physics and cosmology on the one side, and new experimental results on the other side. The paper endeavours to clear up some of the physical and philosophical haze by taking a closer look back at the origin of naturalness in the 1970s and 1980s, with particular attention to the early work of Kenneth Wilson. | physics |
We study a competitive stochastic growth model called chase-escape in which red particles spread to adjacent uncolored sites and blue only to adjacent red sites. Red particles are killed when blue occupies the same site. If blue has rate-1 passage times and red rate-$\lambda$, a phase transition occurs for the probability that red escapes to infinity on $\mathbb Z^d$, $d$-ary trees, and the ladder graph $\mathbb Z \times \{0,1\}$. The result on the tree was known, but we provide a new, simpler calculation of the critical value, and observe that it is a lower bound for a variety of graphs. We conclude by showing that red can be stochastically slower than blue, but still escape with positive probability for large enough $d$ on oriented $\mathbb Z^d$ with passage times that resemble Bernoulli bond percolation. | mathematics |
Various Bell inequalities are trivial algebraic properties satisfied by each line of particular data spreadsheets. It is surprising that their violation in some experiments allows one to speculate about the existence of nonlocal influences in Nature and to doubt the existence of the objective external physical reality. Such speculations are rooted in incorrect interpretations of quantum mechanics and in the failure of local realistic hidden variable models to reproduce quantum predictions for spin polarisation correlation experiments. These hidden variable models use counterfactual joint probability distributions of only pairwise measurable random variables to prove the inequalities. In real experiments Alice and Bob, using 4 incompatible pairs of experimental settings, estimate imperfect correlations between clicks registered by their detectors. Clicks announce the detection of photons and are coded by 1 or -1. Expectations of the corresponding, only pairwise measurable, random variables are estimated and compared with quantum predictions. These estimates significantly violate the inequalities. Since all these random variables cannot be jointly measured, a joint probability distribution of them does not exist and the various Bell inequalities cannot be derived. Thus it is not surprising that they are violated. Moreover, if contextual setting-dependent parameters describing measuring instruments are correctly included in the description, then imperfect correlations between the clicks may be explained in a locally causal way. In this paper we review and rephrase several arguments proving that the violation of various Bell inequalities may neither justify quantum nonlocality nor allow for doubt regarding the existence of atoms, electrons and other invisible elementary particles which are the building blocks of the visible world around us, including ourselves. | quantum physics |
Lepton Flavor Violating (LFV) processes are clear signals of physics beyond the Standard Model. We investigate the possibility of measuring processes of this kind at present and foreseeable future muon-electron colliders, taking into account present-day bounds from existing experiments. As a model of new physics we consider a Z' boson with a $U'(1)$ gauge symmetry and generic couplings. Processes that violate lepton flavor by two units seem to be particularly promising. | high energy physics phenomenology |
Weyl semimetals (WSM) are a new class of topological materials that exhibit a bulk Hall effect due to time-reversal symmetry breaking, as well as a chiral magnetic effect due to inversion symmetry breaking. These unusual electromagnetic responses can be characterized by an axion term $\theta \textbf{E} \cdot \textbf{B}$ with space and time dependent axion angle $\theta (\textbf{r} ,t)$. In this paper we compute the electromagnetic fields produced by an electric charge near a topological Weyl semimetal with two Weyl nodes in the bulk Brillouin zone. We find that, as in ordinary metals and dielectrics, outside the WSM the electric field is mainly determined by the optical properties of the material. The magnetic field is, on the contrary, of topological origin, arising from the magnetoelectric effect of topological phases. We show that the magnetic field exhibits a particularly interesting behavior above the WSM: the field lines begin at the surface and then end at the surface (but not at the same point). This behavior is quite different from that produced by an electric charge near the surface of a topological insulator, where the magnetic field above the surface is generated by an image magnetic monopole beneath the surface, in which case the magnetic field lines are straight rays. The unconventional behavior of the magnetic field is an experimentally observable signature of the anomalous Hall effect in the bulk of the WSM. We discuss a simple candidate material for testing our predictions, as well as two experimental setups which must be sensitive to the effects of the induced magnetic field. | high energy physics theory |
Users of cloud computing are increasingly overwhelmed with the wide range of providers and services offered by each provider. As such, many users select cloud services based on description alone. An emerging alternative is to use a decision support system (DSS), which typically relies on gaining insights from observational data in order to assist a customer in making decisions regarding optimal deployment or redeployment of cloud applications. The primary activity of such systems is the generation of a prediction model (e.g. using machine learning), which requires a significantly large amount of training data. However, considering the varying architectures of applications, cloud providers, and cloud offerings, this activity is not sustainable, as it incurs additional time and cost to collect training data and subsequently train the models. We overcome this through developing a Transfer Learning (TL) approach where the knowledge (in the form of the prediction model and associated data set) gained from running an application on a particular cloud infrastructure is transferred in order to substantially reduce the overhead of building new models for the performance of new applications and/or cloud infrastructures. In this paper, we present our approach and evaluate it through extensive experimentation involving three real world applications over two major public cloud providers, namely Amazon and Google. Our evaluation shows that our novel two-mode TL scheme increases overall efficiency, yielding a 60\% reduction in the time and cost of generating a new prediction model. We test this under a number of cross-application and cross-cloud scenarios. | computer science |
The paper discusses design techniques for a seamless architecture of information systems (IS). A seamless architecture is understood as an architectural description of an IS that defines explicit connections between elements of architectural models of various architectural representations. The design techniques are based on the adaptive clustering method developed by the author, which allows one to bridge technological gaps between architectural abstractions of different levels and to link architectural models in such a way as to ensure the design of a more detailed model based on a model of a higher level of abstraction. | computer science |
Here we perform a Kaluza-Klein dimensional reduction of Vasiliev's first-order description of massless spin-s particles from $D=3+1$ to $D=2+1$ and derive first-order self-dual models describing particles with helicities $\pm s$ for the cases $s=1,2,3$. In the first two cases we recover known (parity singlets) self-dual models. In the spin-3 case we derive a new first order self-dual model with a local Weyl symmetry which lifts the traceless restriction on the rank-3 tensor. A gauge fixed version of this model corresponds to a known spin-3 self-dual model. We conjecture that our procedure can be generalized to arbitrary integer spins. | high energy physics theory |
We propose a generalization of the Feynman path integral using squeezed coherent states. We apply this approach to the dynamics of Bose-Einstein condensates, which gives an effective low energy description that contains both a coherent field and a squeezing field. We derive the classical trajectory of this action, which constitutes a generalization of the Gross-Pitaevskii equation, at linear order. We derive the low energy excitations, which provide a description of second sound in weakly interacting condensates as a squeezing oscillation of the order parameter. This interpretation is also supported by a comparison to a numerical c-field method. | condensed matter |
Deep Learning has been very successful in many application domains. However, its usefulness in the context of network intrusion detection has not been systematically investigated. In this paper, we report a case study on using deep learning for both supervised network intrusion detection and unsupervised network anomaly detection. We show that Deep Neural Networks (DNNs) can outperform other machine learning based intrusion detection systems, while being robust in the presence of dynamic IP addresses. We also show that Autoencoders can be effective for network anomaly detection. | computer science |
Autonomous cars are subjected to several different kinds of inputs (other cars, road structure, etc.) and, therefore, testing the car under all possible conditions is impossible. To tackle this problem, scenario-based testing for automated driving defines categories of different scenarios that should be covered. Although this kind of coverage is a necessary condition, it still does not guarantee that every possible behaviour of the autonomous car is tested. In this paper, we consider the path planner of an autonomous car that decides, at each timestep, the short-term path to follow in the next few seconds; this decision is made using a weighted cost function that considers different aspects (safety, comfort, etc.). In order to assess whether all the possible decisions that can be taken by the path planner are covered by a given test suite T, we propose a mutation-based approach that mutates the weights of the cost function and then checks whether at least one scenario of T kills the mutant. Preliminary experiments on a manually designed test suite show that some weights are easier to cover, as they consider aspects that are more likely to occur in a scenario, and that more complicated scenarios (those that generate more complex paths) are the ones that allow more weights to be covered. | computer science |
In the presence of conservation laws, superpositions of eigenstates of the corresponding conserved quantities cannot be generated by quantum dynamics. Thus, any such coherence represents a potentially valuable resource of asymmetry, which can be used, for example, to enhance the precision of quantum metrology or to enable state transitions in quantum thermodynamics. Here we ask if such superpositions, already present in a reference system, can be broadcast to other systems, thereby distributing asymmetry indefinitely at the expense of creating correlations. We prove a no-go theorem showing that this is forbidden by quantum mechanics in every finite-dimensional system. In doing so we also answer some open questions in the quantum information literature concerning the sharing of timing information of a clock and the possibility of catalysis in quantum thermodynamics. We also prove that even weaker forms of broadcasting, of which Aberg's `catalytic coherence' is a particular example, can only occur in the presence of infinite-dimensional reference systems. Our results set fundamental limits to the creation and manipulation of quantum coherence and shed light on the possibilities and limitations of quantum reference frames to act catalytically without being degraded. | quantum physics |
For problems that feature significantly different time scales, where the stiff time-step restriction comes from a linear component, implicit-explicit (IMEX) methods alleviate this restriction if the concern is linear stability. However, where the SSP property is needed, IMEX SSP Runge-Kutta (SSP-IMEX) methods have very restrictive time-steps. An alternative to SSP-IMEX schemes is to adopt an integrating factor approach that handles the linear component exactly and steps the transformed problem forward using some time-evolution method. The strong stability properties of integrating factor Runge-Kutta methods were previously established, where it was shown that it is possible to define explicit integrating factor Runge-Kutta methods that preserve the strong stability properties satisfied by each of the two components when coupled with forward Euler time-stepping. It was proved that the solution will be SSP if the transformed problem is stepped forward with an explicit SSP Runge-Kutta method that has non-decreasing abscissas. However, explicit SSP Runge-Kutta methods have an order barrier of p=4, and sometimes higher order is desired. In this work we consider explicit SSP two-step Runge-Kutta integrating factor methods to raise the order. We show that strong stability is ensured if the two-step Runge-Kutta method used to evolve the transformed problem is SSP and has non-decreasing abscissas. We find such methods up to eighth order and present their SSP coefficients. Adding a step allows us to break the fourth-order barrier on explicit SSP Runge-Kutta methods; furthermore, our explicit SSP two-step Runge-Kutta methods with non-decreasing abscissas typically have larger SSP coefficients than the corresponding one-step methods. | mathematics |
We propose a conceptual design for a quantum blockchain. Our method involves encoding the blockchain into a temporal GHZ (Greenberger-Horne-Zeilinger) state of photons that do not simultaneously coexist. It is shown that entanglement in time, as opposed to entanglement in space, provides the crucial quantum advantage. All the subcomponents of this system have already been experimentally realized. Furthermore, our encoding procedure can be interpreted as nonclassically influencing the past. | quantum physics |
Diabetes prevalence is on the rise in the UK, and for public health strategy, estimation of relative disease risk and subsequent mapping is important. We consider an application to London data on diabetes prevalence and mortality. In order to improve the estimation of relative risks, we analyse prevalence and mortality data jointly to ensure borrowing of strength over the two outcomes. The available data involve two spatial frameworks: areas (middle level super output areas, MSOAs) and general practices (GPs) recruiting patients from several areas. This raises a spatial misalignment issue that we deal with by employing the multiple membership principle. Specifically, we translate area spatial effects to explain GP practice prevalence according to the proportions of GP populations resident in different areas. A sparse implementation in Stan of both the MCAR and GMCAR allows the comparison of these bivariate priors, as well as exploration of their different implications for the mapping patterns of both outcomes. The necessary causal precedence of diabetes prevalence over mortality allows a specific conditionality assumption in the GMCAR, which is not always present in the context of disease mapping. | statistics |
We introduce and study the notion of slightly trivial extensions of a fusion category which can be viewed as the first level of complexity of extensions. We also provide two examples of slightly trivial extensions which arise from rank $3$ fusion categories. | mathematics |
We show that the surface tension of a fluid near the critical point may be correctly described by taking into consideration the microscopic structure of the system using a $\phi^4$ field theory. We revise the theory of the surface tension near criticality to take into account the microscopic structure of the fluid. Focusing on the case of the Lennard-Jones fluid, we express the surface tension in terms of the compressibility of the reference hard-core system and its derivatives with respect to density. We demonstrate that the obtained analytical microscopic expression for the surface tension near the critical point is in good agreement with numerical experiments, which emphasizes the impact of microscopic structure on the critical behavior of the surface tension in fluids. Our analysis provides a basis for studying the surface tension in small-volume systems important in many technological applications. | condensed matter |
We report the high-field superconducting properties of thin, disordered Re films via magneto-transport and tunneling density of states measurements. Films with thicknesses in the range of 9 nm to 3 nm had normal state sheet resistances of $\sim$0.2 k$\Omega$ to $\sim$1 k$\Omega$ and corresponding transition temperatures in the range of 6 K to 3 K. Tunneling spectra were consistent with those of a moderate coupling BCS superconductor. Notwithstanding these unremarkable superconducting properties, the films exhibited an extraordinarily high upper critical field. We estimate their zero-temperature $H_{c2}$ to be more than twice the Pauli limit. Indeed, in 6 nm samples the estimated reduced critical field $H_{c2}/T_c\sim$ 5.6 T/K is among the highest reported for any elemental superconductor. Although the sheet resistances of the films were well below the quantum resistance $R_Q=h/4e^2$, their $H_{c2}$'s approached the theoretical upper limit of a strongly disordered superconductor for which $k_F\ell\sim1$. | condensed matter |
We discuss the type of pairing in the hexagonal pnictide superconductor SrPtAs, taking into account its multiband structure. The topological chiral $d$-wave state with time-reversal-symmetry breaking has been anticipated from the spontaneous magnetization observed by the muon-spin-relaxation experiment. We point out in this paper that the recent experimental reports on the nuclear-spin-lattice relaxation rate $T_1^{-1}$ and superfluid density $n_s(T)$, which seemingly support the conventional $s$-wave pairing, are also consistent with the chiral $d$-wave state. The compatibility of the gap and multiband structures is crucial in this argument. | condensed matter |
Federated learning enables mutually distrusting participants to collaboratively learn a distributed machine learning model without revealing anything but the model's output. Generic federated learning has been studied extensively, and several learning protocols, as well as open-source frameworks, have been developed. Yet, their pursuit of computing efficiency and fast implementation might diminish the security and privacy guarantees of participants' training data, about which little is known thus far. In this paper, we consider an honest-but-curious adversary who participates in training a distributed ML model, does not deviate from the defined learning protocol, but attempts to infer private training data from the legitimately received information. In this setting, we design and implement two practical attacks, the reverse sum attack and the reverse multiplication attack, neither of which affects the accuracy of the learned model. By empirically studying the privacy leakage of two learning protocols, we show that our attacks are (1) effective - the adversary successfully steals the private training data, even when the intermediate outputs are encrypted to protect data privacy; (2) evasive - the adversary's malicious behavior does not deviate from the protocol specification nor degrade the accuracy of the target model; and (3) easy - the adversary needs little prior knowledge about the data distribution of the target participant. We also experimentally show that the leaked information is as effective as the raw training data by training an alternative classifier on the leaked information. We further discuss potential countermeasures and their challenges, which we hope may lead to several promising research directions. | computer science
The [CII] deficit, which describes the observed decrease in the ratio of [CII] 158 micron emission to continuum infrared emission in galaxies with high star formation surface densities, poses a significant challenge to the interpretation of [CII] detections from across the observable universe. In an attempt to further decode the cause of the [CII] deficit, the [CII] and dust continuum emission from 18 Local Volume galaxies has been split based on conditions within the interstellar medium where it originated. This is accomplished using the Key Insights in Nearby Galaxies: a Far-Infrared Survey with Herschel (KINGFISH) and Beyond the Peak (BtP) surveys and the wide range of wavelength information, from UV to far-infrared emission lines, available for a selection of star-forming regions within these samples. By comparing these subdivided [CII] emissions to isolated infrared emission and other properties, we find that the thermalization (collisional de-excitation) of the [CII] line in HII regions plays a significant role in the deficit observed in our sample. | astrophysics
We explore a class of $\phi^{4n}$ models with kink and antikink solutions that have long-range tails on both sides, specializing to the cases with $n=2$ and $n=3$. A recently developed method of an accelerating kink ansatz is used to estimate the force between the kink and the antikink. We use state-of-the-art numerical methods to initialize the system in a kink-antikink configuration with a finite initial velocity and to evolve the system according to the equations of motion. Among these methods, we propose a computationally efficient way to initialize the velocity field of the system. Interestingly, we discover that, for this class of models, $\phi^{4n}$ with $n>1$, the kink-antikink annihilation behaves differently from the archetypal $\phi^4$ model or even the kinks with one long-range tail because there is neither long-lived bion formation nor resonance windows and the critical velocity is ultrarelativistic. | high energy physics theory |
High-index dielectric materials are in great demand for nanophotonic devices and applications, from ultrathin optical elements to metal-free sub-diffraction light confinement and waveguiding. Here we show that chalcogenide topological insulators are particularly apt candidates for dielectric nanophotonic architectures in the infrared spectral range by reporting metamaterial resonances in chalcogenide crystals sustained well inside the mid-infrared, choosing Bi$_2$Te$_3$ as a case study within this family of materials. Strong resonant modulation of the incident electromagnetic field is achieved thanks to the exceptionally high refractive index ranging between 7 and 8 throughout the 2-10 $\mu$m region. Analysis of the complex mode structure in the metamaterial points to the excitation of poloidal surface currents, which could open pathways for enhanced light-matter interaction and low-loss plasmonic configurations by coupling to the spin-polarized topological surface carriers, thereby providing new opportunities to combine dielectric, plasmonic and magnetic metamaterials in a single platform. | physics
Accurately estimating the locations and velocities of surrounding targets (cars) is crucial for advanced driver assistance systems based on radar sensors. In this paper we derive methods for fusing data from multiple radar sensors in order to improve the accuracy and robustness of such estimates. First, we pose the target estimation problem as a multivariate multidimensional spectral estimation problem. The problem is multivariate since each radar sensor gives rise to a measurement channel. Then we investigate how the use of the cross-spectra affects target estimates. We see that the use of the magnitude of the cross-spectrum significantly improves the accuracy of the target estimates, whereas an attempt to compensate for the phase lag of the cross-spectrum gives only marginal improvement. This paper may be viewed as a first step towards applying high-resolution methods that build on multidimensional multivariate spectral estimation for sensor fusion. | electrical engineering and systems science
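The key quantity above is the magnitude of the cross-spectrum between sensor channels. A minimal sketch, assuming a toy signal model in which two channels share a common target component plus independent noise (scipy's `csd` estimates the cross-spectral density):

```python
import numpy as np
from scipy.signal import csd

# Illustrative only: the magnitude of the cross-spectrum between two channels
# reinforces the common target component while independent noise averages out.
rng = np.random.default_rng(1)
fs, n = 1000.0, 4096
t = np.arange(n) / fs

beat = np.cos(2 * np.pi * 85.0 * t)          # common target "beat" frequency
x = beat + 0.8 * rng.standard_normal(n)      # channel from sensor 1
y = beat + 0.8 * rng.standard_normal(n)      # channel from sensor 2, independent noise

f, Pxy = csd(x, y, fs=fs, nperseg=512)
print(f[np.argmax(np.abs(Pxy))])             # peak near 85 Hz
```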
We investigate an alternative approach to the correspondence of four-dimensional $\mathcal{N}=2$ superconformal theories and two-dimensional vertex operator algebras, in the framework of the $\Omega$-deformation of supersymmetric gauge theories. The two-dimensional $\Omega$-deformation of the holomorphic-topological theory on the product four-manifold is constructed at the level of supersymmetry variations and the action. The supersymmetric localization is performed to achieve a two-dimensional chiral CFT. The desired vertex operator algebra is recovered as the algebra of local operators of the resulting CFT. We also discuss the identification of the Schur index of the $\mathcal{N}=2$ superconformal theory and the vacuum character of the vertex operator algebra at the level of their path integral representations, using our $\Omega$-deformation point of view on the correspondence. | high energy physics theory |
Majorana zero modes in the superconductor-semiconductor nanowire have been extensively studied during the past decade. Disorder remains a serious problem preventing the definitive observation of the topological Majorana bound states. Thus, it is worthwhile to revisit the simple model -- Kitaev chain, and study the effects of weak and strong disorder on the Kitaev chain. By comparing the role of disorder in the Kitaev chain with that in the nanowire, we find that disorder affects both systems but in a nonuniversal manner. In general, disorder has a much stronger effect on the nanowire than the Kitaev chain, particularly for weak to intermediate disorder. For strong disorder, both Kitaev chain and nanowire manifest random featureless behavior due to the universal Anderson localization. Only the vanishing and strong disorder regimes are thus universal manifesting respectively topological superconductivity and Anderson localization, but the experimentally relevant intermediate disorder regime is non-universal with the details dependent on the disorder realization in the system. | condensed matter |
We consider a short-range deformation potential scattering model of electron-acoustic phonon interaction to calculate the resistivity of an ideal metal as a function of temperature (T) and electron density (n). We consider both 3D metals and 2D metals, and focus on the dilute limit, i.e., low effective metallic carrier density of the system. The main findings are: (1) a phonon scattering induced linear-in-T resistivity could persist to arbitrarily low T in the dilute limit independent of the Debye temperature ($T_D$) although eventually the low-T resistivity turns over to the expected Bloch-Gruneisen (BG) behavior with $T^5$ ($T^4$) dependence, in 3D (2D) respectively; (2) because of low values of n, the phonon-induced resistivity could be very high in the system; (3) the resistivity shows an intrinsic saturation effect at very high temperatures (for $T>T_D$), and in fact, decreases with increasing T above a high crossover temperature with this crossover being dependent on both $T_D$ and n in a non-universal manner. We also provide high-T linear-in-T resistivity results for 2D and 3D Dirac materials. Our work brings out the universal features of phonon-induced transport in dilute metals, and we comment on possible implications of our results for strange metals, emphasizing that the mere observation of a linear-in-T metallic resistivity at low temperatures or a very high metallic resistivity at high temperatures is not necessarily a reason to invoke an underlying quantum critical strange metal behavior. We discuss the temperature variation of the effective transport scattering rate showing that the scattering rate could be below or above $k_BT$, and in particular, purely coincidentally, the calculated scattering rate happens to be $k_BT$ in normal metals with no implications for the so-called Planckian behavior. | condensed matter |
In \cite{Ka}, the authors obtained a method for deriving special matrices whose powers are related to Fibonacci and Lucas numbers. In this study, we develop a method for deriving special $3\times 3$ matrices whose powers are related to Horadam and generalized Fibonacci numbers, and we find some special matrices via the developed method. | mathematics
This paper deals with joint source and relay beamforming (BF) design for an amplify-and-forward (AF) multi-antenna multirelay network. Considering that the channel state information (CSI) from the relays to the destination is imperfect, we aim to maximize the worst-case received signal-to-noise ratio (SNR). The associated optimization problem is then solved in two steps. In the first step, by fixing the source BF vector, a semi-closed-form solution of the relay BF matrices is obtained, up to a power allocation factor. In the second step, the globally optimal source BF vector is obtained based on the Polyblock outer Approximation (PA) algorithm. We also propose two low-complexity methods for obtaining the source BF vector, which differ in their complexity and performance. The optimal joint source-relay BF solution obtained by the proposed algorithms serves as the benchmark for evaluating the existing schemes and the proposed low-complexity methods. Simulation results show that the proposed robust design can significantly reduce the sensitivity of the system performance to the channel uncertainty. | electrical engineering and systems science
Reality of quantum observables, a feature of long-standing interest within the foundations of quantum mechanics, has recently been quantified and deeply studied by means of entropic measures [Phys. Rev. A 97, 022107 (2018)]. However, there is no state-independent "reality trade-off" between noncommuting observables, as in certain systems all observables are real [Europhys. Lett. 112, 40005 (2015)]. We show that the entropic uncertainty relation in the presence of quantum memory [Nature Phys. 6, 659 (2010)] perfectly supplements the discussed notion of reality, rendering trade-offs between reality and quantum uncertainty. State-independent complementarity inequalities involving entropic measures of both uncertainty and reality for two observables are presented. | quantum physics
Knowledge distillation (KD) has proved to be an effective approach for deep neural network compression, which learns a compact network (student) by transferring the knowledge from a pre-trained, over-parameterized network (teacher). In traditional KD, the transferred knowledge is usually obtained by feeding training samples to the teacher network and recording the resulting class probabilities. However, the original training dataset is not always available due to storage costs or privacy issues. In this study, we propose a novel data-free KD approach by modeling the intermediate feature space of the teacher with a multivariate normal distribution and leveraging the soft targeted labels generated by the distribution to synthesize pseudo samples as the transfer set. Several student networks trained with these synthesized transfer sets show competitive performance compared to the networks trained with the original training set and other data-free KD approaches. | computer science
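A hedged sketch of the data-free KD recipe described above: fit a multivariate normal to the teacher's intermediate features, sample pseudo-features from it, and record the teacher's soft outputs on them as the transfer set. The linear-softmax "teacher head" and all shapes are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

# Model the teacher's intermediate feature space with a multivariate normal,
# then synthesize a pseudo transfer set with the teacher's soft labels.
rng = np.random.default_rng(2)

feats = rng.normal(size=(500, 16))          # stand-in for stored teacher features
mu = feats.mean(axis=0)
cov = np.cov(feats, rowvar=False)

pseudo = rng.multivariate_normal(mu, cov, size=1000)   # synthesized transfer set

W_head = rng.normal(size=(16, 10))          # toy teacher classification head
logits = pseudo @ W_head
logits -= logits.max(axis=1, keepdims=True) # numerically stable softmax
soft_labels = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# (pseudo, soft_labels) would then be distilled into the student network.
print(pseudo.shape, soft_labels.shape)
```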
Focusing control of ultrashort pulsed beams is an important research topic due to its impact on the subsequent interaction with matter. In this work, we study the propagation near the focus of ultrashort laser pulses of ~25 fs duration under diffractive focusing. We perform spatio-spectral and spatio-temporal measurements of their amplitude and phase, complemented by the corresponding simulations. With them, we demonstrate that pulse shaping allows modifying in a controlled way not only the spatio-temporal distribution of the light irradiance in the focal region, but also the way it propagates as well as the frequency distribution within the pulse (temporal chirp). To gain further intuitive insight, the role of diverse added spectral phase components is analyzed, showing the symmetries that arise in each case. In particular, we compare the effects, similarities and differences of the second- and third-order dispersion cases. | physics
Nano-graphene/polymer composites can function as pressure-induced electro-switches at concentrations around their conductivity percolation threshold. Close to the critical point, the pressure dependence of electron tunneling through the polymer barrier separating nano-graphenes results from the competition between the shortening of the tunneling length and the increase of the polymer's polarizability. Such switching behavior was recently observed in polyvinyl alcohol (PVA) loaded with nano-graphene platelets (NGPs). In this work, PVA is blended with alpha-poly(vinylidene fluoride) (PVdF) and NGPs. Coaxial mechanical stress and electric field render the nano-composite piezoelectric. We investigate the influence of heterogeneity, thermal properties, phase transitions and kinetic processes occurring in the polymer matrix on the macroscopic electrical conductivity and interfacial polarization in casted specimens. Furthermore, the effect of the electro-activity of PVdF grains on the electric and thermal properties is comparatively studied. Broadband Dielectric Spectroscopy (BDS) is employed to resolve and inspect electron transport and trapping with respect to thermal transitions and kinetic processes traced via Differential Scanning Calorimetry. The harmonic electric field applied during a BDS sweep induces volume modifications of the electro-active PVdF grains, while the electro-activity of PVdF grains can disturb the internal electric field experienced by free (or bound) electric charges. The dc conductivity and dielectric relaxation were found to exhibit weak dependencies. | condensed matter
While people with visual impairments are interested in artwork as much as their sighted peers, their experience is limited to a few select artworks exhibited at certain museums. To enable people with visual impairments to access and appreciate as many artworks as possible at ease, we propose an online art gallery that allows users to explore different parts of a painting displayed on their touchscreen-based devices while listening to corresponding verbal descriptions of the touched part of the screen. To investigate the scalability of our approach, as a preliminary study we first explored whether an anonymous crowd, who may not have expertise in art, is capable of providing visual descriptions of artwork. Then we conducted a user study with 9 participants with visual impairments to explore the potential of our system for independent artwork appreciation by assessing if and how well the system supports the four steps of the Feldman Model of Criticism. The findings suggest that visual descriptions of artworks produced by an anonymous crowd are sufficient for people with visual impairments to interpret and appreciate paintings with their own judgments, which differs from existing approaches that focused on delivering descriptions and opinions written by art experts. Based on the lessons learned from the study, we plan to collect visual descriptions of a greater number of artworks and distribute our online art gallery publicly to make more paintings accessible to people with visual impairments. | computer science
The use of coordinate processes for the modelling of impulse control for {\em general}\/ Markov processes typically involves the construction of a probability measure on a countable product of copies of the path space. In addition, admissibility of an impulse control policy requires that the random times of the interventions be stopping times with respect to different filtrations arising from the different component coordinate processes. When the underlying strong Markov process has {\em continuous}\/ paths, however, a simpler model can be developed which takes the single path space as its probability space and uses the natural filtration with respect to which the intervention times must be stopping times. Moreover, this model construction allows for impulse control with random effects whereby the decision maker selects an impulse but the intervention may result in a different impulse occurring. This paper gives the construction of the probability measure on the path space for an admissible intervention policy subject to a randomized impulse mechanism. It also identifies a class of impulse policies under which the resulting controlled process is Markov and using time-shifts of the policies, a Markov family of time-space dependent measures exists. In addition, a class is defined for which the paths between interventions are independent and a further subclass for which the cycles following the initial cycle are identically distributed. The decision to use an $(s,S)$ ordering policy in inventory management provides an example of an impulse policy for which the process is Markov and has i.i.d.~cycles. A benefit of the constructed model is that one is allowed to use classical renewal arguments to analyze long-term average control problems. | mathematics |
A $k$-proper edge-coloring of a graph $G$ is called adjacent vertex-distinguishing if any two adjacent vertices are distinguished by the set of colors appearing on the edges incident to each vertex. The smallest value $k$ for which $G$ admits such a coloring is denoted by $\chi'_a(G)$. We prove that $\chi'_a(G) = 2R + 1$ for most circulant graphs $C_n([1, R])$. | computer science
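Since $\chi'_a(G)$ has a direct combinatorial definition, it can be verified by brute force on tiny instances. The sketch below (not the paper's proof technique, and feasible only for very small $n$ and $k$) computes the AVD chromatic index of a small circulant graph by exhaustive search; for $C_6([1,1])$, a 6-cycle with $R=1$, it returns $2R+1=3$.

```python
from itertools import product

# Exhaustive check of the adjacent vertex-distinguishing (AVD) index on C_n([1, R]).
def circulant_edges(n, R):
    return sorted({tuple(sorted((v, (v + s) % n))) for v in range(n) for s in range(1, R + 1)})

def is_avd_proper(n, edges, colors):
    incident = {v: [] for v in range(n)}
    for (u, v), c in zip(edges, colors):
        incident[u].append(c)
        incident[v].append(c)
    if any(len(cs) != len(set(cs)) for cs in incident.values()):
        return False   # adjacent edges share a color: not a proper edge coloring
    # AVD condition: adjacent vertices must see different color sets.
    return all(set(incident[u]) != set(incident[v]) for u, v in edges)

def avd_index(n, R, k_max=6):
    edges = circulant_edges(n, R)
    for k in range(1, k_max + 1):
        if any(is_avd_proper(n, edges, cols) for cols in product(range(k), repeat=len(edges))):
            return k
    return None

print(avd_index(6, 1))   # C_6 = 6-cycle: expect 2*1 + 1 = 3
```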
We report the design of a diamond-based honeycomb phononic network, in which a mechanical resonator couples to three distinct phononic crystal waveguides. This two-dimensional (2D) phononic network extends an earlier study on one-dimensional (1D) phononic networks with closed mechanical subsystems. With a special design for the phononic band structures of the waveguides, any two neighboring resonators in the 2D network and the waveguide between them can form a closed mechanical subsystem, which enables nearest neighbor coupling and at the same time circumvents the scaling problems inherent in typical large mechanical systems. In addition, the 2D network can be attached to a square phononic crystal lattice and be protected by the large band gap of the phononic crystal shield. Honeycomb phononic networks of spin qubits with nearest neighbor coupling can serve as an experimental platform for quantum computing and especially topological quantum error corrections. | quantum physics |
While the levitating mirror has seen renewed interest lately, relatively little is known about its quantum behaviour. In this paper we present a quantum theory of a one-dimensional levitating mirror. The mirror forms a part of a Fabry-Perot cavity in which the circulating intracavity field supports the mirror through radiation pressure alone. We find blue- and red-detuned steady states, of which only the blue-detuned solution, with damping on the mirror and cavity, is stable. We find strong entanglement (15-20 ebits) between the mirror output and cavity output, and squeezing in the mirror position. | quantum physics
In this paper we construct unstable shocks in the context of 2D isentropic compressible Euler in azimuthal symmetry. More specifically, we construct initial data that, when viewed in self-similar coordinates, converges asymptotically to the unstable $C^{\frac15}$ self-similar solution of the Burgers' equation. Moreover, we show the behavior is stable in $C^8$ modulo a two-dimensional linear subspace. Under the azimuthal symmetry assumption, one cannot impose additional symmetry assumptions in order to isolate the corresponding unstable manifold: rather, we rely on modulation variable techniques in conjunction with a Newton scheme. | mathematics
In [arXiv:1805.05057 [hep-th]],[arXiv:1812.00811 [hep-th]], the partition function of the Gross-Witten-Wadia unitary matrix model with the logarithmic term has been identified with the $\tau$ function of a certain Painlev\'{e} system, and the double scaling limit of the associated discrete Painlev\'{e} equation to the critical point provides us with the Painlev\'{e} II equation. This limit captures the critical behavior of the $su(2)$, $N_f =2$ $\mathcal{N}=2$ supersymmetric gauge theory around its Argyres-Douglas $4D$ superconformal point. Here, we consider further extension of the model that contains the $k$-th multicritical point and that is to be identified with $\hat{A}_{2k, 2k}$ theory. In the $k=2$ case, we derive a system of two ODEs for the scaling functions to the free energy, the time variable being the scaled total mass and make a consistency check on the spectral curve on this matrix model. | high energy physics theory |
The Hessian geometry is the real analogue of the K\"ahler one. Sasakian geometry is an odd-dimensional counterpart of the K\"ahler geometry. In this paper, we study the connection between projective Hessian and Sasakian manifolds analogous to the one between Hessian and K\"ahler manifolds. In particular, we construct a Sasakian structure on $TM\times \mathbb{R}$ from a projective Hessian structure on $M$. We are especially interested in the case of invariant structures on Lie groups. We define semi-Sasakian Lie groups as a generalization of Sasakian Lie groups. Then we construct a semi-Sasakian structure on a group $G\ltimes \mathbb{R}^{n+1}$ for a projective Hessian Lie group $G$. Further, we describe examples of homogeneous Hessian Lie groups and the corresponding semi-Sasakian Lie groups. A big class of projective Hessian Lie groups can be constructed from homogeneous regular domains in $\mathbb{R}^n$. The groups $\text{SO}(2)$ and $\text{SU}(2)$ belong to another kind of examples. Using them, we construct semi-Sasakian structures on the group of Euclidean motions of the real plane and the group of isometries of the complex plane. | mathematics
We consider the problem of identifying significant predictors in large data bases, where the response variable depends on the linear combination of explanatory variables through an unknown link function, corrupted with noise from an unknown distribution. We utilize a natural, robust and efficient approach, which relies on replacing the values of the response variable by their ranks and then identifying significant predictors by using the well-known Lasso. We provide new consistency results for the proposed procedure (called "RankLasso") and extend the scope of its applications by proposing its thresholded and adaptive versions. Our theoretical results show that these modifications can identify the set of relevant predictors under a much wider range of data generating scenarios than regular RankLasso. The theoretical results are supported by a simulation study and a real data analysis, which show that our methods can properly identify relevant predictors even when the error terms come from the Cauchy distribution and the link function is nonlinear. They also demonstrate the superiority of the modified versions of RankLasso in the case when predictors are substantially correlated. The numerical study also shows that RankLasso performs substantially better in model selection than LADLasso, which is a well-established methodology for robust model selection. | statistics
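A minimal sketch of the RankLasso recipe as described: rank-transform the response, then run ordinary Lasso. The simulated link, heavy-tailed noise, and tuning parameter are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)

n, p = 200, 50
X = rng.standard_normal((n, p))
# Nonlinear link plus heavy-tailed Cauchy noise; only predictors 0, 1, 2 matter.
y = np.exp(X[:, 0] + 0.5 * X[:, 1] - X[:, 2]) + rng.standard_cauchy(n)

r = rankdata(y) / n - 0.5            # centred, scaled rank-transformed response
fit = Lasso(alpha=0.02).fit(X, r)    # ordinary Lasso on the ranks
print(np.nonzero(fit.coef_)[0])      # ideally picks out predictors 0, 1, 2
```

Because ranks are bounded, the Cauchy outliers that would wreck a least-squares fit have no leverage here, which is the robustness argument in a nutshell.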
This paper considers a convex optimization problem with cost and constraints that evolve over time. The function to be minimized is strongly convex and possibly non-differentiable, and variables are coupled through linear constraints. In this setting, the paper proposes an online algorithm based on the alternating direction method of multipliers (ADMM), to track the optimal solution trajectory of the time-varying problem; in particular, the proposed algorithm consists of a primal proximal gradient descent step and an appropriately perturbed dual ascent step. The paper derives tracking results, asymptotic bounds, and linear convergence results. The proposed algorithm is then specialized to a multi-area power grid optimization problem, and our numerical results verify the desired properties. | electrical engineering and systems science |
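A hedged sketch of the online primal-dual pattern described above: at each time step, one proximal-gradient step on the time-varying primal is followed by one dual ascent step on the linear coupling constraint $Ax = b_t$. The problem data, drift model, step sizes, and $\ell_1$ regularizer are all illustrative assumptions, not the paper's algorithm in full.

```python
import numpy as np

rng = np.random.default_rng(4)

p, m = 10, 3
A = rng.standard_normal((m, p))
x, lam = np.zeros(p), np.zeros(m)
alpha, beta, gamma = 0.05, 0.05, 0.1          # primal step, dual step, l1 weight

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for t in range(500):
    b_t = np.array([np.sin(0.05 * t), np.cos(0.05 * t), 1.0])   # drifting constraint
    x_t = np.linspace(1.0, 0.0, p) * np.sin(0.02 * t)           # drifting data
    # Strongly convex smooth part 0.5*||x - x_t||^2, nonsmooth part gamma*||x||_1.
    grad = (x - x_t) + A.T @ lam
    x = prox_l1(x - alpha * grad, alpha * gamma)                # primal proximal step
    lam = lam + beta * (A @ x - b_t)                            # dual ascent step

print(np.round(A @ x - b_t, 2))     # constraint residual being tracked toward zero
```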
We propose a muon-proton collider with asymmetrical multi-TeV beam energies and integrated luminosities of $0.1-1$ ab$^{-1}$. With its large center-of-mass energies and yet small Standard Model background, such a machine can not only improve electroweak precision measurements but also probe new physics beyond the Standard Model to an unprecedented level. We study its potential in measuring the Higgs properties, probing the R-parity-violating Supersymmetry, as well as testing heavy new physics in the muon $g-2$ anomaly. We find that for these physics cases the muon-proton collider can perform better than both the ongoing and future high-energy collider experiments. | high energy physics phenomenology |
We consider the decay of a false vacuum in circumstances where the methods suggested by Coleman run into difficulties. We find that in these cases quantum fluctuations play a crucial role. Namely, they naturally induce both an ultraviolet and infrared cutoff scales, determined by the parameters of the classical solution, beyond which this solution cannot be trusted anymore. This leads to the appearance of a broad class of new $O(4)$ invariant instantons, which would have been singular in the absence of an ultraviolet cutoff. We apply our results to a case where the potential is unbounded from below in a linear way and in particular show how the problem of small instantons is resolved by taking into account the inevitable quantum fluctuations. | high energy physics theory |
We apply the recently proposed perturbative technique to solve the supergravity BPS equations of ${\cal N}=1^*$ theories put on $S^4$. In particular, we have calculated the coefficients of the leading quartic terms exactly in the expression of the universal part of the holographic free energy as a function of the mass parameters. We also report on the coefficients of higher-order terms up to 10th order, which are computed numerically. | high energy physics theory
With the increasing availability of behavioral data from diverse digital sources, such as social media sites and cell phones, it is now possible to obtain detailed information about the structure, strength, and directionality of social interactions in varied settings. While most metrics of network structure have traditionally been defined for unweighted and undirected networks only, the richness of current network data calls for extending these metrics to weighted and directed networks. One fundamental metric in social networks is edge overlap, the proportion of friends shared by two connected individuals. Here we extend definitions of edge overlap to weighted and directed networks, and present closed-form expressions for the mean and variance of each version for the Erd\H{o}s-R\'enyi random graph and its weighted and directed counterparts. We apply these results to social network data collected in rural villages in southern Karnataka, India. We use our analytical results to quantify the extent to which the average overlap of the empirical social network deviates from that of corresponding random graphs and compare the values of overlap across networks. Our novel definitions allow the calculation of edge overlap for more complex networks and our derivations provide a statistically rigorous way for comparing edge overlap across networks. | statistics
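A sketch of edge overlap (the proportion of friends shared by two connected nodes) for an unweighted graph, plus one simple directed variant based on out-neighbourhoods. The paper's weighted/directed definitions may differ; this is purely illustrative.

```python
import networkx as nx

def edge_overlap(G, u, v):
    # Shared neighbours of u and v, excluding the edge's own endpoints.
    nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

G = nx.erdos_renyi_graph(200, 0.05, seed=5)
mean_overlap = sum(edge_overlap(G, u, v) for u, v in G.edges()) / G.number_of_edges()
print(mean_overlap)                  # to compare against an analytical mean

def directed_overlap(D, u, v):
    # One possible directed choice: overlap of out-neighbourhoods.
    nu, nv = set(D.successors(u)) - {v}, set(D.successors(v)) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

D = nx.gnp_random_graph(200, 0.05, seed=6, directed=True)
u, v = next(iter(D.edges()))
print(directed_overlap(D, u, v))
```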
The perfectly matched layer is very important for the elastic wave problem in the frequency domain. Generally, the formulas for the elasticity tensor in the perfectly matched layers are derived from the transformed momentum equation. In this note, we prove that the transformed elasticity tensor derived in this way loses its symmetry. These formulas are therefore theoretically inconsistent, and it is hard to explain their numerical performance. We present a new symmetric formula for the elasticity tensor derived from the weak form, so that the theory of elasticity remains applicable in the perfectly matched layers. | physics
While numerical simulations have been playing a key role in the studies of planet-disk interaction, testing numerical results against observations has been limited so far. With the two directly imaged protoplanets embedded in its circumstellar disk, PDS 70 offers an ideal testbed for planet-disk interaction studies. Using two-dimensional hydrodynamic simulations we show that the observed features can be well explained with the two planets in formation, providing strong evidence that previously proposed theories of planet-disk interaction are in action, including resonant migration, particle trapping, size segregation, and filtration. Our simulations suggest that the two planets are likely in 2:1 mean motion resonance and can remain dynamically stable over million-year timescales. The growth of the planets at $10^{-8}-10^{-7}~M_{\rm Jup}~{\rm yr}^{-1}$, rates comparable to the estimates from H$\alpha$ observations, does not destabilize the resonant configuration. Large grains are filtered at the gap edge and only small, (sub-)$\mu$m grains can flow to the circumplanetary disks and the inner circumstellar disk. With the sub-millimeter continuum ring observed outward of the two directly imaged planets, PDS 70 provides the first observational evidence of particle filtration by gap-opening planets. The observed sub-millimeter continuum emission at the vicinity of the planets can be reproduced when (sub-)$\mu$m grains survive over multiple circumplanetary disk gas viscous timescales and accumulate therein. One such possibility is if (sub-)$\mu$m grains grow in size and remain trapped in pressure bumps, similar to what we find happening in circumstellar disks. We discuss potential implications to planet formation in the solar system and mature extrasolar planetary systems. | astrophysics |
We propose an extension to the Standard Model accommodating two families of Dirac neutral fermions and Majorana fermions under additional ${U(1)_{e-\mu} \times Z_3\times Z_2}$ symmetries, where ${U(1)_{e-\mu}}$ is a flavor-dependent gauge symmetry related to the first and second families of the lepton sector, which features a two-loop induced neutrino mass model. The two families are favored by minimally reproducing the current neutrino oscillation data and the two squared mass differences while canceling the gauge anomalies at the same time. As a result, we have a prediction for the neutrino masses. The lightest Dirac neutral fermion is a dark matter candidate with tree-level interactions restricted to the electron, muon and neutrinos, which makes it difficult to detect in direct dark matter searches as well as indirect searches focusing on the ${\tau}$-channel, such as through ${\gamma}$-rays. It may however be probed by searches for dark matter signatures in electron and positron cosmic rays, and allows the interpretation of a structure appearing in the CALET electron+positron spectrum around 350-400 GeV as its signature, with a boost factor of $\approx$40 from a Breit-Wigner enhancement of the annihilation cross section. | high energy physics phenomenology
Given a finitely generated group with generating set $S$, we study the \emph{cogrowth} sequence, which is the number of words of length $n$ over the alphabet $S$ that are equal to one. This is related to the probability of return for walks in a Cayley graph with steps from $S$. We prove that the cogrowth sequence is not $P$-recursive when~$G$ is an amenable group of superpolynomial growth, answering a question of Garrabant and Pak. | mathematics |
With the rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties. While identifying all sources that account for the stochasticity of models is challenging, it is common to augment predictions with confidence intervals to convey the expected variations in a model's behavior. We require prediction intervals to be well calibrated, to reflect the true uncertainties, and to be sharp. However, existing techniques for obtaining prediction intervals are known to produce unsatisfactory results in at least one of these criteria. To address this challenge, we develop a novel approach for building calibrated estimators. More specifically, we use separate models for prediction and interval estimation, and pose a bi-level optimization problem that allows the former to leverage estimates from the latter through an \textit{uncertainty matching} strategy. Using experiments in regression, time-series forecasting, and object localization, we show that our approach achieves significant improvements over existing uncertainty quantification methods, both in terms of model fidelity and calibration error. | statistics
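A toy numpy sketch of the "uncertainty matching" idea: a linear predictor f and a separate interval-width model g are updated alternately, and f is additionally penalized when its absolute residuals disagree with g's width estimates. The model forms, the L2 stand-in for a quantile loss, and all step sizes are assumptions, not the paper's bi-level formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

n, p = 400, 5
X = rng.standard_normal((n, p))
scale = 0.2 + 0.5 * np.abs(X[:, 0])                  # heteroscedastic noise
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + scale * rng.standard_normal(n)

w = np.zeros(p)                  # predictor f(x) = x.w
u, b = np.zeros(p), 0.1          # width model g(x) = softplus(x.u + b)
lr, lam = 0.05, 0.5

for _ in range(500):
    z = X @ u + b
    width, sig = np.log1p(np.exp(z)), 1.0 / (1.0 + np.exp(-z))
    resid = y - X @ w
    # Interval step: pull widths toward the absolute residuals.
    d = (width - np.abs(resid)) * sig / n
    u -= lr * (X.T @ d)
    b -= lr * d.sum()
    # Predictor step: MSE gradient plus the uncertainty-matching penalty.
    match = (np.abs(resid) - width) * np.sign(resid)
    w -= lr * (-(X.T @ (resid + lam * match)) / n)

print(np.round(w, 2))            # close to the true coefficients (1, -2, 0.5, 0, 0)
```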
Global warming, the phenomenon of increasing global average temperature in the recent decades, is receiving wide attention due to its very significant adverse effects on climate. Whether global warming will continue even in the future, is a question that is most important to investigate. In this regard, the so-called general circulation models (GCMs) have attempted to project the future climate, and nearly all of them exhibit alarming rates of global temperature rise in the future. Although global warming in the current time frame is undeniable, it is important to assess the validity of the future predictions of the GCMs. In this article, we attempt such a study using our recently-developed Bayesian multiple testing paradigm for model selection in inverse regression problems. The model we assume for the global temperature time series is based on Gaussian process emulation of the black box scenario, realistically treating the dynamic evolution of the time series as unknown. We apply our ideas to datasets available from the Intergovernmental Panel on Climate Change (IPCC) website. The best GCM models selected by our method under different assumptions on future climate change scenarios do not convincingly support the present global warming pattern when only the future predictions are considered known. Using our Gaussian process idea, we also forecast the future temperature time series given the current one. Interestingly, our results do not support drastic future global warming predicted by almost all the GCM models. | statistics |
A delay between the occurrence and the reporting of events often has practical implications such as for the amount of capital to hold for insurance companies, or for taking preventive actions in case of infectious diseases. The accurate estimation of the number of incurred but not (yet) reported events forms an essential part of properly dealing with this phenomenon. We review the current practice for analysing such data and we present a flexible regression framework to jointly estimate the occurrence and reporting of events. By linking this setting to an incomplete data problem, estimation is performed by the expectation-maximization algorithm. The resulting method is elegant, easy to understand and implement, and provides refined insights in the nowcasts. The proposed methodology is applied to a European general liability portfolio in insurance. | statistics |
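The joint occurrence/reporting estimation via EM can be illustrated on a toy model: complete-data counts $N_{t,d} \sim \mathrm{Poisson}(\lambda_t p_d)$ for occurrence period $t$ and reporting delay $d$, with only cells $t+d \le T$ observed. This is a hedged stand-in for the paper's flexible regression framework, not its actual model.

```python
import numpy as np

rng = np.random.default_rng(8)

T, D = 12, 5                                   # periods, max delay
lam_true = 50 + 10 * np.sin(np.arange(T))      # true occurrence intensities
p_true = np.array([0.4, 0.25, 0.15, 0.12, 0.08])

N = rng.poisson(np.outer(lam_true, p_true))    # complete data
observed = np.add.outer(np.arange(T), np.arange(D)) <= T - 1
N_obs = np.where(observed, N, 0)               # reporting truncation

lam = N_obs.sum(axis=1).astype(float) + 1.0    # crude initialization
p = np.full(D, 1.0 / D)
for _ in range(200):
    # E-step: impute the not-yet-reported cells with their expected counts.
    filled = np.where(observed, N_obs, np.outer(lam, p))
    # M-step: closed-form Poisson/multinomial updates.
    lam = filled.sum(axis=1)
    p = filled.sum(axis=0) / lam.sum()

print(np.round(lam - lam_true, 1))   # estimation error, larger for recent periods
```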
A second-gradient elastic (SGE) material is identified as the homogeneous solid equivalent to a periodic planar lattice characterized by a hexagonal unit cell, which is made up of three different linear elastic bars ordered in a way that the hexagonal symmetry is preserved and hinged at each node, so that the lattice bars are subject to pure axial strain while bending is excluded. Closed form-expressions for the identified non-local constitutive parameters are obtained by imposing the elastic energy equivalence between the lattice and the continuum solid, under remote displacement conditions having a dominant quadratic component. In order to generate equilibrated stresses, in the absence of body forces, the applied remote displacement has to be constrained, thus leading to the identification in a \lq condensed' form of a higher-order solid, so that imposition of further constraints becomes necessary to fully quantify the equivalent continuum. The identified SGE material reduces to an equivalent Cauchy material only in the limit of vanishing side length of hexagonal unit cell. The analysis of positive definiteness and symmetry of the equivalent constitutive tensors, the derivation of the second-gradient elastic properties from those of the higher-order solid in the \lq condensed' definition, and a numerical validation of the identification scheme are deferred to Part II of this study. | physics |
Inertialess anisotropic particles in a Rayleigh-B\'enard turbulent flow show maximal tumbling rates for weakly oblate shapes, in contrast with the universal behaviour observed in developed turbulence where the mean tumbling rate monotonically decreases with the particle aspect ratio. This is due to the concurrent effect of turbulent fluctuations and of a mean shear flow whose intensity, we show, is determined by the kinetic boundary layers. In Rayleigh-B\'enard turbulence prolate particles align preferentially with the fluid velocity, while oblate ones orient with the temperature gradient. This analysis elucidates the link between particle angular dynamics and small-scale properties of convective turbulence and has implications for the wider class of sheared turbulent flows. | physics |
Quantum anomaly manifests itself in the deviation of breathing mode frequency from the scale invariant value of $2\omega$ in two-dimensional harmonically trapped Fermi gases, where $\omega$ is the trapping frequency. Its recent experimental observation with cold-atoms reveals an unexpected role played by the effective range of interactions, which requires quantitative theoretical understanding. Here we provide accurate, benchmark results on quantum anomaly from a few-body perspective. We consider the breathing mode of a few trapped interacting fermions in two dimensions up to six particles and present the mode frequency as a function of scattering length for a wide range of effective range. We show that the maximum quantum anomaly gradually reduces as effective range increases while the maximum position shifts towards the weak-coupling limit. We extrapolate our few-body results to the many-body limit and find a good agreement with the experimental measurements. Our results may also be directly applicable to a few-fermion system prepared in microtraps and optical tweezers. | condensed matter |
We propose a general theory to construct functorial assignments $\Sigma \longmapsto \Omega_{\Sigma} \in E(\Sigma)$ for a large class of functors $E$ from a certain category of bordered surfaces to a suitable target category of topological vector spaces. The construction proceeds by successive excisions of homotopy classes of embedded pairs of pants, and thus by induction on the Euler characteristic. We provide sufficient conditions to guarantee the infinite sums appearing in this construction converge. In particular, we can generate mapping class group invariant vectors $\Omega_{\Sigma} \in E(\Sigma)$. The initial data for the recursion encode the cases when $\Sigma$ is a pair of pants or a torus with one boundary, as well as the "recursion kernels" used for glueing. We give this construction the name of Geometric Recursion (GR). As a first application, we demonstrate that our formalism produce a large class of measurable functions on the moduli space of bordered Riemann surfaces. Under certain conditions, the functions produced by the geometric recursion can be integrated with respect to the Weil--Petersson measure on moduli spaces with fixed boundary lengths, and we show that the integrals satisfy a topological recursion (TR) generalizing the one of Eynard and Orantin. We establish a generalization of Mirzakhani--McShane identities, namely that multiplicative statistics of hyperbolic lengths of multicurves can be computed by GR, and thus their integrals satisfy TR. As a corollary, we find an interpretation of the intersection indices of the Chern character of bundles of conformal blocks in terms of the aforementioned statistics. The theory has however a wider scope than functions on Teichm\"uller space, which will be explored in subsequent papers; one expects that many functorial objects in low-dimensional geometry could be constructed by variants of our new geometric recursion. | mathematics |
The K-matrix method is often used to describe overlapping resonances. It guarantees the unitarity of the scattering matrix, but its parameters are not resonance masses and widths. It is also unclear how to separate resonant and background contributions and how to describe the background in terms of phase shifts. The Breit-Wigner (BW) approach operates with parameters having a direct experimental meaning, but a simple sum of BW functions is not unitary. We show how to construct a unitary S-matrix by taking into account the interference of the BW functions. The method is simple and straightforward, and background can be added to the resonance amplitudes in the standard quantum-mechanical form. In examples, we give a comparison between the K-matrix and unitary BW approaches. | high energy physics phenomenology
The short answer to the question in the title is 'no'. We identify classes of truncated configuration interaction (CI) wave functions for which the externally corrected coupled-cluster (ec-CC) approach using the three-body ($T_{3}$) and four-body ($T_{4}$) components of the cluster operator extracted from CI does not improve the results of the underlying CI calculations. Implications of our analysis, illustrated by numerical examples, for the ec-CC computations using truncated and selected CI methods are discussed. We also introduce a novel ec-CC approach using the $T_{3}$ and $T_{4}$ amplitudes obtained with the selected CI scheme abbreviated as CIPSI, correcting the resulting energies for the missing $T_{3}$ correlations not captured by CIPSI with the help of moment expansions similar to those employed in the completely renormalized CC methods. | physics |
In this paper, a novel partial form dynamic linearization (PFDL) based data-driven model-free adaptive predictive control (MFAPC) method is proposed for a class of discrete-time single-input single-output nonlinear systems. The main contribution of this paper is that we combine the concept of model predictive control (MPC) with model-free adaptive control (MFAC) to propose a novel MFAPC method. We prove the bounded-input bounded-output stability and the monotonic convergence of the tracking error of the proposed method; moreover, we discuss the possible relationship between the existing PFDL-MFAC and the proposed PFDL-MFAPC. Simulations and an experiment are carried out to verify the effectiveness of the proposed MFAPC. | electrical engineering and systems science
Classical linear discriminant analysis (LDA) is based on the squared Frobenius norm and hence is sensitive to outliers and noise. To improve the robustness of LDA, in this paper we introduce the capped $l_{2,1}$-norm of a matrix, which employs the non-squared $l_2$-norm and a "capped" operation, and further propose a novel capped $l_{2,1}$-norm linear discriminant analysis, called CLDA. Due to the use of the capped $l_{2,1}$-norm, CLDA can effectively remove extreme outliers and suppress the effect of noisy data. In fact, CLDA can also be viewed as a weighted LDA. CLDA is solved through a series of generalized eigenvalue problems with theoretical convergence. The experimental results on an artificial data set, some UCI data sets and two image data sets demonstrate the effectiveness of CLDA. | statistics
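A generic sketch of the capped-norm reweighting pattern underlying methods like CLDA: samples whose residual norm exceeds the cap get weight 0 (treated as extreme outliers), the rest get weight $1/(2\|r_i\|)$, and robust weighted class statistics are recomputed. The full method would re-solve a weighted LDA generalized eigenproblem with these weights; this is NOT the paper's exact update, just the usual capped-$l_{2,1}$ machinery.

```python
import numpy as np

rng = np.random.default_rng(9)

X0, X1 = rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))
X = np.vstack([X0, X1])
labels = np.r_[np.zeros(100, int), np.ones(100, int)]
X[:5] += 25.0                       # inject extreme outliers into class 0

w = np.ones(len(X))
cap = 5.0
for _ in range(10):
    # Weighted class means (the robust statistics being re-estimated).
    means = np.array([np.average(X[labels == c], axis=0, weights=w[labels == c])
                      for c in (0, 1)])
    resid = np.linalg.norm(X - means[labels], axis=1)
    # Capped reweighting: zero weight beyond the cap, 1/(2*||r||) otherwise.
    w = np.where(resid <= cap, 1.0 / (2.0 * np.maximum(resid, 1e-8)), 0.0)

print(int((w[:5] == 0).sum()), "of 5 injected outliers down-weighted to zero")
```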
We further study the improved soft-wall AdS/QCD model with two flavors. The chiral transition behaviors are studied in the case of finite chemical potential, with the chiral phase diagram obtained at zero quark mass. The thermal spectral functions of the vector and axial-vector mesons are calculated, and the in-medium melting properties of the mesons are investigated. We find that the chiral transition behaviors and the meson melting properties at finite chemical potential can be qualitatively described by the improved soft-wall AdS/QCD model, except in the region of large chemical potential. The reason for these inadequate descriptions may be that the background geometry adopted in the model is not a dynamical one able to reproduce the QCD equation of state. To give a quantitative description of these low-energy phenomenologies, we shall consider a more consistent AdS/QCD model which treats the background fields and the chiral fields on the same footing. | high energy physics phenomenology