text | label
---|---
In the present study, we investigated flow structures and properties of elastic turbulence in straight 2D channel viscoelastic fluid flow and tested earlier observations. We discovered a self-organized cycling process of weakly unstable coherent structures (CSs) of co-existing streaks and stream-wise vortices, with the former being destroyed by a Kelvin-Helmholtz-like instability that produces chaotic structures. The sequence periodically repeats itself, leading to a stochastically steady state. This self-sustained process (SSP) remarkably resembles the one investigated theoretically and experimentally for Newtonian turbulence in straight channel flow. The unexpected new ingredient is the observation of elastic waves, which we find to be critical for the existence of CSs and the generation of the SSP, owing to energy pumping from large to smaller scales preceding the sharp power-law decay in the elastic turbulence energy spectrum. The reported finding suggests the universality of CSs in the transition to turbulence via a self-organized cycle (SSP) in linearly stable plane shear flows of both elastic and Newtonian fluids.
|
physics
|
Massive MIMO has been regarded as a key enabling technique for 5G and beyond networks. Nevertheless, its performance is limited by the large overhead needed to obtain the high-dimensional channel information. To reduce the huge training overhead associated with conventional pilot-aided designs, we propose a novel blind data detection method by leveraging the channel sparsity and data concentration properties. Specifically, we propose a novel $\ell_3$-norm-based formulation to recover the data without channel estimation. We prove that the global optimal solution to the proposed formulation can be made arbitrarily close to the transmitted data up to a phase-permutation ambiguity. We then propose an efficient parameter-free algorithm to solve the $\ell_3$-norm problem and resolve the phase-permutation ambiguity. We also derive the convergence rate in terms of key system parameters such as the number of transmitters and receivers, the channel noise power, and the channel sparsity level. Numerical experiments show that the proposed scheme achieves superior performance with low computational complexity.
|
electrical engineering and systems science
|
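As a loose illustration of the flavor of the $\ell_3$-norm formulation above (a hypothetical sketch, not the paper's algorithm: the actual formulation, its constraints, and the phase-permutation resolution step differ), one can maximize an entrywise $\ell_3$-norm over an orthonormal demixing matrix by projected gradient ascent:

```python
import numpy as np

def l3_blind_detect(Y, K, steps=500, lr=1e-2):
    """Hypothetical sketch: recover K data streams from the received matrix
    Y (n_rx x T) by maximizing the entrywise l3-norm of W @ Y over demixing
    matrices W with orthonormal rows (projected gradient ascent).
    All names and the formulation itself are illustrative assumptions."""
    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.standard_normal((Y.shape[0], Y.shape[0])))[0][:K]
    for _ in range(steps):
        S = W @ Y
        W = W + lr * (3 * np.abs(S) * S) @ Y.T  # grad of sum(|S|^3) w.r.t. W
        W = np.linalg.qr(W.T)[0].T              # re-orthonormalize the rows
    return W @ Y
```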
We propose to boost the performance of the density matrix renormalization group (DMRG) in two dimensions by using Gutzwiller projected states as the initialization ansatz. When the Gutzwiller projected state is properly chosen, the notorious "local minimum" issue in DMRG can be circumvented and the precision of DMRG can be improved by orders of magnitude without extra computational cost. Moreover, this method allows us to quantify the closeness of the initial Gutzwiller projected state and the final converged state after DMRG sweeps, thereby shedding light on whether the Gutzwiller ansatz captures the essential entanglement features of the actual ground state for a given Hamiltonian. The Kitaev honeycomb model is exploited to demonstrate and benchmark this new method.
|
condensed matter
|
We propose a Gribov-Zwanziger type action for the Landau-DeWitt gauge that preserves, for any gauge group, the invariance under background gauge transformations. At zero temperature, and to one-loop accuracy, the model can be related to the Gribov no-pole condition. We apply the model to the deconfinement transition in SU(2) and SU(3) Yang-Mills theories and compare the predictions obtained with a single or with various (color dependent) Gribov parameters that can be introduced in the action without jeopardizing its background gauge invariance. The Gribov parameters associated with color directions orthogonal to the background can become negative, while keeping the background effective potential real. In some cases, the proper analysis of the transition requires the potential to be resolved in those regions.
|
high energy physics theory
|
We consider the magnetospheric structure of rotating neutron stars with internally twisted axisymmetric magnetic fields. The twist-induced and rotation-induced toroidal magnetic fields align/counter-align in different hemispheres. Using analytical and numerical calculations (with the PHAEDRA code) we show that, as a result, the North-South symmetry is broken: the magnetosphere and the wind become "angled", of conical shape. Angling of the magnetosphere affects the spindown (making it smaller for mild twists), makes the return current split unequally at the Y-point, and produces an anisotropic wind and a linear acceleration that may dominate over gravitational acceleration in the Galactic potential and give a total kick up to $\sim 100$ km/s. We also consider analytically the structure of the Y-point in the twisted magnetosphere, and provide an estimate of the internal twist beyond which no stable solutions exist: over-twisted magnetospheres must produce plasma ejection events.
|
astrophysics
|
Harmonic, Geometric, Arithmetic, Heronian and Contraharmonic means have been studied by many mathematicians. In 2003, H. Evens studied these means from a geometrical point of view and established some of the inequalities between them using a circle and its radius. In 1961, E. Beckenbach and R. Bellman introduced several inequalities corresponding to means. In this paper, we introduce the concept of mean functions and integral means and give bounds on some of these mean functions and integral means.
|
statistics
|
We give a combinatorial criterion for the tangent bundle on a smooth toric variety to be stable with respect to a given polarisation in terms of the corresponding lattice polytope. Furthermore, we show that for a smooth toric surface and a smooth toric variety of Picard rank 2, there exists an ample line bundle with respect to which the tangent bundle is stable if and only if it is an iterated blow-up of projective space.
|
mathematics
|
Magnetic topological phases of quantum matter are an emerging frontier in physics and material science. Along these lines, several kagome magnets have appeared as the most promising platforms. Here, we explore magnetic correlations in the transition-metal-based kagome magnet TbMn$_{6}$Sn$_{6}$. Our results show that the system exhibits an out-of-plane ferrimagnetic structure $P6/mm'm'$ (comprising Tb and Mn moments) with slow magnetic fluctuations in a wide temperature range. These fluctuations exhibit a slowing down below $T_{\rm C1}^{*}$ ${\simeq}$ 120 K, and a slightly modified quasi-static magnetic state is established below $T_{\rm C1}$ ${\simeq}$ 20 K. A canted variation of the $P6/mm'm'$ structure is possible, where all moments contribute to a net $c$-axis ferrimagnetic state which exhibits zero net in-plane components. Alternatively, a small incommensurate $k$-vector could arise below $T_{\rm C1}$. We further show that the temperature evolution of the anomalous Hall conductivity (AHC) is strongly influenced by the low temperature magnetic crossover. More importantly, the magnetic state identified here seems to be responsible for the large quasi-linear magnetoresistance, as well as for the appearance of quantum oscillations, which are related to the quantized Landau fan structure featuring a spin-polarized Dirac dispersion with a large Chern gap. This raises the exciting prospect of a magnetic system in which the topological response can be controlled, and thus explored, over a wide range of parameters.
|
condensed matter
|
In recent decades, philosophers have begun using empirical data for conceptual analysis, but corpus-based conceptual analysis has so far failed to develop, in part because of the absence of reliable methods to automatically detect concepts in textual data. Previous attempts have shown that topic models can constitute efficient concept detection heuristics, but while they leverage the syntagmatic relations in a corpus, they fail to exploit paradigmatic relations, and thus probably fail to model concepts accurately. In this article, we show that using a topic model that models concepts on a space of word embeddings (Hu and Tsujii, 2016) can lead to significant increases in concept detection performance, as well as enable the target concept to be expressed in more flexible ways using word vectors.
|
computer science
|
We study the generation of magnetic field seeds during a first-order electroweak phase transition, by numerically evolving the classical equations of motion of the bosonic electroweak theory on the lattice. The onset of the transition is implemented by the random nucleation of bubbles with an arbitrarily oriented Higgs field in the broken phase. We find that about 10% of the latent heat is converted into magnetic energy, with most of the magnetic fields being generated in the last stage of the phase transition when the Higgs oscillates around the true vacuum. The energy spectrum of the magnetic field has a peak that shifts towards larger length scales as the phase transition unfolds. By the end of our runs the peak wavelength is of the order of the bubble percolation scale, or about a third of our lattice size.
|
high energy physics phenomenology
|
Weight decay (WD) is a traditional regularization technique in deep learning, but despite its ubiquity, its behavior is still an area of active research. Golatkar et al. have recently shown that WD only matters at the start of training in computer vision, upending traditional wisdom. Loshchilov et al. show that for adaptive optimizers, manually decaying weights can outperform adding an $l_2$ penalty to the loss. This technique has become increasingly popular and is referred to as decoupled WD. The goal of this paper is to investigate these two recent empirical observations. We demonstrate that by applying WD only at the start, the network norm stays small throughout training. This has a regularizing effect as the effective gradient updates become larger. However, traditional generalization metrics fail to capture this effect of WD, and we show how a simple scale-invariant metric can. We also show how the growth of network weights is heavily influenced by the dataset and its generalization properties. For decoupled WD, we perform experiments in NLP and RL where adaptive optimizers are the norm. We demonstrate that the primary issue that decoupled WD alleviates is the mixing of gradients from the objective function and the $l_2$ penalty in the buffers of Adam (which store the estimates of the first-order moment). Adaptivity itself is not problematic, and decoupled WD ensures that the gradients from the $l_2$ term cannot "drown out" the true objective, facilitating easier hyperparameter tuning.
|
computer science
|
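To make the distinction between the two observations concrete, here is a minimal sketch (not the papers' code) of how an $l_2$ penalty versus decoupled WD enters an Adam-style update; in the first case the decay gradient is mixed into the moment buffers, in the second the weights shrink outside the adaptive rescaling:

```python
import torch

def adam_step_l2(p, grad, m, v, t, lr=1e-3, wd=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """Adam with an l2 penalty: the wd term enters the moment buffers."""
    grad = grad + wd * p                          # mixed into m and v below
    m.mul_(b1).add_(grad, alpha=1 - b1)
    v.mul_(b2).addcmul_(grad, grad, value=1 - b2)
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    p -= lr * m_hat / (v_hat.sqrt() + eps)

def adam_step_decoupled(p, grad, m, v, t, lr=1e-3, wd=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """AdamW-style decoupled WD: the decay bypasses the adaptive part,
    so the l2 gradients cannot 'drown out' the true objective."""
    m.mul_(b1).add_(grad, alpha=1 - b1)
    v.mul_(b2).addcmul_(grad, grad, value=1 - b2)
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    p -= lr * (m_hat / (v_hat.sqrt() + eps) + wd * p)
```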
Let $\mathbb{F}_q$ be a finite field of odd characteristic. We study R\'edei functions that induce permutations over $\mathbb{P}^1(\mathbb{F}_q)$ whose cycle decomposition contains only cycles of length $1$ and $j$, for an integer $j\geq 2$. When $j$ is $4$ or a prime number, we give necessary and sufficient conditions for a R\'edei permutation of this type to exist over $\mathbb{P}^1(\mathbb{F}_q)$, characterize R\'edei permutations consisting of $1$- and $j$-cycles, and determine their total number. We also present explicit formulas for R\'edei involutions based on the number of fixed points, and procedures to construct R\'edei permutations with a prescribed number of fixed points and $j$-cycles for $j \in \{3,4,5\}$.
|
mathematics
|
We establish a weighted pointwise Jacobian determinant inequality on corank 1 Carnot groups related to optimal mass transportation akin to the work of Cordero-Erausquin, McCann and Schmuckenschl\"ager. In this setting, the presence of abnormal geodesics does not allow the application of the general sub-Riemannian optimal mass transportation theory developed by Figalli and Rifford and we need to work with a weaker notion of Jacobian determinant. Nevertheless, our result achieves a transition between Euclidean and sub-Riemannian structures, corresponding to the mass transportation along abnormal and strictly normal geodesics, respectively. The weights appearing in our expression are distortion coefficients that reflect the delicate sub-Riemannian structure of our space. As applications, entropy, Brunn-Minkowski and Borell-Brascamp-Lieb inequalities are established on Carnot groups.
|
mathematics
|
Visible Light Communication (VLC) systems provide not only illumination and data communication, but also indoor monitoring services, if the effect that different events create on the received optical signal is properly tracked. For this purpose, the Channel State Information (CSI) that a VLC receiver computes to equalize the subcarriers of the OFDM signal can also be reused to train an unsupervised learning classifier. This way, different clusters can be created on the collected CSI data, which could then be mapped into relevant events to be monitored in the indoor environment, such as the presence of a new object in a given position or a change in the position of a given object. Compared to supervised learning algorithms, the proposed approach does not need to add tags to the training data, notably simplifying the implementation of the machine learning classifier. The practical validation of the monitoring approach was done with the aid of a software-defined VLC link based on OFDM, in which a copy of the intensity-modulated signal coming from a phosphor-converted LED was captured by a pair of photodetectors (PDs). The performance evaluation of the experimental VLC-based monitoring demo achieved a positioning accuracy in the few-centimeter range, without the necessity of deploying a large number of sensors and/or adding a VLC-enabled sensor on the object to be tracked.
|
electrical engineering and systems science
|
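A minimal sketch of the unsupervised step described above, assuming CSI snapshots are collected as complex per-subcarrier channel estimates and clustered with k-means; the feature choice and the mapping from clusters to monitored events are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_csi(csi, n_events=4):
    """Cluster CSI snapshots (n_snapshots x n_subcarriers, complex) into
    groups that can later be mapped to monitored events, e.g. 'object at
    position A' or 'object moved'. Features: per-subcarrier magnitude
    and phase (an assumption for illustration)."""
    feats = np.hstack([np.abs(csi), np.angle(csi)])
    km = KMeans(n_clusters=n_events, n_init=10, random_state=0).fit(feats)
    return km.labels_, km.cluster_centers_
```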
Supercooled Stefan problems describe the evolution of the boundary between the solid and liquid phases of a substance, where the liquid is assumed to be cooled below its freezing point. Following the methodology of Delarue, Nadtochiy and Shkolnikov, we construct solutions to the one-phase one-dimensional supercooled Stefan problem through a certain McKean-Vlasov equation, which allows us to define global solutions even in the presence of blow-ups. Solutions to the McKean-Vlasov equation arise as mean-field limits of particle systems interacting through hitting times, which is important for systemic risk modeling. Our main contributions are: (i) We prove a general tightness theorem for the Skorokhod M1-topology which applies to processes that can be decomposed into a continuous and a monotone part. (ii) We prove propagation of chaos for a perturbed version of the particle system for general initial conditions. (iii) We prove a conjecture of Delarue, Nadtochiy and Shkolnikov, relating the solution concepts of so-called minimal and physical solutions, showing that minimal solutions of the McKean-Vlasov equation are physical whenever the initial condition is integrable.
|
mathematics
|
A high fraction of artifact-free signals is highly desirable in functional neuroimaging and brain-computer interfacing (BCI). We present the high-variance electrode artifact removal (HEAR) algorithm to remove transient electrode pop and drift (PD) artifacts from electroencephalographic (EEG) signals. Transient PD artifacts reflect impedance variations at the electrode-scalp interface that are caused by ion concentration changes. HEAR and its online version (oHEAR) are open-source and publicly available. Both outperformed state-of-the-art offline and online transient, high-variance artifact correction algorithms for simulated EEG signals. (o)HEAR attenuated PD artifacts by approx. 25 dB, and at the same time maintained a high SNR during PD artifact-free periods. For real-world EEG data, (o)HEAR reduced the fraction of outlier trials by half and maintained the waveform of a movement-related cortical potential during a center-out reaching task. In the case of BCI training, using oHEAR can improve the reliability of the feedback a user receives by reducing the potential negative impact of PD artifacts.
|
electrical engineering and systems science
|
Understanding electrical transport in strange metals, including the seeming universality of Planckian $T$-linear resistivity, remains a longstanding challenge in condensed matter physics. We propose that local imaging techniques, such as nitrogen vacancy center magnetometry, can locally identify signatures of quantum critical response which are invisible in measurements of a bulk electrical resistivity. As an illustrative example, we use a minimal holographic model for a strange metal in two spatial dimensions to predict how electrical current will flow in regimes dominated by quantum critical dynamics on the Planckian length scale. We describe the crossover between quantum critical transport and hydrodynamic transport (including Ohmic regimes), both in charge neutral and finite density systems. We compare our holographic predictions to experiments on charge neutral graphene, finding quantitative agreement with available data; we suggest further experiments which may determine the relevance of holography to transport on Planckian scales in this material. More broadly, we propose that locally imaged transport be used to test the universality (or lack thereof) of microscopic dynamics in the diverse set of quantum materials exhibiting $T$-linear resistivity.
|
condensed matter
|
The Chiral Magnetic Effect (CME) has been investigated as a new transport phenomenon in condensed matter. Such an effect appears in systems with chiral fermions and involves an electric current generated by a magnetic field by means of an "exotic" magnetic conductivity. This effect can also be connected with extensions of the usual Ohm's law, either in magnetohydrodynamics or in Lorentz-violating scenarios. In this work, we study the classical propagation of electromagnetic waves in isotropic dispersive matter subject to a generalized Ohm's law. The latter involves currents linear in the magnetic field and implies scenarios inducing parity violation. We pay special attention to the case of a vanishing electric conductivity. For a diagonal magnetic conductivity, which includes the CME, the refractive index is modified in a way that implies birefringence. For a nondiagonal magnetic conductivity, modified refractive indices exhibiting imaginary parts occur, ascribing a conducting behavior to a usual dielectric medium. Our findings provide new insight into typical material properties associated with a magnetic conductivity.
|
high energy physics theory
|
General principles of quantum field theory imply that there exists an operator product expansion (OPE) for Wightman functions in Minkowski momentum space that converges for arbitrary kinematics. This convergence is guaranteed to hold in the sense of a distribution, meaning that it holds for correlation functions smeared by smooth test functions. The conformal blocks for this OPE are conceptually extremely simple: they are products of 3-point functions. We construct the conformal blocks in 2-dimensional conformal field theory and show that the OPE in fact converges pointwise to an ordinary function in a specific kinematic region. Using microcausality, we also formulate a bootstrap equation directly in terms of momentum space Wightman functions.
|
high energy physics theory
|
Gaussian Boson Sampling (GBS) is a near-term platform for photonic quantum computing. Applications have been developed which rely on directly programming GBS devices, but the ability to train and optimize circuits has been a key missing ingredient for developing new algorithms. In this work, we derive analytical gradient formulas for the GBS distribution, which can be used to train devices using standard methods based on gradient descent. We introduce a parametrization of the distribution that allows the gradient to be estimated by sampling from the same device that is being optimized. In the case of training using a Kullback-Leibler divergence or log-likelihood cost function, we show that gradients can be computed classically, leading to fast training. We illustrate these results with numerical experiments in stochastic optimization and unsupervised learning. As a particular example, we introduce the variational Ising solver, a hybrid algorithm for training GBS devices to sample ground states of a classical Ising model with high probability.
|
quantum physics
|
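The general mechanism behind sampling-based gradient estimators of this kind (stated here generically, not as the paper's exact formula for the GBS distribution) is the score-function identity, which rewrites the gradient of an expected cost as an expectation over samples from the same distribution being optimized:

$$\partial_\theta\, \mathbb{E}_{S\sim p_\theta}[h(S)] \;=\; \sum_S h(S)\, \partial_\theta\, p_\theta(S) \;=\; \mathbb{E}_{S\sim p_\theta}\!\left[h(S)\, \partial_\theta \log p_\theta(S)\right],$$

so that a Monte Carlo average of $h(S)\,\partial_\theta \log p_\theta(S)$ over device samples estimates the gradient.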
We discuss an electro-osmotic flow near charged porous coatings of finite hydrodynamic permeability, impregnated with an outer electrolyte solution. It is shown that their electrokinetic (zeta) potential is generally augmented compared to the surface electrostatic potential, thanks to a large liquid slip at their surface that emerges due to an electro-osmotic flow in the counter-ion-enriched porous films. The inner flow shows a very rich behavior controlled by the volume charge density of the coating, its Brinkman length, and the concentration of added salt. Interestingly, even for a relatively small Brinkman length the zeta-potential can, in some cases, become huge, providing a very fast outer flow in the bulk electrolyte. When the Brinkman length is large enough, the zeta-potential can be extremely high even at practically vanishing surface potential. To describe the slip velocity in a simple manner, we introduce the concept of an electro-osmotic slip length and demonstrate that the latter is always defined by the hydrodynamic permeability of the porous film and also, depending on the regime, either by its volume charge density or by the salt concentration. These results provide a framework for the rational design of porous coatings to enhance electrokinetic phenomena, and for tuning their properties by adjusting bulk electrolyte concentrations, with direct applications in microfluidics.
|
physics
|
This paper proposes an $SE_2(3)$ based extended Kalman filtering (EKF) framework for the inertial-integrated state estimation problem. The error representation using the straight difference of two vectors in the inertial navigation system may not be reasonable, as it does not take the direction difference into consideration. Therefore, we choose to use the $SE_2(3)$ matrix Lie group to represent the state of the inertial-integrated navigation system, which consequently leads to the common-frame error representation. With the new velocity and position error definitions, we leverage the group-affine dynamics with the autonomous error properties and derive the error-state differential equations for inertial-integrated navigation on the north-east-down (NED) navigation frame and the earth-centered earth-fixed (ECEF) frame, respectively; the corresponding EKF, termed the $SE_2(3)$ based EKF, is also derived. It provides a new perspective on the geometric EKF with a more sophisticated formulation for the inertial-integrated navigation system. Furthermore, we design two new modified error dynamics on the NED frame and the ECEF frame, respectively, by introducing new auxiliary vectors. Finally, the equivalence of the left-invariant EKF and the left $SE_2(3)$ based EKF is shown in the navigation frame and the ECEF frame.
|
computer science
|
In the present paper, the maximum principle for finite horizon state constrained problems from the book by R. Vinter [\textit{Optimal Control}, Birkh\"auser, Boston, 2000; Theorem~9.3.1] is analyzed via parametric examples. The latter have their origin in a recent paper by V.~Basco, P.~Cannarsa, and H.~Frankowska, and resemble the optimal growth problem in mathematical economics. The solution existence for these parametric examples is established by invoking Filippov's existence theorem for Mayer problems. Since the maximum principle is only a necessary condition for local optimal processes, a large amount of additional investigation is needed to obtain a comprehensive synthesis of the finitely many processes suspected of being local minimizers. Our analysis not only helps to understand the principle in depth, but also serves as a sample of applying it to meaningful prototypes of economic optimal growth models. Problems with unilateral state constraints are studied in Part 1 of the paper. Problems with bilateral state constraints will be addressed in Part 2.
|
mathematics
|
Quality control is of vital importance during electronics production. As the methods of producing electronic circuits improve, there is an increasing chance of solder defects when assembling the printed circuit board (PCB). Many technologies have been incorporated for inspecting failed soldering, such as X-ray imaging, optical imaging, and thermal imaging. With advanced algorithms, these technologies are expected to control production quality based on digital images. However, current algorithms are sometimes not accurate enough to meet quality control requirements, and specialists are needed for follow-up checks. For automated X-ray inspection, the joint of interest on the X-ray image is located by a region of interest (ROI) and inspected by some algorithms. Incorrect ROIs degrade the inspection algorithm, and the high dimensionality of X-ray images and their varying sizes further challenge the inspection algorithms. On the other hand, recent advances in deep learning have shed light on image-based tasks and are competitive with human performance. In this paper, deep learning is incorporated into X-ray-imaging-based quality control during PCB quality inspection. Two artificial intelligence (AI) based models are proposed and compared for joint defect detection, addressing the noisy-ROI problem and the varying image dimension problem. The efficacy of the proposed methods is verified through experiments on a real-world 3D X-ray dataset. By incorporating the proposed methods, the specialist inspection workload is greatly reduced.
|
electrical engineering and systems science
|
Due to their flexibility and predictive performance, machine-learning based regression methods have become an important tool for predictive modeling and forecasting. However, most methods focus on estimating the conditional mean or specific quantiles of the target quantity and do not provide the full conditional distribution, which contains uncertainty information that might be crucial for decision making. In this article, we provide a general solution by transforming a conditional distribution estimation problem into a constrained multi-class classification problem, to which tools such as deep neural networks can be applied. We propose a novel joint binary cross-entropy loss function to accomplish this goal. We demonstrate its performance in various simulation studies, comparing it to state-of-the-art competing methods. Additionally, our method shows improved accuracy in a probabilistic solar energy forecasting problem.
|
statistics
|
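One plausible reading of this construction (a sketch under assumptions; the paper's exact loss, handling of the monotonicity constraint, and architecture may differ) discretizes the target with thresholds $t_1 < \dots < t_K$, predicts $P(Y \le t_k \mid X)$ for every $k$, and sums binary cross-entropies over all thresholds:

```python
import torch
import torch.nn as nn

class CondCDFNet(nn.Module):
    """Hypothetical sketch: estimate P(Y <= t_k | X) at K thresholds; the
    monotonicity (CDF) constraint handling is omitted here."""
    def __init__(self, d_in, thresholds):
        super().__init__()
        self.register_buffer("t", torch.as_tensor(thresholds).float())
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                 nn.Linear(64, len(thresholds)))

    def forward(self, x):
        return torch.sigmoid(self.net(x))        # K CDF values per sample

def joint_bce_loss(cdf_hat, y, t):
    """Sum of binary cross-entropies over all thresholds."""
    targets = (y.unsqueeze(1) <= t.unsqueeze(0)).float()  # indicator labels
    return nn.functional.binary_cross_entropy(cdf_hat, targets)
```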
In this paper, we study the 't Hooft type instantons in eight dimensions. Using various designs of such instantons, we find new soliton solutions to the low-energy effective theory of the heterotic fivebrane.
|
high energy physics theory
|
The present study reports on an innovative method to prepare discrete diamond nanoparticles, or nanodiamonds (NDs), with high structural and optical quality through a top-down approach, by controlled oxidation of a pre-synthesized nanocrystalline diamond (NCD) film. These NDs are studied for their structural and optical properties using atomic force microscopy (AFM), Raman and fluorescence (FL) spectroscopy. While AFM analysis confirms a uniform distribution of discrete NDs with sizes varying from a few tens of nanometers to about a micron, spectroscopic investigations reveal the presence of impurity-vacancy-related color centers exhibiting FL at 637 and 738 nm as a function of particle size. In addition, an intense emission originating from vacancy centers associated with N and Si (SiV-) is observed for all NDs at temperatures close to that of liquid nitrogen. A detailed spectral analysis is carried out on the structural defects in these NDs. The full width at half maximum of the diamond Raman band (~1332 cm-1) is found to be as narrow as 1.5 cm-1, which reveals the superior structural quality of these NDs. Further, mapping of the diamond Raman and SiV- FL spectra confirms the uniform distribution of NDs throughout the substrate. The narrow line widths and the minimal shift in peak positions of the Raman and FL spectra confirm this aspect.
|
condensed matter
|
We study the gravity-mediated scattering of scalar fields based on a parameterisation of the Lorentzian quantum effective action. We demonstrate that the interplay of infinite towers of spin zero and spin two poles at imaginary squared momentum leads to scattering amplitudes that are compatible with unitarity bounds, causal, and scale-free at trans-Planckian energy. Our construction avoids introducing non-localities or the massive higher-spin particles that are characteristic in string theory.
|
high energy physics theory
|
We report on new X-ray and optical observations of PSR J2030+4415, a gamma-ray pulsar with an H$\alpha$ bow shock. These data reveal the velocity structure of the bow shock apex and resolve unusual X-ray structure in its interior. In addition, the system displays a very long, thin filament, extending at least $5^\prime$ at $\sim 130^\circ$ to the pulsar motion vector. Careful astrometry, compared with a short archival exposure, detects the pulsar proper motion at 85 mas yr$^{-1}$. Combined with the H$\alpha$ velocity structure, this allows us to estimate the distance as $0.75$ kpc.
|
astrophysics
|
Electronic states of a correlated material can be effectively modified by structural variations delivered from a single-crystal substrate. In this letter, we show that CrN films grown on MgO (001) substrates have a (001) orientation, whereas CrN films on $\alpha$-Al2O3 (0001) substrates are oriented along the (111) direction parallel to the surface normal. Transport properties of CrN films are remarkably different depending on the crystallographic orientation. The critical thickness for the metal-insulator transition (MIT) in CrN 111 films is significantly larger than that of CrN 001 films. In contrast to CrN 001 films, which show no apparent defects, scanning transmission electron microscopy results reveal that CrN 111 films exhibit strain-induced structural defects, e.g., periodic horizontal twinning domains, resulting in increased electron scattering that facilitates an insulating state. Understanding the key parameters that determine the electronic properties of ultrathin conductive layers is highly desirable for future technological applications.
|
condensed matter
|
This paper analyzes the robustness and stability of disturbance observer (DOB) and reaction torque observer (RTOB) based robust motion control systems. Conventionally, a DOB is analyzed by using an ideal velocity measurement, obtained without a low-pass filter (LPF); however, this is impractical due to noise constraints. An LPF on the velocity measurement changes the robustness of a DOB significantly and puts a new design constraint on the bandwidth of a DOB. An RTOB, which is used to estimate environmental impedance, is an application of a DOB. The stability of an RTOB based robust force control system has not been reported yet, since its oversimplified model is derived by assuming that an RTOB has a feedforward control structure. In reality, however, it has a feedback control structure; therefore, not only the performance but also the stability is affected by the design parameters of an RTOB. A new practical stability analysis method is proposed for an RTOB based robust force control system. In addition, novel and practical design methods, which improve the robustness of a DOB and the stability and performance of an RTOB based robust force control system, are proposed using the new analysis methods. The validity of the proposals is verified by simulation and experimental results.
|
electrical engineering and systems science
|
We use a lattice model to study first-passage time distributions of target finding events through complex environments with elongated fibers distributed with different anisotropies and volume occupation fractions. For isotropic systems and for low densities of aligned fibers, the three-dimensional search is a Poisson process with the first-passage time exponentially distributed with the most probable finding time at zero. At high enough densities of aligned fibers, elongated channels emerge, reducing the dynamics dimensionality to one dimension. We show how the shape and size of the channels modify the behavior of the first-passage time distribution and its short, intermediate, and long time scales. We develop an exactly solvable model for synthetic rectangular channels, which captures the effects of the tortuous local structure of the elongated channels that naturally emerge in our system. For arbitrary values of the nematic order parameter of fiber orientations, we develop a mapping to the simpler situation of fully aligned fibers at some other effective volume occupation fraction. Our results shed light on the molecular transport of biomolecules between biological cells in complex fibrous environments.
|
condensed matter
|
The low-complexity assumption in linear systems can often be expressed as rank deficiency in data matrices with generalized Hankel structure. This makes it possible to denoise the data by estimating the underlying structured low-rank matrix. However, standard low-rank approximation approaches are not guaranteed to perform well in estimating the noise-free matrix. In this paper, recent results in matrix denoising by singular value shrinkage are reviewed. A novel approach is proposed to solve the low-rank Hankel matrix denoising problem by using an iterative algorithm for structured low-rank approximation, modified with data-driven singular value shrinkage. It is shown numerically, in both the input-output trajectory denoising and the impulse response denoising problems, that the proposed method performs best among existing algorithms for low-rank matrix approximation and denoising in terms of estimating the noise-free matrix.
|
electrical engineering and systems science
|
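To fix ideas, here is a minimal sketch of such an alternating scheme, with hard rank-$r$ truncation standing in for the paper's data-driven shrinkage (function names are illustrative):

```python
import numpy as np

def hankel(x, L):
    """L x (len(x)-L+1) Hankel matrix of signal x: H[i, j] = x[i + j]."""
    return np.lib.stride_tricks.sliding_window_view(x, len(x) - L + 1)[:L]

def dehankel(H):
    """Project back to a signal by averaging anti-diagonals."""
    L, K = H.shape
    x, counts = np.zeros(L + K - 1), np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            x[i + j] += H[i, j]
            counts[i + j] += 1
    return x / counts

def hankel_denoise(y, L, r, iters=50):
    """Alternate singular value truncation (a data-driven shrinker would
    replace it) with structure-restoring anti-diagonal averaging."""
    x = np.asarray(y, dtype=float).copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(hankel(x, L), full_matrices=False)
        s[r:] = 0.0                      # hard shrinkage of singular values
        x = dehankel((U * s) @ Vt)
    return x
```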
We analyse the gravitational wave and low energy signatures of a Pati-Salam phase transition. For a Pati-Salam scale of $M_{PS} \sim 10^5$ GeV, we find a stochastic power spectrum within reach of the next generation of ground-based interferometer experiments such as the Einstein Telescope, in parts of the parameter space. We study the lifetime of the proton in this model, as well as complementarity with low energy constraints including electroweak precision data, neutrino mass measurements, lepton flavour violation, and collider constraints.
|
high energy physics phenomenology
|
The ANITA experiment, which is designed to detect ultra-high energy neutrinos, has reported the observation of two anomalous events, directed at angles of $27^{\circ}$ and $35^{\circ}$ with respect to the horizontal. At these angles, the Earth is expected to efficiently absorb ultra-high energy neutrinos, making the origin of these events unclear and motivating explanations involving physics beyond the Standard Model. In this study, we consider the possibility that ANITA's anomalous events are the result of Askaryan emission produced by exotic weakly interacting particles scattering elastically with nuclei in the Antarctic ice sheet. Such particles could be produced by superheavy ($\sim 10^{10}-10^{13}$ GeV) dark matter particles decaying in the halo of the Milky Way. Such scenarios can be constrained by existing measurements of the high-latitude gamma-ray background and the ultra-high energy cosmic ray spectrum, along with searches for ultra-high energy neutrinos by IceCube and other neutrino telescopes.
|
astrophysics
|
The transfer matrix ${\mathbf{M}}$ of a short-range potential may be expressed in terms of the time-evolution operator for an effective two-level quantum system with a time-dependent non-Hermitian Hamiltonian. This leads to a dynamical formulation of stationary scattering. We explore the utility of this formulation in the study of the low-energy behavior of the scattering data. In particular, for the exponentially decaying potentials, we devise a simple iterative scheme for computing terms of arbitrary order in the series expansion of ${\mathbf{M}}$ in powers of the wavenumber. The coefficients of this series are determined in terms of a pair of solutions of the zero-energy stationary Schr\"odinger equation. We introduce a transfer matrix for the latter equation, express it in terms of the time-evolution operator for an effective two-level quantum system, and use it to obtain a perturbative series expansion for the solutions of the zero-energy stationary Schr\"odinger equation. Our approach allows for identifying the zero-energy resonances for scattering potentials in both full line and half-line with zeros of the entries of the zero-energy transfer matrix of the potential or its trivial extension to the full line.
|
quantum physics
|
We numerically study the dynamics and stationary states of a spin ensemble strongly coupled to a single-mode resonator subjected to loss and external driving. Employing a generalized cumulant expansion approach we analyze finite-size corrections to a semiclassical description of amplitude bistability, which is a paradigm example of a driven-dissipative phase transition. Our theoretical model allows us to include inhomogeneous broadening of the spin ensemble and to capture in which way the quantum corrections approach the semiclassical limit for increasing ensemble size $N$. We set up a criterion for the validity of the Maxwell-Bloch equations and show that close to the critical point of amplitude bistability even very large spin ensembles consisting of up to $10^4$ spins feature significant deviations from the semiclassical theory.
|
quantum physics
|
Although complex-valued (CV) neural networks have shown better classification results than their real-valued (RV) counterparts for polarimetric synthetic aperture radar (PolSAR) classification, the extension of pixel-level RV networks to the complex domain has not yet been thoroughly examined. This paper presents a novel complex-valued deep fully convolutional neural network (CV-FCN) designed for PolSAR image classification. Specifically, CV-FCN uses PolSAR CV data that includes the phase information and utilizes the deep FCN architecture that performs pixel-level labeling, integrating the feature extraction module and the classification module in a unified framework. Technically, owing to the particularity of PolSAR data, a dedicated complex-valued weight initialization scheme is defined to initialize CV-FCN. It considers the distribution of polarization data so that CV-FCN can be trained from scratch in an efficient and fast manner. CV-FCN employs a complex downsampling-then-upsampling scheme to extract dense features. To enrich discriminative information, multi-level CV features that retain more polarization information are extracted via the complex downsampling scheme. Then, a complex upsampling scheme is proposed to predict dense CV labeling. It employs complex max-unpooling layers to capture more spatial information for better robustness to speckle noise. In addition, to achieve faster convergence and more precise classification results, a novel average cross-entropy loss function is derived for CV-FCN optimization. Experiments on real PolSAR datasets demonstrate that CV-FCN achieves better classification performance than other state-of-the-art methods.
|
electrical engineering and systems science
|
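As background for the complex-valued layers, one standard way to realize a complex convolution on real-valued frameworks (a generic construction, not necessarily CV-FCN's implementation) expands the complex product into real convolutions:

```python
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """(W_r + i W_i) * (x_r + i x_i)
         = (W_r*x_r - W_i*x_i) + i (W_r*x_i + W_i*x_r).
    A complex bias could be added separately; omitted for clarity."""
    def __init__(self, c_in, c_out, k, **kw):
        super().__init__()
        self.conv_r = nn.Conv2d(c_in, c_out, k, bias=False, **kw)
        self.conv_i = nn.Conv2d(c_in, c_out, k, bias=False, **kw)

    def forward(self, x_r, x_i):
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))
```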
In this paper, the trajectory optimization problem for a multi-aerial base station (ABS) communication network is investigated. The objective is to find the trajectories of the ABSs so that the sum-rate of the users served by each ABS is maximized. To reach this goal, along with the optimal trajectory design, optimal power and sub-channel allocation is also of great importance in order to support the users with the highest possible data rates. To solve this complicated problem, we divide it into two sub-problems: the ABS trajectory optimization sub-problem, and the joint power and sub-channel assignment sub-problem. Then, based on the Q-learning method, we develop a distributed algorithm which solves these sub-problems efficiently and does not need a significant amount of information exchange between the ABSs and the core network. Simulation results show that although Q-learning is a model-free reinforcement learning technique, it has a remarkable capability to train the ABSs to optimize their trajectories based on the received reward signals, which carry information about the topology of the network.
|
electrical engineering and systems science
|
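The per-ABS learning step described above reduces, in its simplest tabular form, to the standard Q-learning update; the state/action encoding (e.g., a grid position and a movement direction) is an assumption for illustration:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step; each ABS holds its own table and only
    needs its local reward signal, keeping information exchange low."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

def epsilon_greedy(Q, s, n_actions, eps=0.1, rng=np.random.default_rng(0)):
    """Exploration policy used while training."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))
```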
We show that a gas of relativistic electrons is a left-handed material at low frequencies by computing the effective electric permittivity and effective magnetic permeability that appear in Maxwell's equations in terms of the responses appearing in the constitutive relations, and showing that the former are both negative below the {\it same} frequency, which coincides with the zero-momentum frequency of longitudinal plasmons. We also show, by explicit computation, that the photonic mode of the electromagnetic radiation does not dissipate energy, confirming that it propagates in the gas with the speed of light in vacuum, and that the medium is transparent to it. We then combine those results to show that the gas has a negative effective index of refraction $n_{\rm eff}=-1$. We illustrate the consequences of this fact for Snell's law, and for the reflection and transmission coefficients of the gas.
|
condensed matter
|
Perturbation theory is an indispensable tool for studying the cosmic large-scale structure, and establishing its limits is therefore of utmost importance. One crucial limitation of perturbation theory is shell-crossing, which is the instance when cold-dark-matter trajectories intersect for the first time. We investigate Lagrangian perturbation theory (LPT) at very high orders in the vicinity of the first shell-crossing for random initial data in a realistic three-dimensional Universe. For this we have numerically implemented the all-order recursion relations for the matter trajectories, from which the convergence of the LPT series at shell-crossing is established. Convergence studies performed up to the 40th order reveal the nature of the convergence-limiting singularities. These singularities are not the well-known density singularities at shell-crossing but occur at later times when LPT already ceased to provide physically meaningful results.
|
astrophysics
|
The nonlinear interaction of intense linearly polarized electromagnetic waves (EMWs) with longitudinal electron density perturbations is revisited in relativistic degenerate plasmas. The nonlinear dynamics of the EMWs and the longitudinal field, driven by the EMW ponderomotive force, is governed by a coupled set of nonlinear partial differential equations. A numerical simulation of these coupled equations reveals that the generation of wakefields is possible in weakly relativistic degenerate plasmas with $R_0\equiv p_F/mc\ll1$ and $v_g/c\sim1$, where $p_F$ is the Fermi momentum, $m$ is the mass of electrons, $c$ is the speed of light in vacuum, and $v_g$ is the EMW group velocity. However, when the ratio $v_g/c$ is reduced to $\sim0.1$, the wakefield generation is suppressed; instead, the longitudinal fields become localized, forming soliton-like structures. On the other hand, in the regimes of moderate $(R_0\lesssim1)$ or strong relativistic degeneracy $(R_0>1)$ with $v_g/c\sim0.1$, only the EM solitons can be formed.
|
physics
|
Purity and coherence of a quantum state are recognized as useful resources for various information processing tasks. In this article, we propose a valid fidelity-based measure of purity and a coherence monotone, and establish a relationship between them. This formulation of coherence is extended to quantum correlation relative to measurement. We also study the role of weak measurement on purity.
|
quantum physics
|
One of the most widely used chiroptical spectroscopic methods for studying chiral molecules is Raman optical activity; however, the chiral Raman optical activity signal is extremely weak. Here, we theoretically examine enhanced chiral signals in a system with strongly prepared molecular coherence. We show that the enhanced chiral signal due to strong molecular coherence is up to four orders of magnitude higher than that of the spontaneous Raman optical activity. We discuss several advantages of studying the heterodyned signal obtained by combining the anti-Stokes signal with a local oscillator. The heterodyning allows direct measurement of the ratio of the chiral and achiral parameters. Taking advantage of the molecular coherence and heterodyne detection, the coherent anti-Stokes Raman scattering technique opens up a new potential application for investigation of biomolecular chirality.
|
quantum physics
|
Mapping states with explicit gluonic degrees of freedom in the light sector is a challenge, and has led to controversies in the past. In particular, the experiments have reported two different hybrid candidates with spin-exotic signature, pi1(1400) and pi1(1600), which couple separately to eta pi and eta' pi. This picture is not compatible with recent Lattice QCD estimates for hybrid states, nor with most phenomenological models. We consider the recent partial wave analysis of the eta(') pi system by the COMPASS collaboration. We fit the extracted intensities and phases with a coupled-channel amplitude that enforces the unitarity and analyticity of the S-matrix. We provide a robust extraction of a single exotic pi1 resonant pole, with mass and width 1564 +- 24 +- 86 MeV and 492 +- 54 +- 102 MeV, which couples to both eta(') pi channels. We find no evidence for a second exotic state. We also provide the resonance parameters of the a2(1320) and a2'(1700).
|
high energy physics phenomenology
|
We prove that if $L(G)$ immerses $K_t$ then $L(mG)$ immerses $K_{mt}$, where $mG$ is the graph obtained from $G$ by replacing each edge in $G$ with a parallel edge of multiplicity $m$. This implies that when $G$ is a simple graph, $L(mG)$ satisfies a conjecture of Abu-Khzam and Langston. We also show that when $G$ is a line graph, $G$ has a $K_t$-immersion iff $G$ has a $K_t$-minor whenever $t\leq 4$, but this equivalence fails in both directions when $t \geq 5$.
|
mathematics
|
This paper presents, develops, and discusses the existence of a process with a long-range memory structure, representing the independence between the degree of randomness of the traffic generated by the sources and the flow pattern exhibited by the network. The existence of the process is presented in terms of a new algorithm, a variant of Whittle's maximum likelihood estimator (MLE), for the calculation of the Hurst exponent (H) of second-order self-similar stationary time series of the flows of the individual sources and of their aggregation. We also discuss the additional problems introduced by the phenomenon of locality of the Hurst exponent, which appears when the traffic flows consist of diverse elements with different Hurst exponents. The approach is presented with the intention of serving as a new, alternative way of modeling and simulating traffic in existing computer networks.
|
physics
|
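For illustration, a compact local-Whittle-style estimator of H under the asymptotic spectral form $f(\lambda) \propto \lambda^{1-2H}$; the exact fractional-Gaussian-noise spectral density, and the variant proposed in the paper, involve refinements omitted here:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_hurst(x):
    """Approximate Whittle estimate of H: minimize the profiled Whittle
    objective log(mean(I/f)) + mean(log f) with f(lam) = lam**(1 - 2H),
    where I is the periodogram of the (demeaned) series."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, n // 2 + 1) / n
    I = np.abs(np.fft.fft(x - np.mean(x))[1:n // 2 + 1]) ** 2 / (2 * np.pi * n)

    def obj(H):
        f = lam ** (1 - 2 * H)
        return np.log(np.mean(I / f)) + np.mean(np.log(f))

    return minimize_scalar(obj, bounds=(0.01, 0.99), method="bounded").x
```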
The discovery of superconductivity in copper oxide compounds has attracted considerable attention over the past three decades. The high transition temperature in these compounds, which exhibit proximity to an antiferromagnetic order in their phase diagrams, remains one of the main areas of research. The present study introduces Fe, Co and Ni magnetic impurities into superconducting Y-123 with the aim of exploring the transition temperature behavior. Solid-state synthesis is used to prepare fully oxygenated Y1-xMxBa2Cu3O7 (M = Co, Fe, Ni) samples with low levels of doping (0 < x < 0.03). Systematic measurements are then employed to assess the synthesized samples using AC magnetic susceptibility, electrical resistivity and X-ray diffraction. The measurements revealed an increase in Tc as a result of magnetic substitution for Y. However, the study of non-magnetic dopants in the fully oxygenated Y1-xM'xBa2Cu3O7 (M' = Ca, Sr) samples showed a decrease in Tc. Quantitative XRD analysis further suggested that internal pressure could have minor effects on the increase in Tc. The normal-state resistivity versus temperature showed a linear profile, confirming that the samples are at optimal doping of the carrier concentration.
|
condensed matter
|
We present a different kind of monogamy and polygamy relations based on concurrence and concurrence of assistance for multiqubit systems. By relabeling the subsystems associated with different weights, a smaller upper bound of the $\alpha$th ($0\leq\alpha\leq2$) power of concurrence for multiqubit states is obtained. We also present tighter monogamy relations satisfied by the $\alpha$th ($0\leq\alpha\leq2$) power of concurrence for $N$-qubit pure states under the partition $AB$ and $C_1\cdots C_{N-2}$, as well as under the partition $ABC_1$ and $C_2\cdots C_{N-2}$. These inequalities give rise to restrictions on entanglement distribution and the trade-off of entanglement among the subsystems. Similar results are also derived for negativity.
|
quantum physics
|
We prove a sharp asymptotic formula for certain oscillatory integrals that may be approached using the stationary phase method. The estimates are uniform in terms of auxiliary parameters, which is crucial for application in analytic number theory.
|
mathematics
|
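For orientation, the classical one-dimensional stationary phase expansion that such estimates sharpen (uniformity in the auxiliary parameters being the delicate point) has leading term

$$\int_a^b g(x)\, e^{i\lambda\varphi(x)}\, dx \;=\; g(x_0)\, e^{i\lambda\varphi(x_0)}\, e^{\frac{i\pi}{4}\,\mathrm{sgn}\,\varphi''(x_0)} \sqrt{\frac{2\pi}{\lambda\,\lvert\varphi''(x_0)\rvert}} \;+\; O\!\bigl(\lambda^{-3/2}\bigr), \qquad \lambda \to \infty,$$

for a smooth amplitude $g$ supported near a single non-degenerate stationary point $x_0$ (i.e., $\varphi'(x_0)=0$, $\varphi''(x_0)\neq 0$).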
In this work we study ten-dimensional solutions to type IIA string theory of the form AdS$_4$ x $X_6$ which contain orientifold planes and preserve N=1 supersymmetry. In particular, we consider solutions which exhibit some key features of the four-dimensional DGKT proposal for compactifications on Calabi-Yau manifolds with fluxes, and in this sense may be considered their ten-dimensional uplifts. We focus on the supersymmetry equations and Bianchi identities, and find solutions to these that are valid at the two-derivative level and at first order in an expansion parameter which is related to the AdS cosmological constant. This family of solutions is such that the background metric is deformed from the Ricci-flat one to one exhibiting SU(3)xSU(3)-structure, and dilaton gradients and warp factors are induced.
|
high energy physics theory
|
We present a modified version of the Koide formula from a scalar potential model or from a Yukawaon model, based on scalar fields set up in a nonet representation of the SU(3) flavor symmetry of the Standard Model. The Koide's character, which involves the Standard Model fermion mass ratios, is derived from the vacuum expectation value of the nonet field in either model. The scalar potential in the scalar potential model or the superpotential in the Yukawaon model is constructed with all terms invariant under symmetries. The resulting Koide's character, which is modified by two effective parameters, can fit the experimental data of masses of charged leptons, up quarks and down quarks. It offers a natural interpretation of the Standard Model fermion mass spectra.
|
high energy physics phenomenology
|
By utilizing Bose-Einstein condensate solitons, optically manipulated and trapped in a double-well potential and coupled through the nonlinear Josephson effect, we propose novel quantum metrology applications with two-soliton qubit states. In addition to steady-state solutions in different scenarios, a phase space analysis, in terms of population-imbalance and phase-difference variables, is also performed to demonstrate macroscopic quantum self-trapping regimes. Schr\"odinger-cat states, maximally path-entangled ($N00N$) states, and macroscopic soliton qubits are predicted and exploited to distinguish the obtained macroscopic states in the framework of the binary (non-orthogonal) state discrimination problem. For arbitrary phase estimation in the framework of the linear quantum metrology approach, these macroscopic soliton states are revealed to have a scaling up to the Heisenberg limit (HL). The examples are illustrated for HL estimation of the angular frequency between the ground and first excited macroscopic states of the condensate, which opens new perspectives for current frequency standard technologies.
|
quantum physics
|
The goal of this article is to investigate the propagation behavior of 28-GHz millimeter wave in coniferous forests and model its basic transmission loss. Field measurements were conducted with a custom-designed sliding correlator sounder. Relevant foliage regions were extracted from high-resolution LiDAR data and satellite images. Our results show that traditional foliage analysis models for lower-frequency wireless communications fail to consistently output correct path loss predictions. Novel fully automated site-specific models are proposed to resolve this issue, yielding 0.9 dB overall improvement and up to 20 dB regional improvement in root mean square errors.
|
electrical engineering and systems science
|
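For context, the traditional baseline that such site-specific foliage models refine is the log-distance path loss fit; a minimal sketch (parameter names illustrative):

```python
import numpy as np

def fit_log_distance(d, pl_db, d0=1.0):
    """Least-squares fit of PL(d) = PL(d0) + 10 n log10(d/d0):
    returns the intercept PL(d0) in dB and the path-loss exponent n."""
    X = 10 * np.log10(np.asarray(d, dtype=float) / d0)
    A = np.column_stack([np.ones_like(X), X])
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(pl_db, dtype=float), rcond=None)
    return pl0, n
```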
The dissociation energies of four transition metal dimers are determined using diffusion Monte Carlo (DMC). The Jastrow, CI, and molecular orbital parameters of the wave function are both partially and fully optimized with respect to the variational energy. The pivotal role falls to the optimization of the molecular orbital parameters of a complete active space wave function in the presence of a Jastrow correlation function. Excellent results are obtained for ZnO, FeO, FeH, and CrS. In addition, potential energy curves are computed for the first three compounds at the multi-reference diffusion Monte Carlo (MR-DMC) level, from which spectroscopic constants such as the equilibrium bond distance, the harmonic frequency, and the anharmonicity are extracted. All of these quantities agree well with experiment. Furthermore, it is shown for CrS that a restricted active space calculation can yield improved initial orbitals by including single and double excitations from the original active space into a set of virtual orbitals. We demonstrate in this study that the fixed-node error in DMC can be systematically reduced for multi-reference systems by orbital optimization in compact active spaces. While DMC calculations with a large number of determinants are possible and very accurate, our results demonstrate that compact wave functions may be sufficient to obtain accurate nodal surfaces, which determine the accuracy of DMC, even in the case of transition metal compounds.
|
physics
|
The increasing integration of power electronic devices is driving the development of more advanced tools and methods for the modeling, analysis, and control of modern power systems to cope with the different time-scale oscillations. In this paper, we propose a general methodology based on the singular perturbation theory to reduce the order of systems modeled by ordinary differential equations and the computational burden in their simulation. In particular, we apply the proposed methodology to a simplified power system scenario comprised of three inverters in parallel---controlled as synchronverters---connected to an ideal grid. We demonstrate by time-domain simulations that the reduced and decoupled system obtained with the proposed approach accurately represents the dynamics of the original system because it preserves the non-linear dynamics. This shows the efficiency of our technique even for transient perturbations and has relevant applications including the simplification of the Lyapunov stability assessment or the design of non-linear controllers for large-scale power systems.
|
electrical engineering and systems science
|
State-of-the-art deep neural networks (DNNs) are vulnerable to attacks by adversarial examples: a carefully designed small perturbation of the input, imperceptible to humans, can mislead a DNN. To understand the root cause of adversarial examples, we quantify the probability that adversarial examples exist for linear classifiers. The previous mathematical definition of adversarial examples involves only the overall perturbation amount; we propose a more practically relevant definition of strong adversarial examples that additionally limits the perturbation along the signal direction. We show that linear classifiers can be made robust to strong adversarial examples in cases where no adversarially robust linear classifier exists under the previous definition. The quantitative formulas are confirmed by numerical experiments using a linear support vector machine (SVM) classifier. The results suggest that designing generally strong-adversarial-robust learning systems is feasible, but only through incorporating human knowledge of the underlying classification problem.
|
statistics
|
We report the results of a Monte Carlo global QCD analysis of unpolarized parton distribution functions (PDFs), including for the first time constraints from ratios of $^3$He to $^3$H structure functions recently obtained by the MARATHON experiment at Jefferson Lab. Our simultaneous analysis of nucleon PDFs and nuclear effects in $A=2$ and $A=3$ nuclei reveals the first indication for an isovector nuclear EMC effect in light nuclei. We find that while the MARATHON data yield relatively weak constraints on the $F_2^n/F_2^p$ neutron to proton structure function ratio and the $d/u$ PDF ratio, they suggest a strongly enhanced nuclear effect on the $d$-quark PDF in the bound proton.
|
high energy physics phenomenology
|
We investigate the spontaneous breaking of chiral symmetry in QCD by means of a recently proposed approximation scheme in the Landau-gauge Curci-Ferrari model, which combines an expansion in the Yang-Mills coupling and in the inverse number of colors, without expanding in the quark-gluon coupling. The expansion allows for a consistent treatment of ultraviolet tails via renormalization group techniques. At leading order, it leads to the resummation of rainbow diagrams for the quark propagator, with, however, a trivial running of both the gluon mass and the quark-gluon coupling. In a previous work, by using a simple model for a more realistic running of these parameters, we could reproduce the known phenomenology of chiral symmetry breaking, including a satisfactory description of the lattice data for the quark mass function. Here, we get rid of this model-dependence by taking our approximation scheme to next-to-leading order. This allows us to consistently include the realistic running of the parameters and to access the unquenched gluon and ghost propagators to first nontrivial order, which we can compare to available lattice data for an even more stringent test of our approach. In particular, our results for the various two-point functions compare well with lattice data while the parameters of the model are strongly constrained.
|
high energy physics phenomenology
|
Clustering individuals into similar groups in longitudinal studies can improve time series models by combining information across like individuals. While there is a well-developed literature on clustering of time series, these approaches tend to generate clusters independently of the model training procedure, which can lead to poor model fit. We propose a novel method that simultaneously clusters and fits autoregression models for groups of similar individuals. We apply a Wishart mixture model to cluster individuals while simultaneously modeling the corresponding autocorrelation matrices. The fitted Wishart scale matrices map to cluster-level autoregressive coefficients through the Yule-Walker equations, yielding robust, parsimonious autoregressive mixture models. This approach is able to discern differences in the underlying serial variation of time series while accounting for an individual's intrinsic variability. We prove consistency of our cluster membership estimator and compare our approach against competing methods through simulation, as well as by modeling regional COVID-19 infection rates.
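The Yule-Walker step that maps an (auto)covariance structure to AR coefficients can be sketched as follows; the mixture machinery and priors of the paper are omitted, and the AR(1) example parameters are hypothetical:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_from_autocov(gamma):
    """Map autocovariances gamma[0..p] to AR(p) coefficients phi.

    Solves the Yule-Walker system R phi = r, where R is the Toeplitz
    matrix built from gamma[0..p-1] and r = gamma[1..p].
    """
    p = len(gamma) - 1
    phi = solve_toeplitz(gamma[:p], gamma[1:])
    sigma2 = gamma[0] - phi @ gamma[1:]   # innovation variance
    return phi, sigma2

# Example: autocovariances of a true AR(1) with phi = 0.7, sigma2 = 1.
phi_true, s2 = 0.7, 1.0
gamma = np.array([s2 / (1 - phi_true**2) * phi_true**k for k in range(3)])
print(yule_walker_from_autocov(gamma))    # recovers phi ~ (0.7, 0.0), sigma2 ~ 1
```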
|
statistics
|
We study distributed computing of the truncated singular value decomposition problem. We develop an algorithm that we call \texttt{LocalPower} for improving communication efficiency. Specifically, we uniformly partition the dataset among $m$ nodes and alternate between multiple (precisely $p$) local power iterations and one global aggregation. In the aggregation, we propose to weight each local eigenvector matrix with an orthogonal Procrustes transformation (OPT). As a practical surrogate of OPT, sign-fixing, which uses a diagonal matrix with $\pm 1$ entries as weights, offers better computational complexity and stability. We theoretically show that under certain assumptions \texttt{LocalPower} lowers the required number of communications by a factor of $p$ to reach a constant accuracy. We also show that the strategy of periodically decaying $p$ helps obtain high-precision solutions. We conduct experiments to demonstrate the effectiveness of \texttt{LocalPower}.
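A minimal sketch of the alternating structure with a sign-fixing style aggregation (the paper's exact sign convention, OPT variant, and theory are richer; the toy data, dimensions, and helper names here are illustrative):

```python
import numpy as np

def local_power(parts, k, p, rounds, seed=0):
    """Sketch of LocalPower: p local power iterations per global aggregation.

    parts : list of m local data matrices A_i (n_i x d), row-partitioned
    k     : target rank
    """
    d = parts[0].shape[1]
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.normal(size=(d, k)))         # shared initialization
    for _ in range(rounds):
        locals_ = []
        for A in parts:
            V = U.copy()
            for _ in range(p):                           # p local power iterations
                V, _ = np.linalg.qr(A.T @ (A @ V))
            # sign-fixing surrogate: flip each column so its largest-magnitude
            # entry is positive, aligning local bases before averaging
            idx = np.abs(V).argmax(axis=0)
            V = V * np.sign(V[idx, np.arange(k)])
            locals_.append(V)
        U, _ = np.linalg.qr(np.mean(locals_, axis=0))    # one global aggregation
    return U

A = np.random.default_rng(1).normal(size=(200, 30))
U = local_power(np.array_split(A, 4), k=3, p=4, rounds=10)
_, _, Vt = np.linalg.svd(A, full_matrices=False)
print("subspace alignment:", np.linalg.norm(Vt[:3] @ U))  # ~ sqrt(3) if aligned
```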
|
statistics
|
In this paper, we obtain an isometry between the Fock-Sobolev space and the Gauss-Sobolev space. As an application, we use multipliers on the Gauss-Sobolev space to characterize the boundedness of an integral operator on the Fock-Sobolev space.
|
mathematics
|
We explore routing of propagating phonons in analogy with previous experiments on photons. Surface acoustic waves (SAWs) in the microwave regime are scattered by a superconducting transmon qubit. The transmon can be tuned on or off resonance with the incident SAW field using an external magnetic field or the Autler-Townes effect, and thus the reflection and transmission of the SAW field can be controlled in time. We observe 80% extinction in the transmission of a low-power continuous signal and a 40 ns rise time of the router. The slow propagation speed of SAWs on solid surfaces allows for in-flight manipulations of the propagating phonons. The ability to route short (100 ns) pulses enables new functionality, for instance catching an acoustic phonon between two qubits and then releasing it in a controlled direction.
|
quantum physics
|
The increasing amount of available data and more affordable hardware solutions have opened a gate to the realm of Deep Learning (DL). Due to rapid advancements and ever-growing popularity, DL has begun to pervade almost every field where machine learning is applicable, displacing traditional state-of-the-art methods. While many researchers in the speaker recognition area have also started to replace the former state-of-the-art methods with DL techniques, some traditional i-vector-based methods remain state-of-the-art in the context of text-independent speaker verification (TI-SV). In this paper, we discuss the recent generalized end-to-end (GE2E) DL technique based on Long Short-Term Memory (LSTM) units for TI-SV by Google, and compare different scenarios and aspects, including utterance duration, training time, and accuracy, to show that our method outperforms the traditional methods.
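A sketch of the GE2E loss at the heart of the approach, following the published description (Wan et al., 2018); the batch layout and the scalar values below are illustrative:

```python
import torch
import torch.nn.functional as F

def ge2e_loss(emb, w, b):
    """Sketch of the GE2E softmax loss.

    emb : (N speakers, M utterances, D) L2-normalized embeddings
    w, b: learnable scalars scaling the cosine similarity
    """
    N, M, D = emb.shape
    centroids = F.normalize(emb.mean(dim=1), dim=1)           # (N, D)
    # leave-one-out centroid for each utterance's own speaker
    loo = (emb.sum(dim=1, keepdim=True) - emb) / (M - 1)      # (N, M, D)
    e = emb.reshape(N * M, D)
    sim = w * (e @ centroids.T) + b                           # (N*M, N)
    own = w * F.cosine_similarity(emb, loo, dim=2).reshape(N * M) + b
    idx = torch.arange(N).repeat_interleave(M)
    sim[torch.arange(N * M), idx] = own       # patch the own-speaker column
    return F.cross_entropy(sim, idx)          # push each utterance to its centroid

emb = F.normalize(torch.randn(4, 5, 64), dim=2)  # 4 speakers x 5 utterances each
print(ge2e_loss(emb, torch.tensor(10.0), torch.tensor(-5.0)))
```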
|
electrical engineering and systems science
|
Counterfactual estimation using synthetic controls is one of the most successful recent methodological developments in causal inference. Despite its popularity, the current description only considers time series aligned across units and synthetic controls expressed as linear combinations of observed control units. We propose a continuous-time alternative that models the latent counterfactual path explicitly using the formalism of controlled differential equations. This model is directly applicable to the general setting of irregularly-aligned multivariate time series and may be optimized in rich function spaces -- thereby improving on some limitations of existing approaches.
|
statistics
|
Quantum information theory has shown strong connections with classical statistical physics. For example, quantum error correcting codes like the surface and the color code present a tolerance to qubit loss that is related to the classical percolation threshold of the lattices where the codes are defined. Here we explore this connection to study analytically the tolerance of the color code when the protocol introduced in [Phys. Rev. Lett. $\textbf{121}$, 060501 (2018)] for correcting qubit losses is applied. This protocol is based on the removal of the lost qubit from the code, a neighboring qubit, and the lattice edges where these two qubits reside. We first obtain analytically the average fraction of edges $r(p)$ that the protocol erases from the lattice to correct a fraction $p$ of qubit losses. Then, the threshold $p_c$ below which the logical information is protected corresponds to the value of $p$ at which $r(p)$ equals the bond-percolation threshold of the lattice. Moreover, we prove that the logical information is protected if and only if the set of lost qubits does not include the entire support of any logical operator. The results presented here open a route to an analytical understanding of the effects of qubit losses in topological quantum error-correcting codes.
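The comparison with a bond-percolation threshold can also be explored numerically. Below is a minimal Monte Carlo sketch using a union-find structure; note it uses a square lattice purely for illustration (exact threshold 1/2), whereas the color code is defined on different lattices:

```python
import numpy as np

def percolates(L, p, rng):
    """Top-to-bottom bond percolation on an L x L square lattice (union-find)."""
    parent = list(range(L * L))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for r in range(L):
        for c in range(L):
            if c + 1 < L and rng.random() < p:     # open horizontal bond
                union(r * L + c, r * L + c + 1)
            if r + 1 < L and rng.random() < p:     # open vertical bond
                union(r * L + c, (r + 1) * L + c)
    top = {find(c) for c in range(L)}
    return any(find((L - 1) * L + c) in top for c in range(L))

rng = np.random.default_rng(0)
for p in (0.45, 0.5, 0.55):   # exact square-lattice bond threshold is 1/2
    hits = sum(percolates(40, p, rng) for _ in range(100))
    print(p, hits / 100)
```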
|
quantum physics
|
We construct and study the 6D dual superconformal algebra. Our construction is inspired by the dual superconformal symmetry of massless 4D $\mathcal{N}=4$ SYM and extends the previous construction of the enhanced dual conformal algebra for 6D $\mathcal{N}=(1,1)$ SYM to the full 6D dual superconformal algebra for chiral theories. We formulate constraints in 6D spinor helicity formalism and find all generators of the 6D dual superconformal algebra. Next we check that they agree with the dual superconformal generators of known 3D and 4D theories. We show that it is possible to significantly simplify the form of generators and compactly write the dual superconformal algebra using superindices. Finally, we work out some examples of algebra invariants.
|
high energy physics theory
|
Causal decomposition analyses can help build the evidence base for interventions that address health disparities (inequities). They ask how disparities in outcomes may change under hypothetical intervention. Through study design and assumptions, they can rule out alternative explanations such as confounding, selection bias, and measurement error, thereby identifying potential targets for intervention. Unfortunately, the literature on causal decomposition analysis and related methods has largely ignored equity concerns that actual interventionists would respect, limiting its relevance and practical value. This paper addresses these concerns by explicitly considering what covariates the outcome disparity and hypothetical intervention adjust for (so-called allowable covariates) and the equity value judgements these choices convey, drawing from the bioethics, biostatistics, epidemiology, and health services research literatures. From this discussion, we generalize decomposition estimands and formulae to incorporate allowable covariate sets and thereby reflect equity choices, while still allowing for adjustment of non-allowable covariates needed to satisfy causal assumptions. For these general formulae, we provide weighting-based estimators based on adaptations of ratio-of-mediator-probability and inverse-odds-ratio weighting. We discuss when these estimators reduce to estimators already in use under certain equity value judgements, and we provide a novel adaptation under other judgements.
|
statistics
|
We study spin density wave quantum critical points in two dimensional metals with a quenched disorder potential coupling to the electron density. Adopting an $\epsilon$-expansion around three spatial dimensions, where both disorder and the Yukawa-type interaction between electrons and bosonic order parameter fluctuations are marginal, we present a perturbative, one-loop renormalization group analysis of this problem, where the interplay between fermionic and bosonic excitations is fully incorporated. Considering two different Gaussian disorder models restricted to small-angle scattering, we show that the non-Fermi liquid fixed point of the clean SDW hot-spot model is generically unstable and the theory flows to strong coupling due to a mutual enhancement of interactions and disorder. We study properties of the asymptotic flow towards strong coupling, where our perturbative approach eventually breaks down. Our results indicate that disorder dominates at low energies, suggesting that the ground-state in two dimensions is Anderson-localized.
|
condensed matter
|
Deep learning achieves state-of-the-art results in many tasks in computer vision and natural language processing. However, recent works have shown that deep networks can be vulnerable to adversarial perturbations, which raises a serious robustness issue for deep networks. Adversarial training, typically formulated as a robust optimization problem, is an effective way of improving the robustness of deep networks. A major drawback of existing adversarial training algorithms is the computational overhead of generating adversarial examples, typically far greater than that of the network training itself. This leads to a prohibitive overall computational cost for adversarial training. In this paper, we show that adversarial training can be cast as a discrete time differential game. By analyzing the Pontryagin Maximum Principle (PMP) of the problem, we observe that the adversary update is only coupled with the parameters of the first layer of the network. This inspires us to restrict most of the forward and back propagation to the first layer of the network during adversary updates. This effectively reduces the total number of full forward and backward propagations to only one for each group of adversary updates. We therefore refer to this algorithm as YOPO (You Only Propagate Once). Numerical experiments demonstrate that YOPO can achieve comparable defense accuracy with approximately 1/5 ~ 1/4 of the GPU time of the projected gradient descent (PGD) algorithm. Our code is available at https://github.com/a1600012888/YOPO-You-Only-Propagate-Once.
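A simplified sketch of the resulting adversary update, in which the inner steps propagate only through the first layer (a hypothetical simplification: the published YOPO-m-n algorithm also interleaves the weight updates and is derived from the full PMP analysis):

```python
import torch

def yopo_adversary(f1, rest, x, y, loss_fn, eps=8/255, step=2/255, m=5, n=3):
    """YOPO-style adversary: m full propagations, n first-layer-only steps each.

    f1   : first layer of the network;  rest : remaining layers
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(m):                                  # full forward/backward
        z = f1(x + delta)
        loss = loss_fn(rest(z), y)
        p = torch.autograd.grad(loss, z)[0].detach()    # co-state at layer 1
        for _ in range(n):                              # first-layer-only steps
            g = torch.autograd.grad((p * f1(x + delta)).sum(), delta)[0]
            delta = (delta + step * g.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()
```

Compared with PGD, which needs one full forward/backward pass per adversary step, each group of n steps here reuses a single full propagation, which is the source of the reported speedup.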
|
statistics
|
The formalism of Holographic Space-time (HST) is a translation of the principles of Lorentzian geometry into the language of quantum information. Intervals along time-like trajectories, and their associated causal diamonds, completely characterize a Lorentzian geometry. The Bekenstein-Hawking-Gibbons-'t Hooft-Jacobson-Fischler-Susskind-Bousso Covariant Entropy Principle equates the logarithm of the dimension of the Hilbert space associated with a diamond to one quarter of the area of the diamond's holographic screen, measured in Planck units. The most convincing argument for this principle is Jacobson's derivation of Einstein's equations as the hydrodynamic expression of this entropy law. In that context, the null energy condition (NEC) is seen to be the analog of the local law of entropy increase. The quantum version of Einstein's relativity principle is a set of constraints on the mutual quantum information shared by causal diamonds along different time-like trajectories. The implementation of this constraint for trajectories in relative motion is the greatest unsolved problem in HST. The other key feature of HST is its claim that, for non-negative cosmological constant or causal diamonds much smaller than the asymptotic radius of curvature for negative c.c., the degrees of freedom localized in the bulk of a diamond are constrained states of variables defined on the holographic screen. This principle gives a simple explanation of otherwise puzzling features of BH entropy formulae, and resolves the firewall problem for black holes in Minkowski space. It motivates a covariant version of the CKN bound on the regime of validity of quantum field theory (QFT) and a detailed picture of the way in which QFT emerges as an approximation to the exact theory.
|
high energy physics theory
|
In 1974 Dashen, Hasslacher and Neveu calculated the leading quantum correction to the mass of the kink in the scalar $\phi^4$ theory in 1+1 dimensions. The derivation relies on the identification of the perturbations about the kink as solutions of the Pöschl-Teller (PT) theory. They regularize the theory by placing it in a periodic box, although the kink is not itself periodic. They also require an ad hoc identification of plane wave and PT states which is difficult to interpret in the decompactified limit. We rederive the mass using the kink operator to recast this problem in terms of the PT Hamiltonian, which we explicitly diagonalize using its exact eigenstates. We normal order from the beginning, rendering our theory finite so that no compactification is necessary. In our final expression for the kink mass, the form of the PT potential disappears, suggesting that our mass formula applies to other quantum solitons.
|
high energy physics theory
|
The light-cone gauge approach to $T{\overline T}$ deformed models is generalised to models deformed by U(1) conserved currents $J^\alpha$, $\widetilde J^\alpha$, stress-energy tensor $T^\alpha{}_\beta$, and their various quadratic combinations of the form $\epsilon_{\alpha\beta} K_1^\alpha K_2^\beta$. It is then applied to derive a ten-parameter deformed Hamiltonian for a system of scalars with an arbitrary potential, the flow equations for the Hamiltonian density, and the flow equations for the energy of the deformed model. The flow equations disagree with the ones recently proposed in arXiv:1903.07606. The results obtained are applied to analyse a CFT with left- and right-moving conserved currents deformed by these operators. It is shown that with a proper choice of the parameter of the $T{\overline T}$ deformation the deformed CFT Hamiltonian density is independent of the parameters of the $J\Theta$ and $\bar J\overline \Theta$ deformations. This leads to the existence of two extra relations which generalise the $J\Theta=0$ and $\bar J\overline \Theta=0$ relations of the undeformed CFT. The spectrum of the deformed CFT is found and shown to satisfy the flow equations.
|
high energy physics theory
|
We examine Higgs boson production and decay in heavy-ion collisions at the LHC and future colliders. Owing to the long lifetime of the Higgs boson, its hadronic decays may experience little or no screening from the hot and dense quark-gluon plasma, whereas jets from hard scattering processes and from decays of the electroweak gauge bosons and the top quark suffer significant energy loss. This distinction can lead to enhanced signal-to-background ratios in hadronic decay channels and thus, for example, provide alternative ways to probe the Yukawa coupling of the Higgs boson to the bottom quark and its lifetime.
|
high energy physics phenomenology
|
It is now widely accepted that the standard inferential toolkit used by the scientific research community -- null-hypothesis significance testing (NHST) -- is not fit for purpose. Yet despite the threat posed to the scientific enterprise, there is no agreement concerning alternative approaches. This lack of consensus reflects long-standing issues concerning Bayesian methods, the principal alternative to NHST. We report on recent work that builds on an approach to inference put forward over 70 years ago to address the well-known "Problem of Priors" in Bayesian analysis, by reversing the conventional prior-likelihood-posterior ("forward") use of Bayes's Theorem. Such Reverse-Bayes analysis allows priors to be deduced from the likelihood by requiring that the posterior achieve a specified level of credibility. We summarise the technical underpinning of this approach, and show how it opens up new approaches to common inferential challenges, such as assessing the credibility of scientific findings, setting them in appropriate context, estimating the probability of successful replications, and extracting more insight from NHST while reducing the risk of misinterpretation. We argue that Reverse-Bayes methods have a key role to play in making Bayesian methods more accessible and attractive to the scientific community. As a running example we consider a recently published meta-analysis from several randomized controlled clinical trials investigating the association between corticosteroids and mortality in hospitalized patients with COVID-19.
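In the simplest normal-normal setting, the Reverse-Bayes logic can be sketched as finding the most sceptical prior under which a reported finding would still be credible (a minimal illustration in the spirit of the approach; the numbers are hypothetical, not taken from the cited meta-analysis):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def sufficiently_sceptical_sd(theta_hat, se, level=0.95):
    """Reverse-Bayes sketch (normal-normal model): find the sd tau of a
    zero-mean normal prior such that the posterior credible interval for
    the effect just touches zero. Priors more sceptical (smaller tau)
    would render the finding non-credible at the given level.
    """
    theta = abs(theta_hat)
    z = norm.ppf(0.5 + level / 2)
    if theta <= z * se:
        raise ValueError("estimate not significant; no such prior exists")

    def gap(tau):
        post_var = 1.0 / (1.0 / tau**2 + 1.0 / se**2)
        post_mean = post_var * theta / se**2
        return post_mean - z * np.sqrt(post_var)   # lower bound of the interval

    return brentq(gap, 1e-9 * se, 1e9 * se)

# e.g. a pooled log-odds-ratio estimate of -0.30 with standard error 0.10
print(sufficiently_sceptical_sd(-0.30, 0.10))
```

Comparing the resulting sceptical prior with external evidence then quantifies how credible the finding is, rather than reducing it to a significance verdict.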
|
statistics
|
Detailed understanding of spin dynamics in magnetic nanomaterials is necessary for developing ultrafast, low-energy and high-density spintronic logic and memory. Here, we develop micromagnetic models and analytical solutions to elucidate the effect of increasing damping and uniaxial anisotropy on magnetic field pulse-assisted switching time, energy and field requirements of nanowires with perpendicular magnetic anisotropy and yttrium iron garnet-like spin transport properties. A nanowire is initially magnetized using an external magnetic field pulse (write) and self-relaxation. Next, magnetic moments exhibit deterministic switching upon receiving 2.5 ns-long external magnetic pulses in both vertical polarities. Favorable damping ($\alpha \sim 0.1$-$0.5$) and anisotropy energies ($10^4$-$10^5$ J m$^{-3}$) allow for magnetization switching times as low as picoseconds. Magnetization reversal with fields below coercivity was observed using spin precession instabilities. A competition, or nanomagnetic trilemma, arises among the switching rate, energy cost and external field required. Developing magnetic nanowires with optimized damping and effective anisotropy could reduce the switching energy barrier down to 3163 $k_\mathrm{B}T$ at room temperature. Thus, pulse-assisted picosecond and low-energy switching in nanomagnets could enable ultrafast nanomagnetic logic and cellular automata.
|
condensed matter
|
In 2018, the Fermi mission celebrated its first decade of operation. In this time, the Large Area Telescope (LAT) has been very successful in detecting the high-energy emission (>100 MeV) from Gamma-Ray Bursts (GRBs). The analysis of particularly remarkable events - such as GRB 080916C, GRB 090510 and GRB 130427A - has been presented in dedicated publications. Here we present the results of a new systematic search for high-energy emission from the full sample of GRBs detected in 10 years by the Fermi Gamma-Ray Burst Monitor, as well as Swift, AGILE, INTEGRAL and IPN bursts, featuring a detection efficiency more than 50% better than previous works and returning 186 detections during 10 years of LAT observations. This milestone marks a vast improvement over the 35 events contained in the first LAT GRB catalog (covering the first 3 years of Fermi operations). We assess the characteristics of the GRB population at high energy with unprecedented sensitivity, covering aspects such as temporal properties, energetics and the spectral index of the high-energy emission. Finally, we show how the LAT observations can be used to inform theory, in particular the prospects for very high-energy emission.
|
astrophysics
|
We present the first security analysis of conference key agreement (CKA) in the most adversarial model of device independence (DI). Our protocol can be implemented by any experimental setup that is capable of performing Bell tests (specifically, we introduce the "Parity-CHSH" inequality), and security can in principle be obtained for any violation of the Parity-CHSH inequality. We use a direct connection between the $N$-partite Parity-CHSH inequality and the CHSH inequality. Namely, the Parity-CHSH inequality can be viewed as one of two CHSH inequalities (equivalent up to relabelling), depending on the parity of the outputs of $N-2$ of the parties. We compare the asymptotic key rate for DICKA to the case where the parties use $N-1$ DIQKD protocols in order to generate a common key. We show that in some noise regimes the DICKA protocol leads to better rates.
|
quantum physics
|
We construct a viable 3-3-1 model with two $SU(3)_L$ scalar triplets, extended fermion and scalar spectrum, based on the $T^{\prime}$ family symmetry and other auxiliary cyclic symmetries, whose spontaneous breaking yields the observed pattern of SM fermion mass spectrum and fermionic mixing parameters. In our model the SM quarks lighter than the top quark, get their masses from a low scale Universal seesaw mechanism, the SM charged lepton masses are produced by a Froggatt-Nielsen mechanism and the small light active neutrino masses are generated from an inverse seesaw mechanism. The model is consistent with the low energy SM fermion flavor data and successfully accommodates the current Higgs diphoton decay rate and predicts charged lepton flavor violating decays within the reach of the forthcoming experiments.
|
high energy physics phenomenology
|
In this article we present an experiment on the photostimulation effect in the tunneling conductivity of free-standing thin C-Au films. We observe a sharp increase in the conductivity of the hybrid film under electromagnetic excitation at frequencies close to the plasmon resonance of gold nanoparticles. The use of carbyne threads as a stabilizing matrix makes it possible to obtain free-standing thin films that demonstrate good structural stability. Tunneling current-voltage measurements demonstrate a strong dependence of the current on the intensity of the green laser radiation used to photostimulate the thin C-Au film in the measurement area.
|
condensed matter
|
The quest for the origin(s) of ultra-high-energy cosmic rays (UHECRs) continues to be a far-reaching pillar of high energy astrophysics. The source scrutiny is mostly based on three observables: the energy spectrum, the nuclear composition, and the distribution of arrival directions. We show that each of these three observables can be well reproduced with UHECRs originating in starburst galaxies.
|
astrophysics
|
Very recently a new hadronic structure around $3.98$ GeV was observed in the BESIII experiment. From its decay modes, it is reasonable to assign it to the category of exotic states, say $Z^+_{cs}$, the strange partner of $Z_{c}(3900)$. This finding indicates for the first time an exotic state with a strange quark in the charm sector, and hence has a peculiar importance. By virtue of the QCD sum rule technique, we analyze the possible configuration and physical properties of the $Z^+_{cs}$, and find it could be configured as a mixture of two types of structures, $[1_c]_{\bar{c} u}\otimes[1_c]_{\bar{s} c}$ and $[1_c]_{\bar{c} c}\otimes[1_c]_{\bar{s} u}$, or $[3_c]_{\bar{c} u}\otimes[\bar{3}_c]_{\bar{s} c}$ and $[3_c]_{\bar{c} c}\otimes[\bar{3}_c]_{\bar{s} u}$, with $J^P=1^+$. Physically, it then appears as a compound of four possible currents in each configuration, which indicates that single-current evaluations of hadron spectroscopy and decay properties are sometimes not enough. We find that in both cases the energy spectra fit well with the experimental observation, i.e. $3.98$ GeV, within the uncertainties, although we note that the former is not favored by the vector meson exchange model. Various $Z^+_{cs}(3980)$ decay modes are evaluated, which are critical for pinning down its configuration and are left for experimental verification. We also predict the mass of $Z^0_{cs}$, the neutral partner of $Z^+_{cs}(3980)$, and analyze its dominant decay probabilities.
|
high energy physics phenomenology
|
We present a complete list of the dimension 8 operator basis in the standard model effective field theory using group theoretic techniques in a systematic and automated way. We adopt a new form of operators in terms of the irreducible representations of the Lorentz group, and identify the Lorentz structures as states in an $SU(N)$ group. In this way, redundancy from equations of motion is absent, and that from integration-by-parts is treated using the fact that the independent Lorentz basis forms an invariant subspace of the $SU(N)$ group. We also decompose operators into ones with definite permutation symmetries among flavor indices to deal with subtleties from repeated fields. For the first time, we provide the explicit form of independent flavor-specified operators in a systematic way. Our algorithm can be easily applied to higher-dimensional standard model effective field theory and other effective field theories, making these studies more approachable.
|
high energy physics phenomenology
|
Assigning weights to a large pool of objects is a fundamental task in a wide variety of applications. In this article, we introduce a concept of structured high-dimensional probability simplexes, in which most components are zero or near zero and the remaining ones are close to each other. Such a structure is well motivated by 1) high-dimensional weights that are common in modern applications, and 2) ubiquitous examples in which equal weights -- despite their simplicity -- often achieve favorable or even state-of-the-art predictive performance. This particular structure, however, presents unique challenges both computationally and statistically. To address these challenges, we propose a new class of double spike Dirichlet priors to shrink a probability simplex to one with the desired structure. When applied to ensemble learning, such priors lead to a Bayesian method for structured high-dimensional ensembles that is useful for forecast combination and improving random forests, while enabling uncertainty quantification. We design efficient Markov chain Monte Carlo algorithms for easy implementation. Posterior contraction rates are established to provide theoretical support. We demonstrate the wide applicability and competitive performance of the proposed methods through simulations and two real data applications using the European Central Bank Survey of Professional Forecasters dataset and a UCI dataset.
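Purely as an illustration of the target structure (not the paper's actual double spike Dirichlet construction), one can draw simplexes whose components split into a near-zero group and a near-equal group by giving the two groups very different Dirichlet concentrations:

```python
import numpy as np

def structured_simplex(d, k, a_small=0.05, a_big=50.0, rng=None):
    """Draw a d-dimensional probability simplex with ~k active, near-equal weights.

    Illustrative stand-in for a 'double spike' shrinkage target: active
    coordinates get a large common concentration (pulling them toward equal
    weights), inactive ones a tiny concentration (pulling them toward zero).
    """
    rng = rng or np.random.default_rng()
    active = rng.choice(d, size=k, replace=False)
    alpha = np.full(d, a_small)
    alpha[active] = a_big
    return rng.dirichlet(alpha)

w = structured_simplex(d=20, k=4, rng=np.random.default_rng(0))
print(np.round(w, 3))   # ~4 weights near 0.25, the rest near 0
```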
|
statistics
|
Understanding jets initiated by quarks and gluons is of fundamental importance in collider physics. Efficient and robust techniques for quark versus gluon jet discrimination have consequences for new physics searches, precision $\alpha_s$ studies, parton distribution function extractions, and many other applications. Numerous machine learning analyses have attacked the problem, demonstrating that good performance can be obtained but generally not providing an understanding for what properties of the jets are responsible for that separation power. In this paper, we provide an extensive and detailed analysis of quark versus gluon discrimination from first-principles theoretical calculations. Working in the strongly-ordered soft and collinear limits, we calculate probability distributions for fixed $N$-body kinematics within jets with up through three resolved emissions (${\cal O}(\alpha_s^3)$). This enables explicit calculation of quantities central to machine learning such as the likelihood ratio, the area under the receiver operating characteristic curve, and reducibility factors within a well-defined approximation scheme. Further, we relate the existence of a consistent power counting procedure for discrimination to ideas for operational flavor definitions, and we use this relationship to construct a power counting for quark versus gluon discrimination as an expansion in $e^{C_F-C_A}\ll1$, the exponential of the fundamental and adjoint Casimirs. Our calculations provide insight into the discrimination performance of particle multiplicity and show how observables sensitive to all emissions in a jet are optimal. We compare our predictions to the performance of individual observables and neural networks with parton shower event generators, validating that our predictions describe the features identified by machine learning.
|
high energy physics phenomenology
|
We describe all groups that can be generated by two twists along spherical sequences in an enhanced triangulated category. It will be shown that, with one exception, such a group is isomorphic to an abelian group generated by at most two elements, the free group on two generators, or the braid group of one of the types $A_2$, $B_2$ and $G_2$ factorized by a central subgroup. This central subgroup can be nontrivial only if a specific linear relation between length and sphericity holds. The exception can occur when one has two spherical sequences of length $3$ and sphericity $2$. In this case the group generated by the corresponding two spherical twists can be isomorphic to the nontrivial central extension of the symmetric group on three elements by the infinite cyclic group. We also apply this result to give a presentation of the derived Picard group of selfinjective algebras of type $D_4$ with torsion $3$ by generators and relations.
|
mathematics
|
Analysis of structural and functional connectivity (FC) of human brains is of pivotal importance for diagnosis of cognitive ability. The Human Connectome Project (HCP) provides an excellent source of neural data across different regions of interest (ROIs) of the living human brain. Individual-specific data were available from an existing analysis (Dai et al., 2017) in the form of time-varying covariance matrices representing the brain activity as the subjects perform a specific task. As a preliminary objective of studying the heterogeneity of brain connectomics across the population, we develop a probabilistic model for a sample of covariance matrices using a scaled Wishart distribution. We stress here that our data units are available in the form of covariance matrices, and we use the Wishart distribution to create our likelihood function rather than in its more common usage as a prior on covariance matrices. Based on empirical explorations suggesting the data matrices have low effective rank, we further model the center of the Wishart distribution using an orthogonal factor model type decomposition. We encourage shrinkage towards a low rank structure through a novel shrinkage prior and discuss strategies to sample from the posterior distribution using a combination of Gibbs and slice sampling. We extend our modeling framework to a dynamic setting to detect change points. The efficacy of the approach is explored in various simulation settings and exemplified on several case studies including our motivating HCP data.
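The unusual ingredient here is that the Wishart enters as a likelihood for covariance-matrix data rather than as a prior. A minimal sketch of the corresponding log-likelihood under a scaled-Wishart model (the paper's exact parametrization and factor-model center are omitted; `nu` and the toy data are illustrative):

```python
import numpy as np
from scipy.stats import wishart

def wishart_loglik(S_list, Sigma, nu):
    """Log-likelihood of observed covariance matrices S_i under a scaled
    Wishart model: nu * S_i ~ Wishart(nu, Sigma).
    """
    p = Sigma.shape[0]
    # change of variables X = nu * S adds a log-Jacobian of nu^{p(p+1)/2}
    log_jac = 0.5 * p * (p + 1) * np.log(nu)
    return sum(wishart.logpdf(nu * S, df=nu, scale=Sigma) + log_jac
               for S in S_list)

rng = np.random.default_rng(0)
Sigma, nu = np.eye(3), 30
S_list = [wishart.rvs(df=nu, scale=Sigma, random_state=rng) / nu
          for _ in range(5)]
print(wishart_loglik(S_list, Sigma, nu))
```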
|
statistics
|
This paper presents an online adaptive learning solution to optimal synchronization control problem of heterogeneous multi-agent systems via a novel distributed policy iteration approach.
|
electrical engineering and systems science
|
The seminal work of DiPerna and Lions [Invent. Math., 98, 1989] guarantees the existence and uniqueness of regular Lagrangian flows for Sobolev vector fields. The latter is a suitable selection of trajectories of the related ODE satisfying additional compressibility/semigroup properties. A long-standing open question is whether the uniqueness of the regular Lagrangian flow is a corollary of the uniqueness of the trajectory of the ODE for a.e. initial datum. Using Ambrosio's superposition principle we relate the latter to the uniqueness of positive solutions of the continuity equation and we then provide a negative answer using tools introduced by Modena and Székelyhidi in the recent groundbreaking work [Ann. PDE, 4, 2018]. On the opposite side, we introduce a new class of asymmetric Lusin-Lipschitz inequalities and use them to prove the uniqueness of positive solutions of the continuity equation in an integrability range which goes beyond the DiPerna-Lions theory.
|
mathematics
|
Extreme floods cause casualties, and widespread damage to property and vital civil infrastructure. We here propose a Bayesian approach for predicting extreme floods using the generalized extreme-value (GEV) distribution within gauged and ungauged catchments. A major methodological challenge is to find a suitable parametrization for the GEV distribution when covariates or latent spatial effects are involved. Other challenges involve balancing model complexity and parsimony using an appropriate model selection procedure, and making inference using a reliable and computationally efficient approach. Our approach relies on a latent Gaussian modeling framework with a novel multivariate link function designed to separate the interpretation of the parameters at the latent level and to avoid unreasonable estimates of the shape and time trend parameters. Structured additive regression models are proposed for the four parameters at the latent level. For computational efficiency with large datasets and richly parametrized models, we exploit an accurate and fast approximate Bayesian inference approach. We applied our proposed methodology to annual peak river flow data from 554 catchments across the United Kingdom (UK). Our model performed well in terms of flood predictions for both gauged and ungauged catchments. The results show that the spatial model components for the transformed location and scale parameters, and the time trend, are all important. Posterior estimates of the time trend parameters correspond to an average increase of about $1.5\%$ per decade and reveal a spatial structure across the UK. To estimate return levels for spatial aggregates, we further develop a novel copula-based post-processing approach of posterior predictive samples, in order to mitigate the effect of the conditional independence assumption at the data level, and we show that our approach provides accurate results.
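Once GEV parameters are estimated, return levels follow from the GEV quantile function. A minimal sketch (note that scipy's shape convention is the negative of the usual extreme-value shape $\xi$; the parameter values are illustrative):

```python
from scipy.stats import genextreme

def return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) fitted to annual maxima:
    the level exceeded with probability 1/T in any given year."""
    # scipy parametrizes the shape as c = -xi relative to the usual convention
    return genextreme.ppf(1 - 1 / T, c=-xi, loc=mu, scale=sigma)

# e.g. the 100-year peak-flow level for illustrative parameter values
print(return_level(mu=100.0, sigma=30.0, xi=0.1, T=100))
```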
|
statistics
|
We survey known results on the canonical bundle formula and its applications in algebraic geometry.
|
mathematics
|
We define a super analog of the classical Pl\"{u}cker embedding of the Grassmannian into a projective space. The difficulty of the problem is rooted in the fact that super exterior powers $\Lambda^{r|s}(V)$ are not a simple generalization from the completely even case (this works only for $r|0$ when it is possible to use $\Lambda^r(V)$). To construct the embedding we need to non-trivially combine a super vector space $V$ and its parity-reversion $\Pi V$. Our "super Pl\"{u}cker map" takes the Grassmann supermanifold $G_{r|s}(V)$ to a "weighted projective space" $P\left(\Lambda^{r|s}(V)\oplus \Lambda^{s|r}(\Pi V)\right)$ with weights $+1,-1$. A simpler map $G_{r|0}(V)\to P(\Lambda^r(V))$ works for the case $s=0$. We construct a super analog of Pl\"{u}cker coordinates, prove that our map is an embedding, and obtain "super Pl\"{u}cker relations". It is interesting that another type of relations (due to Khudaverdian) is equivalent to the (super) Pl\"{u}cker relations in the case $r|s=2|0$. We discuss application to much sought-after super cluster algebras and construct a super cluster structure for $G_2(\mathbb{R}^{4|1})$ and $G_2(\mathbb{R}^{5|1})$.
|
mathematics
|
This paper proposes a multiobjective multitasking optimization evolutionary algorithm based on decomposition with a dual neighborhood. In our proposed algorithm, each subproblem not only maintains a neighborhood based on the Euclidean distance among weight vectors within its own task, but also keeps a neighborhood with subproblems of other tasks. Grey relational analysis is used to define the neighborhood among subproblems of different tasks. In this way, relationships among different subproblems can be effectively exploited to guide the search. Experimental results show that our proposed algorithm outperforms four state-of-the-art multiobjective multitasking evolutionary algorithms and a traditional decomposition-based multiobjective evolutionary algorithm on a set of test problems.
|
computer science
|
We investigate the existence and uniqueness issues of the 3D incompressible Hall-magnetohydrodynamic system supplemented with initial velocity $u_0$ and magnetic field $B_0$ in critical regularity spaces. In the case where $u_0$, $B_0$ and the current $J_0:=\nabla\times B_0$ belong to the homogeneous Besov space $\dot B^{\frac 3p-1}_{p,1}$, $1\leq p<\infty$, and are small enough, we establish a global result and the conservation of higher regularity. If the viscosity is equal to the magnetic resistivity, then we obtain global well-posedness provided $u_0$, $B_0$ and $J_0$ are small enough in the larger Besov space $\dot B^{\frac12}_{2,r}$, $r\geq1$. If $r=1$, then we also establish local existence for large data, and exhibit continuation criteria for solutions with critical regularity. Our results rely on an extended formulation of the Hall-MHD system that has some similarities with the incompressible Navier-Stokes equations.
|
mathematics
|
We propose methods for estimating correspondence between two point sets under the presence of outliers in both the source and target sets. The proposed algorithms expand upon the theory of the regression without correspondence problem to estimate transformation coefficients using unordered multisets of covariates and responses. Previous theoretical analysis of the problem has been done in a setting where the responses are a complete permutation of the regressed covariates. This paper expands the problem setting by analyzing the cases where only a subset of the responses is a permutation of the regressed covariates, in addition to some covariates being outliers. We term this problem \textit{robust regression without correspondence} and provide several algorithms based on random sample consensus for exact and approximate recovery in a noiseless and noisy one-dimensional setting, as well as an approximation algorithm for multiple dimensions. The theoretical guarantees of the algorithms are verified on simulated data. We demonstrate an important computational neuroscience application of the proposed framework by showing its effectiveness in a \textit{Caenorhabditis elegans} neuron matching problem, where the presence of outliers in both the source and target nematodes is a natural tendency.
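A minimal sketch of the noiseless one-dimensional case: hypothesize the coefficient from a random covariate-response pair, then score it by how many responses can be matched to distinct predicted values (the paper's algorithms and guarantees are more general; the tolerance and toy data here are illustrative):

```python
import numpy as np

def ransac_rwoc(x, y, trials=2000, tol=1e-3, rng=None):
    """RANSAC sketch for 1D robust regression without correspondence.

    Model: some subset of y is a permutation of w*x (plus outliers on both
    sides). Hypothesize w from one random (x_i, y_j) pair, then greedily
    match each y to an unused predicted value w*x within tol.
    """
    rng = rng or np.random.default_rng()
    best_w, best_inliers = None, -1
    for _ in range(trials):
        i, j = rng.integers(len(x)), rng.integers(len(y))
        if abs(x[i]) < 1e-12:
            continue
        w = y[j] / x[i]
        pred = np.sort(w * x)
        used = np.zeros(len(pred), bool)
        inliers = 0
        for v in y:
            k = np.searchsorted(pred, v)
            for c in (k - 1, k):                     # two nearest candidates
                if 0 <= c < len(pred) and not used[c] and abs(pred[c] - v) < tol:
                    used[c] = True
                    inliers += 1
                    break
        if inliers > best_inliers:
            best_w, best_inliers = w, inliers
    return best_w, best_inliers

rng = np.random.default_rng(0)
x = rng.normal(size=30)                              # 10 covariates are outliers
y = np.concatenate([rng.permutation(2.5 * x[:20]),   # permuted responses
                    rng.normal(10, 5, size=8)])      # response outliers
print(ransac_rwoc(x, y, rng=rng))   # recovers w ~ 2.5 with ~20 inliers
```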
|
statistics
|
The form factors for the transition $N^*(1535)\to N$ induced by isovector and isoscalar axial currents are calculated within the framework of light-cone QCD sum rules, using the most general form of the interpolating current. In the numerical calculations, we use two sets of values for the input parameters. It is observed that the $Q^2$ dependence of the form factor $G_A$ can be described by a dipole form. Moreover, the form factors $G_P^{(S)}$ are found to be highly sensitive to variations in the auxiliary parameter $\beta$.
|
high energy physics phenomenology
|
We present an analytic method to compute the one-loop magnetic correction to the gluon polarization tensor starting from the Landau-level representation of the quark propagator in the presence of an external magnetic field. We show that the general expression contains the vacuum contribution that can be isolated from the zero-field limit for finite gluon momentum. The general tensor structure for the gluon polarization also contains two spurious terms that do not satisfy the transversality properties. However, we also show that the coefficients of these structures vanish and thus do not contribute to the polarization tensor, as expected. In order to check the validity of the expressions we study the strong and weak field limits and show that well-established results are reproduced. The findings can be used to study the conditions for gluons to equilibrate with the magnetic field produced during the early stages of a relativistic heavy-ion collision.
|
high energy physics theory
|
Low-bit quantization of network weights and activations can drastically reduce the memory footprint, complexity, energy consumption and latency of Deep Neural Networks (DNNs). However, low-bit quantization can also cause a considerable drop in accuracy, in particular when we apply it to complex learning tasks or lightweight DNN architectures. In this paper, we propose a training procedure that relaxes low-bit quantization. We call this procedure \textit{DNN Quantization with Attention} (DQA). The relaxation is achieved by using a learnable linear combination of high-, medium- and low-bit quantizations. Our learning procedure converges step by step to a low-bit quantization using an attention mechanism with temperature scheduling. In experiments, our approach outperforms other low-bit quantization techniques on various object recognition benchmarks such as CIFAR10, CIFAR100 and ImageNet ILSVRC 2012, achieves almost the same accuracy as a full-precision DNN, and considerably reduces the accuracy drop when quantizing lightweight DNN architectures.
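A sketch of the central relaxation: the quantized weight is a softmax-attention mixture over several bit widths, and a temperature schedule gradually sharpens the softmax so training can concentrate on the low-bit branch (the quantizer, the logits, and the schedule below are illustrative simplifications, not the paper's exact recipe):

```python
import torch
import torch.nn.functional as F

def uniform_quantize(w, bits):
    """Symmetric uniform quantization of a tensor to the given bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-12) / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

def mixed_quantized_weight(w, logits, temperature, bit_widths=(8, 4, 2)):
    """Attention-weighted mixture of high/medium/low-bit quantizations."""
    attn = F.softmax(logits / temperature, dim=0)
    branches = torch.stack([uniform_quantize(w, b) for b in bit_widths])
    return (attn.view(-1, 1, 1) * branches).sum(dim=0)

w = torch.randn(16, 16)
logits = torch.tensor([0.2, 0.3, 1.0])   # stand-in for learned attention logits
for T in (10.0, 1.0, 0.1):               # hypothetical temperature schedule
    attn = F.softmax(logits / T, dim=0)
    err = (mixed_quantized_weight(w, logits, T) - w).abs().mean()
    print(f"T={T}: attn={attn.numpy().round(3)}, mean |error|={err.item():.4f}")
```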
|
computer science
|
Collagen is the most abundant extracellular-matrix protein in mammals and the main structural and load-bearing element of connective tissues. Collagen networks show remarkable strain-stiffening properties which tune the mechanical functions of tissues and regulate cell behaviours. Linear and non-linear mechanical properties of in-vitro disordered collagen networks have been widely studied using rheology for a range of self-assembly conditions in recent years. However, a one-to-one correlation between the onset of macroscopic network failure and local deformations inside the sample has yet to be established in these systems. Here, using shear rheology and in-situ high-resolution boundary imaging, we study the yielding dynamics of in-vitro reconstituted networks of uncrosslinked type-I collagen. We find that in the non-linear regime, the differential shear modulus ($K$) of the network initially increases with applied strain and then begins to drop as the network starts to yield beyond a critical strain (the yield strain). Measurement of the local velocity profile using colloidal tracer particles reveals that beyond the peak of $K$, strong strain localization and slippage between the network and the rheometer plate set in, eventually leading to detachment. We generalize this observation for a range of collagen concentrations, applied strain ramp rates, as well as different network architectures obtained by varying the polymerization temperature. Furthermore, by fitting the stress vs strain data with a continuum affine network model, we map out a state diagram showing the dependence of the yield strain and yield stress on the reduced persistence length and mesh size of the network. Our findings can have broad implications in tissue engineering, particularly in designing highly resilient biological scaffolds.
|
condensed matter
|
The study of prestellar cores is critical as they set the initial conditions of star formation and determine the final mass of the stellar object. To date, several hypotheses describe their gravitational collapse. We perform detailed line analysis and modelling of H$_2$D$^+$ $1_{10}$-$1_{11}$ and N$_2$H$^+$ 4-3 emission at 372 GHz, using $2'\times2'$ maps (JCMT). Our goal is to test the most prominent dynamical models by comparing the modelled gas kinematics and spatial distribution (H$_2$D$^+$ and N$_2$H$^+$) with observations towards four prestellar (L1544, L183, L694-2, L1517B) and one protostellar core (L1521f). We perform detailed non-LTE radiative transfer modelling using RATRAN, comparing the predicted spatial distribution and line profiles of H$_2$D$^+$ and N$_2$H$^+$ with observations towards all cores. To do so, we adopt the physical structure for each core predicted by three different dynamical models taken from the literature: the Quasi-Equilibrium Bonnor-Ebert Sphere (QE-BES), the Singular Isothermal Sphere (SIS), and the Larson-Penston (LP) flow. Our analysis provides an updated picture of the physical structure of prestellar cores. We find that the SIS model can be clearly excluded in explaining the gas emission towards the cores, but a larger sample is required to differentiate clearly between the LP flow, the QE-BES and the static models. All models of collapse underestimate the intensity of the gas emission by up to several factors towards the only protostellar core in our sample, indicating that different dynamics take place in different evolutionary core stages. If the LP model is confirmed towards a larger sample of prestellar cores, it would indicate that they may form by compression or accretion of gas from larger scales. If the QE-BES model is confirmed, it would mean that quasi-hydrostatic cores can exist within the turbulent ISM.
|
astrophysics
|