text | label |
---|---|
In this reply we clarify the main points of our manuscript and respond to the critique in the Comment arXiv:2002.11514. In particular, we emphasize that our conclusion "squeezing loses effectiveness in the deep quantum regime" does not contradict the previous work arXiv:1801.10383, but instead adds to it, and raises fundamental questions on classifying parameter regimes for quantum synchronization. Moreover, we address the concern brought up on the validity of the master equation in the deep quantum regime, and show that our noise-enhanced synchronization differs from previous literature. Through numerical examples, we also demonstrate that the choice of ansatz, while appearing inconsistent, does not lead to erroneous conclusions. Lastly, we expound on the physics of noise-boosted synchronization, and show that it is indeed a genuine feature unique to the deep quantum regime. However, we note that single photon dissipation is a more accurate term, and will move to using that. | quantum physics |
Thin dielectric elastomers with compliant electrodes exhibit various types of instability under the action of electromechanical loading. Guided by the thermodynamically-based formulation of Fosdick and Tang (J. Elasticity 88, 255-297, 2007), here we provide an energetic perspective on the stability of dielectric elastomers and we highlight the fundamental energetic divide between voltage control and charge control. By using the concept of energy relaxation, we describe wrinkling for neo-Hookean ideal elastomers, and we show that in voltage control wrinkling is stable as long as the tension-extension inequality holds, whereas wrinkling is always stable in charge control. We finally illustrate some examples involving both homogeneous and inhomogeneous deformations, showing that the type and hierarchy of instabilities taking place in dielectric membranes can be tuned by suitable choices of the boundary conditions. | condensed matter |
In this paper, we propose a neuro-symbolic framework called weighted Signal Temporal Logic Neural Network (wSTL-NN) that combines the characteristics of neural networks and temporal logics. Weighted Signal Temporal Logic (wSTL) formulas are recursively composed of subformulas that are combined using logical and temporal operators. The quantitative semantics of wSTL is defined such that the quantitative satisfaction of subformulas with higher weights has more influence on the quantitative satisfaction of the overall wSTL formula. In the wSTL-NN, each neuron corresponds to a wSTL subformula, and its output corresponds to the quantitative satisfaction of the formula. We use wSTL-NN to represent wSTL formulas as features to classify time series data. STL features are more explainable than those used in classical methods. The wSTL-NN is end-to-end differentiable, which allows learning of wSTL formulas to be done using back-propagation. To reduce the number of weights, we introduce two techniques to sparsify the wSTL-NN. We apply our framework to an occupancy detection time-series dataset to learn a classifier that predicts the occupancy status of an office room. | computer science |
Finite-$N$ effects unavoidably drive the long-term evolution of long-range interacting $N$-body systems. The Balescu-Lenard kinetic equation generically describes this process sourced by ${1/N}$ effects but this kinetic operator exactly vanishes by symmetry for one-dimensional homogeneous systems: such systems undergo a kinetic blocking and cannot relax as a whole at this order in ${1/N}$. It is therefore only through the much weaker ${1/N^{2}}$ effects, sourced by three-body correlations, that these systems can relax, leading to a much slower evolution. In the limit where collective effects can be neglected, but for an arbitrary pairwise interaction potential, we derive a closed and explicit kinetic equation describing this very long-term evolution. We show how this kinetic equation satisfies an $H$-theorem while conserving particle number and energy, ensuring the unavoidable relaxation of the system towards the Boltzmann equilibrium distribution. Provided that the interaction is long-range, we also show how this equation cannot suffer from further kinetic blocking, i.e., the ${1/N^{2}}$ dynamics is always effective. Finally, we illustrate how this equation quantitatively matches measurements from direct $N$-body simulations. | condensed matter |
We investigate the theoretical limits of pipeline parallel learning of deep learning architectures, a distributed setup in which the computation is distributed per layer instead of per example. For smooth convex and non-convex objective functions, we provide matching lower and upper complexity bounds and show that a naive pipeline parallelization of Nesterov's accelerated gradient descent is optimal. For non-smooth convex functions, we provide a novel algorithm coined Pipeline Parallel Random Smoothing (PPRS) that is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension. While the method still obeys a slow $\varepsilon^{-2}$ convergence rate, the depth-dependent part is accelerated, resulting in a near-linear speed-up and a convergence time that only slightly depends on the depth of the deep learning architecture. Finally, we perform an empirical analysis of the non-smooth non-convex case and show that, for difficult and highly non-smooth problems, PPRS outperforms more traditional optimization algorithms such as gradient descent and Nesterov's accelerated gradient descent for problems where the sample size is limited, such as few-shot or adversarial learning. | statistics |
By combining the Wigner function formalism of relativistic quantum kinetic theory with fundamental equations of relativistic magnetohydrodynamics (MHD), we present a novel approach to determine the proper time evolution of the temperature and other thermodynamic quantities in a uniformly expanding hot, magnetized, and weakly interacting plasma. The aim is to study the contribution of quantum corrections to this evolution. We first determine the corresponding Wigner function in terms of the solution of the Dirac equation in the presence of a constant magnetic field. Using this function, we then compute the energy-momentum tensor of the above-mentioned plasma, which eventually yields its energy density and pressure. Plugging these quantities in the energy equation of relativistic MHD, we arrive, after choosing an appropriate coordinate system, at a differential equation for the temperature as a function of the proper time. The numerical solution of this equation leads finally to the proper time evolution of the temperature. The latter is then used to determine the evolution of a large number of thermodynamic quantities in this expanding and magnetized plasma. We compare our results with other existing results from relativistic MHD. We also comment on the effect of point to point decaying magnetic fields on the thermodynamic properties of this plasma. | high energy physics phenomenology |
Solar filaments are an intriguing phenomenon, like cool clouds suspended in the hot corona. Similar structures exist in the intergalactic medium as well. Despite being a long-studied topic, solar filaments have continually attracted intensive attention because of their link to coronal heating, coronal seismology, solar flares, and coronal mass ejections (CMEs). In this review paper, by combing through the solar filament-related work done in the past decade, we discuss several controversial topics, such as the fine structures, dynamics, magnetic configurations, and helicity of filaments. With high-resolution and high-sensitivity observations, combined with numerical simulations, resolving these disputes is expected to lead to a major leap in understanding the physics related to solar filaments, and even to shed light on galactic filaments. | astrophysics |
We present for the first time a model-independent anatomy of the ratio $\varepsilon'/\varepsilon$ in the context of the $\Delta S = 1$ effective theory with operators invariant under QCD and QED and in the context of the Standard Model Effective Field Theory (SMEFT) with the operators invariant under the full SM gauge group. Our goal is to identify the new physics scenarios that are probed by this ratio and which could help to explain a possible deviation from the SM that is hinted by the data. To this end we derive a master formula for $\varepsilon'/\varepsilon$, which can be applied to any theory beyond the Standard Model (BSM) in which the Wilson coefficients of all contributing operators have been calculated at the electroweak scale. The relevant hadronic matrix elements of BSM operators are from the Dual QCD approach and the SM ones from lattice QCD. Within SMEFT, the constraints from $K^0$ and $D^0$ mixing as well as electric dipole moments limit significantly potential new physics contributions to $\varepsilon'/\varepsilon$. Correlations of $\varepsilon'/\varepsilon$ with $K\to\pi\nu\bar\nu$ decays are briefly discussed. Building on our EFT analysis and the model-independent constraints, we discuss implications of a possible deviation from the SM in $\varepsilon'/\varepsilon$ for model building, highlighting the role of the new scalar and tensor matrix elements in models with scalar mediators. | high energy physics phenomenology |
The $SO(5)$ Landau model is the mathematical platform of the 4D quantum Hall effect and provides a rare opportunity for a physical realization of the fuzzy four-sphere. We present an integrated analysis of the $SO(5)$ Landau models and the associated matrix geometries through the Landau level projection. With the $SO(5)$ monopole harmonics, we explicitly derive the matrix geometry of a four-sphere in any Landau level: in the lowest Landau level the matrix coordinates are given by the generalized $SO(5)$ gamma matrices of the fuzzy four-sphere satisfying the quantum Nambu algebra, while in higher Landau levels the matrix geometry becomes a nested fuzzy structure realizing a pure quantum geometry with no counterpart in classical geometry. The internal fuzzy geometry structure is discussed in view of an $SO(4)$ Pauli-Schr\"odinger model and the $SO(4)$ Landau model, where we unveil a hidden singular gauge transformation between their background non-Abelian field configurations. Relativistic versions of the $SO(5)$ Landau model are also investigated, and their relationship to the Berezin-Toeplitz quantization is clarified. We finally discuss the matrix geometry of the Landau models in even higher dimensions. | high energy physics theory |
We investigate the mineralogy of L5 Martian Trojan asteroids via reflectance spectroscopy, in particular (101429) 1998 $\mbox{VF}_{31}$, the only L5 Trojan that does not belong to the Eureka family (Christou, 2013). We find that this asteroid most likely belongs to the Bus-Demeo S-complex, in agreement with Rivkin et al. (2007), and obtain good spectral matches with Sq- or S-type asteroids, the lunar surface, and Martian and lunar meteorites. Mixture fitting to spectral endmembers suggests a surface abundance of Mg-rich orthopyroxene and iron metal or, alternatively, a mixture of plagioclase and metal with a small amount of Mg-poor orthopyroxene. The metallic component may be part of the intrinsic mineral makeup of the asteroid or an indication of extreme space weathering. We discuss several origin scenarios for (101429). The asteroid could be related to iron-rich primitive achondrites (Rivkin et al.), may have originated as impact ejecta from Mars - as proposed recently for the Eureka family asteroids (Polishook et al., 2017) - or could be a relic fragment of the Moon's original solid crust. If, on the other hand, (101429) is a relatively recent addition to the Martian Trojan clouds (Christou et al., 2020), its origin is probably traced to high-inclination asteroid families in the Inner Main Belt. For the olivine-dominated Eureka family, we find that the two smaller asteroids are more spectrally similar to one another than to (5261) Eureka. Spectral profiles of all three asteroids are closely similar shortward of $\sim 0.7\,\mu$m but diverge at longer wavelengths. For the two smaller asteroids in particular, we find the spectra are virtually identical up to $0.8\,\mu$m. We attribute spectral differences in the near-IR region to differences in the degree of space weathering, olivine chemical composition, and/or regolith grain size. | astrophysics |
We analyse the flat space limit of 3-point correlators in momentum space for general conformal field theories in even spacetime dimensions, and show they exhibit a double copy structure similar to that found in odd dimensions. In even dimensions, the situation is more complicated because correlators contain branch cuts and divergences which need to be renormalised. We describe the analytic continuation of momenta required to extract the flat space limit, and show that the flat space limit is encoded in the leading singularity of a 1-loop triangle integral which serves as a master integral for 3-point correlators in even dimensions. We then give a detailed analysis of the renormalised correlators in four dimensions where the flat space limit of stress tensor correlators are controlled by the coefficients in the trace anomaly. | high energy physics theory |
A robot path planning model based on RNNs and visual quality evaluation in the context of crowds is analyzed in this paper. Mobile robot path planning is the key to robot navigation and an important field in robot research. The motion space of the robot is taken to be a two-dimensional plane, and when the artificial potential field method is used for path planning, the robot's motion is regarded as motion under a virtual artificial potential field force. Compared to simple image acquisition, image acquisition in a complex crowd environment requires image pre-processing first; we mainly use OpenCV calibration tools to pre-process the acquired images. In the methodology design, an RNN-based visual quality evaluation is conducted to filter background noise. After calibration, Gaussian noise and other redundant information affecting subsequent operations still exist in the image; based on the RNN, a new image quality evaluation algorithm is developed, and denoising is performed on this basis. Furthermore, the novel path planning model is designed and simulated. Experiments comparing the model with state-of-the-art models show its robustness. | computer science |
3D data compression techniques can be used to determine the natural basis of radial eigenmodes that encode the maximum amount of information in a tomographic large-scale structure survey. We explore the potential of the Karhunen-Lo\`eve decomposition in reducing the dimensionality of the data vector for cosmic shear measurements, and apply it to the final data from the \cfh survey. We find that practically all of the cosmological information can be encoded in one single radial eigenmode, from which we are able to reproduce constraints compatible with those found in the fiducial tomographic analysis (done with 7 redshift bins) with a factor of ~30 fewer datapoints. This simplifies the problem of computing the two-point function covariance matrix from mock catalogues by the same factor, or by a factor of ~800 for an analytical covariance. The resulting set of radial eigenfunctions is close to ell-independent, and therefore they can be used as redshift-dependent galaxy weights. This simplifies the application of the Karhunen-Lo\`eve decomposition to real-space and Fourier-space data, and allows one to explore the effective radial window function of the principal eigenmodes as well as the associated shear maps in order to identify potential systematics. We also apply the method to extended parameter spaces and verify that additional information may be gained by including a second mode to break parameter degeneracies. The data and analysis code are publicly available at https://github.com/emiliobellini/kl_sample. | astrophysics |
Given nonstationary data, one generally wants to extract the trend from the noise by smoothing or filtering. However, it is often important to delineate a third intermediate category, which we call high frequency (HF) features: this is the case in our motivating example, which consists of experimental measurements of the time-dynamics of the average size of depolymerising protein fibrils. One may intuitively visualise HF features as the presence of fast, possibly nonstationary and transient oscillations, distinct from a slowly-varying trend envelope. The aim of this article is to propose an empirical definition of HF features and construct estimators and statistical tests for their presence accordingly, when the data consists of a noisy nonstationary 1-dimensional signal. We propose a parametric characterization of the HF features in the Fourier domain by defining a maximal amplitude and a distance to low frequencies of significant energy. We introduce a data-driven procedure to estimate these parameters, and compute a p-value proxy based on a statistical test for the presence of HF features. The test is first conducted on simulated signals where the ratio of the amplitude of the HF features to the level of the noise is controlled. The test detects HF features even when the level of noise is five times larger than the amplitude of the oscillations. In a second part, the test is conducted on experimental data from Prion disease experiments and it confirms the presence of HF features in these signals with significant confidence. | electrical engineering and systems science |
We investigate the combined effects of anisotropy and a magnetic field in strongly interacting gauge theories by the gauge/gravity correspondence. Our main motivation is the quark-gluon plasma produced in off-central heavy-ion collisions which exhibits large anisotropy in pressure gradients as well as large external magnetic fields. We explore two different configurations, with the anisotropy either parallel or perpendicular to the magnetic field, focusing on the competition and interplay between the two. A detailed study of the RG flow in the ground state reveals a rich structure where depending on which of the two, anisotropy or magnetic field, is stronger, intermediate geometries with approximate AdS$_4\times \mathbb{R}$ and AdS$_3\times \mathbb{R}^2$ factors arise. This competition is also manifest in the phase structure at finite temperature, specifically in the dependence of the chiral transition temperature on anisotropy and magnetic field, from which we infer the presence of inverse magnetic and anisotropic catalyses of the chiral condensate. Finally, we consider other salient observables in the theory, including the quark-antiquark potential, shear viscosity, entanglement entropy and the butterfly velocity. We demonstrate that they serve as good probes of the theory, in particular, distinguishing between the effects of the magnetic field and anisotropy in the ground and plasma states. We also find that the butterfly velocity, which codifies how fast information propagates in the plasma, exhibits a rich structure as a function of temperature, anisotropy and magnetic field, exceeding the conformal value in certain regimes. | high energy physics theory |
Aspect-level sentiment classification aims to identify the sentiment polarity towards a specific aspect term in a sentence. Most current approaches mainly consider the semantic information by utilizing attention mechanisms to capture the interactions between the context and the aspect term. In this paper, we propose to employ graph convolutional networks (GCNs) on the dependency tree to learn syntax-aware representations of aspect terms. GCNs often show the best performance with two layers, and deeper GCNs do not bring additional gain due to the over-smoothing problem. However, in some cases, important context words cannot be reached within two hops on the dependency tree. Therefore we design a selective attention based GCN block (SA-GCN) to find the most important context words, and directly aggregate this information into the aspect-term representation. We conduct experiments on the SemEval 2014 Task 4 datasets. Our experimental results show that our model outperforms the current state-of-the-art. | computer science |
In this paper, we use certain norm inequalities to obtain new uncertainty relations based on the Wigner-Yanase skew information. First, for an arbitrary finite number of observables, we derive an uncertainty relation outperforming previous lower bounds. We then propose new weighted uncertainty relations for two noncompatible observables. Two separability criteria via skew information are also obtained. | quantum physics |
Supernova remnant RX J1713.7-3946 (also named G347.3-0.5) exhibits a large surface brightness and a detailed spectral and shell-type morphology; it is one of the brightest TeV sources. The recent H.E.S.S. observation of RX J1713.7-3946 revealed a broken power-law GeV-TeV gamma-ray spectrum and a more extended gamma-ray spatial radial profile than that in the X-ray band. Based on the diffusive shock acceleration model, we solve spherically symmetric hydrodynamic equations and transport equations of particles, and investigate the multi-band non-thermal emission of RX J1713.7-3946 and the radial profiles of its surface brightness for two selected zones in the leptonic scenario for the $\gamma$-ray emission. We find that (1) the diffusion coefficient has a weak energy dependence, and the Kolmogorov type is favored; (2) the magnetic field strength could vary linearly or nonlinearly with radius for different surrounding environments because of possible turbulence in the shock downstream region, and a compressional amplification is likely to exist at the shock front; (3) the non-thermal photons from radio to X-ray bands are dominated by synchrotron emission from relativistic electrons; if the GeV-TeV gamma-rays are produced by inverse Compton scattering of these electrons off the background photons, then the X-ray and gamma-ray radial profiles can be reproduced except for the more extended $\gamma$-ray emission. | astrophysics |
The XENON1T experiment has recently announced the observation of an excess in electron recoil events in the energy range of $1-7$ keV with a $3.5~\sigma$ signal significance over the Standard Model prediction. In this letter we sketch the prospects of explaining such an excess from Migdal ionization events with below-threshold nuclear recoil energies. Interestingly, such events are expected to appear in the ballpark energy range of the observed excess. We demonstrate that the observed signal can be reproduced through the Migdal effect by an $\mathcal{O}(1)$ GeV neutron-philic dark matter having a spin-dependent coupling with the nucleus. A more optimistic scenario is explored where the Migdal ionization is driven by MeV scale boosted dark matter. | high energy physics phenomenology |
Noise in quantum operations often negates the advantage of quantum computation. However, most classical simulations of quantum computers calculate the ideal probability amplitudes either by storing full state vectors or by using sophisticated tensor network contractions. Here, we investigate sampling-based classical simulation methods for noisy quantum circuits. Specifically, we characterize the simulation costs of two major schemes, stabilizer-state sampling of magic states and Heisenberg propagation, for quantum circuits subject to stochastic Pauli noise, such as depolarizing and dephasing noise. To this end, we introduce several techniques for the stabilizer-state sampling to reduce the simulation costs under such noise. We find that in the low-noise regime, stabilizer-state sampling results in a smaller sampling cost, while Heisenberg propagation is better in the high-noise regime. Furthermore, for a high depolarizing noise rate $\sim 10\%$, these methods provide better scaling compared to that given by the low-rank stabilizer decomposition. We believe that this knowledge of classical simulation costs is useful for squeezing possible quantum advantage out of near-term noisy quantum devices, as well as for developing efficient classical simulation methods. | quantum physics |
Rapid advances in computing technology over the past few decades have spurred two extraordinary phenomena in science: large-scale and high-throughput data collection coupled with the creation and implementation of complex statistical algorithms for data analysis. Together, these two phenomena have brought about tremendous advances in scientific discovery but have also raised two serious concerns, one relatively new and one quite familiar. The complexity of modern data analyses raises questions about the reproducibility of the analyses, meaning the ability of independent analysts to re-create the results claimed by the original authors using the original data and analysis techniques. While seemingly a straightforward concept, reproducibility of analyses is typically thwarted by the lack of availability of the data and computer code that were used in the analyses. A much more general concern is the replicability of scientific findings, which concerns the frequency with which scientific claims are confirmed by completely independent investigations. While the concepts of reproducibility and replicability are related, it is worth noting that they are focused on quite different goals and address different aspects of scientific progress. In this review, we will discuss the origins of reproducible research, characterize the current status of reproducibility in public health research, and connect reproducibility to current concerns about replicability of scientific findings. Finally, we describe a path forward for improving both the reproducibility and replicability of public health research in the future. | statistics |
Double-interface magnetic tunnel junctions (MTJs) have recently been developed for enhancing the thermal stability barrier at small technology nodes. The Dzyaloshinskii-Moriya interaction (DMI) inevitably exists in such devices due to the use of heavy-metal/ferromagnet structures. Previous studies have demonstrated the detrimental effect of the DMI on conventional single-interface spin transfer torque (STT) MTJs. Here, in this work, we prove that the detrimental effect of the DMI can be almost eliminated in the double-interface STT-MTJ. This conclusion is attributed to the suppressing effect of the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction on the DMI. Detailed mechanisms are analyzed based on theoretical models and micromagnetic simulation results. Our work highlights the importance of appropriately controlling the DMI in the two free layers of the double-interface STT-MTJ. | physics |
Recently, reinforcement learning models have achieved great success, mastering complex tasks such as Go and other games with higher scores than human players. Many of these models store considerable data on the tasks and achieve high performance by extracting visual and time-series features using convolutional neural networks (CNNs) and recurrent neural networks, respectively. However, these networks have very high computational costs because they need to be trained by repeatedly using the stored data. In this study, we propose a novel practical approach called the reinforcement learning with convolutional reservoir computing (RCRC) model. The RCRC model uses a fixed random-weight CNN and a reservoir computing model to extract visual and time-series features. Using these extracted features, it decides actions with an evolution strategy method. Thereby, the RCRC model has several desirable features: (1) there is no need to train the feature extractor, (2) there is no need to store training data, (3) it can take a wide range of actions, and (4) there is only a single task-dependent weight parameter to be trained. Furthermore, we show that the RCRC model can solve multiple reinforcement learning tasks with a completely identical feature extractor. | computer science |
Simultaneous diagonalization via congruence (SDC) of more than two symmetric matrices has been a long-standing problem. So far, the best attempts either rely on the existence of a semidefinite matrix pencil or work over the complex field. The problem is now resolved without any assumption. We first propose necessary and sufficient conditions for SDC in the case that at least one of the matrices is nonsingular. Otherwise, we show that the singular matrices can be decomposed into diagonal blocks such that the SDC of the given matrices becomes equivalent to the SDC of the sub-matrices. Most importantly, the sub-matrices now contain at least one nonsingular matrix. Applications to simplifying some difficult optimization problems in which SDC is present are mentioned. | mathematics |
Although variable-speed three-blade wind turbines are nowadays quite popular, their control remains a challenging task. We propose a new, easily implementable model-free control approach with the corresponding intelligent controllers. Several convincing computer simulations, including some fault accommodations, show that model-free controllers are more efficient and robust than classic proportional-integral controllers. | electrical engineering and systems science |
We introduce a near-term experimental platform for realizing an associative memory. It can simultaneously store many memories by using spinful bosons coupled to a degenerate multimode optical cavity. The associative memory is realized by a confocal cavity QED neural network, with the cavity modes serving as the synapses, connecting a network of superradiant atomic spin ensembles, which serve as the neurons. Memories are encoded in the connectivity matrix between the spins, and can be accessed through the input and output of patterns of light. Each aspect of the scheme is based on recently demonstrated technology using a confocal cavity and Bose-condensed atoms. Our scheme has two conceptually novel elements. First, it introduces a new form of random spin system that interpolates between a ferromagnetic and a spin-glass regime as a physical parameter is tuned---the positions of ensembles within the cavity. Second, and more importantly, the spins relax via deterministic steepest-descent dynamics, rather than Glauber dynamics. We show that this nonequilibrium quantum-optical scheme has significant advantages for associative memory over Glauber dynamics: these dynamics can enhance the network's ability to store and recall memories beyond that of the standard Hopfield model. Surprisingly, the cavity QED dynamics can retrieve memories even when the system is in the spin glass phase. Thus, the experimental platform provides a novel physical instantiation of associative memories and spin glasses, as well as an unusual form of relaxational dynamics that is conducive to memory recall even in regimes where it was thought to be impossible. | quantum physics |
Superconducting circuits are a strong contender for realizing quantum computing systems, and are also successfully used to study quantum optics and hybrid quantum systems. However, their cryogenic operation temperatures and the current lack of coherence-preserving microwave-to-optical conversion solutions have hindered the realization of superconducting quantum networks either spanning different cryogenics systems or larger distances. Here, we report the successful operation of a cryogenic waveguide coherently linking transmon qubits located in two dilution refrigerators separated by a physical distance of five meters. We transfer qubit states and generate entanglement on-demand with average transfer and target state fidelities of 85.8 % and 79.5 %, respectively, between the two nodes of this elementary network. Cryogenic microwave links do provide an opportunity to scale up systems for quantum computing and create local area quantum communication networks over length scales of at least tens of meters. | quantum physics |
We study the mixed topological / holomorphic Chern-Simons theory of Costello, Witten and Yamazaki on an orbifold $(\Sigma\times{\mathbb C})/{\mathbb Z}_2$, obtaining a description of lattice integrable systems in the presence of a boundary. By performing an order $\hbar$ calculation we derive a formula for the asymptotic behaviour of $K$-matrices associated to rational, quasi-classical $R$-matrices. The ${\mathbb Z}_2$-action on $\Sigma\times {\mathbb C}$ fixes a line $L$, and line operators on $L$ are shown to be labelled by representations of the twisted Yangian. The OPE of such a line operator with a Wilson line in the bulk is shown to give the coproduct of the twisted Yangian. We give the gauge theory realisation of the Sklyanin determinant and related conditions in the $RTT$ presentation of the boundary Yang-Baxter equation. | high energy physics theory
We propose a Bayesian hidden Markov model for analyzing time series and sequential data where a special structure of the transition probability matrix is embedded to model explicit-duration semi-Markovian dynamics. Our formulation allows for the development of highly flexible and interpretable models that can integrate available prior information on state durations while keeping a moderate computational cost to perform efficient posterior inference. We show the benefits of choosing a Bayesian approach over its frequentist counterpart, in terms of incorporation of prior information, quantification of uncertainty, model selection and out-of-sample forecasting. The use of our methodology is illustrated in an application relevant to e-Health, where we investigate rest-activity rhythms using telemetric activity data collected via a wearable sensing device. | statistics |
Further advancement of quantum computing (QC) is contingent on enabling many-body models that avoid deep circuits and excessive use of CNOT gates. To this end, we develop a QC approach employing finite-order connected moment expansions (CMX) and affordable procedures for initial state preparation. We demonstrate the performance of our approach employing several quantum variants of CMX through classical emulations on the H2 molecule potential energy surface and the Anderson model with a broad range of correlation strength. The results show that our approach is robust and flexible. Good agreement with exact solutions can be maintained even at the dissociation and strong correlation limits. | quantum physics
This work uses the long-wavelength limit to compute LSPR response of biosensors, expanding the open-source PyGBe code to compute the extinction cross-section of metallic nanoparticles in the presence of any target for sensing. The target molecule is represented by a surface mesh, based on its crystal structure. PyGBe is research software for continuum electrostatics, written in Python with computationally expensive parts accelerated on GPU hardware, via PyCUDA. It is also accelerated algorithmically via a treecode that offers O(N log N) computational complexity. These features allow PyGBe to handle problems with half a million boundary elements or more. Using a model problem consisting of an isolated silver nanosphere in an electric field, our results show grid convergence as 1/N, and accurate computation of the extinction cross-section as a function of wavelength (compared with an analytical solution). For a model of a sensor-analyte system, consisting of a spherical silver nanoparticle and a set of bovine serum albumin (BSA) proteins, our results again obtain grid convergence as 1/N (with respect to the Richardson extrapolated value). Computing the LSPR response as a function of wavelength in the presence of BSA proteins captures a red-shift of 0.5 nm in the resonance frequency due to the presence of the analytes at 1-nm distance. The final result is a sensitivity study of the biosensor model, obtaining the shift in resonance frequency for various distances between the proteins and the nanoparticle. All results in this paper are fully reproducible, and we have deposited in archival data repositories all the materials needed to run the computations again and re-create the figures. PyGBe is open source under a permissive license and openly developed. Documentation is available at http://barbagroup.github.io/pygbe/docs/. | physics |
Large generative language models have been very successful for English, but other languages lag behind due to data and computational limitations. We propose a method that may overcome these problems by adapting existing pre-trained language models to new languages. Specifically, we describe the adaptation of English GPT-2 to Italian and Dutch by retraining lexical embeddings without tuning the Transformer layers. As a result, we obtain lexical embeddings for Italian and Dutch that are aligned with the original English lexical embeddings and induce a bilingual lexicon from this alignment. Additionally, we show how to scale up complexity by transforming relearned lexical embeddings of GPT-2 small to the GPT-2 medium embedding space. This method minimises the amount of training and prevents losing information during adaptation that was learned by GPT-2. English GPT-2 models with relearned lexical embeddings can generate realistic sentences in Italian and Dutch, but on average these sentences are still identifiable as artificial by humans. Based on perplexity scores and human judgements, we find that generated sentences become more realistic with some additional full model finetuning, especially for Dutch. For Italian, we see that they are evaluated on par with sentences generated by a GPT-2 model fully trained from scratch. Our work can be conceived as a blueprint for training GPT-2s for other languages, and we provide a 'recipe' to do so. | computer science |
One of the core advantages topological methods for data analysis provide is that the language of (co)chains can be mapped onto the semantics of the data, providing a natural avenue for human understanding of the results. Here, we describe such a semantic structure on Chen's classical iterated integral cochain model for paths in Euclidean space. Specifically, in the context of population time series data, we observe that iterated integrals provide a model-free measure of pairwise influence that can be used for causality inference. Along the way, we survey recent results and applications, review the current standard methods for causality inference, and briefly provide our outlook on generalizations to go beyond time series data. | statistics |
The collective motion of microswimmers in suspensions induces patterns of vortices on scales that are much larger than the characteristic size of a microswimmer, attaining a state called bacterial turbulence. Hydrodynamic turbulence acts on even larger scales and is dominated by inertial transport of energy. Using an established modification of the Navier-Stokes equation that accounts for the small-scale forcing of hydrodynamic flow by microswimmers, we study the properties of a dense suspension of microswimmers in two dimensions, where the conservation of enstrophy can drive an inverse cascade through which energy is accumulated on the largest scales. We find that the dynamical and statistical properties of the flow show a sharp transition to the formation of vortices at the largest length scale. The results show that 2d bacterial and hydrodynamic turbulence are separated by a subcritical phase transition. | physics
Given a set of baseline assumptions, a breakdown frontier is the boundary between the set of assumptions which lead to a specific conclusion and those which do not. In a potential outcomes model with a binary treatment, we consider two conclusions: First, that ATE is at least a specific value (e.g., nonnegative) and second that the proportion of units who benefit from treatment is at least a specific value (e.g., at least 50\%). For these conclusions, we derive the breakdown frontier for two kinds of assumptions: one which indexes relaxations of the baseline random assignment of treatment assumption, and one which indexes relaxations of the baseline rank invariance assumption. These classes of assumptions nest both the point identifying assumptions of random assignment and rank invariance and the opposite end of no constraints on treatment selection or the dependence structure between potential outcomes. This frontier provides a quantitative measure of robustness of conclusions to relaxations of the baseline point identifying assumptions. We derive $\sqrt{N}$-consistent sample analog estimators for these frontiers. We then provide two asymptotically valid bootstrap procedures for constructing lower uniform confidence bands for the breakdown frontier. As a measure of robustness, estimated breakdown frontiers and their corresponding confidence bands can be presented alongside traditional point estimates and confidence intervals obtained under point identifying assumptions. We illustrate this approach in an empirical application to the effect of child soldiering on wages. We find that sufficiently weak conclusions are robust to simultaneous failures of rank invariance and random assignment, while some stronger conclusions are fairly robust to failures of rank invariance but not necessarily to relaxations of random assignment. | statistics |
In this paper, we construct, for higher twists that arise from cohomotopy classes, the Chern character in higher twisted K-theory, which maps into higher twisted cohomology. We show that it gives rise to an isomorphism between higher twisted K-theory and higher twisted cohomology over the reals. Finally, we compute spherical T-duality in higher twisted K-theory and higher twisted cohomology in very general cases. | mathematics
We propose to implement speech enhancement by the regeneration of clean speech from a salient representation extracted from the noisy signal. The network that extracts salient features is trained using a set of weight-sharing clones of the extractor network. The clones receive mel-frequency spectra of different noisy versions of the same speech signal as input. By encouraging the outputs of the clones to be similar for these different input signals, we train a feature extractor network that is robust to noise. At inference, the salient features form the input to a WaveNet network that generates a natural and clean speech signal with the same attributes as the ground-truth clean signal. As the signal becomes noisier, our system produces natural-sounding errors that stay on the speech manifold, in place of traditional artifacts found in other systems. Our experiments confirm that our generative enhancement system provides state-of-the-art enhancement performance within the generative class of enhancers according to a MUSHRA-like test. The clone-based system matches or outperforms the other systems at each input signal-to-noise ratio (SNR) range with statistical significance. | electrical engineering and systems science
Central exclusive production at hadron colliders is characterised by the hadronic state produced at or close to midrapidity, and by the two forward scattered protons, or remnants thereof. No particles are produced between the midrapidity system and the forward going beam particles, and such events can hence be identified experimentally by a double-gap topology. At LHC energies, central exclusive production in proton-proton collisions is dominated by pomeron-pomeron fusion. The models to describe such reactions are reviewed, and the ongoing efforts in the ALICE Collaboration to analyse double-gap events taken in Run 2 at the LHC are presented. | high energy physics phenomenology |
Forbush decreases (FDs), which are short-term drops in the flux of galactic cosmic rays, are caused by the shielding from strong and/or turbulent magnetic structures in the solar wind, especially interplanetary coronal mass ejections (ICMEs) and their associated shocks, as well as corotating interaction regions. Such events can be observed at Earth, for example, using neutron monitors, and also at many other locations in the solar system, such as on the surface of Mars with the Radiation Assessment Detector instrument onboard Mars Science Laboratory. They are often used as a proxy for detecting the arrival of ICMEs or corotating interaction regions, especially when sufficient in situ solar wind measurements are not available. We compare the properties of FDs observed at Earth and Mars, focusing on events produced by ICMEs. We find that FDs at both locations show a correlation between their total amplitude and the maximum hourly decrease, but with different proportionality factors. We explain this difference using theoretical modeling approaches and suggest that it is related to the size increase of ICMEs, and in particular their sheath regions, en route from Earth to Mars. From the FD data, we can derive the sheath broadening factor to be between about 1.5 and 1.9, agreeing with our theoretical considerations. This factor is also in line with previous measurements of the sheath evolution closer to the Sun. | physics |
The room temperature compatibility of the negatively-charged nitrogen-vacancy center (NV-) in diamond makes it the ideal quantum system for a university teaching lab. Here, we describe a low-cost experimental setup for coherent control experiments on the electronic spin state of the NV- center. We implement spin-relaxation measurements, optically-detected magnetic resonance, Rabi oscillations, and dynamical decoupling sequences on an ensemble of NV- centers. The relatively short times required to perform each of these experiments (<10 minutes) demonstrate the feasibility of the setup in a teaching lab. Learning outcomes include a basic understanding of quantum spin systems, magnetic resonance, the rotating frame, Bloch spheres, and pulse sequence development. | physics
In this paper, we study nonnegative tensor data and propose an orthogonal nonnegative Tucker decomposition (ONTD). We discuss some properties of ONTD and develop a convex relaxation algorithm of the augmented Lagrangian function to solve the optimization problem. The convergence of the algorithm is established. We employ ONTD on image data sets from real-world applications including face recognition, image representation, and hyperspectral unmixing. Numerical results are shown to illustrate the effectiveness of the proposed algorithm. | statistics
Network representation learning (NRL) techniques have been successfully adopted in various data mining and machine learning applications. Random walk based NRL is one popular paradigm, which uses a set of random walks to capture the network structural information, and then employs word2vec models to learn the low-dimensional representations. However, until now there has been a lack of a framework that unifies existing random walk based NRL models and supports efficient learning from large networks. The main obstacle comes from the diverse random walk models and the inefficient sampling methods for random walk generation. In this paper, we first introduce a new and efficient edge sampler based on the Metropolis-Hastings sampling technique, and theoretically show the convergence property of the edge sampler to arbitrary discrete probability distributions. Then we propose a random walk model abstraction, in which users can easily define different transition probabilities by specifying dynamic edge weights and random walk states. The abstraction is efficiently supported by our edge sampler, since our sampler can draw samples from an unnormalized probability distribution in constant time complexity. Finally, with the new edge sampler and random walk model abstraction, we carefully implement a scalable NRL framework called UniNet. We conduct comprehensive experiments with five random walk based NRL models over eleven real-world datasets, and the results clearly demonstrate the efficiency of UniNet over billion-edge networks. | computer science
We study four coupled SYK models and nearly AdS$_2$ gravities. On the SYK model side, we construct a model that couples two copies of two coupled SYK models. On the nearly AdS$_2$ gravity side, we entangle matter fields in two copies of traversable wormholes. In both cases, the systems show first-order phase transitions at zero temperature as the couplings are changed, which is understood as the exchange of traversable wormhole configurations. In the nearly AdS$_2$ gravity cases, by exchanging the roles of space and time, the wormholes are interpreted as bra-ket wormholes. In Lorentzian signature, these bra-ket wormholes lead to two closed universes that are entangled with each other as well as with matter fields in the flat space without dynamical gravity. We study the effect of projection or entangling operations on the matter fields in flat space; these operations cause phase transitions in bra-ket wormholes, leading to the pair annihilation of closed universes. Using these bra-ket wormholes, we discuss how to embed states in 2d holographic CFTs into the Hilbert space of many 2d free fields. | high energy physics theory
The principle of Correlation Optical Time Domain Reflectometry (C-OTDR) is proposed to accurately measure the propagation delay over a multi-span optical fiber link. The delay of the transmission fiber is measured in the reflective mode, while uni-directional node components are measured in a transmissive mode. Delimiting reflectors are required between the sections for accurate demarcation. | electrical engineering and systems science |
We propose a single-site mean-field description, an analogue of Weiss mean-field theory, suitable for narrow-band systems with correlation-induced hybridisation at finite temperatures. Presently, this approach, based on the notion of a fluctuating on-site density matrix (OSDM), is developed for the case of the extended Falicov-Kimball model (EFKM). In an EFKM, an excitonic insulator phase can be stabilised at zero temperature. With increasing temperature, the excitonic order parameter (interaction-induced hybridisation on-site, characterised by the absolute value and phase) eventually becomes disordered, which involves fluctuations of both its phase and (at higher T) its absolute value. In order to build an adequate finite-temperature description, it is important to clarify the nature of the degrees of freedom associated with the phase and absolute value of the induced hybridisation, and correctly account for the corresponding phase-space volume. We show that the OSDM-based treatment of the local fluctuations indeed provides an intuitive and concise description (including the phase-space integration measure). This allows us to describe both the lower-temperature regime where phase fluctuations destroy the long-range order, and the higher-temperature crossover corresponding to a decrease of the absolute value of hybridisation. In spite of the rapid progress in the studies of excitonic insulators, a unified picture of this kind has not been available to date. We briefly discuss recent experiments on ${\rm Ta_2 Ni Se_5 }$ and also address the amplitude mode of collective excitations in relation to the measurements reported for 1T--${\rm TiSe_2}$. Both the overall scenario and the theoretical framework are also expected to be relevant in other contexts, including the Kondo lattice model. | condensed matter
In this contribution we establish a dictionary between terms in two different areas in order to show that many of the topics studied are common ones - just with a different terminology. We further analyze the relations between the discrete-time and continuous-time versions of the problem, using results from both of these fields. We will also differentiate between a discretization of the continuous-time dynamical system and a discrete-time dynamical system itself. | mathematics |
Transverse momentum dependent parton distribution functions (TMDPDFs) provide a unique probe of the three-dimensional spin structure of hadrons. We construct spin-dependent quasi-TMDPDFs that are amenable to lattice QCD calculations and that can be used to determine spin-dependent TMDPDFs. We calculate the short-distance coefficients connecting spin-dependent TMDPDFs and quasi-TMDPDFs at one-loop order. We find that the helicity and transversity distributions have the same coefficient as the unpolarized TMDPDF. We also argue that the same is true for pretzelosity and that this spin universality of the matching will hold to all orders in $\alpha_s$. Thus, it is possible to calculate ratios of these distributions as a function of longitudinal momentum and transverse position utilizing simpler Wilson line paths than have previously been considered. | high energy physics phenomenology |
We consider the problem of estimating an expected outcome from a stochastic simulation model. Our goal is to develop a theoretical framework on importance sampling for such estimation. By investigating the variance of an importance sampling estimator, we propose a two-stage procedure that involves a regression stage and a sampling stage to construct the final estimator. We introduce a parametric and a nonparametric regression estimator in the first stage and study how the allocation between the two stages affects the performance of the final estimator. We analyze the variance reduction rates and derive oracle properties of both methods. We evaluate the empirical performances of the methods using two numerical examples and a case study on wind turbine reliability evaluation. | statistics |
In this paper, we study further properties and applications of weighted homology and persistent homology. We introduce the Mayer-Vietoris sequence and generalized Bockstein spectral sequence for weighted homology. For applications, we show an algorithm to construct a filtration of weighted simplicial complexes from a weighted network. We also prove a theorem that allows us to calculate the mod $p^2$ weighted persistent homology given some information on the mod $p$ weighted persistent homology. | mathematics |
Cytoskeletal filaments are capable of self-assembly in the absence of externally supplied chemical energy, but the rapid turnover rates essential for their biological function require a constant flux of ATP or GTP hydrolysis. The same is true for two-dimensional protein assemblies employed in the formation of vesicles from cellular membranes, which rely on ATP-hydrolyzing enzymes to rapidly disassemble upon completion of the process. Recent observations suggest that the nucleolus, p granules and other three-dimensional membraneless organelles may also demand dissipation of chemical energy to maintain their fluidity. Cooperative binding plays a crucial role in the dynamics of these higher-dimensional structures, but is absent from classic models of 1-dimensional cytoskeletal assembly. In this Letter, we present a thermodynamically consistent model of active regeneration with cooperative assembly, and compute the maximum turnover rate and minimum disassembly time as a function of the chemical driving force and the binding energy. We find that these driven structures resemble different equilibrium states above and below the nucleation barrier. In particular, we show that the maximal acceleration under large binding energies unites infinite-temperature local fluctuations with low-temperature nucleation kinetics. | physics
A new robust class of multivariate skew distributions is introduced. Practical aspects such as the parameter estimation method of the proposed class are discussed; we show that the proposed class can be fitted within a reasonable time frame. Our study shows that the class of distributions is capable of modeling multivariate skewness structure and does not suffer from the curse of dimensionality as heavily as other distributions of similar complexity do, such as the class of canonical skew distributions. We also derive a nested form of the proposed class which appears to be the most flexible class of multivariate skew distributions in the literature that has a closed-form density function. Numerical examples on two data sets, i) a data set containing daily river flow data recorded in the UK; and ii) a data set containing biomedical variables of athletes collected by the Australian Institute of Sport (AIS), are demonstrated. These examples further support the practicality of the proposed class on moderate-dimensional data sets. | statistics
The information encoded into an open quantum system that evolves under a Markovian dynamics is always monotonically non-increasing. Nonetheless, for a given quantifier of the information contained in the system, it is in general not clear if for all non-Markovian dynamics it is possible to observe a non-monotonic evolution of this quantity, namely a backflow. We address this problem by considering correlations of finite-dimensional bipartite systems. For this purpose, we consider a class of correlation measures and prove that if the dynamics is non-Markovian there exists at least one element from this class that provides a correlation backflow. Moreover, we provide a set of initial probe states that accomplish this witnessing task. This result provides the first one-to-one relation between non-Markovian dynamics of finite-dimensional quantum systems and correlation backflows. | quantum physics |
We analyze the exponential stability of distributed parameter systems. The system we consider is described by a coupled parabolic partial differential equation with spatially varying coefficients. We approximate the coefficients by splitting space domains but take into account approximation errors during stability analysis. Using a quadratic Lyapunov function, we obtain sufficient conditions for exponential stability in terms of linear matrix inequalities. | mathematics |
The monogamy relations of entanglement are highly significant. However, they involve only amounts of entanglement shared by different subsystems. Results on monogamy relations between entanglement and other kinds of correlations, and particularly classical correlations, are very scarce. Here we experimentally observe a tradeoff relation between internal quantum nonseparability and external total correlations in a photonic system and find that even purely classical external correlations have a detrimental effect on internal nonseparability. The nonseparability we consider, measured by the concurrence, is between different degrees of freedom within the same photon, and the external classical correlations, measured by the standard quantum mutual information, are generated between the photons of a photon pair using the time-bin method. Our observations show that to preserve the internal entanglement in a system, it is necessary to maintain low external correlations, including classical ones, between the system and its environment. | quantum physics
This article treats isoperimetric inequalities for integral currents in the setting of stratified nilpotent Lie groups equipped with left-invariant Riemannian metrics. We prove that for each such group there is a dimension in which no Euclidean isoperimetric inequality is admitted, while in all smaller dimensions strictly Euclidean isoperimetric inequalities are satisfied. | mathematics |
Black arsenic (BAs) is a van der Waals layered material with a puckered honeycomb structure and has received increased interest due to its anisotropic properties and promising performance in devices. Here, crystalline structure, thickness-dependent dielectric responses, and ambient stability of BAs nanosheets are investigated using STEM imaging and spectroscopy. Atomic-resolution HAADF-STEM images directly visualize the three-dimensional structure and evaluate the degree of anisotropy. STEM-EELS is used to measure the dielectric response of BAs as a function of the number of layers. Finally, BAs degradation under different ambient environments is studied highlighting high sensitivity to moisture in the air. | condensed matter |
Non-linear electrical transport studies at high pulsed magnetic fields, above the range accessible by DC magnets, are of direct fundamental relevance to the physics of superconductors, domain walls, charge-density waves, and topological semimetals. All-superconducting very-high-field magnets also make it technologically relevant to study vortex matter in this regime. However, pulsed magnetic fields reaching 100 T in milliseconds impose technical and fundamental challenges that have prevented the realization of these studies. Here, we present a technique for sub-microsecond, smart, current-voltage measurements, which enables determining the superconducting critical current in pulsed magnetic fields beyond the reach of any DC magnet. We demonstrate the excellent agreement of this technique with low DC field measurements on Y$_{0.77}$Gd$_{0.23}$Ba$_2$Cu$_3$O$_7$ coated conductors with and without BaHfO$_3$ nanoparticles. Exploring the uncharted high magnetic field region, we discover a characteristic influence of the magnetic field rate of change ($dH/dt$) on the current-voltage curves in a superconductor. We fully capture this unexplored vortex physics through a theoretical model based on the asymmetry of the vortex velocity profile produced by the applied current. | condensed matter
The Higgs boson pair production via gluon fusion at high-energy hadron colliders, such as the LHC, is vital in deciphering the Higgs potential and in pinning down the electroweak symmetry breaking mechanism. We carry out the next-to-next-to-next-to-leading order (N$^3$LO) QCD calculations in the infinite top-quark mass limit and present predictions for both the inclusive and differential cross sections, albeit the differential distributions other than the invariant mass distribution of the Higgs boson pair are approximated at N$^3$LO. Such corrections are indispensable in stabilising the perturbative expansion of the cross section in the strong coupling $\alpha_s$. At the inclusive level, the scale uncertainties are reduced by a factor of four compared with the next-to-next-to-leading order (NNLO) results. Given that the inclusion of the top-quark mass effects is essential for the phenomenological applications, we use several schemes to incorporate the N$^3$LO results in the infinite top-quark mass limit and the next-to-leading order (NLO) results with full top-quark mass dependence, and present theoretical predictions for the (differential) cross sections in the proton-proton collisions at the centre-of-mass energies $\sqrt{s}=13,14,27$ and $100$ TeV. Our results provide one of the most precise theoretical inputs for the analyses of the Higgs boson pair events. | high energy physics phenomenology |
Polynomials which afford nonnegative, real-rooted symmetric decompositions have been investigated recently in algebraic, enumerative and geometric combinatorics. Br\"and\'en and Solus have given sufficient conditions under which the image of a polynomial under a certain operator associated to barycentric subdivision has such a decomposition. This paper gives a new proof of their result which generalizes to subdivision operators in the setting of uniform triangulations of simplicial complexes, introduced by the first named author. Sufficient conditions under which these decompositions are also interlacing are described. Applications yield new classes of polynomials in geometric combinatorics which afford nonnegative, real-rooted symmetric decompositions. Some interesting questions in $f$-vector theory arise from this work. | mathematics |
We define a method for taking advantage of net reductions in combination with a SMT-based model checker. We prove the correctness of this method using a new notion of equivalence between nets that we call polyhedral abstraction. Our approach has been implemented in a tool, named SMPT, that provides two main procedures: Bounded Model Checking (BMC) and Property Directed Reachability (PDR). Each procedure has been adapted in order to use reductions and to work with arbitrary Petri nets. We tested SMPT on a large collection of queries used during the 2020 edition of the Model Checking Contest. Our experimental results show that our approach works well, even when we only have a moderate amount of reductions. | computer science |
Quantum dynamics of strongly correlated systems is a challenging problem. Although the low-energy fractional excitations of one-dimensional integrable models are often well understood, exploring quantum dynamics in these systems remains challenging in the gapless regime, especially at intermediate and high energies. Based on the algebraic Bethe ansatz formalism, we study spin dynamics in a representative one-dimensional strongly correlated model, i.e., the antiferromagnetic spin-$\frac{1}{2}$ XXZ chain with the Ising anisotropy, via the form-factor formulae. Various excitations at different energy scales are identified as crucial to the dynamic spin structure factors under the guidance of sum rules. At small magnetic polarizations, gapless excitations dominate the low-energy spin dynamics, arising from the magnetic-field-induced incommensurability. In contrast, spin dynamics at intermediate and high energies is characterized by the two- and three-string states, which are multi-particle excitations based on the commensurate N\'eel-ordered background. Our work is helpful for experimental studies of spin dynamics in both condensed matter and cold atom systems beyond the low-energy effective Luttinger liquid theory. Based on an intuitive physical picture, we speculate that the dynamic feature at high energies due to the multi-particle anti-bound-state excitations can be generalized to non-integrable spin systems. | condensed matter
We provide numerical evidence that the perturbative spectrum of anomalous dimensions in maximally supersymmetric SU(N) Yang-Mills theory is chaotic at finite values of N. We calculate the probability distribution of one-loop level spacings for subsectors of the theory and show that for large N it is given by the Poisson distribution of integrable models, while at finite values it is the Wigner-Dyson distribution of the Gaussian orthogonal ensemble random matrix theory. We extend these results to two-loop order and to a one-parameter family of deformations. We further study the spectral rigidity for these models and show that it is also well described by random matrix theory. Finally we demonstrate that the finite-N eigenvectors possess properties of chaotic states. | high energy physics theory |
Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions whose supports do not necessarily lie in the same metric space. However, this Optimal Transport (OT) distance requires solving a complex non-convex quadratic program which is most of the time very costly both in time and memory. Contrary to GW, the Wasserstein distance (W) enjoys several properties (e.g., duality) that permit large-scale optimization. Among those, the solution of W on the real line, which only requires sorting discrete samples in 1D, allows defining the Sliced Wasserstein (SW) distance. This paper proposes a new divergence based on GW akin to SW. We first derive a closed form for GW when dealing with 1D distributions, based on a new result for the related quadratic assignment problem. We then define a novel OT discrepancy that can deal with large-scale distributions via a slicing approach and we show how it relates to the GW distance while being $O(n\log(n))$ to compute. We illustrate the behavior of this so-called Sliced Gromov-Wasserstein (SGW) discrepancy in experiments where we demonstrate its ability to tackle similar problems as GW while being several orders of magnitude faster to compute. | statistics
Extensive electrical characterization of ring oscillators (ROs) made in high-$\kappa$ metal gate 28nm Fully-Depleted Silicon-on-Insulator (FD-SOI) technology is presented for a set of temperatures between 296 and 4.3K. First, delay per stage ($\tau_P$), static current ($I_{STAT}$), and dynamic current ($I_{DYN}$) are analyzed for the case of the increase of threshold voltage ($V_{TH}$) observed at low temperature. Then, the same analysis is performed by compensating $V_{TH}$ to a constant, temperature-independent value through forward body-biasing (FBB). Energy efficiency optimization is proposed for different supply voltages ($V_{DD}$) in order to find an optimal operating point combining both high RO frequencies and low power dissipation. We show that the Energy-Delay product ($EDP$) can be significantly reduced at low temperature by applying a forward body bias voltage ($V_{FBB}$). We demonstrate that outstanding performance of ROs in terms of speed ($\tau_P$=37ps) and static power (7nA/stage) can be achieved at 4.3K with $V_{DD}$ reduced down to 0.325V. | physics
Although the Weyl model with an unbounded linear energy spectrum appropriately describes low-energy electron states in a Weyl semimetal, it cannot capture the anomalous electromagnetic response of the chiral magnetic effect (CME) and anomalous Hall effect (AHE) in a straightforward manner. Here, we propose a regularized continuum model by modifying the Weyl model and show that it properly describes the CME and AHE in a unified manner. It turns out that the absence of the CME at equilibrium is guaranteed by a basic nature of the Berry curvature. We also show that the original Weyl model can properly describe the CME if an energy cutoff procedure is appropriately applied, although it fails to describe the AHE in its present form. | condensed matter |
We investigated magnetic textures in a Sc-doped hexaferrite film by means of phase microscopy (PM) with a hole-free phase plate in a transmission electron microscope. In a zero magnetic field, stripe-shaped magnetic domains coexist with magnetic bubbles. The magnetization in both types of magnetic domains was oriented perpendicular to the film, and the domain walls have an in-plane magnetization. In the remanent state at 9.2 mT, several magnetic bubbles were formed together with stripe-shaped magnetic domains, and the out-of-plane component in the stripe-shaped domains gradually appeared as the film thickness increased. As the film thickness increases further, the magnetic bubbles with clockwise or counter-clockwise spin helicities form a triangular lattice. These results in the remanent state suggest that the domain wall energy in the magnetic bubble domains is lower in the thicker region. | condensed matter
CP-violation in the Higgs sector remains a possible source of the baryon asymmetry of the universe. Recent differential measurements of signed angular distributions in Higgs boson production provide a general experimental probe of the CP structure of Higgs boson interactions. We interpret these measurements using the Standard Model Effective Field Theory and show that they do not distinguish the various CP-violating operators that couple the Higgs and gauge fields. However, the constraints can be sharpened by measuring additional CP-sensitive observables and exploiting phase-space-dependent effects. Using these observables, we demonstrate that perturbatively meaningful constraints on CP-violating operators can be obtained at the LHC with luminosities of ${\cal{O}}$(100/fb). Our results provide a roadmap to a global Higgs boson coupling analysis that includes CP-violating effects. | high energy physics phenomenology |
We consider possible effects of neutrino electric charge (millicharge) and charge radius on the neutrino-atom interaction processes such as (i) atomic ionization by neutrino impact and (ii) coherent elastic neutrino-nucleus scattering. The bounds on the neutrino millicharge and charge radius that follow from, respectively, the GEMMA and COHERENT experiments are presented and discussed. | high energy physics phenomenology |
In the heavy-fermion system $Yb_2Pd_2In_{1-x}Sn_x$, the interplay of crystal-field splitting, Kondo effect, and Ruderman-Kittel-Kasuya-Yosida interactions leads to complex chemical-, pressure-, and magnetic-field phase diagrams, still to be explored in full detail. By using a series of techniques, we show that even modest changes of parameters other than temperature are sufficient to induce multiple quantum-critical transitions in this highly susceptible heavy-fermion family. In particular, we show that, above $\sim 10$ kbar, hydrostatic pressure not only induces an antiferromagnetic phase at low temperature, but it likely leads to a reorientation of the Yb magnetic moments and/or the competition among different antiferromagnetic configurations. | condensed matter |
Several deep-learned lossy compression techniques have been proposed in the recent literature. Most of these are optimized by using either MS-SSIM (multi-scale structural similarity) or MSE (mean squared error) as a loss function. Unfortunately, neither of these correlates well with human perception, and this is clearly visible in the resulting compressed images. In several cases, the MS-SSIM for deep-learned techniques is higher than that of, say, a conventional, non-deep-learned codec such as JPEG-2000 or BPG. However, the images produced by these deep-learned techniques are in many cases clearly worse to human eyes than those produced by JPEG-2000 or BPG. We propose the use of an alternative, deep perceptual metric, which has been shown to align better with human perceptual similarity. We then propose Deep Perceptual Compression (DPC), which makes use of an encoder-decoder based image compression model to jointly optimize on the deep perceptual metric and MS-SSIM. Via extensive human evaluations, we show that the proposed method generates visually better results than previous learning-based compression methods and JPEG-2000, and is comparable to BPG. Furthermore, we demonstrate that for tasks like object detection, images compressed with DPC give better accuracy. | electrical engineering and systems science
Building a quantum computer is a daunting challenge since it requires good control but also good isolation from the environment to minimize decoherence. It is therefore important to realize quantum gates efficiently, using as few operations as possible, to reduce the amount of required control and operation time and thus improve the quantum state coherence. Here we propose a superconducting circuit for implementing a tunable system consisting of a qutrit coupled to two qubits. This system can efficiently accomplish various quantum information tasks, including generation of entanglement of the two qubits and conditional three-qubit quantum gates, such as the Toffoli and Fredkin gates. Furthermore, the system realizes a conditional geometric gate which may be used for holonomic (non-adiabatic) quantum computing. The efficiency, robustness and universality of the presented circuit makes it a promising candidate to serve as a building block for larger networks capable of performing involved quantum computational tasks. | quantum physics |
Inspired by the recent measurement of possible fully charmed tetraquarks by the LHCb Collaboration, we investigate the mass spectra of fully heavy tetraquarks $QQ \bar Q \bar Q$ in an extended relativized quark model. Our estimations indicate that the broad structure around 6.4 GeV should contain one or more ground-state $cc \bar c \bar c$ tetraquarks, while the narrow structure near 6.9 GeV can be categorized as the first radial excitation of the $cc \bar c \bar c$ system. Moreover, with the wave functions of the tetraquarks and mesons, the strong decays of the tetraquarks into heavy quarkonium pairs are qualitatively discussed, which can be further checked by the LHCb and CMS Collaborations. | high energy physics phenomenology
We discuss the emergence of a low-energy effective theory with quarks, mesons, diquarks and baryons at vanishing and finite baryon density from first principle QCD. The present work also includes an overview on diquarks at vanishing and finite density, and elucidates the physics of transitional changes from baryonic matter to quark matter including diquarks. This set-up is discussed within the functional renormalisation group approach with dynamical hadronisation. In this framework it is detailed how mesons, diquarks, and baryons emerge dynamically from the renormalisation flow of the QCD effective action. Moreover, the fundamental degrees of freedom of QCD, quarks and gluons, decouple from the dynamics of QCD below the respective mass gaps. The resulting global picture unifies the different low energy effective theories used for low and high densities within QCD, and allows for a determination of the respective low energy constants directly from QCD. | high energy physics phenomenology |
A deep objective-prism survey for H-alpha emission stars towards the Canis Major star-forming clouds was performed. A total of 398 H-alpha emitters were detected, 353 of which are new detections. There is a strong concentration of these H-alpha emitters towards the molecular clouds surrounding the CMa OB1 association, and it is likely that these stars are young stellar objects recently born in the clouds. An additional population of H-alpha emitters is scattered all across the region, and probably includes unrelated foreground dMe stars and background Be stars. About 90% of the H-alpha emitters are detected by WISE, of which 75% were detected with usable photometry. When plotted in a WISE colour-colour diagram it appears that the majority are Class II YSOs. Coordinates and finding charts are provided for all the new stars, and coordinates for all the detections. We searched the Gaia-DR2 catalogue and, from 334 H-alpha emission stars with useful parallaxes, we selected a subset of 98 stars that have parallax errors of less than 20% and nominal distances in the interval 1050 to 1350 pc that surrounds a strong peak at 1185 pc in the distance distribution. Similarly, Gaia distances were obtained for 51 OB stars located towards Canis Major, selected with the same parallax errors as the H-alpha stars. We find a median distance for the OB stars of 1182 pc, in excellent correspondence with the distance from the H-alpha stars. Two known runaway stars are confirmed as members of the association. Finally, two new Herbig-Haro objects are identified. | astrophysics
Background: Changes in choroidal thickness are associated with various ocular diseases, and the choroid can be imaged using spectral-domain optical coherence tomography (SDOCT) and enhanced depth imaging OCT (EDIOCT). New Method: Eighty macular SDOCT volumes from 80 patients were obtained using the Zeiss Cirrus machine. Eleven additional control subjects had two Cirrus scans done in one visit along with EDIOCT using the Heidelberg Spectralis machine. To automatically segment choroidal layers from the OCT volumes, our graph-theoretic approach was utilized. The segmentation results were compared with reference standards from two graders, and the accuracy of automated segmentation was calculated using unsigned to signed border positioning and thickness errors and the Dice similarity coefficient (DSC). The repeatability and reproducibility of our choroidal thicknesses were determined by the intraclass correlation coefficient (ICC), coefficient of variation (CV), and repeatability coefficient (RC). Results: The mean unsigned to signed border positioning errors for the choroidal inner and outer surfaces are 3.39 ± 1.26 microns (mean ± SD) to −1.52 ± 1.63 microns and 16.09 ± 6.21 microns to 4.73 ± 9.53 microns, respectively. The mean unsigned to signed choroidal thickness errors are 16.54 ± 6.47 microns to 6.25 ± 9.91 microns, and the mean DSC is 0.949 ± 0.025. The ICC (95% CI), CV, and RC values are 0.991 (0.977 to 0.997), 2.48%, and 3.15 microns for the repeatability study and 0.991 (0.977 to 0.997), 2.49%, and 0.53 microns for the reproducibility study, respectively. Comparison with Existing Method(s): The proposed method outperformed our previous method using choroidal vessel segmentation and inter-grader variability. Conclusions: This automated segmentation method can reliably measure choroidal thickness using different OCT platforms. | electrical engineering and systems science
Suppose that $X$ is a projective manifold whose tangent bundle $T_X$ contains a locally free strictly nef subsheaf. We prove that $X$ is isomorphic to a projective bundle over a hyperbolic manifold. Moreover, if the fundamental group $\pi_1(X)$ is virtually abelian, then $X$ is isomorphic to a projective space. | mathematics |
Mining clusters from datasets is an important endeavor in many applications. The $k$-means algorithm is a popular and efficient, distribution-free approach for clustering numerical-valued data, but does not apply for categorical-valued observations. The $k$-modes algorithm addresses this lacuna by replacing the Euclidean distance with the Hamming distance and the means with the modes in the $k$-means objective function. We provide a novel, computationally efficient implementation of $k$-modes, called OTQT. We prove that OTQT finds updates, undetectable to existing $k$-modes algorithms, that improve the objective function. Thus, although slightly slower per iteration owing to its algorithmic complexity, OTQT is always more accurate per iteration and almost always faster (and only barely slower on some datasets) to the final optimum. As a result, we recommend OTQT as the preferred, default algorithm for all $k$-modes implementations. We also examine five initialization methods and three types of $K$-selection methods, many of them novel or novel applications to $k$-modes. By examining performance on real and simulated datasets, we show that simple random initialization is the best initializer and that a novel $K$-selection method is more accurate than methods adapted from $k$-means. | statistics |
The design of isophoric phased arrays composed of two-sized square-shaped tiles that fully cover rectangular apertures is dealt with. The number and the positions of the tiles within the array aperture are optimized to fit desired specifications on the power pattern features. Toward this end, starting from the derivation of theoretical conditions for the complete tileability of the aperture, an ad hoc coding of the admissible arrangements, which implies a drastic reduction of the cardinality of the solution space, and their compact representation with a graph are exploited to profitably apply an effective optimizer based on an integer-coded genetic algorithm. A set of representative numerical examples, concerned with state-of-the-art benchmark problems, is reported and discussed to give some insights on the effectiveness of both the proposed tiled architectures and the synthesis strategy. | electrical engineering and systems science |
We have investigated the bulk and microscopic properties of the rhombohedral intermediate valence superconductor CeIr$_3$ by employing magnetization, heat capacity, and muon spin rotation and relaxation ($\mu$SR) measurements. The magnetic susceptibility indicates bulk superconductivity below $T_\mathrm{C} = 3.1$~K. Heat capacity data also reveal a bulk superconducting transition at $T_\mathrm{C} = 3.1$~K with a second weak anomaly near 1.6~K. At $T_{\mathrm{C}}$, the jump in heat capacity $\Delta C$/$\gamma T_{\mathrm{C}} \sim 1.39(1)$, is slightly less than the BCS weak coupling limit of 1.43. Transverse-field $\mu$SR measurements suggest a fully gapped, isotropic, $s$-wave superconductivity with 2$\Delta(0)/k_{\mathrm{B}}T_{\mathrm{C}} = 3.76(3)$, very close to 3.56, the BCS gap value for weak-coupling superconductors. From the temperature variation of magnetic penetration depth, we have also determined the London penetration depth $\lambda_{\mathrm{L}}(0) = 435(2)$~nm, the carriers' effective mass enhancement $m^{*} = 1.69(1)m_{\mathrm{e}}$ and the superconducting carrier density $n_{\mathrm{s}} = 2.5(1)\times 10^{26}$ carriers m$^{-3}$. The fact that LaIr$_3$, with no $4f$-electrons, and CeIr$_3$ with $4f^{n}$ electrons where $n \le 1$-electron (Ce ion in a valence fluctuating state), both exhibit the same $s$-wave gap symmetry indicates that the physics of these two compounds is governed by the Ir-$d$ band near the Fermi-level, which is in agreement with previous band structure calculations. | condensed matter |
First-principles density-functional calculations have been applied to study the interaction of molecular chlorine with the (111) plane of copper. Using the transition-state search method, we considered the elementary processes (Cl$_2$ dissociation, adsorption, diffusion, association and desorption) on the chlorinated Cu(111) surface. A systematic study of possible desorption pathways has been carried out for different species (Cl, Cl$_2$, CuCl, CuCl$_2$, and Cu) at various chlorine coverages. As a result, we conclude that the chlorine monolayer, irrespective of the coverage, desorbs in the form of CuCl molecules from step edges. | physics
Early detection of melanoma is difficult for the human eye but a crucial step towards reducing its death rate. Computerized detection of melanoma and other skin lesions is therefore necessary. The central research question in this paper is "How can skin lesion images be segmented using a neural network with little available data?". This question is divided into three sub-questions regarding the best-performing network structure, training data and training method. First, the theory associated with these questions is discussed. The literature states that U-net CNN structures have excellent performance on the segmentation task, that more training data increases network performance, and that utilizing transfer learning enables networks to generalize better to new data. To validate these findings in the literature, two experiments are conducted. The first experiment trains a network on data sets of different sizes. The second experiment proposes twelve network structures and trains them on the same data set. The experimental results support the findings in the literature. The FCN16 and FCN32 networks perform best on the accuracy, intersection over union and mean BF1 score metrics. We conclude from these results that the skin lesion segmentation network should be a fully convolutional structure with a skip architecture and an encoder depth of either one or two. Weights of this network should be initialized using transfer learning from the pre-trained VGG16 network. Training data should be cropped to reduce complexity and augmented during training to reduce the likelihood of overfitting. | electrical engineering and systems science
We introduce photon and gluon propagators in which the scalar polarization component is subtracted systematically by making use of the BRST invariance of the off-shell vector boson created from physical on-shell states. The propagator has the light-cone gauge form, where the spatial component of the gauge vector points along the negative of the off-shell vector boson momentum. We call this gauge the parton-shower gauge, since in collinear configurations the absolute value squared of each Feynman amplitude reproduces all the singular behaviors of the corresponding parton shower in this gauge. We introduce new HELAS codes that can be used to calculate the tree-level helicity amplitudes of arbitrary QED and QCD processes by using MadGraph. The absence of subtle gauge cancellation among Feynman amplitudes allows numerical codes to evaluate singular behaviors accurately, and helps us gain physical insight into interference patterns. | high energy physics phenomenology
Designing control systems with bounded input is a practical consideration, since realizable physical systems are limited by the saturation of actuators. Actuator saturation degrades the performance of the control system, and in extreme cases the stability of the closed-loop system may be lost. However, actuator saturation is typically neglected in the design of control systems, with compensation being made in the form of over-designing the actuator or by post-analyzing the resulting system to ensure acceptable performance. The bounded-input control of fully actuated systems has been investigated in multiple studies, but it has not been generalized to underactuated mechanical systems. This article proposes a systematic framework for finding the upper bound of the control effort in underactuated systems, based on the interconnection and damping assignment passivity-based control (IDA-PBC) approach. The proposed method also offers design variables for the control law to be tuned, considering the actuator's limit. The major difficulty in finding the control input upper bounds is the velocity-dependent kinetic-energy-related terms. Thus, the upper bound of the velocity is computed using a suitable Lyapunov candidate as a function of the closed-loop system parameters. The validity and application of the proposed method are investigated in detail through two benchmark systems. | electrical engineering and systems science
We have analysed the Ca-K images obtained at the Kodaikanal Observatory as a function of latitude and time for the period 1913 - 2004, covering Solar Cycles 15 to 23. We have classified the chromospheric activity into plage, Enhanced Network (EN), Active Network (AN), and Quiet Network (QN) areas to differentiate between large, strong active regions and small, weak active regions. The strong active regions represent the toroidal and the weak active regions the poloidal component of the magnetic field. We find that plage areas, mostly confined to within the 50 deg latitude belt, vary with the about 11-year Solar Cycle. We also find that the weak activity represented by EN, AN and QN varies with an about 11-year period, with significant amplitude up to about 50 deg latitude in both hemispheres. The amplitude of variation is minimum around 50 deg latitude and increases again by a small amount in the polar region. In addition, the plots of plage, EN, AN and QN areas as a function of time indicate that the maximum of activity at different latitudes occurs at different epochs. To determine the phase difference between the latitude belts, we have computed the cross-correlation coefficients of the other latitude belts with the 35 deg latitude belt. We find that activity shifts from mid-latitude belts towards equatorial belts at a fast speed at the beginning of a Solar Cycle and at a slower speed as the cycle progresses. The speed of the shift varies between approximately 19 and 3 m/s over the observed period. This speed can be linked with the speed of the meridional flows believed to occur between the convection zone and the surface of the Sun. | astrophysics
Well-established optimization-based methods can guarantee an optimal trajectory for a short optimization horizon, typically no longer than a few seconds. As a result, choosing the optimal trajectory for this short horizon may still result in a sub-optimal long-term solution. At the same time, the resulting short-term trajectories allow for effective, comfortable and provably safe maneuvers in a dynamic traffic environment. In this work, we address the question of how to ensure an optimal long-term driving strategy while keeping the benefits of classical trajectory planning. We introduce a Reinforcement Learning based approach that, coupled with a trajectory planner, learns an optimal long-term decision-making strategy for driving on highways. By generating locally optimal maneuvers online as actions, we balance between the infinite low-level continuous action space and the limited flexibility of a fixed number of predefined standard lane-change actions. We evaluated our method on realistic scenarios in the open-source traffic simulator SUMO and were able to achieve better performance than the four benchmark approaches we compared against, including a random-action-selecting agent, a greedy agent, a high-level discrete-action agent and an IDM-based SUMO-controlled agent. | computer science
In this paper we consider a Gaussian mixture model where the mixture weight behaves as an unknown function of time. To estimate the mixture weight function, we develop a Bayesian nonlinear dynamic approach for polynomial models. Two estimation methods that can be extended to other situations are considered. One of them, called here component-wise Metropolis-Hastings, is more general and can be used for any situation where the observation and state equations are nonlinearly connected. The other method tends to be faster but must be applied specifically to binary data (by using a probit link function). This kind of Gaussian mixture model is capable of successfully capturing the features of the data, as observed in numerical studies. It can be useful in studies such as clustering, change-point detection and process control. We apply the proposed method to an array Comparative Genomic Hybridization (aCGH) dataset from glioblastoma cancer studies, where we illustrate the ability of the new method to detect chromosome aberrations. | statistics
Motivated by the recent proposal of Bosonic Dirac materials (BDM), we revisited the Ising model on a honeycomb lattice in the presence of longitudinal and transverse fields. We apply linear spin-wave theory to obtain the magnon dispersion and its degenerate points. These special degenerate points emerge in the excitation spectrum as a function of the external fields and can be identified as Bosonic Dirac Points (BDP), since in the vicinity of these points the magnons become massless, with a linear energy spectrum, and insensitive to weak impurities, exactly as occurs at a fermionic Dirac point. We have also calculated the quantum and thermal fluctuations over the ground state of the system using Effective Field Theory. Our results point out that this simple model can host Bosonic Dirac points and is therefore a suitable prototype for building a Bosonic Dirac material controlled solely by an external field. | condensed matter
We propose a neural network model for MDG and optical SNR estimation in SDM transmission. We show that the proposed neural-network-based solution estimates MDG and SNR with high accuracy and low complexity from features extracted after DSP. | electrical engineering and systems science |
In this paper, we establish a strategy for the calculation of the proportion of everywhere locally soluble diagonal hypersurfaces of $\mathbb{P}^{n}$ of fixed degree. Our strategy is based on the product formula established by Bright, Browning and Loughran. Their formula reduces the problem into the calculation of the proportions of $\mathbb{Q}_{v}$-soluble diagonal hypersurfaces for all places $v$. As worked examples, we carry out our strategy in the cases of quadratic and cubic hypersurfaces. As a consequence, we prove that around $99.99\%$ of diagonal cubic $4$-folds have $\mathbb{Q}$-rational points under a hypothesis on the Brauer-Manin obstruction. | mathematics |
The growth of single wall carbon nanotubes (SWCNT) inside host SWCNTs remains a compelling alternative to the conventional catalyst-induced growth processes. It not only provides a catalyst-free process but also the ability to control the constituents of the inner tube if appropriate starting molecules are used. We report herein the growth of inner SWCNTs from $^{13}$C-labeled toluene and natural-carbon C$_{60}$. The latter molecule is essentially a stopper which acts to retain the smaller toluene. The Raman spectrum of the inner nanotubes is anomalous as it contains a highly isotope-shifted "tail", which cannot be explained by assuming a homogeneous distribution of the isotopes. Semi-empirical calculations of the Raman modes indicate that this unusual effect is explicable if small clusters of $^{13}$C are assumed. This indicates the absence of carbon diffusion during the inner tube growth. When combined with appropriate molecular recognition, this may enable a molecular engineering of the atomic and isotope composition of the inner tubes. | condensed matter
Work on retrieval-based chatbots, like most sequence pair matching tasks, can be divided into Cross-encoders that perform word matching over the pair, and Bi-encoders that encode the pair separately. The latter has better performance; however, since candidate responses cannot be encoded offline, it is also much slower. Lately, multi-layer transformer architectures pre-trained as language models have been used to great effect on a variety of natural language processing and information retrieval tasks. Recent work has shown that these language models can be used in text-matching scenarios to create Bi-encoders that perform almost as well as Cross-encoders while having a much faster inference speed. In this paper, we expand upon this work by developing a sequence matching architecture that utilizes the entire training set as a makeshift knowledge-base during inference. We perform detailed experiments demonstrating that this architecture can be used to further improve Bi-encoder performance while still maintaining a relatively high inference speed. | computer science
As machine learning becomes increasingly important in engineering and science, it is inevitable that machine learning techniques will be applied to the investigation of materials, and in particular the structural phase transitions common in ferroelectric materials. Here, we build and train an artificial neural network to accurately predict the energy change associated with atom displacements and use the trained artificial neural network in Monte-Carlo simulations on ferroelectric materials to investigate their phase transitions. We apply this approach to two-dimensional monolayer SnTe and show that it can indeed be used to simulate the phase transitions and predict the transition temperature. The artificial neural network, when viewed as a universal mathematical structure, can be readily transferred to the investigation of other ferroelectric materials when training data generated with ab initio methods are available. | condensed matter |
Encoder layer fusion (EncoderFusion) is a technique to fuse all the encoder layers (instead of only the uppermost layer) for sequence-to-sequence (Seq2Seq) models, which has proven effective on various NLP tasks. However, it is still not entirely clear why and when EncoderFusion should work. In this paper, our main contribution is to take a step further in understanding EncoderFusion. Many previous studies believe that the success of EncoderFusion comes from exploiting surface and syntactic information embedded in the lower encoder layers. Unlike them, we find that the encoder embedding layer is more important than the other intermediate encoder layers. In addition, the uppermost decoder layer consistently pays more attention to the encoder embedding layer across NLP tasks. Based on this observation, we propose a simple fusion method, SurfaceFusion, that fuses only the encoder embedding layer into the softmax layer. Experimental results show that SurfaceFusion outperforms EncoderFusion on several NLP benchmarks, including machine translation, text summarization, and grammatical error correction. It obtains state-of-the-art performance on the WMT16 Romanian-English and WMT14 English-French translation tasks. Extensive analyses reveal that SurfaceFusion learns more expressive bilingual word embeddings by building a closer relationship between relevant source and target embeddings. Source code is freely available at https://github.com/SunbowLiu/SurfaceFusion. | computer science |
The giant gamma-ray flares of the Crab nebula discovered by the AGILE and Fermi observatories came as a surprise and have challenged the existing models of pulsar wind nebulae. We have carried out an analysis of 10.5 years of Fermi-LAT observations (Aug 2008 -- Feb 2019) and investigated the variability of the Crab nebula in the 100-300 MeV range. Besides the flares, we found several month-long depressions of the gamma-ray flux and identified several cases of sharp flux drops, in which the flux decreased within one week by an order of magnitude with respect to its average value. No statistically significant variations of the nebula flux in the $E>$10 GeV range were found in the data. We discuss possible implications of the observed gamma-ray flux depressions for the model of synchrotron emission of the Crab nebula. | astrophysics |
The software-defined networking (SDN) paradigm centralizes control decisions to improve programmability and simplify network management. However, this centralization leaves the network vulnerable to denial of service (DoS) attacks, and in the case of resource-constrained networks, the vulnerabilities escalate. The main shortcoming of current security solutions is the trade-off between detection rate and complexity. In this work, we propose a DoS attack detection algorithm for SDN resource-constrained networks, based on recent results in non-parametric real-time change point detection, and lightweight enough to run on individual resource-constrained devices. Our experimental results show detection rates and attacker identification probabilities at or above 0.93. | electrical engineering and systems science |
The quantum theory of multiphoton stimulated bremsstrahlung of charged carriers on an arbitrary electrostatic potential of an impurity ion in doped bilayer graphene in the presence of coherent electromagnetic radiation is developed. The terahertz wave field is treated exactly, while the electrostatic potential of the dopant ions is treated as a perturbation. An essentially nonlinear response of bilayer graphene to the pump wave and significant differences from the case of single layer graphene are shown, which can be attributed to the nonlinear parabolic dispersion. The latter opens a new way to manipulate the electronic transport properties of the conduction electrons of bilayer graphene by a coherent radiation field of terahertz or near-infrared frequencies. | physics |
We study the problem of maximizing the minimal value over the sphere $S^{d-1}\subset \mathbb R^d$ of the potential generated by a configuration of $d+1$ points on $S^{d-1}$ (the maximal discrete polarization problem). The points interact via the potential given by a function $f$ of the Euclidean distance squared, where $f:[0,4]\to (-\infty,\infty]$ is continuous (in the extended sense) and decreasing on $[0,4]$ and finite and convex on $(0,4]$ with a concave or convex derivative $f'$. We prove that the configuration of the vertices of a regular $d$-simplex inscribed in $S^{d-1}$ is optimal. This result is new for $d>3$ (certain special cases for $d=2$ and $d=3$ are also new). As a byproduct, we find a simpler proof for the known optimal covering property of the vertices of a regular $d$-simplex inscribed in $S^{d-1}$. | mathematics |
A significant challenge in the development of control systems for diesel airpath applications is to tune the controller parameters to achieve satisfactory output performance, especially whilst adhering to input and safety constraints in the presence of unknown system disturbances. Model-based control techniques, such as model predictive control (MPC), have been successfully applied to multivariable and highly nonlinear systems, such as diesel engines, while considering operational constraints. However, efficient calibration of typical implementations of MPC is hindered by the high number of tuning parameters and their non-intuitive correlation with the output response. In this paper, the number of effective tuning parameters is reduced through suitable structural modifications to the controller formulation and an appropriate redesign of the MPC cost function to aid rapid calibration. Furthermore, a constraint-tightening approach is incorporated into the control architecture to provide robustness guarantees in the face of uncertainties. A switched linear time-varying MPC strategy with recursive feasibility guarantees during controller switching is proposed to handle transient operation of the engine. The robust controller is first implemented in a high-fidelity simulation environment, with a comprehensive investigation of its calibration to achieve the desired transient response under step changes in the fuelling rate. An experimental study then validates and highlights the performance of the proposed controller architecture for selected tunings of the calibration parameters for fuelling steps and over drive cycles. | computer science |
Bloch oscillations appear when an electric field is superimposed on a quantum particle that evolves on a lattice with a tight-binding Hamiltonian (TBH), i.e., evolves via what we will call an electric TBH; this phenomenon will be referred to as TBH Bloch oscillations. A similar phenomenon is known to show up in so-called electric discrete-time quantum walks (DQWs); this phenomenon will be referred to as DQW Bloch oscillations. This similarity is particularly salient when the electric field of the DQW is weak. For a wide, i.e., spatially extended initial condition, one numerically observes semi-classical oscillations, i.e., oscillations of a localized particle, both for the electric TBH and the electric DQW. More precisely: The numerical simulations strongly suggest that the semi-classical DQW Bloch oscillations correspond to two counter-propagating semi-classical TBH Bloch oscillations. In this work it is shown that, under certain assumptions, the solution of the electric DQW for a weak electric field and a wide initial condition is well approximated by the superposition of two continuous-time expressions, which are counter-propagating solutions of an electric TBH whose hopping amplitude is the cosine of the arbitrary coin-operator mixing angle. In contrast, if one wishes the continuous-time approximation to hold for spatially localized initial conditions, one needs at least the DQW to be lazy, as suggested by numerical simulations and by the fact that this has been proven in the case of a vanishing electric field. | quantum physics |
We show that there exist absolute constants $\Delta > \delta > 0$ such that, for all $n \geqslant 2$, there exists a polynomial $P$ of degree $n$, with $\pm 1$ coefficients, such that $$\delta\sqrt{n} \leqslant |P(z)| \leqslant \Delta\sqrt{n}$$ for all $z\in\mathbb{C}$ with $|z|=1$. This confirms a conjecture of Littlewood from 1966. | mathematics |