text | label |
---|---|
Dark matter (DM) which sufficiently heats a local region in a white dwarf will trigger runaway fusion, igniting a type Ia supernova (SN). In a companion paper, this instability was used to constrain DM heavier than $10^{16}$ GeV which ignites SN through the violent interaction of one or two individual DM particles with the stellar medium. Here we study the ignition of supernovae by the formation and self-gravitational collapse of a DM core containing many DM particles. For non-annihilating DM, such a core collapse may lead to a mini black hole that can ignite SN through the emission of Hawking radiation, or possibly as a by-product of accretion. For annihilating DM, core collapse leads to an increasing annihilation rate and can ignite SN through a large number of rapid annihilations. These processes extend the previously derived constraints on DM to masses as low as $10^{5}$ GeV. | high energy physics phenomenology |
Frequency-weighted model order reduction techniques are used to find a lower-order approximation of a high-order system that exhibits high fidelity within the frequency region emphasized by the frequency weights. In this paper, we investigate the frequency-weighted H2-pseudo-optimal model order reduction problem, wherein one attempts to satisfy a subset of the optimality conditions for a local optimum. We propose two iteration-free algorithms for the single-sided frequency-weighted case of H2-model reduction, in which a subset of the optimality conditions is ensured by the reduced system. In addition, the reduced systems retain the stability property of the original system. We also present an iterative algorithm for the double-sided frequency-weighted case, which constructs a reduced-order model that tends to satisfy a subset of the first-order optimality conditions for the local optimum. The proposed algorithm is computationally efficient compared to existing algorithms. We validate the theory developed in this paper on three numerical examples. | electrical engineering and systems science |
A key difficulty that arises from real event data is imprecision in the recording of event time-stamps. In many cases, retaining event times with a high precision is expensive due to the sheer volume of activity. Combined with practical limits on the accuracy of measurements, aggregated data is common. In order to use point processes to model such event data, tools for handling parameter estimation are essential. Here we consider parameter estimation of the Hawkes process, a type of self-exciting point process that has found application in the modeling of financial stock markets, earthquakes and social media cascades. We develop a novel optimization approach to parameter estimation of aggregated Hawkes processes using a Monte Carlo Expectation-Maximization (MC-EM) algorithm. Through a detailed simulation study, we demonstrate that existing methods are capable of producing severely biased and highly variable parameter estimates and that our novel MC-EM method significantly outperforms them in all studied circumstances. These results highlight the importance of correct handling of aggregated data. | statistics |
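To make the aggregation issue concrete, the sketch below simulates a Hawkes process with an exponential kernel via Ogata thinning and then discards the exact time-stamps by binning, producing exactly the kind of aggregated data the abstract describes. The kernel form and all parameter values are illustrative assumptions; the paper's MC-EM estimator itself is not reproduced here.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * beta * exp(-beta * (t - t_i))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # Between events the intensity only decays, so its current value
        # is a valid upper bound for the thinning step.
        lam_bar = mu + alpha * beta * np.sum(np.exp(-beta * (t - np.asarray(events))))
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return np.asarray(events)
        lam_t = mu + alpha * beta * np.sum(np.exp(-beta * (t - np.asarray(events))))
        if rng.uniform() < lam_t / lam_bar:
            events.append(t)

# Aggregation: only counts per interval survive, not the time-stamps.
events = simulate_hawkes(mu=0.5, alpha=0.6, beta=1.2, T=1000.0)
counts, _ = np.histogram(events, bins=np.arange(0.0, 1001.0, 1.0))
```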
We investigate the transport properties and the entanglement between spin and position of one-dimensional quantum walks starting from a qubit over position states following delta-like (local state) and Gaussian (delocalized state) distributions. We find that if the initial state is sufficiently delocalized and a NOT gate reflects this state backwards, then the interference pattern extinguishes the position dispersion without preventing the propagation of the state. This effect allows the creation of a Trojan wave packet, a non-spreading and non-stationary double-peak quantum state. | quantum physics |
We compute conformal correlation functions with spinor, tensor, and spinor-tensor primary fields in general dimensions with Euclidean and Lorentzian metrics. The spinors are taken to be Dirac spinors, which exist in any dimension. For this, the embedding space formalism is employed and polarisation spinors are introduced to simplify the computations. Three-point functions are rewritten in terms of differential operators acting on scalar-scalar-tensor correlation functions. This enables us to determine the conformal blocks for four-point functions with scalar and spinor fields by acting with the differential operators on scalar conformal blocks, which will be useful in finding their geodesic Witten diagrams. | high energy physics theory |
We perform three-dimensional (3D) general-relativistic magnetohydrodynamic simulations to model the jet break-out from the ejecta expected to be produced in a binary neutron-star merger. The structure of the relativistic outflow from the 3D simulation confirms our previous results from 2D simulations, namely, that a relativistic magnetized outflow breaking out from the merger ejecta exhibits a hollow core of $\theta_{\rm core}\approx4^{\circ}$, an opening angle of $\theta_{\rm jet}\gtrsim10^{\circ}$, and is accompanied by a wind of ejected matter that will contribute to the kilonova emission. We also compute the non-thermal afterglow emission of the relativistic outflow and fit it to the panchromatic afterglow from GRB170817A, together with the superluminal motion reported from VLBI observations. In this way, we deduce an observer angle of $\theta_{\rm obs}= 35.7^{\circ \,\,+1.8}_{\phantom{\circ \,\,}-2.2}$. We further compute the afterglow emission from the ejected matter and constrain the parameter space for a scenario in which the matter responsible for the thermal kilonova emission will also lead to a non-thermal emission yet to be observed. | astrophysics |
We prove that the Koebe circle domain conjecture is equivalent to the Weyl-type problem that every complete hyperbolic surface of genus zero is isometric to the boundary of the hyperbolic convex hull of the complement of a circle domain. This provides a new way to approach the Koebe conjecture using convex geometry. Combining our result with the work of He-Schramm on the Koebe conjecture, one establishes that every simply connected non-compact polyhedral surface is discrete conformal to the complex plane or the open unit disk. The main tool we use is Schramm's transboundary extremal length. | mathematics |
Various techniques have been developed to extract urban agglomerations from big datasets. Urban agglomerations are used to understand the structure and growth of cities, but the major challenge is to extract them from big data that reflect human activities. An urban cluster refers to spatially clustered geographic events, such as human settlements or activities, and provides a powerful and innovative way to analyze the structure and growth of the real city. In order to understand the shape and growth of urban agglomerations in Switzerland from spatial and temporal perspectives, this work identifies urban clusters from nighttime light data and street network data. Nighttime light data record lights emitted from human settlements at night on the earth's surface; this work uses DMSP-OLS nighttime light data to extract urban clusters from 1992 to 2013. The street network is one of the most important reflections of human activity, so urban clusters are also extracted from street network data to understand the structure of cities. Both of these data have heavy-tailed distributions, which include power laws as well as lognormal and exponential distributions. Head/tail breaks is a classification method for finding the hierarchy of data with a heavy-tailed distribution; this work uses it to extract the urban clusters of Switzerland. Finally, a power-law distribution of the urban clusters was detected at the country level. | statistics |
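Head/tail breaks is simple enough to state in a few lines: split the data at its mean, keep the head as the next hierarchy level, and recurse while the head remains a minority. The sketch below is a generic implementation under the common 40% head-fraction convention; it is not the authors' code, and the sample data are synthetic.

```python
import numpy as np

def head_tail_breaks(values, head_frac_limit=0.4):
    """Jiang's head/tail breaks: recursively split a heavy-tailed
    distribution at its mean; each head forms the next hierarchy level."""
    values = np.asarray(values, dtype=float)
    breaks = []
    while values.size > 1:
        m = values.mean()
        head = values[values > m]
        # Stop when the head is no longer a small minority of the data.
        if head.size == 0 or head.size / values.size > head_frac_limit:
            break
        breaks.append(m)
        values = head
    return breaks

# Cluster sizes drawn from a rough power law, as in the urban-cluster setting.
sizes = np.random.default_rng(0).pareto(1.5, 10_000) + 1.0
print(head_tail_breaks(sizes))  # class boundaries = the cluster hierarchy
```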
Motivated by its potential use in constraining the structure of 6D renormalization group flows, we determine the low energy dilaton-axion effective field theory of conformal and global symmetry breaking in 6D conformal field theories (CFTs). While our analysis is largely independent of supersymmetry, we also investigate the case of 6D superconformal field theories (SCFTs), where we use the effective action to present a streamlined proof of the 6D a-theorem for tensor branch flows, as well as to constrain properties of Higgs branch and mixed branch flows. An analysis of Higgs branch flows in some examples leads us to conjecture that in 6D SCFTs, an interacting dilaton effective theory may be possible even when certain 4-dilaton 4-derivative interaction terms vanish, because of large momentum modifications to 4-point dilaton scattering amplitudes. This possibility is due to the fact that in all known $D > 4$ CFTs, the approach to a conformal fixed point involves effective strings which are becoming tensionless. | high energy physics theory |
The recent outbreak of COVID-19 shocked humanity, leading to the death of millions of people worldwide. To stave off the spread of the virus, the authorities in the US employed different strategies, including the mask mandate (MM) orders issued by state governors. Although most previous studies suggested that MM can be effective in hindering the spread of viral infections, the effectiveness of MM in reducing the degree of exposure to the virus and, consequently, death rates remains indeterminate. Indeed, the extent to which the degree of exposure to COVID-19 contributes to the lethality of the virus remains unclear. In the current work, we defined a parameter called the average death ratio as the monthly average of the ratio of the number of daily deaths to the total number of daily cases. We utilized survey data provided by the New York Times to quantify people's abidance by the MM order. Additionally, we implicitly addressed the extent to which people abide by the MM order, which may depend on parameters such as population, income, and political inclination. Using different machine learning classification algorithms, we investigated how the decrease or increase in the death ratio for counties on the US West Coast correlates with the input parameters. Our results showed a promising score as high as 0.94 with algorithms like XGBoost, Random Forest, and Naive Bayes. To verify the model, the best-performing algorithms were then utilized to analyze other states (Arizona, New Jersey, New York and Texas) as test cases. The findings show an acceptable trend, further confirming the usability of the chosen features for prediction of similar cases. | statistics |
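A minimal sketch of the pipeline's shape, on synthetic data: build county-level features, label each record by whether its average death ratio decreased, and cross-validate a classifier. All column names, coefficients, and the data itself are placeholders of our own, not the paper's.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500  # hypothetical county-month records
X = pd.DataFrame({
    "mask_abidance":  rng.uniform(0, 1, n),        # survey-derived, as in the paper
    "population":     rng.lognormal(11, 1, n),
    "median_income":  rng.normal(60_000, 15_000, n),
    "political_lean": rng.uniform(-1, 1, n),
})
# Synthetic label: did the monthly average death ratio (deaths/cases) decrease?
latent = -0.5 * X["mask_abidance"] + 0.1 * rng.standard_normal(n)
y = (latent < -0.25).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # the paper reports scores up to 0.94
```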
This research focuses on identifying high-redshift galaxies from LABOCA (LArge APEX BOlometer CAmera) and SPIRE (Spectral and Photometric Imaging Receiver) maps towards proto-cluster candidates initially selected from the SPT (South Pole Telescope) survey. Based on a multi-Gaussian fitting algorithm, we cross-match all significant LABOCA sources at SPIRE wavelengths using their coordinates and signal-to-noise ratios to derive their photometry at 250, 350, 500 and 870 $\mu m$. We use this information to calculate photometric redshifts for SPT sources towards cluster fields. The code was developed in the Python programming environment. | astrophysics |
From an operational criterion of physical reality, a quantifier of realism-based nonlocality was recently introduced for bipartite quantum states. This measure has been shown to capture aspects that are rather different from Bell nonlocality. Here we take a step further and introduce a tripartite realism-based nonlocality quantifier. We show that this measure reduces to genuine tripartite entanglement for a certain class of pure tripartite states and manifests itself in correlated mixed states even in the absence of quantum correlations. A case study for noisy GHZ and W states points out the existence of scenarios where the realism-based nonlocality is monogamous. | quantum physics |
Searching for new physics in large data sets requires a balance between two competing effects---signal identification vs. background distortion. In this work, we perform a systematic study of both single-variable and multivariate jet tagging methods that aim for this balance. The methods preserve the shape of the background distribution by augmenting either the training procedure or the data itself. Multiple quantitative metrics for comparing the methods are considered, for tagging 2-, 3-, or 4-prong jets against the QCD background. This is the first study to show that the data augmentation techniques of Planing and PCA-based scaling deliver performance similar to the augmented training techniques of Adversarial NN and uBoost, while being both easier to implement and computationally cheaper. | high energy physics phenomenology |
We discuss the rotational cooling of diatomic molecules in a Bose-Einstein condensate (BEC) of ultra-cold atoms by the emission of phonons with orbital angular momentum. Despite the superfluidity of the BEC, there is no frictionless rotation for typical molecules, since the dominant cooling occurs via the emission of particle-like phonons. Only for macro-dimers, whose size becomes comparable to or larger than the condensate healing length, does a Landau-like critical angular momentum exist below which phonon emission is suppressed. We find that the rotational relaxation of typical molecules is in general faster than the cooling of the linear motion of impurities in a BEC. This also leads to a finite lifetime of angulons, quasi-particles of rotating molecules coupled to phonons with orbital angular momentum. We analyze the dynamics of rotational cooling for homo-nuclear diatomic molecules based on a quantum Boltzmann equation including single- and two-phonon scattering, and discuss the effect of thermal phonons. | quantum physics |
We have trained a fully convolutional spatio-temporal model for fast and accurate representation learning in the challenging exemplar application area of fusion energy plasma science. The onset of major disruptions is a critically important fusion energy science (FES) issue that must be resolved for advanced tokamaks. While a variety of statistical methods have been used to address the problem of tokamak disruption prediction and control, recent approaches based on deep learning have proven particularly compelling. In the present paper, we introduce further improvements to the fusion recurrent neural network (FRNN) software suite. Up to now, FRNN was based on the long short-term memory (LSTM) variant of recurrent neural networks to leverage the temporal information in the data. Here, we implement and apply the temporal convolutional neural network (TCN) architecture to the time-dependent input signals, thus rendering the FRNN architecture fully convolutional. This allows highly optimized convolution operations to carry the majority of the computational load of training, thus enabling a reduction in training time and the effective use of high performance computing (HPC) resources for hyperparameter tuning. At the same time, the TCN-based architecture achieves equal or better predictive performance when compared with the LSTM architecture for a large, representative fusion database. Across data-rich scientific disciplines, these results have implications for the resource-effective training of general spatio-temporal feature extractors based on deep learning. Moreover, this challenging exemplar case study illustrates the advantages of a predictive platform with flexible architecture selection options, capable of being readily tuned and adapted to prediction needs that increasingly arise in large modern observational datasets. | physics |
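The architectural ingredient that makes such a model fully convolutional is the dilated causal convolution. Below is a generic PyTorch sketch of a TCN residual block; the channel widths, kernel size, and the number of diagnostic input signals are illustrative, not FRNN's actual configuration.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """Dilated 1D convolution whose output at time t depends on inputs at <= t."""
    def __init__(self, c_in, c_out, kernel_size, dilation):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, dilation=dilation)

    def forward(self, x):                              # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.left_pad, 0)))

class TCNBlock(nn.Module):
    def __init__(self, c_in, c_out, kernel_size=3, dilation=1):
        super().__init__()
        self.net = nn.Sequential(
            CausalConv1d(c_in, c_out, kernel_size, dilation), nn.ReLU(),
            CausalConv1d(c_out, c_out, kernel_size, dilation), nn.ReLU(),
        )
        self.skip = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return self.net(x) + self.skip(x)              # residual connection

# Dilations 1, 2, 4, 8 grow the receptive field exponentially with depth.
tcn = nn.Sequential(*[TCNBlock(14 if i == 0 else 32, 32, dilation=2**i)
                      for i in range(4)])
out = tcn(torch.randn(8, 14, 128))                     # 14 signals, 128 time steps
```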
Diagnosis of high impedance faults (HIFs) is a challenge for present-day distribution network protection. The fault current of a HIF is much lower than that of a normal load, and the fault features are significantly affected by fault scenarios. A detection and feeder identification algorithm for HIFs is proposed in this paper, based on high-resolution and synchronous waveform data. In the algorithm, an interval slope is defined to describe the waveform distortions, which guarantees a uniform feature description under various HIF nonlinearities and noise interferences. For three typical types of network neutrals, i.e., isolated neutral, resonant neutral, and low-resistor-earthed neutral, the differences of the distorted components between the zero-sequence currents of healthy and faulty feeders are mathematically deduced, respectively. As a result, the proposed criterion, which is based on the distortion relationships between the zero-sequence currents of feeders and the zero-sequence voltage at the substation, is theoretically supported. 28 HIFs grounded to various materials were tested in a 10 kV distribution network with three neutral types and are utilized to verify the effectiveness of the proposed algorithm. | electrical engineering and systems science |
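The abstract does not spell out the interval-slope definition, so the sketch below shows one plausible reading under explicit assumptions: a least-squares slope of the sampled zero-sequence current computed over consecutive short intervals, whose pattern then serves as a distortion feature. The exact definition in the paper may differ.

```python
import numpy as np

def interval_slopes(i_0, fs, interval_ms=1.0):
    """Least-squares slope of a sampled waveform over consecutive intervals.
    One plausible reading of the paper's 'interval slope' feature; the exact
    definition in the paper may differ."""
    n = max(2, int(fs * interval_ms / 1000.0))
    segments = i_0[: i_0.size // n * n].reshape(-1, n)
    t = np.arange(n) / fs
    t_c = t - t.mean()                 # centering makes the slope a dot product
    return segments @ t_c / np.sum(t_c**2)

fs = 10_000.0                          # 10 kHz sampling, illustrative
t = np.arange(0.0, 0.1, 1 / fs)
i_0 = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sign(np.sin(2 * np.pi * 50 * t))
print(interval_slopes(i_0, fs)[:5])    # distorted waveform -> irregular slopes
```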
The analyticity properties of the scattering amplitude in the nonforward direction are investigated for a field theory in the manifold $\mathbb{R}^{3,1}\times S^1$. A scalar field theory of mass $m_0$ is considered in $D = 5$ Minkowski space to start with. Subsequently, one spatial dimension is compactified to a circle. The mass spectrum of the resulting theory is: (a) a massive scalar of mass $m_0$, the same as in the original five-dimensional theory, and (b) a tower of massive Kaluza-Klein states. We derive nonforward dispersion relations for the scattering of the excited Kaluza-Klein states in the Lehmann-Symanzik-Zimmermann formulation of the theory. In order to accomplish this objective, we first generalize the Jost-Lehmann-Dyson theorem to a relativistic field theory with a compact spatial dimension. Next, we show the existence of the Lehmann-Martin ellipse inside which the partial wave expansion converges. It is proved that the scattering amplitude satisfies fixed-$t$ dispersion relations when $|t|$ lies within the Lehmann-Martin ellipse. | high energy physics theory |
Smuggling of special nuclear materials (SNM) and nuclear devices through borders and ports of entry constitutes a major risk to global security. Technologies are needed to reliably screen the flow of commerce for the presence of high-$Z$ materials such as uranium and plutonium. Here we present an experimental proof-of-concept of a technique which uses inelastic ($p,p'$) nuclear reactions to generate monoenergetic photons, which provide a means to measure the areal density and the effective $Z$ ($Z_{\text{eff}}$) of an object with an accuracy which surpasses that achieved by current methods. We use an ION-12$^{\text{SC}}$ superconducting 12~MeV proton cyclotron to produce 4.4, 6.1, 6.9, and 7.1~MeV photons from a variety of nuclear reactions. Using these photons in a transmission mode, we show that we are able to accurately reconstruct the areal densities and $Z_{\text{eff}}$ of a test object. This methodology could enable mobile applications to screen commercial cargoes with high material specificity, providing a means of distinguishing common cargo materials from high-$Z$ materials such as uranium and plutonium. | physics |
The factorization form of the integrands in the Cachazo-He-Yuan (CHY) formalism makes the generalized Kawai-Lewellen-Tye (KLT) relations manifest, so that the amplitudes of one theory can be expanded in terms of the amplitudes of another theory. Although this claim seems a rather natural consequence of the above structure, finding the exact expansion coefficients to express an amplitude in terms of other amplitudes is nonetheless a nontrivial task, despite the many efforts devoted to it in the literature. In this paper, we propose a new strategy based on the differential operators introduced by Cheung, Shen and Wen, taking advantage of the fact that these operators already relate the amplitudes of different theories. Using this new method, expansion coefficients can be found effectively. | high energy physics theory |
Let $\operatorname{Con}(\mathbf T)\!\restriction\!x$ denote the finite consistency statement "there are no proofs of contradiction in $\mathbf T$ with $\leq x$ symbols". For a large class of natural theories $\mathbf T$, Pudl\'ak has shown that the lengths of the shortest proofs of $\operatorname{Con}(\mathbf T)\!\restriction\!n$ in the theory $\mathbf T$ itself are bounded by a polynomial in $n$. At the same time he conjectures that $\mathbf T$ does not have polynomial proofs of the finite consistency statements $\operatorname{Con}(\mathbf T+\operatorname{Con}(\mathbf T))\!\restriction\!n$. In contrast we show that Peano arithmetic ($\mathbf{PA}$) has polynomial proofs of $\operatorname{Con}(\mathbf{PA}+\operatorname{Con}^*(\mathbf{PA}))\!\restriction\!n$, where $\operatorname{Con}^*(\mathbf{PA})$ is the slow consistency statement for Peano arithmetic, introduced by S.-D. Friedman, Rathjen and Weiermann. We also obtain a new proof of the result that the usual consistency statement $\operatorname{Con}(\mathbf{PA})$ is equivalent to $\varepsilon_0$ iterations of slow consistency. Our argument is proof-theoretic, while previous investigations of slow consistency relied on non-standard models of arithmetic. | mathematics |
As we rely on machine learning (ML) models to make more consequential decisions, the issue of ML models perpetuating or even exacerbating undesirable historical biases (e.g., gender and racial biases) has come to the fore of the public's attention. In this paper, we focus on the problem of detecting violations of individual fairness in ML models. We formalize the problem as measuring the susceptibility of ML models against a form of adversarial attack and develop a suite of inference tools for the adversarial cost function. The tools allow auditors to assess the individual fairness of ML models in a statistically-principled way: form confidence intervals for the worst-case performance differential between similar individuals and test hypotheses of model fairness with (asymptotic) non-coverage/Type I error rate control. We demonstrate the utility of our tools in a real-world case study. | statistics |
Supermassive primordial stars in hot, atomically-cooling haloes at $z \sim$ 15 - 20 may have given birth to the first quasars in the universe. Most simulations of these rapidly accreting stars suggest that they are red, cool hypergiants, but more recent models indicate that some may have been bluer and hotter, with surface temperatures of 20,000 - 40,000 K. These stars have spectral features that are quite distinct from those of cooler stars and may have different detection limits in the near infrared (NIR) today. Here, we present spectra and AB magnitudes for hot, blue supermassive primordial stars calculated with the TLUSTY and CLOUDY codes. We find that photometric detections of these stars by the James Webb Space Telescope (JWST) will be limited to $z \lesssim$ 10 - 12, lower redshifts than those at which red stars can be found, because of quenching by their accretion envelopes. With moderate gravitational lensing, Euclid and the Wide-Field Infrared Space Telescope (WFIRST) could detect blue supermassive stars out to similar redshifts in wide-field surveys. | astrophysics |
The 2020 Personalized Voice Trigger Challenge (PVTC2020) addresses two different research problems in a unified setup: joint wake-up word detection with speaker verification on close-talking single-microphone data and on far-field multi-channel microphone array data. Specifically, the second task poses an additional cross-channel matching challenge on top of the far-field condition. To simulate the real-life application scenario, the enrollment utterances are recorded from a close-talking cell-phone only, while the test utterances are recorded from both the close-talking cell-phone and the far-field microphone arrays. This paper introduces our challenge setup and the released database as well as the evaluation metrics. In addition, we present a joint end-to-end neural network baseline system trained with the proposed database for speaker-dependent wake-up word detection. Results show that the cost, calculated from the miss rate and the false alarm rate, can reach 0.37 in the close-talking single-microphone task and 0.31 in the far-field microphone array task. The official website and the open-source baseline system have been released. | electrical engineering and systems science |
A class of two-dimensional sigma models interpolating between $CP^1$ and the $SU(2)$ principal chiral model is discussed. We add the Wess-Zumino-Novikov-Witten term and examine the renormalization group flow of the two coupling constants which characterize the model under consideration. The model flows to the $SU(2)$ WZNW conformal field theory in the IR. There is an ordinary phase in which the model flows from the asymptotically free $CP^1$ model coupled to an extra massless degree of freedom in the UV. At higher loop order we discover that there is also a phase in which the model can flow from non-trivial fixed points in the UV. A non-perturbative confirmation of these extra fixed points would be desirable. | high energy physics theory |
This paper addresses the problem of selecting an optimal sampling set for signals on graphs. The proposed sampling set selection (SSS) is based on a localization operator that can consider both vertex domain and spectral domain localization. We clarify the relationships among the proposed method, sensor position selection methods in machine learning, and conventional SSS methods based on graph frequency. In contrast to the conventional graph signal processing-based approaches, the proposed method does not need to compute the eigendecomposition of a variation operator, while still considering (graph) frequency information. We evaluate the performance of our approach through comparisons of prediction errors and execution time. | electrical engineering and systems science |
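To convey the flavor of eigendecomposition-free selection, here is a simplified greedy stand-in: score vertices with a polynomial localization operator built from the graph Laplacian (a matrix polynomial, so no eigenvectors are needed) and deflate after each pick. The operator choice and scoring rule are our own assumptions, not the paper's exact SSS method.

```python
import numpy as np

def greedy_sampling_set(L, num_samples, k=6, alpha=0.9):
    """Simplified greedy SSS scored by a polynomial localization operator
    T = (I - alpha/lmax_bound * L)^k -- a matrix polynomial in L, so no
    eigendecomposition is required. Illustrative, not the paper's method."""
    n = L.shape[0]
    lmax_bound = 2.0 * np.max(np.diag(L))        # cheap bound for a Laplacian
    T = np.linalg.matrix_power(np.eye(n) - (alpha / lmax_bound) * L, k)
    selected, residual = [], T.copy()
    for _ in range(num_samples):
        v = int(np.argmax(np.linalg.norm(residual, axis=0)))
        selected.append(v)
        atom = residual[:, v] / (np.linalg.norm(residual[:, v]) + 1e-12)
        residual -= np.outer(atom, atom @ residual)  # deflate chosen direction
    return selected

# Path graph on 20 vertices: selected vertices spread out along the path.
A = np.diag(np.ones(19), 1); A = A + A.T
L = np.diag(A.sum(1)) - A
print(greedy_sampling_set(L, num_samples=4))
```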
Aspect-based sentiment analysis of review texts is of great value for understanding user feedback in a fine-grained manner. It has in general two sub-tasks: (i) extracting aspects from each review, and (ii) classifying aspect-based reviews by sentiment polarity. In this paper, we propose a weakly-supervised approach for aspect-based sentiment analysis, which uses only a few keywords describing each aspect/sentiment without using any labeled examples. Existing methods are either designed only for one of the sub-tasks, neglecting the benefit of coupling both, or are based on topic models that may contain overlapping concepts. We propose to first learn <sentiment, aspect> joint topic embeddings in the word embedding space by imposing regularizations to encourage topic distinctiveness, and then use neural models to generalize the word-level discriminative information by pre-training the classifiers with embedding-based predictions and self-training them on unlabeled data. Our comprehensive performance analysis shows that our method generates quality joint topics and outperforms the baselines significantly (7.4% and 5.1% F1-score gain on average for aspect and sentiment classification respectively) on benchmark datasets. Our code and data are available at https://github.com/teapot123/JASen. | computer science |
Color-flavor locked (CFL) quark matter expels color-magnetic fields due to the Meissner effect. One of these fields carries an admixture of the ordinary abelian magnetic field and therefore flux tubes may form if CFL matter is exposed to a magnetic field, possibly in the interior of neutron stars or in quark stars. We employ a Ginzburg-Landau approach for three massless quark flavors, which takes into account the multi-component nature of color superconductivity. Based on the weak-coupling expressions for the Ginzburg-Landau parameters, we identify the regime where CFL is a type-II color superconductor and compute the radial profiles of different color-magnetic flux tubes. Among the configurations without baryon circulation we find a new solution that is energetically preferred over the flux tubes previously discussed in the literature in the parameter regime relevant for compact stars. Within the same setup, we also find a new defect in the 2SC phase, namely magnetic domain walls, which emerge naturally from the previously studied flux tubes if a more general ansatz for the order parameter is used. Color-magnetic defects in the interior of compact stars allow for sustained deformations of the star, potentially strong enough to produce detectable gravitational waves. | high energy physics phenomenology |
We study various aspects of the mass-deformed SYK model, which allows escape from the interiors of pure boundary-state black holes. SYK boundary states are given by a simple local boundary condition on the Majorana fermions and are then evolved in Euclidean time with the SYK Hamiltonian. We study the ground state of this mass-deformed SYK model in detail. We also use SYK boundary states as a variational approximation to the ground state of the mass-deformed SYK model. We compare the variational approximation with the exact ground-state results and find good agreement. We also study the time evolution of the mass-deformed ground state under the SYK Hamiltonian. We give a gravity interpretation of the mass-deformed ground state and its time evolution. On the gravity side, the mass deformation gives a way to prepare black hole microstates that are similar to pure boundary-state black holes. The escaping protocol applied to these ground states simply gives global AdS2 with an IR end-of-the-world brane. We also study the thermodynamics and quantum chaotic properties of this mass-deformed SYK model. Interestingly, we do not observe a Hawking-Page-like phase transition in this model, in spite of the similarity of its Hamiltonian with that of the eternal traversable wormhole model, where the phase transition does occur. | high energy physics theory |
Using the space-time analogy, we compare the performance of quantum temporal imaging with its classical counterpart. We consider a temporal imaging scheme, based on the sum-frequency generation (SFG) time lens, but our results can be applied to other temporal imaging schemes such as, for instance, four-wave mixing. Extending the theory presented in our previous publications, in this paper we take into account the finite time aperture of the imaging system, characterized by its pupil function. Using the quantum theory, we obtain a unitary transformation of the quantum field from the input to the output of the imaging scheme and identify the contribution of the vacuum fluctuations missing in the classical theory. This contribution plays a key role in the quantum temporal imaging of nonclassical temporal waveforms, characterized by nonclassical fluctuations of the electromagnetic field. As an example, we consider quantum temporal imaging of broadband squeezed light and formulate the criteria for conservation of its squeezing properties at the output of the system. | quantum physics |
We propose a quantum algorithm to solve systems of nonlinear differential equations. Using a quantum feature map encoding, we define functions as expectation values of parametrized quantum circuits. We use automatic differentiation to represent function derivatives in an analytical form as differentiable quantum circuits (DQCs), thus avoiding inaccurate finite difference procedures for calculating gradients. We describe a hybrid quantum-classical workflow where DQCs are trained to satisfy differential equations and specified boundary conditions. As a particular example setting, we show how this approach can implement a spectral method for solving differential equations in a high-dimensional feature space. From a technical perspective, we design a Chebyshev quantum feature map that offers a powerful basis set of fitting polynomials and possesses rich expressivity. We simulate the algorithm to solve an instance of Navier-Stokes equations, and compute density, temperature and velocity profiles for the fluid flow in a convergent-divergent nozzle. | quantum physics |
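A purely classical sketch of the spectral idea takes a few lines: expand a trial solution in Chebyshev polynomials and minimize the ODE residual at collocation nodes. In the paper the trial function is instead the expectation value of a parametrized quantum circuit, differentiated analytically; the toy ODE, degree, and boundary weighting below are our own choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import least_squares

# Toy problem: f'(x) = -k f(x), f(0) = 1 on [-1, 1]; exact solution exp(-k x).
k, deg = 2.0, 12
x = np.cos(np.pi * np.arange(deg + 1) / deg)          # Chebyshev collocation nodes

def f_and_df(coeffs, pts):
    return C.chebval(pts, coeffs), C.chebval(pts, C.chebder(coeffs))

def residual(coeffs):
    f, df = f_and_df(coeffs, x)
    bc = f_and_df(coeffs, np.array([0.0]))[0] - 1.0   # boundary condition
    return np.concatenate([df + k * f, 10.0 * bc])    # ODE residual + weighted BC

sol = least_squares(residual, np.zeros(deg + 1))
f_fit, _ = f_and_df(sol.x, x)
print(np.max(np.abs(f_fit - np.exp(-k * x))))          # small for modest degrees
```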
In the present work we provide a constructive method to describe contact structures on compact homogeneous contact manifolds. The main feature of our approach is to describe the Cartan-Ehresmann connection (gauge field) for principal circle bundles over complex flag manifolds by using elements of representation theory of simple Lie algebras. This description allows us to compute explicitly the expression of the contact form for any Boothby-Wang fibration over complex flag manifolds as well as their induced homogeneous Sasaki-Einstein structures. As an application of our results we use the Cartan-Remmert reduction and the Calabi ansatz technique to provide many explicit examples of crepant resolutions of Calabi-Yau cones with certain homogeneous Sasaki-Einstein manifolds realized as links of isolated hypersurface singularities. | mathematics |
We study the evolution of the configuration entropy of the HI distribution in the post-reionization era, assuming different time evolutions of the HI bias. We describe the time evolution of the linear bias of the HI distribution using the simple form $b(a)=b_{0} a^{n}$ with different indices $n$. The derivative of the configuration entropy rate is known to exhibit a peak at the scale factor corresponding to the $\Lambda$-matter equality in the unbiased $\Lambda$CDM model. We show that in the $\Lambda$CDM model with time-dependent linear bias, the peak shifts to smaller scale factors for negative values of $n$. This is related to the fact that the growth of structures in the HI density field can slow down significantly even before the onset of $\Lambda$ domination in the presence of a strong time evolution of the HI bias. We find that the shift is linearly related to the index $n$. We obtain the best-fit relation between these two parameters and propose that identifying the location of this peak from observations would allow us to constrain the time evolution of the HI bias within the framework of the $\Lambda$CDM model. | astrophysics |
We investigate finite stochastic partial monitoring, which is a general model for sequential learning with limited feedback. While Thompson sampling is one of the most promising algorithms for a variety of online decision-making problems, its properties for stochastic partial monitoring have not been theoretically investigated, and the existing algorithm relies on a heuristic approximation of the posterior distribution. To mitigate these problems, we present a novel Thompson-sampling-based algorithm, which enables us to sample the target parameter exactly from the posterior distribution. In addition, we prove that the new algorithm achieves the logarithmic problem-dependent expected pseudo-regret $\mathrm{O}(\log T)$ for a linearized variant of the problem with local observability. This result is the first regret bound for Thompson sampling for partial monitoring, and it also yields the first logarithmic regret bound for Thompson sampling for linear bandits. | statistics |
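For readers unfamiliar with the primitive being adapted, here is Thompson sampling for a plain Bernoulli bandit, where the conjugate posterior can be sampled exactly, just as the paper's algorithm samples exactly from its posterior. The partial-monitoring feedback structure itself is not modeled in this sketch, and the arm means are illustrative.

```python
import numpy as np

# Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors.
rng = np.random.default_rng(0)
true_means = np.array([0.4, 0.5, 0.6])
a_post, b_post = np.ones(3), np.ones(3)

for t in range(10_000):
    theta = rng.beta(a_post, b_post)            # one posterior sample per arm
    arm = int(np.argmax(theta))                 # act greedily w.r.t. the sample
    reward = rng.uniform() < true_means[arm]
    a_post[arm] += reward                       # conjugate posterior update
    b_post[arm] += 1 - reward

print(a_post + b_post - 2)                      # pull counts concentrate on arm 2
```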
Humans are remarkably flexible when understanding new sentences that include combinations of concepts they have never encountered before. Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limitations in the language-understanding abilities of networks. We demonstrate that these limitations can be overcome by addressing the generalization challenges in a recently-released dataset, gSCAN, which explicitly measures how well an agent is able to interpret novel ideas grounded in vision, e.g., novel pairings of adjectives and nouns. The key principle we employ is compositionality: that the compositional structure of networks should reflect the compositional structure of the problem domain they address, while allowing other parameters and properties to be learned end-to-end with weak supervision. We build a general-purpose mechanism that enables robots to generalize their language understanding to compositional domains. Crucially, our network has the same state-of-the-art performance as prior work while at the same time generalizing its knowledge when prior work does not. Our network also provides a level of interpretability that enables users to inspect what each part of networks learns. Robust language understanding without dramatic failures and without corner cases is critical to building safe and fair robots; we demonstrate the significant role that compositionality can play in achieving that goal. | computer science |
We analyse the $3+1$D equilibrium chiral magnetic effect (CME). We apply a derivative expansion to the Wigner transform of the two-point Green function. This technique allows us to express the response of the electric current to an external electromagnetic field strength through a momentum-space topological invariant. We consider a wide class of lattice regularizations of quantum field theory (including, in particular, the regularization with Wilson fermions) as well as certain lattice models of solid state physics (including those of Dirac semimetals). It appears that in these models the mentioned topological invariant vanishes identically at nonzero chiral chemical potential. This means that the bulk equilibrium CME is absent in those systems. | high energy physics phenomenology |
Hyperuniform states of matter are characterized by an anomalous suppression of long-wavelength density fluctuations. While most interesting cases of disordered hyperuniformity are provided by complex many-body systems such as liquids or amorphous solids, classical spin chains with certain long-range interactions have been shown to demonstrate the same phenomenon. It is well known that the transverse-field Ising model shows a quantum phase transition (QPT) at zero temperature. Under the quantum effects of a transverse magnetic field, classical hyperuniform spin chains are expected to lose their hyperuniformity. High-precision simulations of these cases are complicated because of the presence of highly nontrivial long-range interactions. We perform an extensive analysis of these systems using the density matrix renormalization group to study the possibility of phase transitions and the mechanism by which hyperuniformity is lost. We discover first-order QPTs in the hyperuniform spin chains. An interesting feature of the phase transitions in these disordered hyperuniform spin chains is that, depending on the parameter values, the presence of a transverse magnetic field may remarkably lead to an increase in the order of the ground state as measured by the "$\tau$ order metric," even if hyperuniformity is lost. Therefore, it would be possible to design materials to target specific novel quantum behaviors in the presence of a transverse magnetic field. Our numerical investigations suggest that these spin chains can show no more than two QPTs. We further analyze the long-range interacting spin chains via the Jordan-Wigner mapping, showing that under the pairwise interacting approximation and a mean-field treatment, there can be at most two QPTs. Based on these numerical and theoretical explorations, we conjecture that these spin chains can show a maximum of two QPTs at zero temperature. | quantum physics |
Code switching is a linguistic phenomenon that may occur within a multilingual setting where speakers share more than one language. With increasing communication between groups with different languages, this phenomenon is becoming more and more common. However, there is little research and data in this area, especially for code-mixing sentiment classification. In this work, domain transfer learning from the state-of-the-art monolingual model ERNIE is tested on a code-mixing dataset, and surprisingly, a strong baseline is achieved. Furthermore, adversarial training with a multilingual model is used to achieve first place in the SemEval-2020 Task 9 Hindi-English sentiment classification competition. | computer science |
Thermal infrared cameras are increasingly being used in various applications such as robot vision, industrial inspection and medical imaging, thanks to their improved resolution and portability. However, the performance of traditional computer vision techniques developed for electro-optical imagery does not directly translate to the thermal domain due to two major reasons: these algorithms require photometric assumptions to hold, and methods for photometric calibration of RGB cameras cannot be applied to thermal-infrared cameras due to difference in data acquisition and sensor phenomenology. In this paper, we take a step in this direction, and introduce a novel algorithm for online photometric calibration of thermal-infrared cameras. Our proposed method does not require any specific driver/hardware support and hence can be applied to any commercial off-the-shelf thermal IR camera. We present this in the context of visual odometry and SLAM algorithms, and demonstrate the efficacy of our proposed system through extensive experiments for both standard benchmark datasets, and real-world field tests with a thermal-infrared camera in natural outdoor environments. | electrical engineering and systems science |
We demonstrate within the quantum field theoretical framework that an asymptotic particle falling into the black hole implants soft graviton hair on the horizon, conforming with the classical proposal of Hawking, Perry and Strominger. A key ingredient to this result is the construction of gravitational Wilson line dressings of an infalling scalar field, carrying a definite horizon supertranslation charge. It is shown that a typical Schwarzschild state is degenerate, and can be labeled by different soft supertranslation hairs parametrized for radial trajectories by the mass and energy of the infalling particle and its asymptotic point of contact with the horizon. The supertranslation zero modes are also obtained in terms of zero-frequency graviton operators, and are shown to be the expected canonical partners of the linearized horizon charge that enlarge the horizon Hilbert space. | high energy physics theory |
We revisit the N=6 superconformal Chern-Simons-matter theories and their supergravity duals in the context of generalized symmetries. This allows us to finally clarify how the $SU(N)\times SU(N)$ and $(SU(N)\times SU(N))/\mathbb{Z}_N$ theories, as well as other quotient theories that have recently been discussed, fit into the holographic framework. It also resolves a long standing puzzle regarding the di-baryon operator in the $U(N)\times U(N)$ theory. | high energy physics theory |
Manipulating spin currents in magnetic insulators is a key technology in spintronics. We theoretically study a simple inversion-asymmetric model of quantum antiferromagnets, where both the exchange interaction and the magnetic field are staggered. We calculate spin currents generated by external electric and magnetic fields by using a quantum master equation. We show that an ac electric field with amplitude $E_0$ leads, through exchange-interaction modulation, to the dc and second-harmonic spin currents proportional to $E_0^2$. We also show that dc and ac staggered magnetic fields $B_0$ generate the dc and ac spin currents proportional to $B_0$, respectively. We elucidate the mechanism by an exactly solvable model, and thereby propose the ways of spin current manipulation by electromagnetic fields. | condensed matter |
Accurate local fiber orientation distribution (FOD) modeling based on diffusion magnetic resonance imaging (dMRI) capable of resolving complex fiber configurations benefits from specific acquisition protocols that sample a high number of gradient directions (b-vecs), a high maximum b-value (b-val), and multiple b-values (multi-shell). However, acquisition time is limited in a clinical setting, and commercial scanners may not provide such dMRI sequences. Therefore, dMRI is often acquired as single-shell (single b-value). In this work, we learn improved FODs for commercially acquired MRI. We evaluate patch-based 3D convolutional neural networks (CNNs) on their ability to regress multi-shell FOD representations from single-shell representations, where the representation is a spherical harmonics series obtained from constrained spherical deconvolution (CSD) to model FODs. We evaluate U-Net and HighResNet 3D CNN architectures on data from the Human Connectome Project and an in-house dataset. We evaluate how well each CNN model can resolve the local fiber orientation 1) when training and testing on datasets with the same dMRI acquisition protocol; 2) when testing on a dataset with a different dMRI acquisition protocol than that used to train the CNN models; and 3) when testing on a dataset with fewer gradient directions than used to train the CNN models. Our approach may enable robust CSD model estimation on single-shell dMRI acquisition protocols with few gradient directions, reducing acquisition times and facilitating the translation of improved FOD estimation to time-limited clinical environments. | electrical engineering and systems science |
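Since the regression target is a per-voxel vector of spherical harmonic (SH) coefficients, a minimal patch-based model is simply a stack of 3D convolutions mapping one SH representation to another. The channel widths and the 45-coefficient order-8 even SH basis below are illustrative assumptions; the paper evaluates U-Net and HighResNet architectures, not this toy network.

```python
import torch
import torch.nn as nn

class FODRegressor(nn.Module):
    """Minimal patch-wise 3D CNN regressing multi-shell SH coefficients
    from single-shell ones (45 = size of an order-8 even SH basis)."""
    def __init__(self, n_sh_in=45, n_sh_out=45, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_sh_in, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, n_sh_out, 1),          # per-voxel linear head
        )

    def forward(self, x):          # x: (batch, n_sh_in, D, H, W) patches
        return self.net(x)

model = FODRegressor()
patch = torch.randn(2, 45, 16, 16, 16)
loss = nn.functional.mse_loss(model(patch), torch.randn(2, 45, 16, 16, 16))
```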
Future cellular systems will make use of millimeter wave (mmWave) frequency bands. Many users in these bands are located indoors, i.e., inside buildings, homes, and offices. Typical building material attenuations in these high frequency ranges are of interest for link budget calculations. In this paper, we report on a collaborative measurement campaign to find the attenuation of several typical building materials in three potential mmWave bands (28, 73, 91 GHz). Using directional antennas, we took multiple measurements at multiple locations using narrow-band and wide-band signals, and averaged out residual small-scale fading effects. Materials include clear glass, drywall (plasterboard), plywood, acoustic ceiling tile, and cinder blocks. Specific attenuations range from approximately 0.5 dB/cm for ceiling tile at 28 GHz to approximately 19 dB/cm for clear glass at 91 GHz. | electrical engineering and systems science |
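The specific attenuation reported follows from a simple transmission comparison: the extra path loss introduced by the material, normalized by its thickness, as sketched below with illustrative numbers (not the campaign's measured values).

```python
import numpy as np

def specific_attenuation_db_per_cm(p_free_dbm, p_material_dbm, thickness_cm):
    """Extra loss caused by the material in the path, per cm of thickness."""
    return (p_free_dbm - p_material_dbm) / thickness_cm

# Illustrative: a 0.3 cm pane costing 5.7 dB gives ~19 dB/cm, the order of
# magnitude the paper reports for clear glass at 91 GHz.
print(specific_attenuation_db_per_cm(-40.0, -45.7, 0.3))
```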
Electric-field noise due to surfaces disturbs the motion of nearby trapped ions, compromising the fidelity of gate operations that are the basis for quantum computing algorithms. We present a method that predicts the effect of dielectric materials on the ion's motion. Such dielectrics are integral components of ion traps. Quantitative agreement is found between a model with no free parameters and measurements of a trapped ion in proximity to dielectric mirrors. We expect that this approach can be used to optimize the design of ion-trap-based quantum computers and network nodes. | quantum physics |
Hexagonal FeN monolayers on Cu(001) show a stripe pattern composed of dark and bright stripes, as observed recently by scanning tunneling microscopy (STM). Here a harmonic mesh model is proposed to interpret those results. It uses a planar Fourier analysis with two different wave vectors to calculate the small displacements of Fe atoms caused by lattice-mismatch stress. The area of displaced Fe atom triangles is used as a measure for the protrusion of the N atoms at their center. Proper selection of the wave vectors allows one to visualize the dominant protrusions of $(\sqrt{3}\times\sqrt{3})$-ordered nitrogen atoms. The wave vectors selected are related to Fe configurations which minimize lattice-mismatch stress. The model is kept simple, so just the most basic features of the STM experiments are covered. The limits of the model are discussed and ways to extend it are proposed. | condensed matter |
The muon identification system of the ALICE experiment at the CERN LHC is based on Resistive Plate Chamber (RPC) detectors. These RPCs are operated in the so-called maxi-avalanche mode with a gas mixture made of tetrafluoroethane (C$_{2}$H$_{2}$F$_{4}$), sulfur hexafluoride (SF$_{6}$) and isobutane (i-C$_{4}$H$_{10}$). All of these components are greenhouse gases: in particular, the first two are already being phased out of production due to recent European Union regulations, and their cost is progressively increasing. Therefore, finding a new eco-friendly gas mixture has become extremely important in order to reduce the impact of RPC operation on the environment, and for economic reasons. Due to their similar chemical structure, hydrofluoroolefins appear to be appropriate candidates to replace C$_{2}$H$_{2}$F$_{4}$ thanks to their very low global warming potentials (GWPs), especially tetrafluoropropene (C$_{3}$H$_{2}$F$_{4}$), with the trade name HFO1234ze. In order to identify an eco-friendly gas mixture fulfilling the requirements for operation in the ALICE environment in the coming years, a dedicated experimental set-up has been built to carry out R&D studies on promising gas mixtures. Measurements have been performed with a small-size RPC equipped with the front-end electronics, providing signal amplification, developed for ALICE operation at high luminosity after the LHC Long Shutdown 2. HFO1234ze-based mixtures with the addition of CO$_{2}$ are discussed in this paper, as well as the role of i-C$_{4}$H$_{10}$ and SF$_{6}$ as quenchers in such mixtures. | physics |
Several decades of observations of the most massive and most luminous stars have revealed a complex upper HR Diagram, shaped by mass loss, and inhabited by a variety of evolved stars exhibiting the consequences of their mass loss histories. This review presents a brief historical overview of the HR Diagram for massive stars, highlighting some of the primary discoveries and results from their observation in nearby galaxies. The chapters in this volume include reviews of our current understanding of different groups of evolved massive stars, all losing mass and in different stages of their evolution; the Luminous Blue Variables (LBVs), B[e] supergiants, the warm hypergiants, Wolf-Rayet stars, and the population of OB stars and supergiants in the Magellanic Clouds. | astrophysics |
Most modeling approaches lie in either of the two categories: physics-based or data-driven. Recently, a third approach which is a combination of these deterministic and statistical models is emerging for scientific applications. To leverage these developments, our aim in this perspective paper is centered around exploring numerous principle concepts to address the challenges of (i) trustworthiness and generalizability in developing data-driven models to shed light on understanding the fundamental trade-offs in their accuracy and efficiency, and (ii) seamless integration of interface learning and multifidelity coupling approaches that transfer and represent information between different entities, particularly when different scales are governed by different physics, each operating on a different level of abstraction. Addressing these challenges could enable the revolution of digital twin technologies for scientific and engineering applications. | physics |
Pulsar glitches offer an insight into the dynamics of superfluids in the high density interior of a neutron star. To model these phenomena, however, one needs to have an understanding of the dynamics of a turbulent array of superfluid vortices moving through a pinning lattice. In this paper we develop a theoretical approach to describe vortex mediated mutual friction in a pinned, turbulent and rotating superfluid. Our model is then applied to the study of the post glitch rotational evolution in the Vela pulsar and in PSR J0537-6910. We show that in both cases a turbulent model fits the evolution of the spin frequency derivative better than a laminar one. We also predict that the second derivative of the frequency after a glitch should be correlated with the waiting time since the previous glitch, which we find to be consistent with observational data for these pulsars. The main conclusion of this paper is that in the post-glitch rotational evolution of these two pulsars we are most likely observing the response to the glitch of a pinned turbulent region of the star (possibly the crust) and not the laminar response of a regular straight vortex array. | astrophysics |
Model selection in the large-P small-N scenario is discussed in the framework of two-stage models. Two specific models are considered, namely, two-stage least squares (TSLS) involving instrumental variables (IVs), and mediation models. In both cases, the number of putative variables (e.g. instruments or mediators) is large, but only a small subset should be included in the two-stage model. We use two variable selection methods which are designed for high-dimensional settings, and compare their performance in terms of their ability to find the true IVs or mediators. Our approach is demonstrated via simulations and case studies. | statistics |
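As background for the first model class, a bare-bones TSLS estimator takes just a few lines; the paper's contribution, selecting the small relevant subset of instruments in a high-dimensional first stage, is the step deliberately omitted here. Data and coefficients are synthetic.

```python
import numpy as np

def tsls(y, x, Z):
    """Two-stage least squares: regress the endogenous regressor x on the
    instruments Z (stage 1), then regress y on the fitted x_hat (stage 2)."""
    Z1 = np.column_stack([np.ones(len(Z)), Z])
    x_hat = Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]   # first stage
    X1 = np.column_stack([np.ones(len(y)), x_hat])
    return np.linalg.lstsq(X1, y, rcond=None)[0]          # [intercept, slope]

rng = np.random.default_rng(0)
n, p = 2000, 50                          # many candidate instruments, few relevant
Z = rng.standard_normal((n, p))
u = rng.standard_normal(n)               # unobserved confounder
x = Z[:, 0] + Z[:, 1] + u + rng.standard_normal(n)       # only 2 true IVs
y = 2.0 * x + u + rng.standard_normal(n)
print(tsls(y, x, Z[:, :2]))              # slope near 2 when the true IVs are used
```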
Knowledge graphs suffer from sparsity which degrades the quality of representations generated by various methods. While there is an abundance of textual information throughout the web and many existing knowledge bases, aligning information across these diverse data sources remains a challenge in the literature. Previous work has partially addressed this issue by enriching knowledge graph entities based on "hard" co-occurrence of words present in the entities of the knowledge graphs and external text, while we achieve "soft" augmentation by proposing a knowledge graph enrichment and embedding framework named Edge. Given an original knowledge graph, we first generate a rich but noisy augmented graph using external texts in semantic and structural level. To distill the relevant knowledge and suppress the introduced noise, we design a graph alignment term in a shared embedding space between the original graph and augmented graph. To enhance the embedding learning on the augmented graph, we further regularize the locality relationship of target entity based on negative sampling. Experimental results on four benchmark datasets demonstrate the robustness and effectiveness of Edge in link prediction and node classification. | computer science |
In this work, we propose a hybrid Bayesian approach towards clock offset and skew estimation, thereby synchronizing large scale networks. In particular, we demonstrate the advantage of Bayesian Recursive Filtering (BRF) in alleviating time-stamping errors for pairwise synchronization. Moreover, we indicate the benefit of Factor Graph (FG), along with Belief Propagation (BP) algorithm in achieving high precision end-to-end network synchronization. Finally, we reveal the merit of hybrid synchronization, where a large-scale network is divided into local synchronization domains, for each of which a suitable synchronization algorithm (BP- or BRF-based) is utilized. The simulation results show that, despite the simplifications in the hybrid approach, the Root Mean Square Errors (RMSEs) of clock offset and skew estimation remain below 5 ns and 0.3 ppm, respectively. | electrical engineering and systems science |
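To illustrate the BRF component for a single pair of nodes, a linear Kalman filter over the state [offset, skew] suffices when each time-stamp exchange yields a noisy offset observation. The measurement model and noise covariances below are illustrative assumptions; the paper's full scheme adds the FG/BP stage across the network on top of this.

```python
import numpy as np

# State: x = [clock offset (s), clock skew (dimensionless)].
dt = 1.0                                   # time between exchanges (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # offset drifts by skew * dt
H = np.array([[1.0, 0.0]])                 # exchange observes the offset only
Q = np.diag([1e-18, 1e-20])                # process noise (illustrative)
R = np.array([[1e-15]])                    # time-stamping noise (illustrative)

def brf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                        # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()                    # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
true_offset, true_skew = 5e-6, 2e-7                      # 5 us, 0.2 ppm
for step in range(1, 200):
    z = np.array([true_offset + true_skew * step * dt
                  + rng.normal(0.0, R[0, 0] ** 0.5)])
    x, P = brf_step(x, P, z)
print(x)   # converges towards the current offset and the skew
```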
Hotspots are a ubiquitous phenomenon in microdevices/chips. In a homogeneous nanoscale graphene disk with a hotspot, a graded thermal conductivity was observed previously even when the system size is fixed. However, the underlying physical mechanism is not clear. In this work, hotspots in homogeneous 2D disks/3D balls and graphene disks are studied based on the phonon Boltzmann transport equation. The mechanisms of phonon scattering are analyzed. It is found that for a system with fixed size, the graded thermal conductivity is predictable as long as there is not sufficient phonon scattering, independently of material properties, dimensionality or system size. This work may shed light on both theoretical and experimental studies of heat dissipation in microelectronics. | condensed matter |
The paper [Ras15a] introduced distribution-valued games. This game-theoretic model uses probability distributions as payoffs for games in order to express uncertainty about the payoffs. The player's preferences for different payoffs are expressed by a stochastic order which we call the tail order. This thesis formalizes distribution-valued games with preferences expressed by general stochastic orders, and specifically analyzes properties of the tail order. It identifies sufficient conditions for tail-order preference to hold, but also finds that some claims in [Ras15a] about the tail order are incorrect, for which counter-examples are constructed. In particular, it is demonstrated that a proof for the totality of the order on a certain set of distributions contains an error; the thesis proceeds to show that the ordering is not total on the slightly less restricted set of distributions with non-negative bounded support. It is also shown that not all tail-ordered games have mixed-strategy Nash equilibria, and in fact almost all tail-ordered games with finitely-supported payoff distributions can only have a Nash equilibrium if they have a pure-strategy Nash equilibrium. The thesis subsequently extends an idea from [AM19] and proposes a new solution concept for distribution-valued games. This concept is based on constructing multi-objective real-valued games from distribution-valued games by segmenting their payoff distributions. | mathematics |
5G millimeter wave (mmWave) signals can be used to jointly localize the receiver and map the propagation environment in vehicular networks, which is a typical simultaneous localization and mapping (SLAM) problem. Mapping the environment is challenging, as measurements comprise both specular and diffuse multipath components, and the diffuse multipath is usually considered a perturbation. Here we propose a novel method to utilize all available multipath signals from each landmark for mapping, and incorporate this into a Poisson multi-Bernoulli mixture for the 5G SLAM problem. Simulation results demonstrate the efficacy of the proposed scheme. | electrical engineering and systems science |
Motivated by the important applications of the Gabor transform in time-frequency analysis and signal analysis, in this paper we consider the Gabor quaternion Fourier transform (GQFT), and we prove a version of the Benedicks-type uncertainty principle for the GQFT as well as some local concentration uncertainty principles. | mathematics |
Variational Bayesian inference is an important machine-learning tool that finds application from statistics to robotics. The goal is to find an approximate probability density function (PDF) from a chosen family that is in some sense `closest' to the full Bayesian posterior. Closeness is typically defined through the selection of an appropriate loss functional such as the Kullback-Leibler (KL) divergence. In this paper, we explore a new formulation of variational inference by exploiting the fact that the set of PDFs constitutes a Bayesian Hilbert space under careful definitions of vector addition, scalar multiplication and an inner product. We show that variational inference based on KL divergence then amounts to an iterative projection of the Bayesian posterior onto a subspace corresponding to the selected approximation family. In fact, the inner product chosen for the Bayesian Hilbert space suggests the definition of a new measure of the information contained in a PDF and in turn a new divergence is introduced. Each step in the iterative projection is equivalent to a local minimization of this divergence. We present an example Bayesian subspace based on exponentiated Hermite polynomials as well as work through the details of this general framework for the specific case of the multivariate Gaussian approximation family and show the equivalence to another Gaussian variational inference approach. We furthermore discuss the implications for systems that exhibit sparsity, which is handled naturally in Bayesian space. | computer science |
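For concreteness, one standard set of Bayes-Hilbert-space operations from the compositional-data literature is sketched below (perturbation as vector addition, powering as scalar multiplication, and an inner product via the centred log-ratio). The abstract does not spell out the paper's definitions, so treat these as an assumed, illustrative choice for a finite reference measure $\mu$ on $\mathcal X$.

```latex
% Perturbation (vector addition) and powering (scalar multiplication),
% each defined up to normalization of the resulting density:
\[
(p \oplus q)(x) \propto p(x)\,q(x), \qquad (\alpha \odot p)(x) \propto p(x)^{\alpha}.
\]
% Inner product through the centred log-ratio (clr) transform:
\[
\langle p, q \rangle = \int_{\mathcal X} \operatorname{clr}(p)\,\operatorname{clr}(q)\,\mathrm{d}\mu,
\qquad
\operatorname{clr}(p) = \log p - \frac{1}{\mu(\mathcal X)}\int_{\mathcal X} \log p \,\mathrm{d}\mu.
\]
```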
Recent string theory tests of swampland ideas like the distance or the dS conjectures have been performed at weak coupling. Testing these ideas beyond the weak coupling regime remains challenging. We propose to exploit the modular symmetries of the moduli effective action to check swampland constraints beyond perturbation theory. As an example we study the case of heterotic 4d $\mathcal{N}=1$ compactifications, whose non-perturbative effective action is known to be invariant under modular symmetries acting on the K\"ahler and complex structure moduli, in particular $SL(2,Z)$ T-dualities (or subgroups thereof) for 4d heterotic or orbifold compactifications. Remarkably, in models with non-perturbative superpotentials, the corresponding duality invariant potentials diverge at points at infinite distance in moduli space. The divergence relates to towers of states becoming light, in agreement with the distance conjecture. We discuss specific examples of this behavior based on gaugino condensation in heterotic orbifolds. We show that these examples are dual to compactifications of type I' or Horava-Witten theory, in which the $SL(2,Z)$ acts on the complex structure of an underlying 2-torus, and the tower of light states correspond to D0-branes or M-theory KK modes. The non-perturbative examples explored point to potentials not leading to weak coupling at infinite distance, but rather diverging in the asymptotic corners of moduli space, dynamically forbidding the access to points with global symmetries. We perform a study of general modular invariant potentials and find that there are dS maxima and saddle points but no dS minima, and that all examples explored obey the refined dS conjecture. | high energy physics theory |
We propose a method to organize experimental data from particle collision experiments in a general format which can enable a simple visualisation and effective classification of collision data using machine learning techniques. The method is based on sparse fixed-size matrices with single- and two-particle variables containing information on identified particles and jets. We illustrate this method using an example of searches for new physics at the LHC experiments. | high energy physics phenomenology |
Understanding what online users may pay attention to is key to content recommendation and search services. These services will benefit from a highly structured and web-scale ontology of entities, concepts, events, topics and categories. While existing knowledge bases and taxonomies embody a large volume of entities and categories, we argue that they fail to discover properly grained concepts, events and topics in the language style of the online population. Neither is a logically structured ontology maintained among these notions. In this paper, we present GIANT, a mechanism to construct a user-centered, web-scale, structured ontology, containing a large number of natural language phrases conforming to user attentions at various granularities, mined from a vast volume of web documents and search click graphs. Various types of edges are also constructed to maintain a hierarchy in the ontology. We present the graph-neural-network-based techniques used in GIANT, and evaluate the proposed methods against a variety of baselines. GIANT has produced the Attention Ontology, which has been deployed in various Tencent applications involving over a billion users. Online A/B testing performed on Tencent QQ Browser shows that the Attention Ontology can significantly improve click-through rates in news recommendation. | computer science |
Motivated by the apparently conflicting results reported in the literature on the effect of environment on nuclear activity, we have carried out a new analysis by comparing the fraction of galaxies hosting active galactic nuclei (AGNs) in the most overdense regions (rich galaxy clusters) and the most underdense ones (voids) in the local universe. Exploiting the classical BPT diagnostics, we have extracted volume limited samples of star forming and AGN galaxies. We find that, at variance with star-forming galaxies, AGN galaxies have similar distributions of specific star formation rates and of galactic ages (as indicated by the Dn4000 parameter) both in clusters and in voids. In both environments galaxies hosting AGNs are generally old, with low star formation activity. The AGN fraction increases faster with stellar mass in clusters than in voids, especially above 10^10.2 M(sun). Our results indicate that, in the local universe, the nuclear activity correlates with stellar mass and galaxy morphology and is weakly, if at all, affected by the local galaxy density. | astrophysics |
We present a study of manipulating the multiphoton blockade phenomenon in a single-mode cavity with two ladder-type three-level atoms. Combining cavity QED with the electromagnetically induced transparency technique, we show that it is possible to actively manipulate the photon blockade when the two atoms radiate in phase. As a result, the two-photon blockade can be changed to a three-photon blockade by changing the control field Rabi frequency. In the case of out-of-phase radiation, we show that the three-photon blockade can be improved with an enhanced mean photon number. In addition, we show that the nonclassical field with sub-Poissonian distribution can be changed to a classical field with super-Poissonian distribution by tuning the Rabi frequency of the control field. The results presented in this work open up the possibility of achieving a two-photon gateway operation, which could be used in networks of atom-cavity systems to control the quantum properties of photons leaking from the cavity. | quantum physics |
Using recently derived results for one-loop hadronic splitting functions from a nonlocal implementation of chiral effective theory, we study the contributions from pseudoscalar meson loops to flavor asymmetries in the proton. Constraining the parameters of the regulating functions by inclusive production of $n$, $\Delta^{++}$, $\Lambda$ and $\Sigma^{*+}$ baryons in $pp$ collisions, we compute the shape of the light antiquark asymmetry $\bar{d}-\bar{u}$ in the proton and the strange asymmetry $s-\bar{s}$ in the nucleon sea. With these constraints, the magnitude of the $\bar{d}-\bar{u}$ asymmetry is found to be compatible with that extracted from the Fermilab E866 Drell-Yan measurement, with no indication of a sign change at large values of $x$, and an integrated value in the range $\langle \bar d-\bar u \rangle \approx 0.09-0.17$. The $s-\bar s$ asymmetry is predicted to be positive at $x > 0$, with compensating negative contributions at $x=0$, and an integrated $x$-weighted moment in the range $\langle x (s-\bar s) \rangle \approx (0.9-2.5) \times 10^{-3}$. | high energy physics phenomenology |
We study the determination of the top-quark mass using leptonic observables in t-channel single top-quark production at the LHC. We demonstrate the sensitivity of the transverse momentum of the charged lepton to the input top-quark mass. We present predictions at next-to-next-to-leading order (NNLO) in QCD within the narrow-width approximation and the structure-function approach. Further corrections due to parton shower and hadronization, as well as non-resonant and non-factorized contributions, are discussed. To reduce the impact of SM backgrounds we propose to use the charge-weighted distribution for the measurement, i.e., the difference between the distributions of positively and negatively charged leptons. By modeling both signal and background processes, we find the projections for the (HL-)LHC to be promising, with a total theoretical uncertainty on the extracted top-quark mass of about 1 $\sim$ 2 GeV. | high energy physics phenomenology |
The mechanical behavior of antigorite strongly influences the strength and deformation of the subduction interface. Although there is microstructural evidence elucidating the nature of brittle deformation at low pressures, there is often conflicting evidence regarding the potential for plastic deformation in the ductile regime at higher pressures. Here, we present a series of spherical nanoindentation experiments on aggregates of natural antigorite. These experiments effectively investigate the single-crystal mechanical behavior because the volume of deformed material is significantly smaller than the grain size. Individual indents reveal elastic loading followed by yield and strain hardening. The magnitude of the yield stress is a function of crystal orientation, with lower values associated with indents parallel to the basal plane. Unloading paths reveal more strain recovery than expected for purely elastic unloading. The magnitude of inelastic strain recovery is highest for indents parallel to the basal plane. We also imposed indents with cyclical loading paths, and observed strain energy dissipation during unloading-loading cycles conducted up to a fixed maximum indentation load and depth. The magnitude of this dissipated strain energy was highest for indents parallel to the basal plane. Subsequent scanning electron microscopy revealed surface impressions accommodated by shear cracks and a general lack of lattice misorientation around indents, indicating the absence of dislocations. Based on these observations, we suggest that antigorite deformation at high pressures is dominated by sliding on shear cracks. We develop a microphysical model that is able to quantitatively explain the Young's modulus and dissipated strain energy data during cyclic loading experiments, based on either frictional or cohesive sliding of an array of cracks contained in the basal plane. | physics |
We have investigated how the wakes in the induced charge density and in the potential due to the passage of highly energetic partons through a thermal QCD medium are affected by the presence of a strong magnetic field. For that purpose, we first analyze the dielectric responses of the medium both in the presence and in the absence of a strong magnetic field. We find that for slow moving partons the real part of the dielectric function is not affected by the magnetic field, whereas for fast moving partons it becomes very large at small $|\textbf{k}|$ and approaches its counterpart at $B=0$ for large $|\textbf{k}|$. On the other hand, the imaginary part is decreased for both slow and fast moving partons, because the imaginary contribution due to the quark loop vanishes. With these ingredients, we find that the oscillation in the (scaled) induced charge density due to very fast partons becomes less pronounced in the presence of a strong magnetic field, whereas for smaller parton velocities no significant change is observed. For the (scaled) wake potential along the motion of fast moving partons (which is of Lennard-Jones (LJ) type), the depth of the negative minimum in the backward region is reduced drastically, resulting in a reduction of the amplitude of oscillation. In the forward region, on the other hand, it remains the screened Coulomb one, except that the screening now becomes much stronger for higher parton velocities. Similarly, for the wake potential transverse to the motion of partons, the depth of the LJ potential for fast moving partons decreases severely in both forward and backward regions, but still retains the forward-backward symmetry. However, for lower parton velocities, the magnetic field does not affect it significantly. | high energy physics phenomenology |
In modern business modeling and analytics, data monitoring plays a critical role. Nowadays, sophisticated models often rely on hundreds or even thousands of input variables. Over time, structural changes such as abrupt level shifts or trend slope changes may occur among some of these variables, likely due to changes in the economy or government policies. As part of data monitoring, it is important to identify these changepoints, in terms of which variables exhibit such changes and at what time locations the changepoints occur. Being alerted about the changepoints can help modelers decide if models need modification or rebuilds, while ignoring them may increase the risk of model degradation. Simple process control rules often flag too many false alarms because regular seasonal fluctuations or steady upward or downward trends usually trigger alerts. To reduce potential false alarms, we create a novel statistical method based on the Bayesian Minimum Description Length (BMDL) framework to perform multiple changepoint detection. Our method is capable of detecting all structural breaks that occurred in the past and of automatically handling data with or without seasonality and/or autocorrelation. It is implemented with computational algorithms such as Markov chain Monte Carlo (MCMC) and can be applied to all variables in parallel. As an explainable anomaly detection tool, our changepoint detection method not only triggers alerts but also provides useful information about the structural breaks, such as the times of the changepoints and estimates of the mean levels and linear slopes before and after the changepoints. This makes future business analysis and evaluation of the structural breaks easier. | statistics |
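As a worked miniature of description-length-style changepoint selection (not the paper's BMDL, which is Bayesian and handles seasonality and autocorrelation), the sketch below finds multiple mean-shift changepoints by exact dynamic programming under a penalized Gaussian cost; the penalty value and the unit-variance assumption are illustrative.

```python
import numpy as np

def mdl_changepoints(y, penalty=None):
    """Exact DP for multiple mean-shift changepoints under a penalized
    Gaussian cost (unit variance assumed). Illustrative of description-
    length model selection, not the paper's BMDL."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    if penalty is None:
        penalty = 2.0 * np.log(n)            # simple MDL/BIC-flavored penalty
    csum = np.cumsum(np.r_[0.0, y])
    csum2 = np.cumsum(np.r_[0.0, y ** 2])

    def seg_cost(i, j):                      # -2 log-lik (up to const) of y[i:j]
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    best = np.zeros(n + 1)
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        cands = [best[i] + seg_cost(i, j) + penalty for i in range(j)]
        i_star = int(np.argmin(cands))
        best[j], prev[j] = cands[i_star], i_star
    cps, j = [], n
    while prev[j] > 0:                       # backtrack segment boundaries
        cps.append(prev[j])
        j = prev[j]
    return sorted(cps)

rng = np.random.default_rng(0)
y = np.r_[np.zeros(50), np.full(50, 3.0)] + rng.normal(size=100)
print(mdl_changepoints(y))                   # a single break near index 50
```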
This study proposes sparse estimation methods for generalized linear models, which run either least angle regression (LARS) or the least absolute shrinkage and selection operator (LASSO) in the tangent space of the manifold of the statistical model. This study approximates the statistical model and subsequently uses exact calculations. LARS was proposed as an efficient algorithm for parameter estimation and variable selection for the normal linear model. The LARS algorithm is described in terms of Euclidean geometry, with the correlation regarded as the metric of the parameter space. Since the LARS algorithm only works in Euclidean space, we transform the manifold of the statistical model into the tangent space at the origin. In generalized linear regression, this transformation allows us to run the original LARS algorithm for generalized linear models. The proposed methods are efficient and perform well. Real-data analysis indicates that the proposed methods output results similar to those of $l_1$-regularized maximum likelihood estimation for the aforementioned models. Numerical experiments reveal that our methods work well, and they may be better than $l_1$-regularization in generalization, parameter estimation, and model selection. | statistics |
Here we present an algorithm to procedurally remap the spectral content of natural signals. The algorithm takes two inputs: a signal whose spectral content is to be remapped, and a warping or remapping function. The algorithm generates one output, a remapped version of the original signal. The input signal is remapped into the output signal in two steps. In the analysis step, the algorithm performs a series of operations to modify the spectral content, i.e., to compute the warped phase of the signal according to the given remapping function. In the synthesis step, the modified spectral content is combined with the envelope information of the input signal to reconstruct the warped, or remapped, output signal. | electrical engineering and systems science |
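A minimal sketch of the two-step analysis/synthesis idea is given below, assuming an FFT-based implementation in which the warped phase is obtained by evaluating the unwrapped phase at remapped frequencies; the paper's exact operations are not given in the abstract, so the function names and the warping example are illustrative.

```python
import numpy as np

def remap_spectrum(x, warp, fs):
    """Remap the spectral content of x (illustrative sketch only).

    Analysis: compute the spectrum and a warped phase by evaluating the
    unwrapped phase at remapped frequencies f' = warp(f).
    Synthesis: combine the original magnitude envelope with the warped
    phase and invert the transform to reconstruct the remapped signal.
    """
    n = len(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    warped_phase = np.interp(warp(f), f, np.unwrap(np.angle(X)))
    Y = np.abs(X) * np.exp(1j * warped_phase)   # envelope + warped phase
    return np.fft.irfft(Y, n=n)

# Example: an octave-compressing remapping function
y = remap_spectrum(np.random.randn(1024), warp=lambda f: 2.0 * f, fs=16000.0)
```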
We expand on Nekov\'a\v{r}'s construction of the plectic half transfer to define a plectic Galois action on Hilbert modular varieties. More precisely, we study in a unifying fashion Shimura varieties associated to groups that differ only in the centre from $R_{F/\mathbb{Q}}{\rm GL}_2$. We define plectic Galois actions on the CM points and on the set of connected components of these Shimura varieties, and show that these two actions are compatible. This extends the plectic conjecture of Nekov\'a\v{r}--Scholl. | mathematics |
This paper investigates how a disturbance in the power network affects the nodal frequencies of certain network buses. To begin with, we show that the inertia of a single generator is inversely proportional to the initial rate of change of frequency (RoCoF) under disturbances. Then, we show how the initial RoCoFs of the nodal frequencies are related to the inertia constants of multiple generators in a power network, which leads to a performance metric for analyzing nodal frequency performance. Specifically, the proposed metric evaluates the impact of disturbances on the nodal frequency performance. The validity and effectiveness of the proposed metric are illustrated via simulations on a multi-machine power system. | electrical engineering and systems science |
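The inverse proportionality for a single generator follows from the classical swing equation, which in per unit reads $2H\,\dot{f}/f_0 = \Delta P$ at the disturbance instant, so the initial RoCoF is $f_0\,\Delta P/(2H)$. A quick numerical check (the numbers are illustrative):

```python
def initial_rocof(delta_p_pu, inertia_h, f0=50.0):
    """Initial RoCoF (Hz/s) from the swing equation: df/dt = f0 * dP / (2 H)."""
    return f0 * delta_p_pu / (2.0 * inertia_h)

# Doubling the inertia constant halves the initial RoCoF:
print(initial_rocof(delta_p_pu=0.1, inertia_h=4.0))  # 0.625 Hz/s
print(initial_rocof(delta_p_pu=0.1, inertia_h=8.0))  # 0.3125 Hz/s
```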
Searching for distinctive signatures, which characterize different formation channels of binary black holes (BBHs), is a crucial step towards the interpretation of current and future gravitational wave detections. Here, we investigate the demography of merging BBHs in young star clusters (SCs), which are the nursery of massive stars. We performed $4\times{} 10^3$ N-body simulations of SCs with metallicity $Z=0.002$, initial binary fraction $0.4$ and fractal initial conditions, to mimic the clumpiness of star forming regions. Our simulations include a novel population-synthesis approach based on the code MOBSE. We find that SC dynamics does not affect the merger rate significantly, but leaves a strong fingerprint on the properties of merging BBHs. More than 50 % of merging BBHs in young SCs form by dynamical exchanges in the first few Myr. Dynamically formed merging BBHs are significantly heavier than merging BBHs in isolated binaries: merging BBHs with total mass up to $\sim{}120$ M$_\odot$ form in young SCs, while the maximum total mass of merging BBHs in isolated binaries with the same metallicity is only $\sim{}70$ M$_\odot$. Merging BBHs born via dynamical exchanges tend to have smaller mass ratios than BBHs in isolated binaries. Furthermore, SC dynamics speeds up the merger: the delay time between star formation and coalescence is significantly shorter in young SCs. In our simulations, massive systems such as GW170729 form only via dynamical exchanges. Finally $\sim{}2$ % of merging BBHs in young SCs have mass in the pair-instability mass gap ($\sim{}60-120$ M$_\odot$). This represents a unique fingerprint of merging BBHs in SCs. | astrophysics |
In the first papers of our series on interstellar generation ships we have demonstrated that the numerical code HERITAGE is able to calculate the success rate of multi-generational space missions. Thanks to the social and breeding constraints we examined, a multi-generational crew can safely reach an exoplanet after centuries of deep space travel without risks of consanguinity or genetic disorders. We now turn to addressing an equally important question : how to feed the crew? Dried food stocks are not a viable option due to the deterioration of vitamins with time and the tremendous quantities that would be required for long-term storage. The best option relies on farming aboard the spaceship. Using an updated version of HERITAGE that now accounts for age-dependent biological characteristics such as height and weight, and features related to the varying number of colonists, such as infertility, pregnancy and miscarriage rates, we can estimate the annual caloric requirements aboard using the Harris-Benedict principle. By comparing those numbers with conventional and modern farming techniques we are able to predict the size of artificial land to be allocated in the vessel for agricultural purposes. We find that, for an heterogeneous crew of 500 people living on an omnivorous, balanced diet, 0.45 km2 of artificial land would suffice in order to grow all the necessary food using a combination of aeroponics (for fruits, vegetables, starch, sugar, and oil) and conventional farming (for meat, fish, dairy, and honey). | physics |
The generation of chiral laser emission offers promising opportunities for modern photonic applications and the study of chiral light-matter interactions. Despite the great progress made in recent years, the direct generation of chiral lasing with controllable chirality remains challenging in a microcavity. This study reports a strategy to control the emission chirality of whispering-gallery-mode organic microlasers with topologically structured chiral liquid crystal droplets. The findings suggest that topological transformations in a microdroplet can induce different optical chirality strengths and lasing polarization characteristics. In particular, the role of optical rotatory power was also investigated under linearly and circularly polarized excitation, where a vast lasing intensity difference between left- and right-circularly polarized excitation was revealed. Theoretical analysis and simulation were carried out to support these significant findings. This study provides a facile tool for manipulating chiral laser emission in a microcavity, giving inspiration for topologically controlled photonics, chiral optical devices, and chiral molecular detection. | physics |
For any congruence subgroup $\Gamma$, we study the vertex operator algebra $\Omega^{ch}(\mathbb H,\Gamma)$ constructed from the $\Gamma$-invariant global sections of the chiral de Rham complex on the upper half plane which are holomorphic at all the cusps. We introduce an $SL(2,\mathbb R)$-invariant filtration on the global sections and show that the $\Gamma$-invariants of the graded algebra are isomorphic to certain copies of modular forms. We also give an explicit formula for the lifting of modular forms to $\Omega^{ch}(\mathbb H,\Gamma)$ and compute the character formula of $\Omega^{ch}(\mathbb H,\Gamma)$. Furthermore, we show that the vertex algebra structure modifies the Rankin-Cohen bracket, and the modified bracket becomes non-zero between constant modular forms involving the Eisenstein series. | mathematics |
A simple and efficient adaptive Markov Chain Monte Carlo (MCMC) method, called the Metropolized Adaptive Subspace (MAdaSub) algorithm, is proposed for sampling from high-dimensional posterior model distributions in Bayesian variable selection. The MAdaSub algorithm is based on an independent Metropolis-Hastings sampler, where the individual proposal probabilities of the explanatory variables are updated after each iteration using a form of Bayesian adaptive learning, in a way that they finally converge to the respective covariates' posterior inclusion probabilities. We prove the ergodicity of the algorithm and present a parallel version of MAdaSub with an adaptation scheme for the proposal probabilities based on the combination of information from multiple chains. The effectiveness of the algorithm is demonstrated via various simulated and real data examples, including a high-dimensional problem with more than 20,000 covariates. | statistics |
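A heavily simplified sketch of the adaptive independence-sampler idea is shown below: proposal inclusion probabilities are nudged toward the running inclusion frequencies of the chain. The update rule, clipping constant, and toy posterior are illustrative assumptions, not the paper's exact scheme (which in particular uses a diminishing-adaptation sequence to guarantee ergodicity).

```python
import numpy as np

rng = np.random.default_rng(0)

def madasub_sketch(log_post, p, n_iter=5000, eps=0.01):
    """Adaptive independent Metropolis-Hastings over binary inclusion vectors.

    log_post(gamma) returns the log posterior of a binary vector gamma.
    The proposal probabilities q are adapted toward the chain's running
    inclusion frequencies (illustrative update, not the paper's exact one).
    """
    q = np.full(p, 0.5)                      # independent proposal probabilities
    gamma = rng.random(p) < q                # current model
    counts = np.zeros(p)

    def log_prop(g):                         # log mass of g under the proposal
        return np.sum(np.where(g, np.log(q), np.log(1.0 - q)))

    for t in range(1, n_iter + 1):
        cand = rng.random(p) < q
        # Acceptance ratio of the independence sampler
        log_acc = (log_post(cand) - log_post(gamma)
                   + log_prop(gamma) - log_prop(cand))
        if np.log(rng.random()) < log_acc:
            gamma = cand
        counts += gamma
        q = np.clip(counts / t, eps, 1.0 - eps)   # adapt, kept away from 0/1
    return counts / n_iter                   # estimated inclusion probabilities

# Toy posterior: variable 0 is strongly favored, all others penalized.
print(madasub_sketch(lambda g: 8.0 * g[0] - 5.0 * g.sum(), p=10))
```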
For graphene nanoribbons with Rashba spin-orbit coupling, the peculiar magnetic response due to the presence of a magnetization and geometric confinement is analyzed within a tight-binding model. We observe a sizable transverse susceptibility that can be considered a gate-voltage-induced magnetoelectric torque without the need for a bias voltage, with different directions for zigzag and armchair ribbons. The local torque generates non-collinear spin polarization between the two edges and/or along the ribbon, and the net torque averages to zero if the magnetization is homogeneous. Nevertheless, a nonzero net torque can appear in partially magnetized nanoribbons or in nanoflakes of irregular shapes. The equilibrium spin current produced by the spin-orbit coupling also appears in nanoribbons, but the component flowing in the direction of confinement is strongly suppressed. Even without the magnetization, an out-of-plane polarized chiral edge spin current is produced, resembling that in the quantum spin Hall effect. Moreover, a magnetization pointing perpendicular to the edge produces a laminar flow of edge charge currents, whose flow direction is symmetric (non-chiral) or antisymmetric (chiral) between the two edges depending on whether the magnetization points in-plane or out-of-plane. | condensed matter |
On the way towards quantum gravity and the unification of interactions, several ideas have been rejected and avenues avoided because they were perceived as physically unviable. But the literature contains works in which the contrary was found, namely that those rejected topics make sense after all. Such topics, reviewed in this article, are negative energies occurring in higher derivative theories and ultrahyperbolic spaces, the ordering ambiguity of operators in curved spaces, the vast landscape of possible compactifications of extra dimensions in string theory, and the quantization of a 3-brane in braneworld scenarios. | high energy physics theory |
The future interior of black holes in AdS/CFT can be described in terms of a quantum circuit. We investigate boundary quantities detecting properties of this quantum circuit. We discuss relations between operator size, quantum complexity, and the momentum of an infalling particle in the black hole interior. We argue that the trajectory of the infalling particle in the interior close to the horizon is related to the growth of operator size. The notion of size here differs slightly from the size which has previously been related to momentum of exterior particles and provides an interesting generalization. The fact that both exterior and interior momentum are related to operator size growth is a manifestation of complementarity. | high energy physics theory |
Pumping a nonlinear crystal with intense radiation results in the optical parametric generation of photons in two modes (the signal and the idler). The quantized electromagnetic field in these modes is described by a continuous-variable quantum state, which is entangled if the pump is a coherent state produced by a laser. The signal and the idler modes remain populated by photons even if the pump becomes incoherent (dephased by a medium, superposed with a thermal state, or produced by an alternative source such as a superluminescent diode). However, the incoherent pump does affect the entanglement and purity of the signal and the idler modes, which is of vital importance for quantum information applications and interferometry. Here we develop an approach to infer the signal-idler entanglement and purity for a general incoherent quantum pump with a given Glauber-Sudarshan function. We show that the signal-idler entanglement is extremely sensitive to the phase distribution of the pump and illustrate our findings with physically relevant examples of the incoherent pump: the noisy coherent state, slightly dephased and phase-averaged coherent states, the thermal state, and states modulated by a Kerr medium. The effect of an incoherent pump on the combined quadratures is discussed as well. | quantum physics |
We recast the joint $J\bar{T}$, $T\bar{J}$ and $T\bar{T}$ deformations as coupling the original theory to a mixture of topological gravity and gauge theory. This geometrizes the general flow triggered by irrelevant deformations built out of conserved currents and the stress-energy tensor, by means of a path integral kernel. The partition function of the deformed theory satisfies a diffusion-like flow equation similar to that found in the pure $T\bar{T}$ case. Our proposal passes two stringent tests. Firstly, we recover the classical deformed actions from the kernel, reproducing the known expressions for the free boson and fermion. Secondly, we explicitly compute the torus path integral along the flow and show it localizes to a finite-dimensional, one-loop exact integral over base space torus moduli. The dressed energy levels so obtained match exactly onto those previously reported in the literature. | high energy physics theory |
Several applications in the scientific simulation of physical systems can be formulated as control/optimization problems. The computational models for such systems generally contain hyperparameters, which control solution fidelity and computational expense. The tuning of these parameters is non-trivial, and the general approach is to manually `spot-check' for good combinations. This is because optimal hyperparameter configuration search becomes impractical when the parameter space is large and when the parameters may vary dynamically. To address this issue, we present a framework based on deep reinforcement learning (RL) to train a deep neural network agent that controls a model solve by varying parameters dynamically. First, we validate our RL framework on the problem of controlling chaos in chaotic systems by dynamically changing the parameters of the system. Subsequently, we illustrate the capabilities of our framework for accelerating the convergence of a steady-state CFD solver by automatically adjusting the relaxation factors of the discretized Navier-Stokes equations during run-time. The results indicate that run-time control of the relaxation factors by the learned policy leads to a significant reduction in the number of iterations for convergence compared to random selection of the relaxation factors. Our results point to potential benefits of adaptive hyperparameter learning strategies across different geometries and boundary conditions, with implications for reduced computational campaign expenses. \footnote{Data and codes available at \url{https://github.com/Romit-Maulik/PAR-RL}} | physics |
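To make the reward structure concrete, the toy below replaces the deep RL agent with a one-state epsilon-greedy bandit that learns a relaxation factor for an under-relaxed fixed-point solve, with reward equal to the negative iteration count; the solver, action grid, and learning rule are all illustrative stand-ins for the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve(omega, g, x0=1.0, tol=1e-8, max_it=500):
    """Under-relaxed fixed-point iteration x <- (1 - w) x + w g(x); returns
    the number of iterations to convergence (the quantity the reward penalizes)."""
    x = x0
    for k in range(max_it):
        x_new = (1.0 - omega) * x + omega * g(x)
        if abs(x_new - x) < tol:
            return k
        x = x_new
    return max_it

# One-state epsilon-greedy bandit over a grid of relaxation factors for
# g(x) = cos(x); a stand-in for the paper's deep RL agent and CFD solver.
actions = np.linspace(0.2, 1.8, 9)
q_vals = np.zeros_like(actions)
counts = np.zeros_like(actions)
for episode in range(300):
    a = rng.integers(len(actions)) if rng.random() < 0.1 else int(np.argmax(q_vals))
    reward = -solve(actions[a], np.cos)
    counts[a] += 1
    q_vals[a] += (reward - q_vals[a]) / counts[a]  # incremental mean update
print("learned relaxation factor:", actions[int(np.argmax(q_vals))])
```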
To study the heavy quark production processes, we use the transverse momentum dependent (TMD, or unintegrated) gluon distribution function in a proton obtained recently using the Kimber-Martin-Ryskin prescription from the Bessel-inspired behavior of parton densities at small Bjorken $x$ values. We obtained a good agreement of our results with the latest HERA experimental data for reduced cross sections $\sigma^{c\overline{c}}_{\rm red}(x,Q^2)$ and $\sigma^{b\overline{b}}_{\rm red}(x,Q^2)$, and also for deep inelastic structure functions $F_2^c(x,Q^2)$ and $F_2^b(x,Q^2)$ in a wide range of $x$ and $Q^2$ values. Comparisons with the predictions based on the Ciafaloni-Catani-Fiorani-Marchesini evolution equation and with the results of conventional pQCD calculations performed at first three orders of perturbative expansion are presented. | high energy physics phenomenology |
We argue that the description of Feynman loop integrals as integrable systems is intimately connected with their motivic properties and the action of the Cosmic Galois Group. We show how in the case of a family of fishnet graphs, coaction relations between graphs follow directly from iterative constructions of Q-functions in the Quantum Spectral Curve formalism. Using this observation we conjecture a "differential equation for numbers" that enter these periods. | high energy physics theory |
Twisted \'etale groupoid algebras have been studied recently in the algebraic setting by several authors in connection with an abstract theory of Cartan pairs of rings. In this paper, we show that extensions of ample groupoids correspond in a precise manner to extensions of Boolean inverse semigroups. In particular, discrete twists over ample groupoids correspond to certain abelian extensions of Boolean inverse semigroups and we show that they are classified by Lausch's second cohomology group of an inverse semigroup. The cohomology group structure corresponds to the Baer sum operation on twists. We also define a novel notion of inverse semigroup crossed product, generalizing skew inverse semigroup rings, and prove that twisted Steinberg algebras of Hausdorff ample groupoids are instances of inverse semigroup crossed products. The cocycle defining the crossed product is the same cocycle that classifies the twist in Lausch cohomology. | mathematics |
The observation of quasars at very high redshift such as Poniuaena is a challenge for models of super-massive black hole (SMBH) formation. This work presents a study of SMBH formation via known physical processes in star-burst clusters formed at the onset of the formation of their hosting galaxy. While at the early stages hyper-massive star-burst clusters reach the luminosities of quasars, once their massive stars die, the ensuing gas accretion from the still forming host galaxy compresses its stellar black hole (BH) component to a compact state overcoming heating from the BH--BH binaries such that the cluster collapses, forming a massive SMBH-seed within about a hundred Myr. Within this scenario the SMBH--spheroid correlation emerges near-to-exactly. The highest-redshift quasars may thus be hyper-massive star-burst clusters or young ultra-compact dwarf galaxies (UCDs), being the precursors of the SMBHs that form therein within about 200 Myr of the first stars. For spheroid masses <10^9.6 Msun a SMBH cannot form and instead only the accumulated nuclear cluster remains. The number evolution of the quasar phases with redshift is calculated and the possible problem of missing quasars at very high redshift is raised. SMBH-bearing UCDs and the formation of spheroids are discussed critically in view of the high redshift observations. A possible tension is found between the high star-formation rates (SFRs) implied by downsizing and the observed SFRs, which may be alleviated within the IGIMF theory and if the downsizing times are somewhat longer. | astrophysics |
Solving hard problems is one of the most important issues in computing to be addressed by a quantum computer. Previously, we have shown that H-SEARCH, which is the problem of finding a Hadamard matrix (H-matrix) among all possible binary matrices of the corresponding order, is a hard problem that can be solved by a quantum computer. However, due to the limitation on the number of qubits and connections in present-day quantum processors, only low-order H-SEARCH instances are implementable. In this paper, we show that by adopting classical construction/search techniques for the H-matrix, we can develop new quantum computing methods to find higher-order H-matrices. In particular, the Turyn-based quantum computing method can be further developed to find an arbitrarily high-order H-matrix by balancing the classical and quantum resources. This method is potentially capable of finding some unknown H-matrices of practical and scientific interest, which a classical computer alone cannot do because of the exponential growth of the complexity. We present some results of finding H-matrices of order more than one hundred and a prototypical experiment to find an even higher-order matrix using the classical-quantum resource balancing method. Although heuristic optimization generally achieves only approximate solutions, and the exact one would otherwise have to be determined by exhaustive listing, which is difficult to perform, in H-SEARCH we can assure such exactness in polynomial time by checking the orthogonality of the solution. Since quantum advantage over classical computing should be measured by comparing the performance in solving a problem up to a definitive solution, the proposed method may lead to an alternate route for demonstrating practical quantum supremacy in the near future. | quantum physics |
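The polynomial-time exactness check mentioned above amounts to verifying the defining orthogonality relation $H H^{T} = n I$ for a candidate $\pm 1$ matrix, e.g.:

```python
import numpy as np

def is_hadamard(H):
    """Polynomial-time exactness check: H is a Hadamard matrix of order n
    iff all entries are +/-1 and H @ H.T == n * I."""
    H = np.asarray(H)
    n = H.shape[0]
    return bool((np.abs(H) == 1).all()
                and np.array_equal(H @ H.T, n * np.eye(n, dtype=H.dtype)))

H2 = np.array([[1, 1], [1, -1]])
print(is_hadamard(H2))                # True
print(is_hadamard(np.kron(H2, H2)))  # True: Sylvester construction, order 4
```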
The structure of interconnected systems and its impact on the system dynamics is a much-studied cross-disciplinary topic. Although various critical phenomena have been found in different models, the study of the connections between different percolation transitions is still lacking. Here we propose a unified framework to study the origins of the discontinuous transitions of the percolation process on interacting networks. The model evolves in generations, with the outcome of the present percolation depending on the previous state, and is thus history-dependent. Both theoretical analysis and Monte Carlo simulations reveal that the nature of the transition remains the same at finite generations but exhibits an abrupt change in the limit of infinite generations. We use brain functional correlation and morphological similarity data to show that our model also provides a general method to explore the network structure and can contribute to many practical applications, such as detecting abnormal structures in human brain networks. | condensed matter |
The twin group $T_n$ is a Coxeter group generated by $n-1$ involutions and the pure twin group $PT_n$ is the kernel of the natural surjection of $T_n$ onto the symmetric group on $n$ letters. In this paper, we investigate structural aspects of twin and pure twin groups. We prove that the twin group $T_n$ decomposes into a free product with amalgamation for $n>4$. It is shown that the pure twin group $PT_n$ is free for $n=3,4$, and not free for $n\ge 6$. We determine a generating set for $PT_n$, and give an upper bound for its rank. We also construct a natural faithful representation of $T_4$ into $\operatorname{Aut}(F_7)$. In the end, we propose virtual and welded analogues of these groups and some directions for future work. | mathematics |
We show that the X-ray transform restricted along the moment curve possesses extremizers and that $L^p$-normalized extremizing sequences are pre-compact modulo symmetry. | mathematics |
Combinatorial optimization on near-term quantum devices is a promising path to demonstrating quantum advantage. However, the capabilities of these devices are constrained by high noise levels and limited error mitigation. In this paper, we propose an iterative Layer VQE (L-VQE) approach, inspired by the Variational Quantum Eigensolver (VQE). We present a large-scale numerical study, simulating circuits with up to 40 qubits and 352 parameters, that demonstrates the potential of the proposed approach. We evaluate quantum optimization heuristics on the problem of detecting multiple communities in networks, for which we introduce a novel qubit-frugal formulation. We numerically compare L-VQE with QAOA and demonstrate that QAOA achieves lower approximation ratios while requiring significantly deeper circuits. We show that L-VQE is more robust to sampling noise and has a higher chance of finding the solution as compared with standard VQE approaches. Our simulation results show that L-VQE performs well under realistic hardware noise. | quantum physics |
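A schematic of the layer-by-layer outer loop is sketched below, under the assumption (our reading of L-VQE) that each new layer is initialized to act as the identity and the optimizer is warm-started from the previous optimum; the toy cost function and optimizer choice are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def layer_vqe(energy, n_layers, params_per_layer, seed=1):
    """Iterative layer-wise optimization: optimize a shallow ansatz, then
    append a layer initialized at zero (acting as identity) and warm-start
    from the previous optimum. `energy` maps a parameter vector to a cost."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(scale=0.1, size=params_per_layer)
    for layer in range(1, n_layers + 1):
        res = minimize(energy, theta, method="COBYLA")
        theta = res.x
        if layer < n_layers:                 # grow the ansatz by one layer
            theta = np.concatenate([theta, np.zeros(params_per_layer)])
    return res.fun, theta

# Toy cost standing in for a hardware energy estimate:
E, theta = layer_vqe(lambda t: float(np.sum(np.cos(t) ** 2)),
                     n_layers=3, params_per_layer=4)
print(E)
```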
We propose a method to directly measure the complex phase distribution, superfluid density and velocity field in an ultracold atomic superfluid. The method consists of mapping the momentum distribution of the gas to real space using matterwave focusing, and manipulating the amplitude and phase by means of tailor made optical potentials. This makes it possible to find analogues of well-known techniques in optical microscopy such as Zernike phase contrast imaging, dark field imaging and schlieren imaging. Applying these ideas directly at the level of the macroscopic wavefunction of the superfluid will allow visualization of interesting effects such as phase fluctuations and topological defects, and enable measurements of transport properties such as vorticity. | condensed matter |
Paul Busch has emphasized on various occasions the importance for physics of going beyond a merely instrumentalist view of quantum mechanics. Even if we cannot be sure that any particular realist interpretation describes the world as it actually is, the investigation of possible realist interpretations helps us to develop new physical ideas and better intuitions about the nature of physical objects at the micro level. In this spirit, Paul Busch himself pioneered the concept of ``unsharp quantum reality'', according to which there is an objective non-classical indeterminacy---a lack of sharpness---in the properties of individual quantum systems. We concur with Busch's motivation for investigating realist interpretations of quantum mechanics and with his willingness to move away from classical intuitions. In this article we try to take some further steps on this road. In particular, we pay attention to a number of \textit{prima facie} implausible and counter-intuitive aspects of realist interpretations of unitary quantum mechanics. We shall argue that from a realist viewpoint, quantum contextuality naturally leads to ``perspectivalism'' with respect to properties of spatially extended quantum systems, and that this perspectivalism is important for making relativistic covariance possible. | quantum physics |
Insurance companies gather a growing variety of data for use in the insurance process, but most traditional ratemaking models are not designed to support them. In particular, many emerging data sources (text, images, sensors) may complement traditional data to provide better insights to predict the future losses in an insurance contract. This paper presents some of these emerging data sources and presents a unified framework for actuaries to incorporate these in existing ratemaking models. Our approach stems from representation learning, whose goal is to create representations of raw data. A useful representation will transform the original data into a dense vector space where the ultimate predictive task is simpler to model. Our paper presents methods to transform non-vectorial data into vectorial representations and provides examples for actuarial science. | statistics |
All Italian consonants affected by gemination, that is affricates, fricatives, liquids, nasals, and stops, were analyzed within a project named GEMMA that lasted over a span of about 25 years. Results of the analysis on stops, as published in (Esposito, A., and Di Benedetto, M. G. (1999). "Acoustic and Perceptual Study of Gemination in Italian Stops," The Journal of the Acoustical Society of America, ASA, Vol. 30, pp. 175-185), showed that the main acoustic cue to gemination in Italian was closure duration, while frequency and energy domain parameters were not significantly affected by gemination. This paper - the first of a set of two covering all remaining consonants - addresses nasals and liquids; its companion paper addresses affricates and fricatives. Results on nasals and liquids confirm the findings on stops, in particular that the primary acoustic cue to gemination in Italian is durational in nature and corresponds to a lengthened consonant duration. Results also show an inverse correlation between consonant and pre-consonant vowel durations, which is, however, also present when considering singleton vs. geminate word sets separately, indicating a sort of duration compensation between these segments that eventually preserves rhythmical structures; this inverse correlation is reinforced when considering the singleton and geminate sets combined. Classification tests of singleton vs. geminate consonants show that, for both nasals and liquids, the best classification scores are obtained when consonant duration is used as a classification parameter. Although slightly less effective, the ratio between consonant and pre-consonant vowel durations is also a promising candidate for automatic classification of geminate vs. singleton nasals and liquids in Italian. | electrical engineering and systems science |
In our work we propose a generalization of the port-based teleportation scheme, allowing for the transmission of more than one unknown quantum state (or a composite quantum state) in one go, where the state ends up in several ports on Bob's side. We investigate the efficiency of our scheme, discussing both the deterministic and the probabilistic case, where the resource state is maximally entangled. It turns out that the new scheme performs better than the optimal PBT protocol with a port of correspondingly larger dimension. We exploit the same number of maximally entangled states in the resource state as in ordinary port-based teleportation, with a number of measurements scaling polynomially in the number of shared maximally entangled states. To obtain our results, i.e. explicit expressions for the performance of the new scheme, we deliver novel mathematical tools concerning the representation theory of the algebra of partially transposed permutation operators, where the transposition acts on more than one subsystem. | quantum physics |
The depressurization of the surrounding chamber or superheating of the injected liquid is responsible for the flashing of sprays, which promotes the micro-explosion of many bubbles near the free surface, thereby leading to primary atomization of the spray. In this work, we propose a mathematical model in which Dirichlet hyperboloids are used to explain the micro-explosion process when the external flashing phenomenon is observed in superheated liquid jets. The developed mathematical model is implemented in the Lagrangian framework to study the spray structure, and the results of numerical simulations are found to be in good agreement with experimental results from the literature. In the onset region of the fully flashing regime, the bell-shaped spray structure becomes prominent due to increased drag on the ejected droplets with high radial velocity. However, at a lower degree of superheat, the droplet size is found to increase with a decrease in ambient pressure, whereas the opposite trend is observed at a higher degree of superheat. | physics |
The notion of tree network has sparked renewed interest in recent years, particularly in computer science and biology (neural networks). However, this notion is usually interpreted in an extremely restrictive way: essentially linked to data processing, today's tree networks are hybrid network topologies in which star networks are generally interconnected via bus networks. These networks are, most often, hierarchical and regular, and each of their nodes can have an arbitrary number of child nodes. At the outset, however, the notion of tree network, introduced in 1936 by the Belgian physicist Julien Pacotte, was quite different: more general and, at the same time, more constrained, it was also meant to serve an ambitious objective: the reconstruction of mathematics from concrete empirical structures. Usually poorly commented on and poorly understood (especially by philosophers), it had no real posterity. In this article, we first try to clarify this notion of "tree network" in the sense of Julien Pacotte, which makes it possible to eliminate the misinterpretations to which this notion has given rise. To this end, we use the language and concepts of graph theory and formalize the main properties of these networks, which, contrary to popular belief, are not, in general, trees. In a second part, we then try to follow and explain, step by step, how Pacotte intended, using concepts borrowed from projective geometry, to reconstruct all of mathematics from such a network. | physics |
Under extensional strain, fiber networks can exhibit an anomalously large and nonlinear Poisson effect accompanied by a dramatic transverse contraction and volume reduction for applied strains as small as a few percent. We demonstrate that this phenomenon is controlled by a collective mechanical phase transition that occurs at a critical uniaxial strain that depends on network connectivity. This transition is punctuated by an anomalous peak in the apparent Poisson's ratio and other critical signatures such as diverging nonaffine strain fluctuations. | condensed matter |
Automated drusen segmentation in retinal optical coherence tomography (OCT) scans is relevant for understanding age-related macular degeneration (AMD) risk and progression. This task is usually performed by segmenting the top/bottom anatomical interfaces that define drusen, the outer boundary of the retinal pigment epithelium (OBRPE) and the Bruch's membrane (BM), respectively. In this paper we propose a novel multi-decoder architecture that tackles drusen segmentation as a multitask problem. Instead of training a multiclass model for OBRPE/BM segmentation, we use one decoder per target class and an extra one aiming for the area between the layers. We also introduce connections between each class-specific branch and the additional decoder to increase the regularization effect of this surrogate task. We validated our approach on private/public data sets with 166 early/intermediate AMD Spectralis, and 200 AMD and control Bioptigen OCT volumes, respectively. Our method consistently outperformed several baselines in both layer and drusen segmentation evaluations. | electrical engineering and systems science |