Topological Dirac semimetals are a class of semimetals that host symmetry-protected Dirac points near the Fermi level, which arise due to a band inversion of the conduction and valence bands. In this work, we study the less explored class of \emph{noncentrosymmetric} topological Dirac semimetals in three dimensions. We identify the noncentrosymmetric crystallographic point groups required to stabilize fourfold degenerate band crossings and derive model Hamiltonians for all distinct types of band inversions allowed by symmetry. Using these model Hamiltonians, which emphasize the physical nature of the allowed couplings, we establish the generic electronic phase diagram of noncentrosymmetric Dirac semimetals and show that it generically includes phases with coexistent Weyl point nodes or Weyl line nodes. In particular, for one specific type of band inversion in sixfold symmetric systems we show that Weyl line nodes are always present. Based on first-principles calculations, we predict that BiPd$_2$O$_4$ is a noncentrosymmetric Dirac semimetal under 20 GPa pressure and hosts topological type-II Dirac points on the fourfold rotation axis. Furthermore, we propose that the hexagonal polar alloy LiZnSb$_{x}$Bi$_{1-x}$ realizes a Dirac semimetal with coexistent Weyl points. Interestingly, the emergence and location of the Weyl points are highly tunable and can be controlled by the alloy concentration $x$. More generally, our results not only establish band-inverted noncentrosymmetric systems as a broad and versatile class of topological semimetals, but also provide a framework for studying the quantum nonlinear Hall effect and nonlinear optical properties in Dirac semimetals.
The ability to engineer the properties of quantum optical states is essential for quantum information processing applications. Here, we demonstrate tunable control of spatial correlations between photon pairs produced by spontaneous parametric down-conversion. By shaping the spatial pump beam profile in a type-I collinear configuration, we tailor the spatial structure of coincidences between photon pairs entangled in high dimensions, without affecting the intensity. The results highlight fundamental aspects of spatial coherence and hold potential for the development of quantum technologies based on high-dimensional spatial entanglement.
The arithmetic of N, Z, Q, R can be extended to a graph arithmetic where N is the semiring of finite simple graphs and where Z and Q are integral domains, culminating in a Banach algebra R. A single network completes to the Wiener algebra. We illustrate the compatibility with topology and spectral theory. Multiplicative linear functionals like the Euler characteristic, the Poincare polynomial or the zeta functions can be extended naturally. These functionals can also help with number theoretical questions. The story of primes is a bit different, as these integers are not a unique factorization domain because there are many additive primes. Most graphs are multiplicative primes.
An ultrafast laser based on coherent beam combination of four ytterbium-doped step-index fiber amplifiers is presented. The system delivers an average power of 3.5 kW and a pulse duration of 430 fs at an 80 MHz repetition rate. The beam quality is excellent ($M^2 < 1.24 \times 1.10$) and the relative intensity noise is as low as 1% in the frequency span from 1 Hz to 1 MHz. The system is turn-key operable as it features automated spatial and temporal alignment of the interferometric amplification channels.
The recently described pushframe imager, a parallelized single pixel camera capturing with a pushbroom-like motion, is intrinsically suited to both remote-sensing and compressive sampling. It optically applies a 2D mask to the imaged scene, before performing light integration along a single spatial axis, but previous work has not made use of the architecture's potential for taking measurements sparsely. In this paper we develop a strongly performing static binarized noiselet compressive sampling mask design, tailored to pushframe hardware, allowing both a single exposure per motion time-step, and retention of 2D correlations in the scene. Results from simulated and real-world captures are presented, with performance shown to be similar to that of immobile -- and hence inappropriate for satellite use -- whole-scene imagers. A particular feature of our sampling approach is that the degree of compression can be varied without altering the pattern, and we demonstrate the utility of this for efficiently storing and transmitting multi-spectral images.
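The measurement model described above can be sketched in a few lines (a minimal illustration; the array sizes are arbitrary and a plain random binary mask stands in for the noiselet-based design developed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene and mask sizes; a random binary mask stands in
# for the structured noiselet design tailored to the hardware.
H, W = 64, 64
scene = rng.random((H, W))           # incident light intensity
mask = rng.integers(0, 2, (H, W))    # static 2D binary sampling mask

# Pushframe model: apply the 2D mask optically, then integrate the
# light along a single spatial axis, yielding one 1D measurement
# per motion time-step.
measurement = (mask * scene).sum(axis=0)   # shape (W,)

print(measurement.shape)  # (64,)
```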
When network products and services become more valuable as their userbase grows (network effects), this tendency can become a major determinant of how they compete with each other in the market and how the market is structured. Network effects are traditionally linked to high market concentration, early-mover advantages, and entry barriers, and in the cryptoasset market they have also been used as a valuation tool. The recent resurgence of Bitcoin has likewise been partly attributed to network effects. We study the existence of network effects in six cryptoassets from their inception to obtain a high-level overview of the application of network effects in the cryptoasset market. We show that, contrary to the usual implications of network effects, they do not serve to concentrate the cryptoasset market, nor do they accord any one cryptoasset a definitive competitive advantage, nor are they consistent enough to be reliable valuation tools. Therefore, while network effects do occur in cryptoasset networks, they are not a defining feature of the cryptoasset market as a whole.
We consider the electronic and magnetic properties of chromium, a well-known itinerant antiferromagnet, by a combination of density functional theory (DFT) and dynamical mean-field theory (DMFT). We find that electronic correlation effects in chromium, in contrast to its neighbours in the periodic table, are weak, leading to a quasiparticle mass enhancement factor ${m^*/m \approx 1.2}$. Our results for local spin-spin correlation functions and the distribution of weights of atomic configurations indicate that local magnetic moments are not formed. Similarly to previous DFT results at ambient pressure, the non-uniform magnetic susceptibility as a function of momentum possesses sharp maxima close to the wave vector ${{\mathbf Q}_{\rm H}=(0,0,2\pi/a)}$ ($a$ is the lattice constant), corresponding to Kohn anomalies. We find that these maxima are preserved by the interaction and are not destroyed by pressure. Our calculations qualitatively capture the decrease of the N\'eel temperature with pressure and the breakdown of itinerant antiferromagnetism at a pressure of $\sim$9 GPa, in agreement with experimental data, although the N\'eel temperature is significantly overestimated because of the mean-field nature of DMFT.
While most existing segmentation methods combine the powerful feature extraction capabilities of CNNs with Conditional Random Field (CRF) post-processing, the results are always limited by the shortcomings of CRFs. Due to the notoriously slow calculation speed and poor efficiency of CRFs, CRF post-processing has been gradually abandoned in recent years. In this paper, an improved Generative Adversarial Network (GAN) for the image semantic segmentation task (semantic segmentation by GANs, Seg-GAN) is proposed to facilitate further segmentation research. In addition, we introduce Convolutional CRFs (ConvCRFs) as an effective improvement for the image semantic segmentation task. Towards the goal of differentiating the segmentation results from the ground truth distribution and improving the details of the output images, the proposed discriminator network is specially designed in a fully convolutional manner combined with cascaded ConvCRFs. Besides, the adversarial loss aggressively encourages the output image to be close to the distribution of the ground truth. Our method not only learns an end-to-end mapping from the input image to the corresponding output image, but also learns a loss function to train this mapping. The experiments show that our method achieves better performance than state-of-the-art methods.
We propose a relativistic quantum Otto cycle between an entangled state of two qubits and their composite excited (or ground) state, whose efficiency can be greater than that of the usual single-qubit quantum Otto engine. The hot and cold reservoirs are constructed by providing uniform accelerations to these qubits along with an interaction between the background field and the individual qubits. The efficiency, as measured from one of the qubits' frames, depends not only on the energy gap of the states but also on the relative acceleration between them. For lower acceleration of our observer's qubit compared to the other one, the cycle is more efficient than the single-qubit quantum Otto engine. Furthermore, a complete protocol to construct such a cycle is provided.
In recent years, trends towards studying simulated games have gained momentum in the fields of artificial intelligence, cognitive science, psychology, and neuroscience. The intersections of these fields have also grown recently, as researchers increasingly study such games using both artificial agents and human or animal subjects. However, implementing games can be a time-consuming endeavor and may require a researcher to grapple with complex codebases that are not easily customized. Furthermore, interdisciplinary researchers studying some combination of artificial intelligence, human psychology, and animal neurophysiology face additional challenges, because existing platforms are designed for only one of these domains. Here we introduce Modular Object-Oriented Games, a Python task framework that is lightweight, flexible, customizable, and designed for use by machine learning, psychology, and neurophysiology researchers.
We use a hybrid $k\cdot p$ theory--tight-binding (HkpTB) model to describe interlayer coupling simultaneously in both Bernal and twisted graphene structures. For Bernal-aligned interfaces, HkpTB is parametrized using the full Slonczewski-Weiss-McClure (SWMcC) Hamiltonian of graphite, which is then used to refine the commonly used minimal model for twisted interfaces by deriving additional terms that reflect all details of the full SWMcC model of graphite. We find that these terms introduce some electron-hole asymmetry in the band structure of twisted bilayers, but in twistronic multilayer graphene they produce only a subtle change of the moire miniband spectra, confirming the broad applicability of the minimal model for implementing the twisted interface coupling in such systems.
What will the future of UAV cellular communications be? In this tutorial article, we address such a compelling yet difficult question by embarking on a journey from 5G to 6G and sharing a large number of realistic case studies supported by original results. We start by overviewing the status quo on UAV communications from an industrial standpoint, providing fresh updates from the 3GPP and detailing new 5G NR features in support of aerial devices. We then show the potential and the limitations of such features. In particular, we demonstrate how sub-6 GHz massive MIMO can successfully tackle cell selection and interference challenges, we showcase encouraging mmWave coverage evaluations in both urban and suburban/rural settings, and we examine the peculiarities of direct device-to-device communications in the sky. Moving on, we sneak a peek at next-generation UAV communications, listing some of the use cases envisioned for the 2030s. We identify the most promising 6G enablers for UAV communication, those expected to take the performance and reliability to the next level. For each of these disruptive new paradigms (non-terrestrial networks, cell-free architectures, artificial intelligence, reconfigurable intelligent surfaces, and THz communications), we gauge the prospective benefits for UAVs and discuss the main technological hurdles that stand in the way. All along, we distil our numerous findings into essential takeaways, and we identify key open problems worthy of further study.
Composite minimization is a powerful framework in large-scale convex optimization, based on decoupling of the objective function into terms with structurally different properties and allowing for more flexible algorithmic design. In this work, we introduce a new algorithmic framework for complementary composite minimization, where the objective function decouples into a (weakly) smooth and a uniformly convex term. This particular form of decoupling is pervasive in statistics and machine learning, due to its link to regularization. The main contributions of our work are summarized as follows. First, we introduce the problem of complementary composite minimization in general normed spaces; second, we provide a unified accelerated algorithmic framework to address broad classes of complementary composite minimization problems; and third, we prove that the algorithms resulting from our framework are near-optimal in most of the standard optimization settings. Additionally, we show that our algorithmic framework can be used to address the problem of making the gradients small in general normed spaces. As a concrete example, we obtain a nearly-optimal method for the standard $\ell_1$ setup (small gradients in the $\ell_\infty$ norm), essentially matching the bound of Nesterov (2012) that was previously known only for the Euclidean setup. Finally, we show that our composite methods are broadly applicable to a number of regression problems, leading to complexity bounds that are either new or match the best existing ones.
The formation of the most massive quasars observed at high redshifts requires extreme inflows of gas down to the length scales of the central compact object. Here, we estimate the maximum inflow rate allowed by gravity down to the surface of supermassive stars, the possible progenitors of these supermassive black holes. We use the continuity equation and the assumption of free-fall to derive maximum allowed inflow rates for various density profiles. We apply our approach to the mass-radius relation of rapidly accreting supermassive stars to estimate an upper limit to the accretion rates allowed during the formation of these objects. We find that the maximum allowed rate $\dot M_{\rm max}$ is given uniquely by the compactness of the accretor. For the compactness of rapidly accreting supermassive stars, $\dot M_{\rm max}$ is related to the stellar mass $M$ by a power-law $\dot M_{\rm max}\propto M^{3/4}$. The rates of atomically cooled halos (0.1 -- 10 M$_\odot$ yr$^{-1}$) are allowed as soon as $M\gtrsim1$ M$_\odot$. The largest rates expected in galaxy mergers ($10^4-10^5$ M$_\odot$ yr$^{-1}$) become accessible once the accretor is supermassive ($M\gtrsim10^4$ M$_\odot$). These results suggest that supermassive stars can accrete up to masses $>10^6$ M$_\odot$ before they collapse via the general-relativistic instability. At such masses, the collapse is expected to lead to the direct formation of a supermassive black hole even within metal-rich gas, resulting in a black hole seed that is significantly heavier than in conventional direct collapse models for atomic cooling halos.
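The quoted power law can be recovered from a schematic free-fall estimate (a sketch only; prefactors and the precise density profiles are those derived in the paper, and the mass-radius scaling $R\propto M^{1/2}$ of rapidly accreting supermassive stars is assumed):

```latex
\dot M_{\rm max} \sim \frac{M}{t_{\rm ff}}
  \quad\text{with}\quad
  t_{\rm ff} \sim \left(\frac{R^3}{GM}\right)^{1/2}
\;\;\Longrightarrow\;\;
\dot M_{\rm max} \sim G^{1/2}\left(\frac{M}{R}\right)^{3/2},
```

so the maximum rate depends only on the compactness $M/R$; inserting $R\propto M^{1/2}$ then gives $\dot M_{\rm max}\propto M^{3/2}/M^{3/4}=M^{3/4}$, consistent with the scaling stated above.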
We give a new proof of the uniformization theorem for the leaves of a lamination by surfaces of hyperbolic conformal type. We use a laminated version of the Ricci flow to prove the existence of a laminated Riemannian metric (smooth on the leaves, transversally continuous) with leaves of constant Gaussian curvature equal to -1, which is conformally equivalent to the original metric.
How to handle gender with machine learning is a controversial topic. A growing critical body of research has brought attention to the numerous issues transgender communities face with the adoption of current automatic gender recognition (AGR) systems. In contrast, we explore how such technologies could potentially be appropriated to support transgender practices and needs, especially in non-Western contexts like Japan. We designed a virtual makeup probe to assist transgender individuals with passing, that is, being perceived as the gender they identify as. To understand whether such an application might support the expression of transgender individuals' gender identity, we interviewed 15 individuals in Tokyo and found that, in the right context and under strict conditions, AGR-based systems could assist transgender passing.
The background magnetic-field formalism of Lattice QCD has been used recently to calculate the magnetic polarizability of the charged pion. These $n_f = 2 + 1$ numerical simulations are electro-quenched, such that the virtual sea-quarks of the QCD vacuum do not interact with the background field. To understand the impact of this, we draw on partially quenched chiral perturbation theory. In this case, the leading term proportional to $1/M_\pi$ arises at tree level from $\mathcal{L}_4$. To describe the results from lattice QCD, while maintaining the exact leading terms of chiral perturbation theory, we introduce a Pad\'e approximant designed to reproduce the slow variation observed in the lattice QCD results. Two-loop contributions are introduced to assess the systematic uncertainty associated with higher-order terms of the expansion. Upon extrapolation, the magnetic polarizability of the charged pion at the physical pion mass is found to be $\beta_{\pi^\pm}=-1.70\,(14)_{\rm stat}(25)_{\rm syst}\times 10^{-4}$ fm$^3$, in good agreement with the recent experimental measurement.
In this paper, we focus on fairness issues in unsupervised outlier detection. Traditional algorithms, without a specific design for algorithmic fairness, could implicitly encode and propagate statistical bias in data and raise societal concerns. To correct such unfairness and deliver a fair set of potential outlier candidates, we propose Deep Clustering based Fair Outlier Detection (DCFOD), which learns a good representation for utility maximization while enforcing the learnable representation to be subgroup-invariant on the sensitive attribute. Considering the coupled and reciprocal nature of clustering and outlier detection, we leverage deep clustering to discover the intrinsic cluster structure and out-of-structure instances. Meanwhile, adversarial training erases the sensitive pattern of instances for fairness adaptation. Technically, we propose an instance-level weighted representation learning strategy to enhance the joint deep clustering and outlier detection, where a dynamic weight module re-emphasizes the contributions of likely inliers while mitigating the negative impact of outliers. In experiments on eight datasets against 17 outlier detection algorithms, our DCFOD method consistently achieves superior performance on both outlier detection validity and two types of fairness notions in outlier detection.
The presence of a large-scale surface magnetic field in early-type stars leads to several unique electromagnetic phenomena producing radiation from X-ray to radio bands. Among them, the rarest type of emission is electron cyclotron maser emission (ECME), observed as periodic, circularly polarized radio pulses. The phenomenon was first discovered in the hot magnetic star CU Vir. Past observations of this star led to the consensus that the star produces only right circularly polarized ECME, suggesting that only one magnetic hemisphere takes part in the phenomenon. Here we present the first ultra-wideband (0.4$-$4 GHz) study of this star, using the upgraded Giant Metrewave Radio Telescope and the Karl G. Jansky Very Large Array, which led to the surprising discovery of ECME of both circular polarizations up to around 1.5 GHz. The GHz observations also allowed us to infer that the upper ECME cut-off frequency is $\gtrsim 5\,\mathrm{GHz}$. The sub-GHz observations led to the unexpected detection of more than two pairs of ECME pulses per rotation cycle. In addition, we report the discovery of a `giant pulse' and transient enhancements, which are potentially the first observational evidence of `centrifugal breakout' of plasma from the innermost part of the stellar magnetosphere. The stark contrast between the star's behavior at GHz and sub-GHz frequencies could either be due to propagation effects, a manifestation of varying magnetic field topology as a function of height, or a signature of an additional `ECME engine'.
Sub-Terahertz frequencies (frequencies above 100 GHz) have the potential to satisfy the unprecedented demand for data rates on the order of hundreds of Gbps for sixth-generation (6G) wireless communications and beyond. Accurate beam tracking and rapid beam selection are increasingly important, since antenna arrays with more elements generate narrower beams to compensate for the additional path loss within the first meter of propagation distance at sub-THz frequencies. Realistic channel models for above 100 GHz are needed and should include spatial consistency to model the spatial and temporal channel evolution along the user trajectory. This paper introduces recent outdoor urban microcell (UMi) propagation measurements at 142 GHz along a 39 m $\times$ 12 m rectangular route (102 m long), where adjacent receiver locations are 3 m apart. The measured power delay profiles and angular power spectra at each receiver location are used to study the spatial autocorrelation properties of various channel parameters such as shadow fading, delay spread, and angular spread along the track. Compared to the correlation distances reported in 3GPP TR 38.901 for frequencies below 100 GHz, the measured correlation distance of shadow fading at 142 GHz (3.8 m) is much shorter than the 10-13 m specified in 3GPP; the measured correlation distances of delay spread and angular spread at 142 GHz (both 12 m) are comparable to the 7-10 m specified in 3GPP. These results may guide the development of a statistical spatially consistent channel model for frequencies above 100 GHz in the UMi street canyon environment.
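As an illustration of how a correlation distance like the 3.8 m figure above is extracted, the sketch below estimates the 1/e decay distance of shadow fading from synthetic samples on a 3 m grid (the exponential correlation model and all numbers are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Synthetic shadow-fading samples (dB) along a track with 3 m spacing,
# generated from an AR(1) process with an exponential autocorrelation
# exp(-d / d_corr_true); purely illustrative numbers.
rng = np.random.default_rng(1)
spacing, n, d_corr_true = 3.0, 2000, 3.8
rho = np.exp(-spacing / d_corr_true)      # lag-1 correlation
sf = np.zeros(n)
for i in range(1, n):
    sf[i] = rho * sf[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

# Empirical autocorrelation at each lag; the correlation distance is
# the first distance at which it drops below 1/e.
def autocorr(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

lags = np.arange(1, 20)
acf = np.array([autocorr(sf, k) for k in lags])
d_corr = spacing * lags[np.argmax(acf < 1 / np.e)]
print(d_corr)
```

With 3 m sampling, the estimate is quantized to multiples of the spacing, which is why measurement campaigns trade off route length against receiver spacing.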
We derive a general expression for the absorptive part of the one-loop photon polarization tensor in a strongly magnetized quark-gluon plasma at nonzero baryon chemical potential. To demonstrate the application of the main result in the context of heavy-ion collisions, we study the effect of a nonzero baryon chemical potential on the photon emission rate. The rate and the ellipticity of photon emission are studied numerically as a function of the transverse momentum (energy) for several values of the temperature and chemical potential. When the chemical potential is small compared to the temperature, the rates of the quark and antiquark splitting processes (i.e., $q\rightarrow q +\gamma$ and $\bar{q}\rightarrow \bar{q} +\gamma$, respectively) are approximately the same. However, quark splitting gradually becomes the dominant process as the chemical potential increases. We also find that increasing the chemical potential leads to a growing total photon production rate but has only a small effect on the ellipticity of photon emission. Quark-antiquark annihilation ($q+\bar{q}\rightarrow \gamma$) also contributes to the photon production, but its contribution remains relatively small over the wide range of temperatures and chemical potentials investigated.
Topic trajectory information provides crucial insight into the dynamics of topics and their evolutionary relationships over time. This information can also help to improve our understanding of how new topics have emerged or formed through sequential or interrelated events of emergence, modification, and integration of prior topics. Nevertheless, implementations of the existing methods for topic trajectory identification are rarely available as usable software. In this paper, we present TopicTracker, a platform for topic trajectory identification and visualisation. The key feature of TopicTracker is that, given two kinds of input (a time-stamped topic profile consisting of the set of underlying topics over time, and the evolution strength matrix among them), it can represent three facets of information together: evolutionary pathways of dynamic topics, evolution states of the topics, and topic importance. TopicTracker is publicly available software implemented in R.
The mechanical strength of amyloid beta fibrils has been known to correlate with neuronal cell death. Here, we resorted to steered molecular dynamics (SMD) simulations to mechanically stretch a single S-shape amyloid beta Abeta11-42 dodecamer fibril in vacuum. It was found that the weakest sites, at which the fibril ruptured under mechanical extension, were exclusively at the interfaces of alanine and glutamic acid distributed throughout the fibril. It was also revealed that the free energy required to unfold the fibril into a long linear conformation is equivalent to $\sim$210 eV, several thousand times larger than the thermal energy at room temperature. As a consequence, in solution an even larger free energy is needed for such maximal stretching, given that amyloid beta fibrils are structurally more stable in solution due to the interplay between their hydrophobic cores and the solution's entropy.
Multivariate functional data can be intrinsically multivariate like movement trajectories in 2D or complementary like precipitation, temperature, and wind speeds over time at a given weather station. We propose a multivariate functional additive mixed model (multiFAMM) and show its application to both data situations using examples from sports science (movement trajectories of snooker players) and phonetic science (acoustic signals and articulation of consonants). The approach includes linear and nonlinear covariate effects and models the dependency structure between the dimensions of the responses using multivariate functional principal component analysis. Multivariate functional random intercepts capture both the auto-correlation within a given function and cross-correlations between the multivariate functional dimensions. They also allow us to model between-function correlations as induced by e.g.\ repeated measurements or crossed study designs. Modeling the dependency structure between the dimensions can generate additional insight into the properties of the multivariate functional process, improves the estimation of random effects, and yields corrected confidence bands for covariate effects. Extensive simulation studies indicate that a multivariate modeling approach is more parsimonious than fitting independent univariate models to the data while maintaining or improving model fit.
In this paper we prove the existence and uniqueness of strong solution of the incompressible Navier-Stokes equations with damping $\alpha (e^{\beta|u|^2}-1)u$.
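Written out, the system in question is the incompressible Navier-Stokes equations with the stated exponential damping term (standard notation assumed: $\nu$ the viscosity, $p$ the pressure, $f$ a forcing term not specified in the abstract):

```latex
\begin{aligned}
\partial_t u + (u\cdot\nabla)u - \nu\,\Delta u
  + \alpha\,\bigl(e^{\beta|u|^2}-1\bigr)u + \nabla p &= f,\\
\nabla\cdot u &= 0.
\end{aligned}
```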
The hypothesis that strange quark matter is the true ground state of matter has been investigated for almost four decades, but only a few works have explored the dynamics of binary systems of quark stars. This is partly due to the numerical challenges that need to be faced when modelling the large discontinuities at the surface of these stars. We here present a novel technique in which the EOS of a quark star is suitably rescaled to produce a smooth change of the specific enthalpy across a very thin crust. The introduction of the crust has been carefully tested by considering the oscillation properties of isolated quark stars, showing that the response of the simulated quark stars matches accurately the perturbative predictions. Using this technique, we have carried out the first fully general-relativistic simulations of the merger of quark-star binaries finding several important differences between quark-star binaries and hadronic-star binaries with the same mass and comparable tidal deformability. In particular, we find that dynamical mass loss is significantly suppressed in quark-star binaries. In addition, quark-star binaries have merger and post-merger frequencies that obey the same quasi-universal relations derived from hadron stars if expressed in terms of the tidal deformability, but not when expressed in terms of the average stellar compactness. Hence, it may be difficult to distinguish the two classes of stars if no information on the stellar radius is available. Finally, differences are found in the distributions in velocity and entropy of the ejected matter, for which quark-stars have much smaller tails. Whether these differences in the ejected matter will leave an imprint in the electromagnetic counterpart and nucleosynthetic yields remains unclear, calling for the construction of an accurate model for the evaporation of the ejected quarks into nucleons.
We introduce a new benchmark dataset, namely VinDr-RibCXR, for automatic segmentation and labeling of individual ribs from chest X-ray (CXR) scans. The VinDr-RibCXR contains 245 CXRs with corresponding ground truth annotations provided by human experts. A set of state-of-the-art segmentation models are trained on 196 images from the VinDr-RibCXR to segment and label 20 individual ribs. Our best performing model obtains a Dice score of 0.834 (95% CI, 0.810--0.853) on an independent test set of 49 images. Our study, therefore, serves as a proof of concept and baseline performance for future research.
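For reference, the Dice score reported above is the standard overlap metric $2|A\cap B|/(|A|+|B|)$; a minimal sketch for one binary mask (toy arrays, not the VinDr-RibCXR data):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Toy example: two partially overlapping masks.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, gt))  # 2*2 / (3+3) ≈ 0.667
```

For multi-label rib segmentation, the score is typically computed per rib and averaged.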
Incentive mechanism design is crucial for enabling federated learning. We deal with the clustering problem of agents contributing to a federated learning setting. Assuming agents behave selfishly, we model their interaction as a stable coalition partition problem using hedonic games, where agents and clusters are the players and coalitions, respectively. We address the following question: is there a family of hedonic games ensuring a Nash-stable coalition partition? We propose the Nash-stable set, which determines the family of hedonic games possessing at least one Nash-stable partition, and analyze the conditions for non-emptiness of the Nash-stable set. Besides, we deal with decentralized clustering. We formulate the problem as a non-cooperative game and prove the existence of a potential game.
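A Nash-stable partition, as used above, is one in which no agent strictly prefers to unilaterally move to another coalition (or to stay alone); a minimal stability check under hypothetical additively separable utilities (our own toy construction, not the paper's model):

```python
from itertools import chain

def is_nash_stable(partition, value):
    """Check Nash stability of a coalition partition: no agent strictly
    prefers joining another existing coalition or forming a singleton.
    `value(agent, coalition)` is the agent's utility for a coalition
    containing it (here hypothetical additively separable utilities)."""
    agents = list(chain.from_iterable(partition))
    for a in agents:
        home = next(c for c in partition if a in c)
        current = value(a, home)
        # Candidate deviations: every other coalition, plus going alone.
        others = [c | {a} for c in partition if c is not home] + [{a}]
        if any(value(a, c) > current for c in others):
            return False
    return True

# Toy symmetric pairwise utilities: u(a, C) = sum of weights to others in C.
w = {frozenset({0, 1}): 2, frozenset({0, 2}): -1, frozenset({1, 2}): -1}
value = lambda a, C: sum(w.get(frozenset({a, b}), 0) for b in C if b != a)

print(is_nash_stable([{0, 1}, {2}], value))  # True: no one gains by moving
```

The same check, run over all partitions, is one way to decide whether a given game's Nash-stable set is non-empty.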
Magnetic resonance imaging (MRI) acquisition, reconstruction, and segmentation are usually processed independently in the conventional MRI workflow. There are, however, significant relations among these tasks, and this procedure artificially cuts off their potential connections, which may lead to the loss of clinically important information for the final diagnosis. To exploit these potential relations for further performance improvement, a sequential multi-task joint learning network model is proposed to train a combined end-to-end pipeline in a differentiable way, aiming at exploring the mutual influence among those tasks simultaneously. Our design consists of three cascaded modules: 1) a deep sampling pattern learning module optimizes the $k$-space sampling pattern with a predetermined sampling rate; 2) a deep reconstruction module is dedicated to reconstructing MR images from the undersampled data using the learned sampling pattern; 3) a deep segmentation module encodes the MR images reconstructed by the previous module to segment the tissues of interest. The proposed model retrieves the latently interactive and cyclic relations among those tasks, from which each task is mutually beneficial. The proposed framework is verified on the MRB dataset and achieves superior performance over other SOTA methods in terms of both reconstruction and segmentation.
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in the effectiveness of deep clustering, and thus can be a principal cause of performance degradation. In this paper, we propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Our deep-learning-based representation learning method is motivated by the properties of classical spectral clustering. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We utilize an instance discrimination method in which learning individual instance classes leads to learning similarity among instances. Through detailed experiments and examination, we show that the approach can be adapted to learning a latent space for clustering. We design novel softmax-formulated decorrelation constraints for learning. In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of 81.5% and 95.4%, respectively. We also show that the softmax-formulated constraints are compatible with various neural networks.
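A schematic reading of such a softmax-formulated decorrelation constraint (our own toy construction for illustration, not the paper's exact loss): each feature dimension should "classify" itself, i.e. the softmax over its correlations with all feature dimensions should concentrate on the diagonal of the correlation matrix:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def decorrelation_loss(F, tau=0.1):
    """Toy softmax-formulated decorrelation penalty.
    F: (batch, dim) feature matrix. Columns are normalized so the
    diagonal of the correlation matrix is 1; the cross-entropy against
    the identity is low only when off-diagonal correlations are small."""
    F = F / np.linalg.norm(F, axis=0, keepdims=True)
    C = F.T @ F                       # (dim, dim) feature correlations
    P = softmax(C / tau, axis=1)      # soft self-assignment per feature
    return -np.mean(np.log(np.diag(P) + 1e-12))

rng = np.random.default_rng(0)
decorrelated = rng.normal(size=(256, 16))                  # near-orthogonal columns
correlated = np.tile(rng.normal(size=(256, 1)), (1, 16))   # identical columns
print(decorrelation_loss(decorrelated) < decorrelation_loss(correlated))  # True
```

Minimizing such a penalty alongside an instance discrimination loss pushes the learned features toward low redundancy, the property the method exploits for clustering.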
The phase stability and equilibria of carbon dioxide are investigated from 125 -- 325 K and 1 -- 10,000 atm using extensive molecular dynamics (MD) simulations and the Two-Phase Thermodynamics (2PT) method. We devise a direct approach for calculating phase diagrams in general, by considering the separate chemical potentials of the isolated phase at specific points on the P-T diagram. The unique ability of 2PT to accurately and efficiently approximate the entropy and Gibbs energy of liquids thus allows for assignment of phase boundaries from relatively short ($\mathrm{\sim}$ 100 ps) MD simulations. We validate our approach by calculating the critical properties of the flexible Elementary Physical Model 2 (FEPM2), showing good agreement with previous results. We show, however, that the incorrect description of the short-range Pauli force and the lack of molecular charge polarization leads to deviations from experiments at high pressures. We thus develop a many-body, fluctuating charge model for CO${}_{2}$, termed CO${}_{2}$-Fq, from high level quantum mechanics (QM) calculations, that accurately captures the condensed phase vibrational properties of the solid (including the Fermi resonance at 1378 cm${}^{-1}$) as well as the diffusional properties of the liquid, leading to overall excellent agreement with experiments over the entire phase diagram. This work provides an efficient computational approach for determining phase diagrams of arbitrary systems and underscores the critical role of QM charge reorganization physics in molecular phase stability.
K\"ahler's geometric approach in which relativistic fermion fields are treated as differential forms is applied in three spacetime dimensions. It is shown that the resulting continuum theory is invariant under global U($N)\otimes$U($N)$ field transformations, and has a parity-invariant mass term, both symmetries shared in common with staggered lattice fermions. The formalism is used to construct a version of the Thirring model with contact interactions between conserved Noether currents. Under reasonable assumptions about field rescaling after quantum corrections, a more general interaction term is derived, sharing the same symmetries but now including terms which entangle spin and taste degrees of freedom, which exactly coincides with the leading terms in the staggered lattice Thirring model in the long-wavelength limit. Finally, truncated versions of the theory are explored; it is found that excluding scalar and pseudoscalar components leads to a theory of six-component fermion fields describing particles with spin 1, with fermion and antifermion corresponding to states with definite circular polarisation. In the UV limit only transverse states with just four non-vanishing components propagate. Implications for the description of dynamics at a strongly interacting renormalisation-group fixed point are discussed.
Chiral optical effects are generally quantified along some specific incident directions of exciting waves (especially for extrinsic chiralities of achiral structures) or defined as direction-independent properties by averaging the responses among all structure orientations. Though of great significance for various applications, chirality extremization (maximization or minimization) with respect to incident directions or structure orientations has not been explored, especially in a systematic manner. In this study, we examine the chiral responses of open photonic structures from the perspectives of quasi-normal modes and polarization singularities of their far-field radiations. The nontrivial topology of the momentum sphere secures the existence of singularity directions along which mode radiations are either circularly or linearly polarized. When plane waves are incident along those directions, reciprocity ensures ideal maximization and minimization of optical chiralities, for corresponding mode radiations of circular and linear polarizations respectively. For directions of general elliptical polarizations, we have unveiled the subtle equality of a Stokes parameter and the circular dichroism, showing that an intrinsically chiral structure can unexpectedly exhibit no chirality at all or even chiralities of opposite handedness for different incident directions. The framework we establish can be applied to not only finite scattering bodies but also infinite periodic structures, encompassing both intrinsic and extrinsic optical chiralities. We have effectively merged two vibrant disciplines of chiral and singular optics, which can potentially trigger more optical chirality-singularity related interdisciplinary studies.
The semantic matching capabilities of neural information retrieval can ameliorate the synonymy and polysemy problems of symbolic approaches. However, the dense representations of neural models are inefficient for first-stage retrieval and thus more suitable for re-ranking. Sparse representations, either in symbolic or latent form, are more efficient with an inverted index. Taking the merits of both sparse and dense representations, we propose an ultra-high dimensional (UHD) representation scheme equipped with directly controllable sparsity. UHD's large capacity and minimal noise and interference among the dimensions allow for binarized representations, which are highly efficient for storage and search. Also proposed is a bucketing method, where the embeddings from multiple layers of BERT are selected/merged to represent diverse linguistic aspects. We test our models with MS MARCO and TREC CAR, showing that our models outperform other sparse models.
We explore the physics of topological lattice models in c-QED architectures for arbitrary coupling strength, and the possibility of using the cavity transmission as a topological marker. For this, we develop an approach combining the input-output formalism with mean-field theory, which includes self-consistency and quantum fluctuations to first order, and allows us to go beyond the small-coupling regime. We apply our formalism to the case of a fermionic Su-Schrieffer-Heeger (SSH) chain. Our findings confirm that the cavity can indeed act as a quantum sensor for topological phases, where the initial state preparation plays a crucial role. Additionally, we discuss the persistence of topological features when the coupling strength increases, in terms of an effective Hamiltonian, and calculate the entanglement entropy. Our approach can be applied to other fermionic systems, opening a route to the characterization of their topological properties in terms of experimental observables.
Hyperactive comets have high water production rates, with inferred sublimation areas of order the surface area of the nucleus. Comets 46P/Wirtanen and 103P/Hartley 2 are two examples of this cometary class. Based on observations of comet Hartley 2 by the Deep Impact spacecraft, hyperactivity appears to be caused by the ejection of water-ice grains and/or water-ice rich chunks of nucleus into the coma. These materials increase the sublimating surface area, and yield high water production rates. The historic close approach of comet Wirtanen to Earth in 2018 afforded an opportunity to test Hartley 2 style hyperactivity in a second Jupiter-family comet. We present high spatial resolution, near-infrared spectroscopy of the inner coma of Wirtanen. No evidence for the 1.5- or 2.0-$\mu$m water-ice absorption bands is found in six 0.8-2.5 $\mu$m spectra taken around perihelion and closest approach to Earth. In addition, the strong 3.0-$\mu$m water-ice absorption band is absent in a 2.0-5.3 $\mu$m spectrum taken near perihelion. Using spectroscopic and sublimation lifetime models we set constraints on the physical properties of the ice grains in the coma, assuming they are responsible for the comet's hyperactivity. We rule out pure water-ice grains of any size, given their long lifetime. Instead, the hyperactivity of the nucleus and lack of water-ice absorption features in our spectra can be explained either by icy grains on the order of 1 $\mu$m in size with a small amount of low albedo dust (greater than 0.5% by volume), or large chunks containing significant amounts of water ice.
Human movement disorders or paralysis lead to the loss of control of muscle activation and thus motor control. Functional Electrical Stimulation (FES) is an established and safe technique for contracting muscles by stimulating the skin above a muscle to induce its contraction. However, an open challenge remains in how to restore motor abilities to human limbs through FES, as it is unclear how the stimulation should be controlled. We take a robotics perspective on this problem, by developing robot learning algorithms that control the ultimate humanoid robot, the human body, through electrical muscle stimulation. Human muscles are not trivial to control as actuators because their force production is non-stationary as a result of fatigue and other internal state changes, in contrast to robot actuators, which are well-understood and stationary over broad operation ranges. We present our Deep Reinforcement Learning approach to the control of human muscles with FES, using a recurrent neural network for dynamic state representation, to overcome the unobserved elements of the behaviour of human muscles under external stimulation. We demonstrate our technique both in neuromuscular simulations and experimentally on a human. Our results show that our controller can learn to manipulate human muscles, applying appropriate levels of stimulation to achieve the given tasks while compensating for advancing muscle fatigue which arises throughout the tasks. Additionally, our technique can learn quickly enough to be implemented in real-world human-in-the-loop settings.
Let $(\kappa_n(a))_{n\geq 1}$ denote the sequence of free cumulants of a random variable $a$ in a non-commutative probability space $(\mathcal{A},\varphi)$. Based on some considerations on bipartite graphs, we provide a formula to compute the cumulants $(\kappa_n(ab+ba))_{n\geq 1}$ in terms of $(\kappa_n(a))_{n\geq 1}$ and $(\kappa_n(b))_{n\geq 1}$, where $a$ and $b$ are freely independent. Our formula expresses the $n$-th free cumulant of $ab+ba$ as a sum indexed by partitions in the set $\mathcal{Y}_{2n}$ of non-crossing partitions of the form \[ \sigma=\{B_1,B_3,\dots, B_{2n-1},E_1,\dots,E_r\}, \quad \text{with }r\geq 0, \] such that $i\in B_{i}$ for $i=1,3,\dots,2n-1$ and $|E_j|$ even for $j\leq r$. Therefore, by studying the sets $\mathcal{Y}_{2n}$ we obtain new results regarding the distribution of $ab+ba$. For instance, the size $|\mathcal{Y}_{2n}|$ is closely related to the case when $a,b$ are free Poisson random variables of parameter 1. Our formula can also be expressed in terms of cacti graphs. This graph theoretic approach suggests a natural generalization that allows us to study quadratic forms in $k$ free random variables.
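The combinatorics behind such cumulant formulas can be illustrated with the classical free moment-cumulant recursion, in which moments are sums over non-crossing partitions. The sketch below is standard theory, not the paper's new formula: with all free cumulants equal to 1 (the free Poisson of parameter 1 mentioned above) it recovers the Catalan numbers as moments.

```python
def free_moments(kappa, n_max):
    """Free moment-cumulant recursion:
    m_n = sum_{s=1}^{n} kappa_s * sum over compositions i_1+...+i_s = n-s
          of m_{i_1} * ... * m_{i_s},
    equivalent to summing products of cumulants over non-crossing partitions.
    kappa[s-1] is the s-th free cumulant; returns [m_1, ..., m_{n_max}]."""
    m = {0: 1}

    def compositions(total, slots):
        # all tuples of non-negative ints of length `slots` summing to `total`
        if slots == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in compositions(total - first, slots - 1):
                yield (first,) + rest

    for n in range(1, n_max + 1):
        acc = 0
        for s in range(1, n + 1):
            for comp in compositions(n - s, s):
                prod = 1
                for i in comp:
                    prod *= m[i]
                acc += kappa[s - 1] * prod
        m[n] = acc
    return [m[i] for i in range(1, n_max + 1)]

# free Poisson, parameter 1: all cumulants 1 -> moments are Catalan numbers
assert free_moments([1] * 6, 6) == [1, 2, 5, 14, 42, 132]
# semicircle: kappa_2 = 1, all others 0 -> even moments are Catalan numbers
assert free_moments([0, 1, 0, 0, 0, 0], 6) == [0, 1, 0, 2, 0, 5]
```

Counting $|\mathcal{Y}_{2n}|$ in the paper plays an analogous role for the anticommutator $ab+ba$.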
Deep reinforcement learning (DRL) is applied in safety-critical domains such as robotics and autonomous driving. It achieves superhuman abilities in many tasks; however, whether DRL agents can be shown to act safely is an open problem. Atari games are a simple yet challenging exemplar for evaluating the safety of DRL agents and feature a diverse portfolio of game mechanics. The safety of neural agents has been studied before using methods that either require a model of the system dynamics or an abstraction; unfortunately, these are unsuitable for Atari games because their low-level dynamics are complex and hidden inside their emulator. We present the first exact method for analysing and ensuring the safety of DRL agents for Atari games. Our method only requires access to the emulator. First, we give a set of 43 properties that characterise "safe behaviour" for 30 games. Second, we develop a method for exploring all traces induced by an agent and a game and consider a variety of sources of game non-determinism. We observe that the best available DRL agents reliably satisfy only very few properties; several critical properties are violated by all agents. Finally, we propose a countermeasure that combines a bounded explicit-state exploration with shielding. We demonstrate that our method improves the safety of all agents over multiple properties.
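A bounded explicit-state exploration with shielding, as described above, can be sketched in a few lines. The toy transition system, states, and `shield` predicate below are invented for illustration; a real deployment would drive the Atari emulator instead.

```python
from collections import deque

def explore(init, trans, unsafe, depth, shield=lambda s, a: True):
    """Bounded explicit-state exploration (BFS) over a nondeterministic
    transition system. trans[s][a] lists the possible successors of state s
    under action a (modelling game non-determinism); shield(s, a) may veto
    an action. Returns the set of unsafe states reachable within `depth`
    steps."""
    seen, frontier, violations = {init}, deque([(init, 0)]), set()
    while frontier:
        s, d = frontier.popleft()
        if unsafe(s):
            violations.add(s)
            continue
        if d == depth:
            continue
        for a, succs in trans.get(s, {}).items():
            if not shield(s, a):
                continue
            for t in succs:
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, d + 1))
    return violations

# toy 'game': state 2 is unsafe; shielding action 'b' in state 0 avoids it
trans = {0: {'a': [1], 'b': [2]}, 1: {'a': [0]}}
assert explore(0, trans, lambda s: s == 2, 5) == {2}
assert explore(0, trans, lambda s: s == 2, 5,
               shield=lambda s, a: not (s == 0 and a == 'b')) == set()
```

The shield here simply blocks actions whose explored continuations violate the property, which is the essence of combining exploration with shielding.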
The phase structure of baryonic matter is investigated with focus on the role of fluctuations beyond the mean-field approximation. The prototype test case studied is the chiral nucleon-meson model, with added comments on the chiral quark-meson model. Applications to the liquid-gas phase transition in nuclear matter and extensions to dense matter are performed. The role of vacuum fluctuations and thermal excitations is systematically explored. It is pointed out that such fluctuations tend to stabilise the hadronic phase characterised by spontaneously broken chiral symmetry, shifting the chiral restoration transition to very high densities. This stabilisation effect is shown to be further enhanced by additional dynamical fluctuations treated with functional renormalisation group methods.
Deep learning (DL) has recently attracted increasing interest to improve object type classification for automotive radar. In addition to high accuracy, it is crucial for decision making in autonomous vehicles to evaluate the reliability of the predictions; however, decisions of DL networks are non-transparent. Current DL research has investigated how uncertainties of predictions can be quantified, and in this article, we evaluate the potential of these methods for safe, automotive radar perception. In particular, we evaluate how uncertainty quantification can support radar perception under (1) domain shift, (2) corruptions of input signals, and (3) in the presence of unknown objects. We find that, in agreement with phenomena observed in the literature, deep radar classifiers are overly confident, even in their wrong predictions. This raises concerns about the use of the confidence values for decision making under uncertainty, as the model fails to notify when it cannot handle an unknown situation. Accurate confidence values would allow optimal integration of multiple information sources, e.g. via sensor fusion. We show that by applying state-of-the-art post-hoc uncertainty calibration, the quality of confidence measures can be significantly improved, thereby partially resolving the over-confidence problem. Our investigation shows that further research into training and calibrating DL networks is necessary and offers great potential for safe automotive object classification with radar sensors.
ESA's INTEGRAL space mission has achieved unique results for solar and terrestrial physics, although spacecraft operations nominally excluded the possibility to point at the Sun or the Earth. The Earth avoidance was, however, exceptionally relaxed for special occultation observations of the Cosmic X-ray Background (CXB), which on some occasions allowed the detection of strong X-ray auroral emission. In addition, the most intense solar flares can be bright enough to be detectable from outside the field of view of the main instruments. This article presents for the first time the auroral observations by INTEGRAL and reviews earlier studies of the most intense solar flares. We end by briefly summarising the studies of the Earth's radiation belts, which can be considered as another topic of serendipitous science with INTEGRAL.
Non-invasive therapeutic ultrasound methods, such as high-intensity focused ultrasound (HIFU), have limited access to tissue targets shadowed by bones or the presence of gas. This study demonstrates that an ultrasonically actuated medical needle can be used to translate nanoparticles and fluids under the action of nonlinear phenomena, potentially overcoming some limitations of HIFU. A simulation study was first conducted to study the delivery of a tracer with an ultrasonically actuated needle (33 kHz) inside a porous medium acting as a model for soft tissue. The model was then validated experimentally in different concentrations of agarose gel, showing a close match with the experimental results when diluted soot nanoparticles (diameter < 150 nm) were employed as the delivered entity. An additional simulation study demonstrated a threefold increase in the volume covered by the delivered agent in liver under a constant injection rate, when compared to delivery without ultrasound. This method, if developed to its full potential, could serve as a cost-effective way to improve the safety and efficacy of drug therapies by maximizing the concentration of delivered entities within e.g. a small lesion, while minimizing exposure outside the lesion.
Cone Beam Computed Tomography (CBCT) is a well-established method of CT imaging. In particular, low-dose CT imaging is one option for protecting patients' organs during a CT scan, and can therefore serve as an alternative to standard-dose CT imaging. However, low-dose CT imaging suffers from a fundamental noise problem compared to standard-dose CT imaging, and many attempts have been made to remove this noise. Most artificial-intelligence methods involve many parameters and unexplained layers, or are essentially black-box methods. Our research addresses these issues. Our approach has fewer parameters than usual methods, combining iterative learnable bilateral filtering with deep reinforcement learning, and we apply this iterative learnable filtering in both the sinogram and reconstructed-volume domains. The method and its results are far more explainable than other black-box AI approaches. We applied the method to helical Cone Beam Computed Tomography, the recent CBCT trend, and tested it on two abdominal scans (L004, L014) from the Mayo Clinic TCIA dataset. Our approach outperforms previous methods in both results and performance.
In this paper, we consider the energy conservation and regularity of the weak solution $u$ to the Navier-Stokes equations in the endpoint case. We first construct a divergence-free field $u(t,x)$ which satisfies $\lim_{t\to T}\sqrt{T-t}||u(t)||_{BMO}<\infty$ and $\lim_{t\to T}\sqrt{T-t}||u(t)||_{L^\infty}=\infty$ to demonstrate that the Type II singularity is admissible in the endpoint case $u\in L^{2,\infty}(BMO)$. Secondly, we prove that if a suitable weak solution $u(t,x)$ satisfies $||u||_{L^{2,\infty}([0,T];BMO(\Omega))}<\infty$ for arbitrary $\Omega\subseteq\mathbb{R}^3$, then the local energy equality is valid on $[0,T]\times\Omega$. As a corollary, we also prove that $||u||_{L^{2,\infty}([0,T];BMO(\mathbb{R}^3))}<\infty$ implies the global energy equality on $[0,T]$. Thirdly, we show that as the solution $u$ approaches a finite blowup time $T$, the norm $||u(t)||_{BMO}$ must blow up at a rate faster than $\frac{c}{\sqrt{T-t}}$ with some absolute constant $c>0$. Furthermore, we prove that if $||u_3||_{L^{2,\infty}([0,T];BMO(\mathbb{R}^3))}=M<\infty$, then there exists a small constant $c_M$ depending on $M$ such that if $||u_h||_{L^{2,\infty}([0,T];BMO(\mathbb{R}^3))}\leq c_M$ then $u$ is regular on $(0,T]\times\mathbb{R}^3$.
We propose a series-based nonparametric specification test for a regression function when data are spatially dependent, the `space' being of a general economic or social nature. Dependence can be parametric, parametric with increasing dimension, semiparametric or any combination thereof, thus covering a vast variety of settings. These include spatial error models of varying types and levels of complexity. Under a new smooth spatial dependence condition, our test statistic is asymptotically standard normal. To prove the latter property, we establish a central limit theorem for quadratic forms in linear processes in an increasing dimension setting. Finite sample performance is investigated in a simulation study, with a bootstrap method also justified and illustrated, and empirical examples illustrate the test with real-world data.
Inelastic neutron scattering (INS) is a key method for studying magnetic excitations in spin systems, including molecular spin clusters. The method has significantly advanced in recent years and now permits probing the scattering intensity as a function of the energy transfer and the momentum-transfer vector Q. It was recently shown that high molecular symmetry facilitates the analysis of spectra. Point-group symmetry imposes selection rules in isotropic as well as anisotropic spin models. Furthermore, the Q-dependence of the INS intensity may be completely determined by the point-group symmetry of the states involved in a transition, thereby affording a clear separation of dynamics (energies, transition strengths) and geometrical features (Q-dependencies). The present work addresses this issue for anisotropic spin models. We identify a number of cases where the Q-dependence is completely fixed by the point-group symmetry. For six- and eight-membered planar spin rings and two polyhedra (the cube and the icosahedron) we tabulate and plot the corresponding powder-averaged universal intensity functions. The outlined formalism straightforwardly applies to other highly-symmetric systems and should be useful for future analyses of INS spectra by focusing on those features that contain information on either spin dynamics or the point-group symmetry of states.
We report the detection of CH$_3$OH emission in comet 46P/Wirtanen on UT 2018 December 8 and 9 using the Atacama Compact Array (ACA), part of the Atacama Large Millimeter/Submillimeter Array (ALMA). These interferometric measurements of CH$_3$OH along with continuum emission from dust probed the inner coma ($<$2000 km from the nucleus) of 46P/Wirtanen approximately one week before its closest approach to Earth ($\Delta$ = 0.089 -- 0.092 au), revealing rapidly varying and anisotropic CH$_3$OH outgassing during five separate ACA executions between UT 23:57 December 7 and UT 04:55 December 9, with a clear progression in the spectral line profiles over a timescale of minutes. We present spectrally integrated flux maps, production rates, rotational temperatures, and spectral line profiles of CH$_3$OH during each ACA execution. The variations in CH$_3$OH outgassing are consistent with Wirtanen's 9 hr nucleus rotational period derived from optical and millimeter wavelength measurements and thus are likely coupled to the changing illumination of active sites on the nucleus. The consistent blue offset of the line center indicates enhanced CH$_3$OH sublimation from the sunward hemisphere of the comet, perhaps from icy grains. These results demonstrate the exceptional capabilities of the ACA for time-resolved measurements of comets such as 46P/Wirtanen.
Direct numerical simulations have been performed for heat and momentum transfer in internally heated turbulent shear flow with constant bulk mean velocity and temperature, $u_{b}$ and $\theta_{b}$, between parallel, isothermal, no-slip and permeable walls. The wall-normal transpiration velocity on the walls $y=\pm h$ is assumed to be proportional to the local pressure fluctuations, i.e. $v=\pm \beta p/\rho$ (Jim\'enez et al., J. Fluid Mech., vol. 442, 2001, pp.89-117). The temperature is supposed to be a passive scalar, and the Prandtl number is set to unity. Turbulent heat and momentum transfer in permeable-channel flow for $\beta u_{b}=0.5$ has been found to exhibit distinct states depending on the Reynolds number $Re_b=2h u_b/\nu$. At $Re_{b}\lesssim 10^4$, the classical Blasius law of the friction coefficient and its similarity to the Stanton number, $St\approx c_{f}\sim Re_{b}^{-1/4}$, are observed, whereas at $Re_{b}\gtrsim 10^4$, the so-called ultimate scaling, $St\sim Re_b^0$ and $c_{f}\sim Re_b^0$, is found. The ultimate state is attributed to the appearance of large-scale intense spanwise rolls with the length scale of $O(h)$ arising from the Kelvin-Helmholtz type of shear-layer instability over the permeable walls. The large-scale rolls can induce large-amplitude velocity fluctuations of $O(u_b)$ as in free shear layers, so that the Taylor dissipation law $\epsilon\sim u_{b}^{3}/h$ (or equivalently $c_{f}\sim Re_b^0$) holds. In spite of strong turbulence promotion there is no flow separation, and thus large-amplitude temperature fluctuations of $O(\theta_b)$ can also be induced similarly. As a consequence, the ultimate heat transfer is achieved, i.e., a wall heat flux scales with $u_{b}\theta_{b}$ (or equivalently $St\sim Re_b^0$) independent of thermal diffusivity, although the heat transfer on the walls is dominated by thermal conduction.
This paper considers the problem of nonstationary process monitoring under frequently varying operating conditions. Traditional approaches generally misidentify the normal dynamic deviations as faults and thus lead to high false alarms. Besides, they generally consider a single, relatively steady operating condition and suffer from the catastrophic forgetting issue when learning successive operating conditions. In this paper, recursive cointegration analysis (RCA) is first proposed to distinguish the real faults from normal system changes, where the model is updated once a new normal sample arrives and can adapt to slow change of the cointegration relationship. Based on the long-term equilibrium information extracted by RCA, the remaining short-term dynamic information is monitored by recursive principal component analysis (RPCA). Thus a comprehensive monitoring framework is built. When the system enters a new operating condition, the RCA-RPCA model is rebuilt to deal with the new condition. Meanwhile, elastic weight consolidation (EWC) is employed to settle the `catastrophic forgetting' issue inherent in RPCA, where significant information of influential parameters is enhanced to avoid the abrupt performance degradation for similar modes. The effectiveness of the proposed method is illustrated by a practical industrial system.
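The recursive updating at the heart of such schemes can be illustrated with the multivariate Welford recursion for the sample mean and covariance — a standard building block shown here for illustration only; the paper's RCA/RPCA updates with forgetting factors and EWC terms are more involved.

```python
import numpy as np

def welford_update(mean, M2, x, n):
    """One recursive update of the sample mean and scatter matrix M2 with a
    new observation x (multivariate Welford recursion); n is the sample
    count before the update. The population covariance after n samples is
    M2 / n. In a recursive-PCA setting, the leading eigenvectors of the
    updated covariance would give the adapted loadings."""
    n += 1
    delta = x - mean
    mean = mean + delta / n
    M2 = M2 + np.outer(delta, x - mean)   # uses old-mean and new-mean deltas
    return mean, M2, n

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
mean, M2, n = np.zeros(3), np.zeros((3, 3)), 0
for x in X:                               # one sample at a time, as in RPCA
    mean, M2, n = welford_update(mean, M2, x, n)
assert np.allclose(mean, X.mean(axis=0))
assert np.allclose(M2 / n, np.cov(X.T, bias=True))
```

The single-pass update is what lets the monitoring model adapt as each new normal sample arrives, without storing the full history.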
In this paper, we prove that a Sasakian pseudo-metric manifold which admits an $\eta$-Ricci soliton is an $\eta$-Einstein manifold, and if the potential vector field of the $\eta$-Ricci soliton is not a Killing vector field then the manifold is $\mathcal{D}$-homothetically fixed, and the vector field leaves the structure tensor field invariant. Next, we prove that a $K$-contact pseudo-metric manifold with a gradient $\eta$-Ricci soliton metric is $\eta$-Einstein. Moreover, we study contact pseudo-metric manifolds admitting an $\eta$-Ricci soliton with a potential vector field pointwise collinear with the Reeb vector field. Finally, we study gradient $\eta$-Ricci solitons on $(\kappa, \mu)$-contact pseudo-metric manifolds.
Self-supervised learning for depth estimation possesses several advantages over supervised learning. The benefits of no need for ground-truth depth, online fine-tuning, and better generalization with unlimited data attract researchers to seek self-supervised solutions. In this work, we propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions. A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network in learning disparity maps end-to-end without ground-truth depth information. To train this framework, we build a new multiscopic dataset consisting of synthetic images rendered by 3D engines and real images captured by real cameras. After being trained with only the synthetic images, our network can perform well in unseen outdoor scenes. Our experiment shows that our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset and is comparable to supervised methods when generalized to unseen data. Our source code and dataset are available at https://sites.google.com/view/multiscopic.
Resource Public Key Infrastructure (RPKI) is vital to the security of inter-domain routing. However, RPKI enables Regional Internet Registries (RIRs) to unilaterally takedown IP prefixes - indeed, such attacks have been launched by nation-state adversaries. The threat of IP prefix takedowns is one of the factors hindering RPKI adoption. In this work, we propose the first distributed RPKI system, based on threshold signatures, that requires the coordination of a number of RIRs to make changes to RPKI objects; hence, preventing unilateral prefix takedown. We perform extensive evaluations using our implementation demonstrating the practicality of our solution. Furthermore, we show that our system is scalable and remains efficient even when RPKI is widely deployed.
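The threshold principle underlying such a design can be illustrated with Shamir t-of-n secret sharing, where no single party (here, an RIR) can act alone. This is only an illustrative stand-in: the paper's construction uses threshold signatures over RPKI objects, not plain secret sharing.

```python
import random

# Shamir t-of-n secret sharing over a prime field (illustrative sketch).
P = 2**127 - 1  # a Mersenne prime, used here as the field modulus

def split(secret, t, n, seed=0):
    """Split `secret` into n shares such that any t of them reconstruct it:
    evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P); needs >= t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = split(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 suffice
assert reconstruct(shares[1:4]) == 123456789
```

In a threshold-signature variant, the shared value would be a signing key that is never reconstructed in one place; the parties instead combine partial signatures, which is what prevents a unilateral takedown.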
Dujmovi\'{c}, Joret, Micek, Morin, Ueckerdt and Wood recently showed some attractive graph product structure theorems for planar graphs in [Planar graphs have bounded queue-number, Journal of the ACM, Volume 67, Issue 4, Article No. 22, August 2020]. Using the product structure, they proved that planar graphs have queue-number at most $48$; in [Planar graphs have bounded nonrepetitive chromatic number, Advances in Combinatorics, 5, 11 pp., 2020], the authors proved that planar graphs have nonrepetitive chromatic number at most $768$. In this paper, again using a product structure theorem, we improve the upper bound on the queue-number of planar graphs to $27$ and on the nonrepetitive chromatic number to $320$. We also study powers of trees: we show a graph product structure theorem for the $k$-th power $T^k$ of a tree $T$, and use it to give an upper bound on the nonrepetitive chromatic number of $T^k$. We also give an asymptotically tight upper bound on the queue-number of $T^k$.
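The object studied in the last part, the $k$-th power $T^k$ of a tree, is easy to compute explicitly: vertices are adjacent in $T^k$ iff their tree distance is between $1$ and $k$. A small sketch (illustrative only, unrelated to the paper's product structure proof):

```python
from collections import deque, defaultdict

def tree_power(adj, k):
    """Return the adjacency sets of T^k: u and v are adjacent in the k-th
    power iff their distance in the tree T is between 1 and k. Computed by a
    depth-limited BFS from every vertex (fine for small illustrative trees;
    adj maps each vertex to its list of tree neighbours)."""
    power = defaultdict(set)
    for src in adj:
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            if dist[u] == k:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    power[src].add(v)
                    power[v].add(src)
                    q.append(v)
    return dict(power)

# path on 5 vertices 0-1-2-3-4 (a tree); in its square, vertices at
# distance <= 2 become adjacent
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
T2 = tree_power(path, 2)
assert T2[0] == {1, 2} and T2[2] == {0, 1, 3, 4}
```

The paper's contribution is structural: expressing such powers via graph products to bound their nonrepetitive chromatic number and queue-number.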
We study sets of recurrence, in both measurable and topological settings, for actions of $(\mathbb{N},\times)$ and $(\mathbb{Q}^{>0},\times)$. In particular, we show that autocorrelation sequences of positive functions arising from multiplicative systems have positive additive averages. We also give criteria for when sets of the form $\{(an+b)^{\ell}/(cn+d)^{\ell}: n \in \mathbb{N}\}$ are sets of multiplicative recurrence, and consequently we recover two recent results in number theory regarding completely multiplicative functions and the Omega function.
The interaction between magnetic and acoustic excitations has recently inspired many interdisciplinary studies ranging from fundamental physics to circuit implementation. Specifically, the exploration of their coherent interconversion enabled via the magnetoelastic coupling opens a new playground combining straintronics and spintronics, and provides a unique platform for building up on-chip coherent information processing networks with miniaturized magnonic and acoustic devices. In this Perspective, we will focus on the recent progress of magnon-phonon coupled dynamic systems, including materials, circuits, imaging and new physics. In particular, we highlight the unique features such as nonreciprocal acoustic wave propagation and strong coupling between magnons and phonons in magnetic thin-film systems, which provide a unique platform for their coherent manipulation and transduction. We will also review the frontier of surface acoustic wave resonators in coherent quantum transduction and discuss how novel acoustic circuit design can be applied in microwave spintronics.
We conjecture and verify a set of universal relations between global parameters of hot and fast-rotating compact stars, including a relation connecting the masses of the mass-shedding (Kepler) and static configurations. We apply these relations to the GW170817 event by adopting the scenario in which a hypermassive compact star remnant formed in a merger evolves into a supramassive compact star that collapses into a black hole once the stability line for such stars is crossed. We deduce an upper limit on the maximum mass of static, cold neutron stars $ 2.15^{+0.10}_{-0.07}\le M^\star_{\mathrm{TOV}} \le 2.24^{+0.12}_{-0.10} $ for the typical range of entropy per baryon $2 \le S/A \le 3$ and electron fraction $Y_e = 0.1$ characterizing the hot hypermassive star. Our result implies that accounting for the finite temperature of the merger remnant relaxes previously derived constraints on the value of the maximum mass of a cold, static compact star.
For distant iris recognition, a long focal length lens is generally used to ensure the resolution of iris images, which reduces the depth of field and leads to potential defocus blur. To accommodate users at different distances, it is necessary to control focus quickly and accurately. For users in motion, it is desirable to maintain correct focus on the iris area continuously. In this paper, we introduce a novel rapid autofocus camera for active refocusing of the iris area of moving objects using a focus-tunable lens. Our end-to-end computational algorithm can predict the best focus position from a single blurred image and generate a lens diopter control signal automatically. This scene-based active manipulation method enables real-time focus tracking of the iris area of a moving object. We built a testing bench to collect real-world focal stacks for evaluation of the autofocus methods. Our camera has reached an autofocus speed of over 50 fps. The results demonstrate the advantages of our proposed camera for biometric perception in static and dynamic scenes. The code is available at https://github.com/Debatrix/AquulaCam.
We minimize the stray electric field in a linear Paul trap quickly and accurately, by applying interferometry pulse sequences to a trapped ion optical qubit. The interferometry sequences are sensitive to the change of ion equilibrium position when the trap stiffness is changed, and we use this to determine the stray electric field. The simplest pulse sequence is a two-pulse Ramsey sequence, and longer sequences with multiple pulses offer a higher precision. The methods allow the stray field strength to be minimized beyond state-of-the-art levels, with only modest experimental requirements. Using a sequence of nine pulses we reduce the 2D stray field strength to $(10.5\pm0.8)\,\mathrm{mV\,m^{-1}}$ in $11\,\mathrm{s}$ measurement time. The pulse sequences are easy to implement and automate, and they are robust against laser detuning and pulse area errors. We use interferometry sequences with different lengths and precisions to measure the stray field with an uncertainty below the standard quantum limit. This marks a real-world case in which quantum metrology offers a significant enhancement. Also, we minimize micromotion in 2D using a single probe laser, by using an interferometry method together with the resolved sideband method; this is useful for experiments with restricted optical access. Furthermore, a technique presented in this work is related to quantum protocols for synchronising clocks; we demonstrate these protocols here.
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach. A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory. To support learning from pre-recorded logs, we assume that the world is on rails, meaning neither the agent nor its actions influence the environment. This assumption greatly simplifies the learning problem, factorizing the dynamics into a nonreactive world model and a low-dimensional and compact forward model of the ego-vehicle. Our approach computes action-values for each training trajectory using a tabular dynamic-programming evaluation of the Bellman equations; these action-values in turn supervise the final vision-based driving policy. Despite the world-on-rails assumption, the final driving policy acts well in a dynamic and reactive world. At the time of writing, our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data. Our method is also an order of magnitude more sample-efficient than state-of-the-art model-free reinforcement learning techniques on navigational tasks in the ProcGen benchmark.
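The tabular dynamic-programming evaluation of the Bellman equations mentioned above can be sketched as follows. This is a toy illustration only: the 3-state, 2-action MDP, its transitions, and its rewards are invented here, and the paper's actual ego-vehicle forward model is not reproduced.

```python
import numpy as np

# Toy sketch of tabular dynamic-programming evaluation of the Bellman
# equations: value iteration on an invented 3-state, 2-action MDP.
# States, rewards, and transitions are illustrative only.
n_states, gamma = 3, 0.9
P = np.array([[1, 2], [2, 0], [0, 1]])              # P[s, a] = deterministic next state
R = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 0.5]])  # R[s, a] = reward

V = np.zeros(n_states)
for _ in range(500):            # repeated Bellman optimality backups
    Q = R + gamma * V[P]        # action-values Q(s, a)
    V = Q.max(axis=1)           # greedy value update
```

In the paper's setting, action-values computed this way over recorded trajectories then serve as supervision targets for the vision-based policy.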
We present new radio observations of the binary neutron star merger GW170817 carried out with the Karl G. Jansky Very Large Array (VLA) more than 3\,yrs after the merger. Our combined dataset is derived by co-adding more than $\approx32$\,hours of VLA time on-source, and as such provides the deepest combined observation (rms sensitivity $\approx 0.99\,\mu$Jy) of the GW170817 field obtained to date at 3\,GHz. We find no evidence for a late-time radio re-brightening at a mean epoch of $t\approx 1200$\,d since merger, in contrast to a $\approx 2.1\,\sigma$ excess observed at X-ray wavelengths at the same mean epoch. Our measurements agree with expectations from the post-peak decay of the radio afterglow of the GW170817 structured jet. Using these results, we constrain the parameter space of models that predict a late-time radio re-brightening possibly arising from the high-velocity tail of the GW170817 kilonova ejecta, which would dominate the radio and X-ray emission years after the merger (once the structured jet afterglow fades below detection level). Our results point to a steep energy-speed distribution of the kilonova ejecta (with energy-velocity power law index $\alpha \gtrsim 5$). We suggest possible implications of our radio analysis, when combined with the recent tentative evidence for a late-time re-brightening in the X-rays, and highlight the need for continued radio-to-X-ray monitoring to test different scenarios.
We have obtained new observations of the absorption system at $z_\mathrm{abs}=0.48$ toward QSO Q0454-220, which we use to constrain its chemical and physical conditions. The system features metal-enriched gas and previously unknown low-metallicity gas detected $\sim 200 \, \mathrm{km \, s^{-1}}$ blueward of the metal-enriched gas. The low-metallicity gas is detected in multiple Lyman series lines but is not detected in any metal lines. Our analysis includes low-ionization (e.g., Fe II, Mg II) metal lines, high-ionization (e.g., C IV, O VI, N V) metal lines, and several Lyman series lines. We use new UV spectra taken with HST/COS along with data taken from HST/STIS, Keck/HIRES, and VLT/UVES. We find that the absorption system can be explained with a photoionized low-ionization phase with $\mathrm{[Fe/H]} \sim -0.5$ and $n_\mathrm{H} \sim 10^{-2.3} \, \mathrm{cm}^{-3}$, a photoionized high-ionization phase with a conservative lower limit of $-3.3 < \mathrm{[Fe/H]}$ and $n_\mathrm{H} \sim 10^{-3.8} \, \mathrm{cm}^{-3}$, and a low-metallicity component with a conservative upper limit of $\mathrm{[Fe/H]} < -2.5$ that may be photoionized or collisionally ionized. We suggest that the low-ionization phase may be due to cold-flow accretion via large-scale filamentary structure or due to recycled accretion while the high-ionization phase is the result of ancient outflowing material from a nearby galaxy. The low-metallicity component may come from pristine accretion. The velocity spread and disparate conditions among the absorption system's components suggest a combination of gas arising near galaxies along with gas arising from intergroup material.
Minimally invasive surgery mainly consists of a series of sub-tasks, which can be decomposed into basic gestures or contexts. As a prerequisite of autonomous operation, surgical gesture recognition can assist motion planning and decision-making, and build up context-aware knowledge to improve the control quality of surgical robots. In this work, we aim to develop an effective surgical gesture recognition approach with an explainable feature extraction process. A Bidirectional Multi-Layer independently RNN (BML-indRNN) model is proposed in this paper, while spatial feature extraction is implemented via fine-tuning of a Deep Convolutional Neural Network (DCNN) model constructed based on the VGG architecture. To eliminate the black-box effects of the DCNN, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed. It provides explainable results by showing the regions of the surgical images that have a strong relationship with the surgical gesture classification results. The proposed method was evaluated on the suturing task with data obtained from the publicly available JIGSAWS database. Comparative studies were conducted to verify the proposed framework. Results indicated that the testing accuracy for the suturing task based on our proposed method is 87.13%, which outperforms most of the state-of-the-art algorithms.
R Coronae Borealis (RCB) stars are rare hydrogen-deficient carbon-rich variable supergiants thought to be the result of dynamically unstable white dwarf mergers. We attempt to model RCBs through all the relevant timescales by simulating a merger event in Octo-tiger, a 3D adaptive mesh refinement (AMR) hydrodynamics code, and mapping the post-merger object into MESA, a 1D stellar evolution code. We then post-process the nucleosynthesis on a much larger nuclear reaction network to study the enhancement of s-process elements. We present models that match observations or previous studies in most surface abundances, isotopic ratios, early evolution and lifetimes. We also observe similar mixing behavior to previous modeling attempts, which results in the partial He-burning products visible on the surface in observations. However, we do note that our sub-solar models lack any enhancement in s-process elements, which we attribute to a lack of hydrogen in the envelope. We also find that the $^{16}$O/$^{18}$O isotopic ratio is very sensitive to the initial hydrogen abundance and moves outside the acceptable range for hydrogen mass fractions greater than $10^{-4}$.
We report a detailed theoretical study of a coherent macroscopic quantum-mechanical phenomenon - quantum beats of a single magnetic fluxon trapped in a two-cell SQUID of high kinetic inductance. We calculate numerically and analytically the low-lying energy levels of the fluxon, and explore their dependence on externally applied magnetic fields. The quantum dynamics of the fluxon shows quantum beats originating from its coherent quantum tunneling between the SQUID cells. We analyze the experimental setup based on a three-cell SQUID, allowing for time-resolved measurements of quantum beats of the fluxon.
We present an approach for continual learning (CL) that is based on fully probabilistic (or generative) models of machine learning. In contrast to, e.g., GANs that are "generative" in the sense that they can generate samples, fully probabilistic models aim at modeling the data distribution directly. Consequently, they provide functionalities that are highly relevant for continual learning, such as density estimation (outlier detection) and sample generation. As a concrete realization of generative continual learning, we propose Gaussian Mixture Replay (GMR). GMR is a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities. Relying on the MNIST, FashionMNIST and Devanagari benchmarks, we first demonstrate unsupervised task boundary detection by GMM density estimation, which we also use to reject untypical generated samples. In addition, we show that GMR is capable of class-conditional sampling in the way of a cGAN. Lastly, we verify that GMR, despite its simple structure, achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
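A minimal sketch of the two roles a GMM plays in the GMR approach, density estimation for outlier/task-boundary detection and sampling for replay, might look as follows. The mixture parameters here are fixed by hand rather than learned, purely for illustration; the paper fits them to image data.

```python
import numpy as np

# Minimal sketch of the Gaussian Mixture Replay idea: a GMM density is
# used for outlier / task-boundary detection, and sampling from the
# mixture generates replay data.  Parameters are hand-set, not learned.
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [5.0, 5.0]])    # two components, 2-D "data"
var, weights = 1.0, np.array([0.5, 0.5])

def log_density(x):
    # log-density of an isotropic Gaussian mixture at point x
    d2 = ((x - means) ** 2).sum(axis=1)
    comp = weights * np.exp(-0.5 * d2 / var) / (2 * np.pi * var)
    return np.log(comp.sum())

def sample(n):
    # replay generation: pick a component, then draw from its Gaussian
    ks = rng.choice(len(weights), size=n, p=weights)
    return means[ks] + np.sqrt(var) * rng.standard_normal((n, 2))
```

A point far from all components gets a low log-density (flagged as untypical), while `sample` provides pseudo-rehearsal data in the spirit of the paper.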
We report measurements of the parity-conserving beam-normal single-spin elastic scattering asymmetries $B_n$ on $^{12}$C and $^{27}$Al, obtained with an electron beam polarized transverse to its momentum direction. These measurements add an additional kinematic point to a series of previous measurements of $B_n$ on $^{12}$C and provide a first measurement on $^{27}$Al. The experiment utilized the Qweak apparatus at Jefferson Lab with a beam energy of 1.158 GeV. The average lab scattering angle for both targets was 7.7 degrees, and the average $Q^2$ for both targets was 0.02437 GeV$^2$ (Q=0.1561 GeV). The asymmetries are $B_n$ = -10.68 $\pm$ 0.90 (stat) $\pm$ 0.57 (syst) ppm for $^{12}$C and $B_n$ = -12.16 $\pm$ 0.58 (stat) $\pm$ 0.62 (syst) ppm for $^{27}$Al. The results are consistent with theoretical predictions, and are compared to existing data. When scaled by Z/A, the Q-dependence of all the far-forward angle (theta < 10 degrees) data from $^{1}$H to $^{27}$Al can be described by the same slope out to $Q \approx 0.35$ GeV. Larger-angle data from other experiments in the same Q range are consistent with a slope about twice as steep.
Multilayer capacitors (MLCs) are considered the most promising refrigerant elements for designing and developing electrocaloric cooling devices. Recently, the heat transfer within these MLCs has received attention. However, their heat exchange with the surrounding environment has been poorly addressed, if at all. In this work, we measure by infrared thermography the temperature change versus time in four different heat exchange configurations. Depending on the configuration, Newtonian and non-Newtonian regimes with their corresponding Biot numbers are determined, providing useful thermal characteristics. Indeed, in the case of large-area thermal pad contacts, heat transfer coefficients up to 3400 W m$^{-2}$ K$^{-1}$ are obtained, showing that standard MLCs already meet the needs for designing efficient prototypes. We also determine the ideal Brayton cooling power in the case of thick-wire contacts, which varies between 3.4 mW and 9.8 mW for operating frequencies from 0.25 Hz to 1 Hz. While only heat conduction is considered here, our work provides design rules for improving heat exchange in future devices.
Complex plasmas consist of microparticles embedded in a low-temperature plasma and allow investigating various effects by tracing the motion of these microparticles. Dust density waves appear in complex plasmas as self-excited acoustic waves in the microparticle fluid at low neutral gas pressures. Here we show that various properties of these waves depend on the position of the microparticle cloud with respect to the plasma sheath and explain this finding in terms of the underlying ion-drift instability. These results may be helpful in better understanding the propagation of dust density waves in complex plasmas and beyond, for instance, in astrophysical dusty plasmas.
In the second of two papers on the peculiar interacting transient AT 2016jbu, we present the bolometric lightcurve, identification and analysis of the progenitor candidate, as well as preliminary modelling to help elucidate the nature of this event. We identify the progenitor candidate for AT 2016jbu in quiescence, and find it to be consistent with a $\sim$20 M$_{\odot}$ yellow hypergiant surrounded by a dusty circumstellar shell. We see evidence for significant photometric variability in the progenitor, as well as strong H$\alpha$ emission consistent with pre-existing circumstellar material. The age of the resolved stellar population surrounding AT 2016jbu, as well as integral-field unit spectra of the region support a progenitor age of >16 Myr, again consistent with a progenitor mass of $\sim$20 M$_{\odot}$. Through a joint analysis of the velocity evolution of AT 2016jbu, and the photospheric radius inferred from the bolometric lightcurve, we find that the transient is consistent with two successive outbursts or explosions. The first outburst ejected a shell of material with velocity 650 km s$^{-1}$, while the second more energetic event ejected material at 4500 km s$^{-1}$. Whether the latter is the core-collapse of the progenitor remains uncertain, as the required ejecta mass is relatively low (few tenths of M$_{\odot}$). We also place a restrictive upper limit on the ejected $^{56}$Ni mass of <0.016 M$_{\odot}$. Using the BPASS code, we explore a wide range of possible progenitor systems, and find that the majority of these are in binaries, some of which are undergoing mass transfer or common envelope evolution immediately prior to explosion. Finally, we use the SNEC code to demonstrate that the low-energy explosion of some of these systems together with sufficient CSM can reproduce the overall morphology of the lightcurve of AT 2016jbu.
In the present paper, we introduce a concept of Ricci curvature on hypergraphs for a nonlinear Laplacian. We prove that our definition of the Ricci curvature is a generalization of the Lin-Lu-Yau coarse Ricci curvature for graphs to hypergraphs. We also show a lower bound on the nonzero eigenvalues of the Laplacian, a gradient estimate for the heat flow, and a diameter bound of Bonnet-Myers type for our curvature notion. This research is a step toward understanding how the nonlinearity of the Laplacian gives rise to the complexity of curvature notions.
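For orientation, the graph notion being generalized, the Lin-Lu-Yau coarse Ricci curvature, can be written (in one common normalization; the paper's exact conventions may differ) as

```latex
\kappa(x,y) \;=\; \lim_{\alpha \to 1} \frac{\kappa_\alpha(x,y)}{1-\alpha},
\qquad
\kappa_\alpha(x,y) \;=\; 1 - \frac{W_1\!\left(\mu_x^{\alpha}, \mu_y^{\alpha}\right)}{d(x,y)},
```

where $\mu_x^{\alpha}$ is the $\alpha$-lazy random walk measure at vertex $x$ and $W_1$ denotes the 1-Wasserstein (transportation) distance.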
The Jahn-Teller theorem constitutes one of the most popular and stringent concepts, applicable to all fields of chemistry. In open-shell transition element chemistry and physics, 3d4, 3d9, and 3d7 (low-spin) configurations in octahedral complexes serve as particularly illustrative and firm examples, where a striking change (distortion) in local geometry is associated with a substantial reduction of electronic energy. However, there has been a lasting debate about the fact that the octahedra are found to exclusively elongate (at least for eg electrons). Against this background, the title compound displays two marked features: (1) the octahedron of oxygen atoms around Os6+ (d2) is drastically compressed, in contrast to standard JT expectations, and (2) the splitting of the t2g set induced by this compression is extreme, such that a diamagnetic ground state results. What we see is evidently a Jahn-Teller distortion resulting in a compression of the respective octahedron and acting on the t2g set of orbitals. Both these issues are unprecedented. Notably, the splitting into a lower dxy orbital (hosting two d electrons with opposite spin) and two higher dxz and dyz orbitals is so large that, for the first time, Hund's coupling for t2g electrons is overcome. We show that these effects are not forced by structural frustration; the structure offers sufficient space for Os to shift the apical oxygen atoms to a standard distance. Local electronic effects appear to be responsible instead. The relevance of these findings is far reaching, since they provide insights into the hierarchy of perturbations defining the ground states of open-shell electronic systems. The system studied here offers substantially more structural and compositional degrees of freedom, such that a configuration could form that enables Os6+ to adopt its apparently genuine diamagnetic ground state.
There are several methods for model selection in cosmology, which pursue at least two major goals: finding the correct model, or predicting well. In this work we discuss, through a study of well-known model selection methods such as the Akaike information criterion (AIC), Bayesian information criterion (BIC), deviance information criterion (DIC) and Bayesian evidence, how these different goals are pursued in each paradigm. We also apply another method for model selection that is less often seen in the cosmological literature, the cross-validation method. Using these methods we compare two different scenarios in cosmology, the $\Lambda$CDM model and dynamical dark energy. We show that the methods lead to different results in model selection. While BIC and Bayesian evidence overrule the dynamical dark energy scenarios with 2 or 3 extra degrees of freedom, the DIC and cross-validation methods prefer these dynamical models to the $\Lambda$CDM model. Considering the numerical results of the different analyses and combining the cosmological and statistical aspects of the subject, we propose cross-validation as an interesting method for model selection in cosmology that can lead to different results in comparison with the usual methods of model selection.
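The cross-validation idea used above can be sketched on a toy problem: competing models (polynomials of increasing degree, standing in for the cosmological models; the data and least-squares loss are invented here) are scored by their average out-of-fold prediction error, and the lowest score is preferred.

```python
import numpy as np

# Toy sketch of model selection by k-fold cross-validation: each model
# is fitted on k-1 folds and scored on the held-out fold, and the
# out-of-fold errors are averaged.  Data and models are stand-ins for
# the cosmological likelihoods of the paper.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * x + 0.05 * rng.standard_normal(x.size)  # "true" model is linear

def cv_error(degree, k=5):
    idx = rng.permutation(x.size)
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
    return float(np.mean(errs))

scores = {d: cv_error(d) for d in (1, 2, 3, 4)}  # lower CV error is preferred
```

Unlike evidence-based criteria, this scores models purely by predictive performance, which is the distinction the paper emphasizes.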
Sound event detection is a core module for acoustic environmental analysis. Semi-supervised learning techniques allow the dataset to be scaled up considerably without increasing the annotation budget, and have recently attracted much research attention. In this work, we study two advanced semi-supervised learning techniques for sound event detection. Data augmentation is important for the success of recent deep learning systems. This work studies the audio-signal random augmentation method, which provides an augmentation strategy that can handle a large number of different audio transformations. In addition, consistency regularization is widely adopted in recent state-of-the-art semi-supervised learning methods; it exploits the unlabelled data by constraining the prediction for different transformations of a sample to be identical to the prediction for the sample itself. This work finds that consistency regularization is an effective strategy for semi-supervised sound event detection, with the best performance achieved when it is combined with the MeanTeacher model.
We derive the non-relativistic limit of a massive vector field. We show that the Cartesian spatial components of the vector behave as three identical, non-interacting scalar fields. We find classes of spherical, cylindrical, and planar self-gravitating vector solitons in the Newtonian limit. The gravitational properties of the lowest-energy vector solitons$\mathrm{-}$the gravitational potential and density field$\mathrm{-}$depend only on the net mass of the soliton and the vector particle mass. In particular, these self-gravitating, ground-state vector solitons are independent of the distribution of energy across the vector field components, and are indistinguishable from their scalar-field counterparts. Fuzzy Vector Dark Matter models can therefore give rise to halo cores with identical observational properties to the ones in scalar Fuzzy Dark Matter models. We also provide novel hedgehog vector soliton solutions, which cannot be observed in scalar-field theories. The gravitational binding of the lowest-energy hedgehog halo is about three times weaker than the ground-state vector soliton. Finally, we show that no spherically symmetric solitons exist with a divergence-free vector field.
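In notation of our choosing ($\psi_i$ for the component wavefunctions, $\Phi$ for the Newtonian potential, $m$ for the vector particle mass), a non-relativistic limit of this kind takes the Schr\"odinger-Poisson form (a standard form for such limits; the paper's exact equations may differ in detail):

```latex
i\hbar\,\partial_t \psi_i = -\frac{\hbar^2}{2m}\nabla^2 \psi_i + m\Phi\,\psi_i,
\qquad i = 1,2,3,
\qquad
\nabla^2 \Phi = 4\pi G\, m \sum_{i=1}^{3} |\psi_i|^2,
```

in which the three Cartesian components couple only through the shared gravitational potential, consistent with the statement that they behave as identical, non-interacting scalar fields.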
Given two algebraic groups $G$, $H$ over a field $k$, we investigate the representability of the functor of morphisms (of schemes) $\mathbf{Hom}(G,H)$ and the subfunctor of homomorphisms (of algebraic groups) $\mathbf{Hom}_{\rm gp}(G,H)$. We show that $\mathbf{Hom}(G,H)$ is represented by a group scheme, locally of finite type, if the $k$-vector space $\mathcal{O}(G)$ is finite-dimensional; the converse holds if $H$ is not \'etale. When $G$ is linearly reductive and $H$ is smooth, we show that $\mathbf{Hom}_{\rm gp}(G,H)$ is represented by a smooth scheme $M$; moreover, every orbit of $H$ acting by conjugation on $M$ is open.
Modern power systems face a grand challenge in grid management due to increased electricity demand, imminent disturbances, and uncertainties associated with renewable generation, which can compromise grid security. The security assessment is directly connected to the robustness of the operating condition and is evaluated by analyzing proximity to the boundary of the power flow solution space. Calculating the location of such a boundary is a computationally challenging task, linked to the non-linear nature of the power flow equations, the presence of technological constraints, and complicated network topology. In this paper we introduce a general framework to characterize points on the power flow solution space boundary in terms of auxiliary variables subject to algebraic constraints. We then develop an adaptive continuation algorithm to trace 1-dimensional sections of boundary curves, which exhibits robust performance and computational tractability. Implementation of the algorithm is described in detail, and its performance is validated on different test networks.
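The continuation idea can be sketched in miniature. The toy curve below (our invention, standing in for the power-flow boundary equations) is traced by stepping a parameter and correcting with Newton's method at each step, the basic predictor-corrector structure of such algorithms.

```python
import numpy as np

# Minimal predictor-corrector continuation sketch: trace the 1-D solution
# branch of f(x, lam) = x**2 + lam - 1 = 0 as the parameter lam is stepped,
# using Newton's method as the corrector at each parameter value.
def f(x, lam):
    return x ** 2 + lam - 1.0

def fx(x, lam):
    return 2.0 * x

branch, x = [], 1.0                      # start on the branch at (x, lam) = (1, 0)
for lam in np.arange(0.0, 0.91, 0.05):
    for _ in range(20):                  # Newton corrector at fixed lam
        x -= f(x, lam) / fx(x, lam)
    branch.append((lam, x))
```

An adaptive version, as in the paper, would additionally shrink or grow the parameter step based on corrector convergence, and parametrize by arclength near turning points where `fx` vanishes.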
We investigate optimal states of photon pairs to excite a target transition in a multilevel quantum system. With the help of coherent control theory for two-photon absorption with quantum light, we infer the maximal population achievable by optimal entangled vs. separable states of light. Interference between excitation pathways, as well as the presence of nearby states, may hamper the selective excitation of a particular target state, but we show that quantum correlations can help to overcome this problem, and enhance the achievable "selectivity" between two energy levels, i.e. the relative difference in population transferred into each of them. We find that the added value of optimal entangled states of light increases with broadening linewidths of the target states.
In non-centrosymmetric metals, spin-orbit coupling (SOC) induces momentum-dependent spin polarization at the Fermi surfaces. This is exemplified by the valley-contrasting spin polarization in monolayer transition metal dichalcogenides (TMDCs) with in-plane inversion asymmetry. However, the valley configuration of massive Dirac fermions in TMDCs is fixed by the graphene-like structure, which limits the variety of spin-valley coupling. Here, we show that the layered polar metal BaMn$X_2$ ($X =$Bi, Sb) hosts tunable spin-valley-coupled Dirac fermions, which originate from the distorted $X$ square net with in-plane lattice polarization. We found that in spite of the larger SOC, BaMnBi$_2$ has approximately one-tenth the lattice distortion of BaMnSb$_2$, from which a different configuration of spin-polarized Dirac valleys is theoretically predicted. This was experimentally observed as a clear difference in the Shubnikov-de Haas oscillation at high fields between the two materials. The chemically tunable spin-valley coupling in BaMn$X_2$ makes it a promising material for various spin-valleytronic devices.
Traditionally, in most machine learning settings, gaining some degree of explainability, i.e., giving users more insight into how and why the network arrives at its predictions, restricts the underlying model and hinders performance to a certain degree. For example, decision trees are thought of as more explainable than deep neural networks, but they lack performance on visual tasks. In this work, we empirically demonstrate that applying methods and architectures from the explainability literature can, in fact, achieve state-of-the-art performance for the challenging task of domain generalization while offering a framework for more insight into the prediction and training process. To that end, we develop a set of novel algorithms, including DivCAM, an approach where the network receives guidance during training via gradient-based class activation maps to focus on a diverse set of discriminative features, as well as ProDrop and D-Transformers, which apply prototypical networks to the domain generalization task, either with self-challenging or attention alignment. Since these methods offer competitive performance on top of explainability, we argue that they can be used as tools to improve the robustness of deep neural network architectures.
Static analysis tools typically address the problem of excessive false positives by requiring programmers to explicitly annotate their code. However, when faced with incomplete annotations, many analysis tools are either too conservative, yielding false positives, or too optimistic, resulting in unsound analysis results. In order to flexibly and soundly deal with partially-annotated programs, we propose to build upon and adapt the gradual typing approach to abstract-interpretation-based program analyses. Specifically, we focus on null-pointer analysis and demonstrate that a gradual null-pointer analysis hits a sweet spot, by gracefully applying static analysis where possible and relying on dynamic checks where necessary for soundness. In addition to formalizing a gradual null-pointer analysis for a core imperative language, we build a prototype using the Infer static analysis framework, and present preliminary evidence that the gradual null-pointer analysis reduces false positives compared to two existing null-pointer checkers for Infer. Further, we discuss ways in which the gradualization approach used to derive the gradual analysis from its static counterpart can be extended to support more domains. This work thus provides a basis for future analysis tools that can smoothly navigate the tradeoff between human effort and run-time overhead to reduce the number of reported false positives.
In the absence of dissipation a non-rotating magnetic nanoparticle can be stably levitated in a static magnetic field as a consequence of the spin origin of its magnetization. Here, we study the effects of dissipation on the stability of the system, considering the interaction with the background gas and the intrinsic Gilbert damping of magnetization dynamics. We find that dissipation limits the time over which a particle can be stably levitated. At large applied magnetic fields we identify magnetization switching induced by Gilbert damping as the key limiting factor for stable levitation. At low applied magnetic fields and for small particle dimensions magnetization switching is prevented due to the strong coupling of rotation and magnetization dynamics, and the stability is mainly limited by the gas-induced dissipation. In this latter case, high vacuum should be sufficient to extend stable levitation over experimentally relevant timescales. Our results demonstrate the possibility to experimentally observe the phenomenon of quantum spin stabilized magnetic levitation.
Cryptocurrencies are increasingly popular. Even people who are not experts have started to invest in these securities, and nowadays, cryptocurrency exchanges process transactions for over 100 billion US dollars per month. In spite of this, many cryptocurrencies have low liquidity and are therefore highly prone to market manipulation. This paper performs an in-depth analysis of two market manipulations organized by communities over the Internet: the pump and dump and the crowd pump. The pump and dump scheme is a fraud as old as the stock market that has gained new vitality in the loosely regulated market of cryptocurrencies. Groups of highly coordinated people arrange this scam, usually on Telegram and Discord. We monitored these groups for more than 3 years, detecting around 900 individual events. We analyze how these communities are organized and how they carry out the fraud, and report on three case studies of pump and dumps. We then leverage our unique dataset of verified pump and dumps to build a machine learning model able to detect a pump and dump within 25 seconds of its start, achieving an F1-score of 94.5%. We then move on to the crowd pump, a new phenomenon that hit the news in the first months of 2021, when a Reddit community inflated the price of GameStop stock (GME) by over 1,900% on Wall Street, the world's largest stock exchange. Later, other Reddit communities replicated the operation on the cryptocurrency markets, targeting Dogecoin (DOGE) and Ripple (XRP). We reconstruct how these operations developed, and we discuss differences and analogies with the standard pump and dump. Lastly, we illustrate how our classifier can be leveraged to detect this kind of operation as well.
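The detection idea can be illustrated in its simplest form: flag a pump when the latest price move is an extreme outlier against its rolling baseline. Everything below is invented for illustration (synthetic data, a single z-score feature, a hand-set threshold); the paper's classifier is a trained model over many more market features.

```python
import numpy as np

# Hedged sketch of the detection principle only: a pump shows up as a
# return far outside the rolling baseline of recent returns.
rng = np.random.default_rng(2)
price = 10.0 + np.cumsum(0.001 * rng.standard_normal(300))
price[250:] += np.linspace(0.0, 2.0, 50)      # synthetic pump starting at t = 250

def first_alarm(series, window=50, z_thresh=6.0):
    rets = np.diff(series)
    for t in range(window, len(rets)):
        base = rets[t - window:t]
        z = (rets[t] - base.mean()) / (base.std() + 1e-12)
        if z > z_thresh:
            return t + 1                      # index of the first flagged price
    return None
```

On this synthetic series the alarm fires within the first steps of the injected pump, which is the kind of early detection (25 seconds in the paper) that matters for this fraud.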
Two-dimensional (2D) magnetic materials with strong magnetostriction are interesting systems for strain-tuning the magnetization, enabling potential for realizing spintronic and nanomagnetic devices. Realizing this potential requires understanding of the magneto-mechanical coupling in the 2D limit. In this work, we suspend thin Cr$_2$Ge$_2$Te$_6$ layers, creating nanomechanical membrane resonators. We probe their mechanical and magnetic properties as a function of temperature and strain. Pronounced signatures of magneto-elastic coupling are observed in the temperature-dependent resonance frequency of these membranes near $T_{\rm C}$. We further utilize Cr$_2$Ge$_2$Te$_6$ in heterostructures with thin layers of WSe$_2$ and FePS$_3$, which have positive thermal expansion coefficients, to compensate the negative thermal expansion coefficient of Cr$_2$Ge$_2$Te$_6$ and quantitatively probe the corresponding $T_{\rm C}$. Finally, we induce a strain of $0.016\%$ in a suspended heterostructure via electrostatic force and demonstrate a resulting enhancement of $T_{\rm C}$ by $2.5 \pm 0.6$ K in the absence of an external magnetic field.
Progress on the study of synchronisation in quantum systems has largely been driven by specific examples, which have yielded demonstrations of frequency entrainment as well as mutual synchronisation. Here we study quantum synchronisation by utilising Liouville space perturbation theory. We begin by clarifying the role of centers, symmetries and oscillating coherences in the context of quantum synchronisation. We then analyse the eigenspectrum of the Liouville superoperator generating the dynamics of the quantum system and determine the conditions under which synchronisation arises. We apply our framework to derive a powerful relationship between energy conservation, degeneracies and synchronisation in quantum systems. Finally, we demonstrate our approach by analysing two mutually coupled thermal machines and the close relationship between synchronisation and thermodynamic quantities.
Structural optimization (topology, shape, sizing) is an important tool for facilitating the emergence of new concepts in structural design. Normally, topology optimization is carried out at the early stage of design, and shape and sizing design are then performed sequentially. Unlike traditional topology optimization methods, explicit methodologies have attracted a great deal of attention because they shortcut the costly CAD/CAE processes while handling a much smaller number of design variables than implicit methods (such as SIMP). This paper presents an adaptation of a flow-inspired approach, the so-called Moving Node Approach (MNA), to topology optimization. In this approach, the discretization is decoupled from the material distribution, and the final objective is to identify the best beam assembly while minimizing compliance. The paradigm thus changes: the new design variables are node locations and element lengths, orientations and sizes, yielding far fewer design variables than pixel-based methods. The methodology is validated on two classical test cases in the field of topology optimization: the cantilever beam and the L-shape problem.
The increase in the sensitivity of gravitational wave interferometers will bring additional detections of binary black hole and double neutron star mergers. It will also very likely add many merger events of black hole - neutron star binaries. Distinguishing mixed binaries from binary black hole mergers at high mass ratios could be challenging because in this situation the neutron star coalesces with the black hole without experiencing significant disruption. To investigate the transition of mixed binary mergers into those behaving more like binary black hole coalescences, we present results from merger simulations at different mass ratios. We show how the degree of deformation and disruption of the neutron star impacts the inspiral and merger dynamics, the properties of the final black hole, the accretion disk formed from the circularization of the tidal debris, and the gravitational waves, including the strain spectrum and mismatches. The results also demonstrate the effectiveness of the initial data method that generalizes the Bowen-York initial data for black hole punctures to the case of binaries with neutron star companions.
Pre-trained contextualized language models (PrLMs) have led to strong performance gains in downstream natural language understanding tasks. However, PrLMs can still be easily fooled by adversarial word substitution, which is one of the most challenging textual adversarial attack methods. Existing defense approaches suffer from notable performance loss and complexity. Thus, this paper presents a compact and performance-preserving framework, Anomaly Detection with Frequency-Aware Randomization (ADFAR). In detail, we design an auxiliary anomaly detection classifier and adopt a multi-task learning procedure, by which PrLMs are able to distinguish adversarial input samples. Then, to defend against adversarial word substitution, a frequency-aware randomization process is applied to the recognized adversarial input samples. Empirical results show that ADFAR significantly outperforms recently proposed defense methods over various tasks with much higher inference speed. Remarkably, ADFAR does not impair the overall performance of PrLMs. The code is available at https://github.com/LilyNLP/ADFAR
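As a minimal sketch of what a frequency-aware randomization step could look like (not ADFAR's exact procedure; the frequency table, cutoff, and synonym map below are hypothetical), one can randomly replace rare tokens, which adversarial substitutions tend to be, with known candidate substitutes:

```python
import random

def frequency_aware_randomize(tokens, freq, synonyms, rare_cutoff=100, seed=0):
    """Illustrative randomization: tokens below a corpus-frequency cutoff
    are randomly replaced with a candidate substitute when one is known;
    all other tokens are kept unchanged."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if freq.get(tok, 0) < rare_cutoff and tok in synonyms:
            out.append(rng.choice(synonyms[tok]))
        else:
            out.append(tok)
    return out

# Hypothetical frequency table and synonym map for illustration.
freq = {"the": 10_000, "movie": 5_000, "good": 4_000, "commendable": 3}
synonyms = {"commendable": ["good", "great"]}
tokens = ["the", "movie", "is", "commendable"]
randomized = frequency_aware_randomize(tokens, freq, synonyms)
```

In a full defense, this step would only be applied to inputs that the auxiliary anomaly classifier flags as adversarial, leaving clean inputs untouched.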
We consider the renormalization-group evolution of the matrix element $\langle 0| \bar{q}(z)_\beta [z,0]b(0)_\alpha| \bar{B}\rangle$, which can be used to define the distribution amplitudes of the $B$ meson and is widely applied in studies of $B$ meson decays. The contribution to the renormalization constant of the non-local operator $\bar{q}(z)_\beta [z,0]b(0)_\alpha$ is considered up to one-loop order in QCD. Since the quark fields in this operator are not directly coupled, momentum cannot flow freely through this non-local operator; the momentum involved can be treated rigorously in coordinate space. We find that the ultraviolet divergences regulated by the dimensional parameter $\epsilon$ cancel with each other and the evolution effect vanishes: the matrix element $\langle 0| \bar{q}(z)_\beta [z,0]b(0)_\alpha| \bar{B}\rangle$ is free of renormalization-group evolution. We then apply the matrix element in calculating the $B\to\pi$ transition form factor, where the matrix element is obtained by using the $B$ meson wave function calculated in a QCD-inspired potential model. By comparing with experimental data for the semileptonic decay $B\to \pi \ell\nu$ and with light-cone sum rule calculations, we analyse the perturbative and non-perturbative contributions to the $B\to\pi$ transition form factor in the framework of the perturbative QCD approach. We find that the effectiveness of the Sudakov suppression of the soft contribution depends on the end-point behavior of the $B$ meson wave function; with the $B$-meson wave function used in this work, the soft contribution cannot be well suppressed. The hard contribution to the $B\to\pi$ transition form factor is about 59\%, and the soft contribution can be as large as 41\% in the naive calculation. To make the perturbative calculation reliable, a soft momentum cutoff and a soft form factor have to be introduced in the calculation.
It has been shown that temperature cycles on airless bodies of our Solar System can cause damage to surface materials. Nevertheless, the propagation mechanisms in the case of space objects are still poorly understood. The present work combines a thermoelasticity model with linear elastic fracture mechanics theory to predict fracture propagation in the presence of thermal gradients generated by diurnal temperature cycling, under conditions similar to those existing on the asteroid Bennu. The crack direction is computed using the maximal strain energy release rate criterion, which is implemented using finite elements and the so-called G$\theta$ method (Uribe-Su\'arez et al. 2020. Eng. Fracture Mech. 227:106918). Using the implemented methodology, the crack propagation direction is computed for an initial crack tip at different positions and orientations. It is found that cracks preferentially propagate in the North to South (N-S), North-East to South-West (NE-SW) and North-West to South-East (NW-SE) directions. Finally, a thermal fatigue analysis was performed in order to estimate the crack growth rate. The computed value is in good agreement with available experimental evidence.
We establish sharp dynamical implications of convexity on symmetric spheres that do not follow from dynamical convexity. This allows us to show the existence of elliptic and non-hyperbolic periodic orbits and to furnish new examples of dynamically convex contact forms, in any dimension, that are not equivalent to convex ones via contactomorphisms that preserve the symmetry. Moreover, these examples are $C^1$-stable in the sense that they are actually not equivalent to convex ones via contactomorphisms that are $C^1$-close to those preserving the symmetry. We also show the multiplicity of symmetric non-hyperbolic and symmetric (not necessarily non-hyperbolic) closed Reeb orbits under suitable pinching conditions.
We introduce a geometric operation, which we call the relative Whitney trick, that removes a single double point between properly immersed surfaces in a $4$-manifold with boundary. Using the relative Whitney trick we prove that every link in a homology sphere is homotopic to a link that is topologically slice in a contractible topological $4$-manifold. We further prove that any link in a homology sphere is order $k$ Whitney tower concordant to a link in $S^3$ for all $k$. Finally, we explore the minimum Gordian distance from a link in $S^3$ to a homotopically trivial link. Extending this notion to links in homology spheres, we use the relative Whitney trick to make explicit computations for 3-component links and establish bounds in general.
We use relative hyperbolicity of mapping tori and Dehn fillings of relatively hyperbolic groups to solve the conjugacy problem between certain outer automorphisms. We reduce this problem to algorithmic problems entirely expressed in terms of the parabolic subgroups of the mapping tori. As an immediate application, we solve the conjugacy problem for outer automorphisms of free groups whose polynomial part is piecewise inner. This suggests a path toward a full solution to the conjugacy problem for $\mathrm{Out}(F_n)$.
We investigate how the choice of equation of state (EOS) and resolution conspire to affect the outcomes of giant impact (GI) simulations. We focus on the simple case of equal-mass collisions of two Earth-like $0.5\,M_\oplus$ proto-planets, showing that the choice of EOS has a profound impact on the outcome of such collisions as well as on the numerical convergence with resolution. In simulations where the Tillotson EOS is used, impacts generate an excess amount of vapour due to the lack of a thermodynamically consistent treatment of phase transitions and mixtures. In oblique collisions this enhances the artificial angular momentum (AM) transport from the planet to the circum-planetary disc, reducing the planet's rotation period over time. Even at a resolution of $1.3 \times 10^6$ particles the result is not converged. In head-on collisions the lack of a proper treatment of the solid/liquid-vapour phase transition allows the bound material to expand to very low densities, which in turn results in very slow numerical convergence of the critical specific impact energy for catastrophic disruption, $Q_{RD}^*$, with increasing resolution, as reported in prior work. The simulations using ANEOS for oblique impacts are already converged at a modest resolution of $10^5$ particles, while head-on collisions converge once they resolve the post-shock formation of a dense iron-rich ring, which promotes gravitational re-accumulation of material. Once sufficient resolution is reached to resolve the liquid-vapour phase transition of iron in the ANEOS case, and this ring is resolved, the value of $Q_{RD}^*$ converges.
We theoretically study the effect of a low concentration of adsorbed polar molecules on the optical conductivity of graphene within the Kubo linear response approximation. Our analysis is based on a continuum model approximation that includes up to next-to-nearest neighbors in the pristine graphene effective Hamiltonian, thus extending the field-theoretical analysis developed in Refs. [1,2]. Our results show that the conductivity can be expressed in terms of renormalized quasiparticle parameters $\tilde{v}_F$, $\tilde{M}$ and $\tilde{\mu}$ that include the effect of the molecular surface concentration $n_{dip}$ and dipolar moment $\boldsymbol{\mathcal{P}}$, thus providing an analytical model for a graphene-based chemical sensor.
Learning a model that generalizes well from limited data is a challenging task for deep neural networks. In this paper, we propose a novel learning framework called PurifiedLearning that exploits task-irrelevant features extracted from task-irrelevant labels when training models on small-scale datasets. In particular, we purify feature representations by using the expression of task-irrelevant information, thus facilitating the learning process of classification. Our work is built on solid theoretical analysis and extensive experiments, which demonstrate the effectiveness of PurifiedLearning. According to our theoretical analysis, PurifiedLearning is model-agnostic and places no restrictions on the model used, so it can easily be combined with any existing deep neural network to achieve better performance. The source code of this paper will be made available for reproducibility.
Virtual-reality (VR) and augmented-reality (AR) technology is increasingly combined with eye-tracking. This combination broadens both fields and opens up new areas of application in which visual perception and related cognitive processes can be studied in interactive yet well-controlled settings. However, performing a semantic gaze analysis of eye-tracking data from interactive three-dimensional scenes is a resource-intensive task, which so far has been an obstacle to economic use. In this paper we present a novel approach that minimizes the time and information necessary to annotate volumes of interest (VOIs) by using techniques from object recognition. To do so, we train convolutional neural networks (CNNs) on synthetic data sets derived from virtual models using image augmentation techniques. We evaluate our method in real and virtual environments, showing that it can compete with state-of-the-art approaches while not relying on additional markers or preexisting databases, instead offering cross-platform use.
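The training pipeline relies on augmenting synthetically rendered images; as an illustrative sketch (the specific augmentations below are assumptions, not necessarily those used in the paper), label-preserving transformations can be applied on the fly to each render before it is fed to the CNN:

```python
import numpy as np

def augment(image, rng):
    """Apply simple, label-preserving augmentations to a rendered image
    (H x W x C, float values in [0, 1]): random horizontal flip,
    brightness jitter, and additive Gaussian noise. Illustrative only."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]                           # horizontal flip
    image = image * rng.uniform(0.8, 1.2)                   # brightness jitter
    image = image + rng.normal(0.0, 0.02, image.shape)      # sensor noise
    return np.clip(image, 0.0, 1.0)

rng = np.random.default_rng(42)
render = np.full((64, 64, 3), 0.5)   # stand-in for one synthetic render
batch = np.stack([augment(render, rng) for _ in range(8)])
```

Augmenting synthetic renders this way helps bridge the domain gap between clean virtual-model imagery and the noisier frames captured by real eye-tracking hardware.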
On a compact K\"ahler manifold $X$, Toeplitz operators determine a deformation quantization $(\operatorname{C}^\infty(X, \mathbb{C})[[\hbar]], \star)$ with separation of variables [10] with respect to transversal complex polarizations $T^{1, 0}X, T^{0, 1}X$ as $\hbar \to 0^+$ [15]. The analogous statement is proved for compact symplectic manifolds with transversal non-singular real polarizations [13]. In this paper, we establish the analogous result for transversal singular real polarizations on compact toric symplectic manifolds $X$. Due to toric singularities, half-form correction and localization of our Toeplitz operators are essential. Via norm estimations, we show that these Toeplitz operators determine a star product on $X$ as $\hbar \to 0^+$.
We construct analogues of the Hecke operators for the moduli space of G-bundles on a curve X over a local field F with parabolic structures at finitely many points. We conjecture that they define commuting compact normal operators on the Hilbert space of half-densities on this moduli space. In the case F=C, we also conjecture that their joint spectrum is in a natural bijection with the set of opers on X for the Langlands dual group with real monodromy. This may be viewed as an analytic version of the Langlands correspondence for complex curves. Furthermore, we conjecture an explicit formula relating the eigenvalues of the Hecke operators and the global differential operators studied in our previous paper arXiv:1908.09677. Assuming the compactness conjecture, this formula follows from a certain system of differential equations satisfied by the Hecke operators, which we prove in this paper for G=PGL(n).
The potential of using millimeter-wave (mmWave) bands to counter the current bandwidth shortage has motivated packing more antenna elements into the same physical size, enabling massive multiple-input-multiple-output (MIMO) for mmWave communication. However, with an increasing number of antenna elements, allocating a single RF chain per antenna becomes infeasible and unaffordable. As a cost-effective alternative, hybrid precoding has been considered, where the limited-scattering signals are captured by a high-dimensional RF precoder, realized by an analog phase-shifter network, followed by a low-dimensional digital precoder at baseband. In this paper, the max-min fair problem is considered to design a low-complexity hybrid precoder for multi-group multicasting systems in mmWave channels. The problem is non-trivial for two main reasons: the original max-min problem for multi-group multicasting with a fully-digital precoder is non-convex, and the analog precoder imposes constant-modulus constraints which restrict the feasible set of the design problem. Therefore, we consider a low-complexity hybrid precoder design that exploits the mmWave channel structure. Each analog beamformer is designed to maximize the minimum matching component over the users within a given group. Once the analog beamformers are obtained, the digital precoder is attained by solving the max-min problem for the equivalent channel.
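The abstract does not detail how the minimum matching component is maximized; as a hedged sketch of the underlying idea (the DFT codebook and exhaustive search below are assumptions used only for illustration), a constant-modulus analog beam for one group can be selected by maximizing the worst user's channel gain over a finite codebook:

```python
import numpy as np

def best_analog_beam(H, num_beams):
    """Pick the constant-modulus (phase-only) beam from a DFT codebook that
    maximizes the minimum matching component min_k |h_k^H w| over the users
    of one group. H: (num_users, num_antennas) channel matrix."""
    n_ant = H.shape[1]
    # DFT codebook: each column is a unit-power, constant-modulus vector.
    codebook = np.exp(2j * np.pi * np.outer(np.arange(n_ant),
                                            np.arange(num_beams)) / num_beams)
    codebook /= np.sqrt(n_ant)
    gains = np.abs(H.conj() @ codebook)   # |h_k^H w_b| for each user k, beam b
    worst_user = gains.min(axis=0)        # min over users, per beam
    best = int(np.argmax(worst_user))     # max-min beam index
    return codebook[:, best], worst_user[best]

rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 16)) + 1j * rng.normal(size=(4, 16))) / np.sqrt(2)
w, minimum_gain = best_analog_beam(H, num_beams=16)
```

The constant-modulus constraint shows up in the codebook itself: every entry of `w` has the same magnitude, as required by a phase-shifter implementation, and the digital stage would then be optimized for the resulting equivalent channel.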