text | label
---|---
We examine the quantum null energy condition (QNEC) for a $2+1$-dimensional conformal field theory (CFT) at strong coupling in the background of a wormhole spacetime by employing the AdS/CFT correspondence. First, we numerically construct a novel $3+1$-dimensional vacuum AdS black hole solution with non-trivial topology, which is dual to a wormhole geometry connecting two flat universes. Although the bulk null energy condition (NEC) is not violated, the NEC for the holographic stress-energy tensor is violated near the wormhole throat. Next, we investigate the entanglement entropy for a half-space anchored to the boundary wormhole throat. We propose a natural prescription for regularizing the IR divergent part of the entanglement entropy and show that the QNEC is violated at the throat. This is the first counterexample to the QNEC, indicating that IR effects are crucial.
|
high energy physics theory
|
In this paper, we examine the impact of cyberattacks in an integrated transmission and distribution (T&D) power grid model with distributed energy resource (DER) integration. We adopt the OCTAVE Allegro methodology to identify critical system assets, enumerate potential threats, and analyze and prioritize risks for threat scenarios. Based on the analysis, attack strategies and exploitation scenarios are identified which could lead to system compromise. Specifically, we investigate the impact of data integrity attacks in inverter-based solar PV controllers, control signal blocking attacks in protective switches and breakers, and coordinated monitoring and switching time-delay attacks.
|
electrical engineering and systems science
|
Modern single image super-resolution (SISR) systems based on convolutional neural networks (CNNs) achieve impressive performance but require huge computational costs. The problem of feature redundancy has been well studied in visual recognition tasks, but is rarely discussed in SISR. Based on the observation that many features in SISR models are also similar to each other, we propose to use the shift operation to generate the redundant features (i.e., ghost features). Compared with depth-wise convolution, which is not friendly to GPUs or NPUs, the shift operation can bring practical inference acceleration for CNNs on common hardware. We analyze the benefits of the shift operation for SISR and make the shift orientation learnable based on the Gumbel-Softmax trick. For a given pre-trained model, we first cluster all filters in each convolutional layer to identify the intrinsic ones for generating intrinsic features. Ghost features are then derived by moving these intrinsic features along a specific orientation. The complete output features are constructed by concatenating the intrinsic and ghost features together. Extensive experiments on several benchmark models and datasets demonstrate that both non-compact and lightweight SISR models equipped with our proposed module can achieve performance comparable to that of their baselines, with a large reduction in parameters, FLOPs, and GPU latency. For instance, we reduce the parameters by 47%, FLOPs by 46%, and GPU latency by 41% for the EDSR x2 network without significant performance degradation.
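The shift-then-concatenate idea can be sketched in a few lines of NumPy. This is a minimal illustration with fixed, hypothetical orientations and toy feature maps, not the paper's learnable Gumbel-Softmax implementation:

```python
import numpy as np

def shift_feature(x, orientation, amount=1):
    """Shift a 2-D feature map by `amount` pixels, zero-padding the vacated
    border -- a cheap, hardware-friendly alternative to depth-wise convolution."""
    out = np.zeros_like(x)
    if orientation == "up":
        out[:-amount, :] = x[amount:, :]
    elif orientation == "down":
        out[amount:, :] = x[:-amount, :]
    elif orientation == "left":
        out[:, :-amount] = x[:, amount:]
    elif orientation == "right":
        out[:, amount:] = x[:, :-amount]
    return out

def ghost_module(intrinsic, orientations):
    """Concatenate intrinsic features with their shifted (ghost) copies."""
    ghosts = [shift_feature(f, o) for f, o in zip(intrinsic, orientations)]
    return np.stack(list(intrinsic) + ghosts)

# toy example: two 4x4 intrinsic maps -> four output channels (2 intrinsic + 2 ghost)
feats = [np.arange(16, dtype=float).reshape(4, 4) for _ in range(2)]
out = ghost_module(feats, ["right", "down"])
```

In a real model the shift replaces roughly half of the convolution channels, which is where the parameter and FLOP savings come from.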
|
electrical engineering and systems science
|
The search for long-lived particles (LLPs) is an exciting physics opportunity in the upcoming runs of the Large Hadron Collider. In this paper, we focus on a new search strategy using the High Granularity Calorimeter (HGCAL), part of the upgrade of the CMS detector, in such searches. In particular, we demonstrate that the high granularity of the upgraded calorimeter, which allows us to see "shower tracks" in the calorimeter, can play a crucial role in identifying the signal and suppressing the background. We study the potential reach of the HGCAL using a signal model in which the Standard Model Higgs boson decays into a pair of LLPs, $h \to XX$. After carefully estimating the Standard Model QCD and misreconstructed fake-track backgrounds, we give the projected reach for both a more conservative vector boson fusion trigger and a novel displaced-track-based trigger. Our results show that the best reach for the Higgs decay branching ratio, BR$(h \to XX)$, in the vector boson fusion channel is about $\mathcal{O}(10^{-4})$ for lifetimes $c\tau_X \sim 0.1$--$1$ meters, while for the gluon gluon fusion channel it is about $\mathcal{O}(10^{-5}\text{--}10^{-6})$ for similar lifetimes. Alternatively, for an LLP with $c\tau_X \sim 10^3$ meters, the HGCAL-based search should be able to probe BR$(h\to XX)$ down to a few $\times 10^{-4}$ ($10^{-2}$) in the gluon gluon fusion (vector boson fusion) channel, respectively. In comparison with previous searches, our new search shows enhanced sensitivity in complementary regions of the LLP parameter space.
|
high energy physics phenomenology
|
The Su-Schrieffer-Heeger (SSH) model is perhaps the simplest and most basic model for topological excitations. Many variations and extensions of the SSH model have been proposed and explored to better understand both fundamental and novel aspects of topological physics. The SSH4 model has been proposed theoretically as an extended SSH model of higher internal dimension (the internal dimension changes from two to four). It has been proposed that the winding number in this system can be determined through a higher-dimensional extension of the mean chiral displacement measurement; however, this has not yet been verified experimentally. Here we report the realization of this model with ultracold atoms in a momentum lattice. We verify the winding number through measurement of the mean chiral displacement in a system with higher internal dimension, map out the topological phase transition in this system, and confirm the topological edge state by observing the quench dynamics of atoms initially prepared at the system boundary.
|
condensed matter
|
We introduce a new method for estimating the support size of an unknown distribution which provably matches the performance bounds of the state-of-the-art techniques in the area and outperforms them in practice. In particular, we present both theoretical and computer simulation results that illustrate the utility and performance improvements of our method. The theoretical analysis relies on introducing a new weighted Chebyshev polynomial approximation method, jointly optimizing the bias and variance components of the risk, and combining the weighted minimax polynomial approximation method with discretized semi-infinite programming solvers. Such a setting allows for casting the estimation problem as a linear program (LP) with a small number of variables and constraints that may be solved as efficiently as the original Chebyshev approximation problem. Our technique is tested on synthetic data and used to address an important problem in computational biology - estimating the number of bacterial genera in the human gut. On synthetic datasets, for practically relevant sample sizes, we observe significant improvements in the value of the worst-case risk compared to existing methods. For the bioinformatics application, using metagenomic data from the NIH Human Gut and the American Gut Microbiome Projects, we generate a list of frequencies of bacterial taxa that allows us to estimate the number of bacterial genera to approximately 2300.
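The core computational step, minimax polynomial approximation on a discretized grid cast as a linear program, can be sketched with SciPy. This is a generic illustration on a toy target function; the paper's weighted, bias-variance-optimized construction is more involved:

```python
import numpy as np
from scipy.optimize import linprog

def minimax_poly(f_vals, xs, degree):
    """Best uniform polynomial fit on a discretized grid, as a small LP:
    minimize t subject to |p(x_i) - f(x_i)| <= t for every grid point."""
    V = np.polynomial.chebyshev.chebvander(xs, degree)   # Chebyshev basis
    n = degree + 1
    # variables: [c_0 .. c_degree, t]; objective: minimize t
    obj = np.zeros(n + 1)
    obj[-1] = 1.0
    ones = np.ones((len(xs), 1))
    A = np.vstack([np.hstack([V, -ones]),    #  p(x_i) - f_i <= t
                   np.hstack([-V, -ones])])  # -p(x_i) + f_i <= t
    b = np.concatenate([f_vals, -f_vals])
    res = linprog(obj, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]              # coefficients, worst-case error

xs = np.linspace(-1, 1, 401)
coef, err = minimax_poly(np.abs(xs), xs, 5)  # degree-5 minimax fit to |x|
```

The same LP skeleton accommodates the weighting scheme: weights simply rescale the rows of the constraint matrix.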
|
statistics
|
We analyze the residual gauge freedom in gravity, in four dimensions, in the light-cone gauge, in a formulation where unphysical fields are integrated out. By checking the invariance of the light-cone Hamiltonian, we obtain a set of residual gauge transformations, which satisfy the BMS algebra realized on the two physical fields in the theory. Hence, the BMS algebra appears as a consequence of residual gauge invariance in the bulk and not just at the asymptotic boundary. We highlight the key features of the light-cone BMS algebra and discuss its connection with the quadratic form structure of the Hamiltonian.
|
high energy physics theory
|
In this paper we prove an observability inequality for a degenerate transport equation. First we introduce a local-in-time Carleman estimate for the degenerate equation; then we apply it, together with an energy estimate, to obtain a global-in-time observability inequality.
|
mathematics
|
Spatial regression is widely used for modeling the relationship between a dependent variable and explanatory covariates. Oftentimes, the linear relationships vary across space, as some covariates have location-specific effects on the response. One fundamental question is how to detect the systematic variation in the model and identify which locations share common regression coefficients and which do not. Only a correct model structure can assure unbiased estimation of coefficients and valid inferences. In this work, we propose a new procedure, called Spatial Heterogeneity Automatic Detection and Estimation (SHADE), for automatically and simultaneously subgrouping and estimating covariate effects in spatial regression models. SHADE employs a class of spatially weighted fusion-type penalties on all pairs of observations, with location-specific weights adaptively constructed using spatial information, to cluster coefficients into subgroups. Under certain regularity conditions, SHADE is shown to identify the true model structure with probability approaching one and to estimate regression coefficients consistently. We develop an alternating direction method of multipliers (ADMM) algorithm to compute SHADE efficiently. In numerical studies, we demonstrate the empirical performance of SHADE using different choices of weights and compare their accuracy. The results suggest that spatial information can enhance subgroup structure analysis in challenging situations when the spatial variation among regression coefficients is small or the number of repeated measures is small. Finally, SHADE is applied to relate a natural resource survey to a land cover data layer and identify spatially interpretable groups.
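The structure of a spatially weighted pairwise fusion penalty can be illustrated with a short objective function. This is a sketch only: the Gaussian distance weight and all data here are hypothetical, and the actual SHADE estimator minimizes such an objective via ADMM rather than merely evaluating it:

```python
import numpy as np

def shade_objective(beta, X, y, coords, lam=1.0, bandwidth=1.0):
    """Least-squares fit plus a spatially weighted pairwise fusion penalty
    on location-specific coefficients (beta has one row per location)."""
    fit = 0.5 * np.sum((y - np.einsum('ij,ij->i', X, beta)) ** 2)
    pen = 0.0
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            # nearby locations are pushed harder toward a shared coefficient
            w = np.exp(-np.sum((coords[i] - coords[j]) ** 2) / bandwidth)
            pen += w * np.linalg.norm(beta[i] - beta[j])
    return fit + lam * pen

rng = np.random.default_rng(0)
n, p = 6, 2
X = rng.normal(size=(n, p))
coords = rng.uniform(size=(n, 2))
beta_true = np.tile([1.0, -0.5], (n, 1))         # one shared subgroup
y = np.einsum('ij,ij->i', X, beta_true)          # noiseless toy response
```

When all rows of `beta` coincide the penalty vanishes, which is exactly the mechanism that collapses coefficients into subgroups as `lam` grows.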
|
statistics
|
We generalize our previous lattice construction of the abelian bosonization duality in $2+1$ dimensions to the entire web of dualities as well as the $N_f=2$ self-duality, via the lattice implementation of a set of modular transformations in the theory space. The microscopic construction provides explicit operator mappings, and allows the manifestation of some hidden symmetries. It also exposes certain caveats and implicit assumptions beneath the usual application of the modular transformations to generate the web of dualities. Finally, we make brief comments on the non-relativistic limit of the dualities.
|
high energy physics theory
|
The stability of linear dynamic systems with hysteresis in feedback is considered. While absolute stability for memoryless nonlinearities (known as the Lur'e problem) can be proved by the well-known circle criterion, multivalued rate-independent hysteresis poses significant challenges for feedback systems, especially for proving convergence to an equilibrium state or set, respectively. The dissipative behavior of clockwise input-output hysteresis is considered with two boundary cases of energy losses at reversal cycles. For the upper boundary case of a maximal (parallelogram-shaped) hysteresis loop, an equivalent transformation of the closed-loop system is provided, which allows for the application of the circle criterion of absolute stability. Invariant sets arising as a consequence of hysteresis are discussed. Several numerical examples are demonstrated, including a feedback-controlled double-mass harmonic oscillator with hysteresis and a configuration with one stable and one unstable pole.
|
electrical engineering and systems science
|
We present the holographic object which computes the five-point global conformal block in arbitrary dimensions for external and exchanged scalar operators. This object is interpreted as a weighted sum over infinitely many five-point geodesic bulk diagrams. These five-point geodesic bulk diagrams provide a generalization of their previously studied four-point counterparts. We prove our claim by showing that the aforementioned sum over geodesic bulk diagrams is the appropriate eigenfunction of the conformal Casimir operator with the right boundary conditions. This result rests on crucial inspiration from a much simpler $p$-adic version of the problem set up on the Bruhat-Tits tree.
|
high energy physics theory
|
This article investigates the optimization of yaw control inputs of a nine-turbine wind farm. The wind farm is simulated using the high-fidelity simulator SOWFA. The optimization is performed with a modifier adaptation scheme based on Gaussian processes. Modifier adaptation corrects for the mismatch between plant and model and helps to converge to the actual plant optimum. In the case study, the modifier adaptation approach is compared with the Bayesian optimization approach. Moreover, the use of two different covariance functions in the Gaussian process regression is discussed. Practical recommendations concerning data preparation and the application of the approach are given. It is shown that both the modifier adaptation and the Bayesian optimization approach can improve the power production with overall smaller yaw misalignments in comparison to the Gaussian wake model.
|
electrical engineering and systems science
|
The response of the X and Gamma Imaging Spectrometer (XGIS) instrument onboard the Transient High Energy Sky and Early Universe Surveyor (THESEUS) mission, selected by ESA for an assessment phase in the framework of the Cosmic Vision M5 launch opportunity, has been extensively modeled with a Monte Carlo Geant4-based software. In this paper, the expected sources of background in the Low Earth Orbit foreseen for THESEUS (e.g. diffuse photon backgrounds, cosmic-ray populations, Earth albedo emission) are described, and the simulated on-board background environment and its effects on instrumental performance are shown.
|
astrophysics
|
Orthogonal time frequency space (OTFS) is a modulation technique dedicated to high-speed mobility scenarios. However, its transmission involves a two-dimensional convolution of the symbols of interest with the multipath fading channel, which complicates equalization. In addition to this high-complexity issue, the existing pilot pattern requires a large overhead to estimate the unknown channel accurately while keeping the pilot from being contaminated, which is spectrally inefficient. In this paper, we propose a receiver approach that marries OTFS with a large-scale antenna array, allowing low-complexity detection and low-overhead pilot pattern design. First, the received signal from each path of the multipath fading channel is identified by a high-resolution receive beamformer facilitated by the large-scale antenna array. The identified signal from each angle in the delay-Doppler domain then reduces to a flat-faded signal, which can be simply equalized using the channel information estimated by our pilot pattern. We further provide estimators of the channel fading and of the delay and Doppler rotations. With these estimates, the symbols of interest can be recovered, and the signals from all angles of arrival are combined as different diversity versions. In addition, our pilot pattern, with only around 25% of the overhead of the existing pilot pattern, ensures the same protection against pilot pollution. Finally, the efficiency, reliability, and low complexity of the proposed receiver approach are validated by the numerical results.
|
electrical engineering and systems science
|
Global value chains (GVCs) are formed through value-added trade, and some regions promote economic integration by concluding regional trade agreements to promote these chains. However, there is no way to quantitatively assess the scope and extent of economic integration involving various sectors in multiple countries. In this study, we used the World Input--Output Database to create cross-border sector-wise trade in value-added networks (international value-added networks, IVANs) covering the period 2000--2014 and evaluated them using network science methods. By applying Infomap to the IVANs, we confirmed for the first time the existence of two regional communities: Europe and the Pacific Rim. Helmholtz--Hodge decomposition was used to decompose the value flows within each region into potential and circular flows, and the annual evolution of the potential and circular relationships between countries and sectors was clarified. The circular flow component of the decomposition was used to define an economic integration index, and findings confirmed that the degree of economic integration in Europe declined sharply after the economic crisis in 2009 to a level lower than that in the Pacific Rim. The European economic integration index recovered in 2011 but again fell below that of the Pacific Rim in 2013. Moreover, sectoral analysis showed that the economic integration index captured the effect of Russian mineral resources, free movement of labor in Europe, and international division of labor in the Pacific Rim, especially in GVCs for the manufacture of motor vehicles and high-tech products.
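The Helmholtz--Hodge step, splitting edge flows into a gradient (potential) part and a divergence-free circular part, reduces to a least-squares problem on the graph's incidence matrix. A minimal sketch on toy graphs, not the WIOD data:

```python
import numpy as np

def helmholtz_hodge(n_nodes, edges, flows):
    """Split edge flows into a potential (gradient) component and a
    divergence-free circular component via ordinary least squares."""
    flows = np.asarray(flows, dtype=float)
    B = np.zeros((len(edges), n_nodes))
    for e, (u, v) in enumerate(edges):
        B[e, u], B[e, v] = 1.0, -1.0        # flow u -> v driven by phi_u - phi_v
    phi, *_ = np.linalg.lstsq(B, flows, rcond=None)
    potential_flow = B @ phi                # hierarchical ("upstream/downstream") part
    circular_flow = flows - potential_flow  # the part behind the integration index
    return phi, potential_flow, circular_flow

# pure 3-cycle: every flow is circular, potentials come out flat
phi_c, pot_c, circ_c = helmholtz_hodge(3, [(0, 1), (1, 2), (2, 0)], [1.0, 1.0, 1.0])
# pure chain: every flow is potential-driven, nothing circulates
phi_p, pot_p, circ_p = helmholtz_hodge(3, [(0, 1), (1, 2)], [1.0, 1.0])
```

The least-squares residual is automatically orthogonal to all gradient flows, which is exactly the divergence-free property the decomposition requires.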
|
computer science
|
Consider a set of categorical variables $\mathcal{P}$ where at least one, denoted by $Y$, is binary. The log-linear model that describes the counts in the resulting contingency table implies a specific logistic regression model, with the binary variable as the outcome. Extending results in Christensen (1997), by also considering the case where factors present in the contingency table disappear from the logistic regression model, we prove that the Maximum Likelihood Estimate (MLE) for the parameters of the logistic regression equals the MLE for the corresponding parameters of the log-linear model. We prove that, asymptotically, standard errors for the two sets of parameters are also equal. Subsequently, Wald confidence intervals are asymptotically equal. These results demonstrate the extent to which inferences from the log-linear framework can be translated to inferences within the logistic regression framework, on the magnitude of main effects and interactions. Finally, we prove that the deviance of the log-linear model is equal to the deviance of the corresponding logistic regression, provided that the latter is fitted to a dataset where no cell observations are merged when one or more factors in $\mathcal{P} \setminus \{ Y \}$ become obsolete. We illustrate the derived results with the analysis of a real dataset.
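The simplest instance of this equivalence, a single binary factor besides $Y$, can be checked numerically: for a 2x2 table, the saturated logistic regression slope fitted by Newton-Raphson coincides with the log odds ratio, which is the MLE of the log-linear interaction term. The counts below are hypothetical, for illustration only:

```python
import numpy as np

# 2x2 contingency table: rows = binary covariate x, cols = outcome Y in {0, 1}
counts = np.array([[30.0, 10.0],
                   [15.0, 25.0]])

# grouped-binomial logistic regression of Y on x, fit by Newton-Raphson
x = np.array([0.0, 1.0])               # covariate pattern per row
n = counts.sum(axis=1)                 # trials per pattern
y = counts[:, 1]                       # successes per pattern
X = np.column_stack([np.ones(2), x])   # design matrix: intercept + x
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = np.diag(n * p * (1 - p))       # Fisher information weights
    beta += np.linalg.solve(X.T @ W @ X, X.T @ (y - n * p))

# the log-linear interaction MLE for a 2x2 table is the log odds ratio
log_or = np.log(counts[1, 1] * counts[0, 0] / (counts[1, 0] * counts[0, 1]))
```

Here `beta[1]` reproduces `log_or` to machine precision, the one-factor case of the general result proved in the paper.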
|
statistics
|
We prove that if $\Gamma$ is a group of polynomial growth then each delocalized cyclic cocycle on the group algebra has a representative of polynomial growth. For each delocalized cocycle we thus define a higher analogue of Lott's delocalized eta invariant and prove its convergence for invertible differential operators. We also use a determinant map construction of Xie and Yu to prove that if $\Gamma$ is of polynomial growth then there is a well-defined pairing between delocalized cyclic cocycles and $K$-theory classes of $C^*$-algebraic secondary higher invariants. When this $K$-theory class is that of a higher rho invariant of an invertible differential operator, we show this pairing is precisely the aforementioned higher analogue of Lott's delocalized eta invariant. As an application of this equivalence, we provide a delocalized higher Atiyah-Patodi-Singer index theorem for a compact spin manifold $M$ with boundary, equipped with a positive scalar curvature metric $g$ and with finitely generated fundamental group $\Gamma=\pi_1(M)$ of polynomial growth.
|
mathematics
|
We investigated, through fully atomistic molecular dynamics simulations, the mechanical behavior (compressive and tensile) and energy absorption properties of two families of carbon-based schwarzites: primitive (P688 and P8bal) and gyroid (G688 and G8bal). Our results show that all the schwarzites can be compressed, with almost total elastic recovery and without fracture, to more than 50%; one of them can even be compressed up to a remarkable 80%. One of the structures (G8bal) presents a negative Poisson's ratio (auxetic behavior). The crush force efficiency, stroke efficiency, and specific energy absorption (SEA) values show that schwarzites can be effective energy-absorbing materials. Although the level of deformation without fracture observed in the tensile case does not match that of the compressive case, it is still very high (30-40%). The fracture dynamics show extensive structural reconstructions, with the formation of linear atomic chains (LACs).
|
condensed matter
|
We investigate a higher-group structure of massless axion electrodynamics in $(3+1)$ dimensions. By using the background gauging method, we show that the higher-form symmetries necessarily have a global semistrict 3-group (2-crossed module) structure, and exhibit 't Hooft anomalies of the 3-group. In particular, we find a cubic mixed 't Hooft anomaly between 0-form and 1-form symmetries, which is specific to the higher-group structure.
|
high energy physics theory
|
We search for evidence of the cause of the exoplanet radius gap, i.e. the dearth of planets with radii near $1.8\ R_\oplus$. If the cause was photoevaporation, the radius gap should trend with proxies for the early-life high-energy emission of planet-hosting stars. If, alternatively, the cause was core-powered mass loss, no such trends should exist. Critically, spurious trends between the radius gap and stellar properties arise from an underlying correlation with instellation. After accounting for this underlying correlation, we find no trends remain between the radius gap and stellar mass or present-day stellar activity as measured by near-UV emission. We dismiss the nondetection of a radius gap trend with near-UV emission because present-day near-UV emission is unlikely to trace early-life high-energy emission, but we provide a catalog of GALEX near-UV and far-UV emission measurements for general use. We interpret the nondetection of a radius gap trend with stellar mass by simulating photoevaporation with mass-dependent evolution of stellar high-energy emission. The simulation produces an undetectable trend between the radius gap and stellar mass under realistic sources of error. We conclude that no evidence, from this analysis or others in the literature, currently exists that clearly favors either photoevaporation or core-powered mass loss as the primary cause of the exoplanet radius gap. However, repeating this analysis once the body of well-characterized $< 4\ R_\oplus$ planets has roughly doubled could confirm or rule out photoevaporation.
|
astrophysics
|
Interference cancellation is the main driving technology in enhancing the transmission rates over telephone lines above 100 Mbps. Still, crosstalk interference in multi-pair digital subscriber line (DSL) systems at higher frequencies has not been dealt with sufficiently. The upcoming G.mgfast DSL system envisions the use of extremely high bandwidth and full-duplex transmissions, generating significantly higher crosstalk and self-interference signals. More powerful interference cancellation techniques are required to enable multi-gigabit per second data rate transmission over copper lines. In this article, we analyze the performance of interference cancellation techniques, with a focus on novel research approaches and design considerations for efficient interference mitigation for multi-gigabit transmission over standard copper lines. We also detail novel approaches for interference cancellation in the upcoming technologies.
|
electrical engineering and systems science
|
A pebbling move on a graph removes two pebbles from a vertex and adds one pebble to an adjacent vertex. A vertex is reachable from a pebble distribution if it is possible to move a pebble to that vertex using pebbling moves. The optimal pebbling number $\pi_{opt}$ is the smallest number $m$ such that there exists a distribution of $m$ pebbles from which every vertex is reachable. The optimal pebbling number of the square grid graph $P_n\square P_m$ was investigated in several papers. In this paper, we present a new method using some recent ideas to give a lower bound on $\pi_{opt}$. We apply this technique to prove that $\pi_{opt}(P_n\square P_m)\geq \frac{2}{13}nm$. Our method also gives a new proof for $\pi_{opt}(P_n)=\pi_{opt}(C_n)=\left\lceil\frac{2n}{3}\right\rceil$.
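Pebbling moves and the brute-force computation of $\pi_{opt}$ on tiny graphs can be sketched as follows. This is illustration only; the paper's lower-bound method is analytic, not computational:

```python
from itertools import combinations_with_replacement

def reachable_vertices(dist, edges):
    """All vertices that can receive a pebble from distribution `dist`,
    via search over configurations (a move removes 2 pebbles from a vertex
    and adds 1 to a neighbor)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    start = tuple(dist)
    seen, frontier = {start}, [start]
    reach = {v for v, p in enumerate(dist) if p}   # occupied vertices count
    while frontier:
        cfg = frontier.pop()
        for u, p in enumerate(cfg):
            if p >= 2:
                for w in adj.get(u, []):
                    nxt = list(cfg)
                    nxt[u] -= 2
                    nxt[w] += 1
                    nxt = tuple(nxt)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
                        reach.add(w)
    return reach

def optimal_pebbling_number(n_vertices, edges):
    """Smallest m admitting a distribution of m pebbles from which every
    vertex is reachable (brute force; only feasible for tiny graphs)."""
    m = 1
    while True:
        for spots in combinations_with_replacement(range(n_vertices), m):
            dist = [0] * n_vertices
            for s in spots:
                dist[s] += 1
            if len(reachable_vertices(dist, edges)) == n_vertices:
                return m
        m += 1

path5 = [(i, i + 1) for i in range(4)]   # the path P_5
```

For $P_5$ this returns $\lceil 10/3 \rceil = 4$, matching the closed form quoted above.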
|
mathematics
|
A systematic study of the effect of hole compensation, controlled by defect compensation through ion irradiation, on the magnetic properties of (Ga,Mn)As, (In,Mn)As, and (Ga,Mn)P is presented in this work. In all three materials, both the Curie temperature and the magnetization decrease upon increasing hole compensation, confirming the description of hole-mediated ferromagnetism given by the p-d Zener model. The material dependence of the Curie temperature and magnetization versus hole compensation reveals that the manipulation of magnetic properties in III-Mn-V dilute ferromagnetic semiconductors by ion irradiation is strongly influenced by the energy level of the produced defect relative to the band edges of the semiconductor.
|
condensed matter
|
We use a sample of 938 red clump giant stars located in the direction of the Galactic long bar to study the chemistry of Milky Way bar stars. Kinematically separating stars on bar orbits from stars on inner disc orbits, we find that stars on bar-like orbits are more metal-rich, with a mean iron abundance of <[Fe/H]>=+0.30 compared to <[Fe/H]>=+0.03 for the inner disc. Spatially selecting bar stars is complicated by a strong vertical metallicity gradient of -1.1 dex/kpc, but we find the metallicity distribution varies in a manner consistent with our orbital selection. Our results have two possible interpretations. The first is that the most metal-rich stars in the inner Galaxy pre-existed the bar but were kinematically cold at the time of bar formation and therefore more easily captured onto bar orbits when the bar formed. The second is that the most metal-rich stars formed after the bar, either directly onto bar-following orbits, or were captured by the bar after their formation.
|
astrophysics
|
Maintenance record logbooks are an emerging text type in NLP. They typically consist of free text documents with many domain specific technical terms, abbreviations, as well as non-standard spelling and grammar, which poses difficulties to NLP pipelines trained on standard corpora. Analyzing and annotating such documents is of particular importance in the development of predictive maintenance systems, which aim to provide operational efficiencies, prevent accidents and save lives. In order to facilitate and encourage research in this area, we have developed MaintNet, a collaborative open-source library of technical and domain-specific language datasets. MaintNet provides novel logbook data from the aviation, automotive, and facilities domains along with tools to aid in their (pre-)processing and clustering. Furthermore, it provides a way to encourage discussion on and sharing of new datasets and tools for logbook data analysis.
|
computer science
|
We study the Hamilton-Jacobi formulation of effective mechanical actions associated with holographic renormalization group flows when the field theory is put on the sphere and mass terms are turned on. Although the system is supersymmetric and it is described by a superpotential, Hamilton's characteristic function is not readily given by the superpotential when the boundary of AdS is curved. We propose a method to construct the solution as a series expansion in scalar field degrees of freedom. The coefficients are functions of the warp factor to be determined by a differential equation one obtains when the ansatz is substituted into the Hamilton-Jacobi equation. We also show how the solution can be derived from the BPS equations without having to solve differential equations. The characteristic function readily provides information on holographic counterterms which cancel divergences of the on-shell action near the boundary of AdS.
|
high energy physics theory
|
Volleyball is a team sport with unique and specific characteristics. We introduce a new two-level hierarchical Bayesian model which accounts for these volleyball-specific characteristics. In the first level, we model the set outcome with a simple logistic regression model. Conditionally on the winner of the set, in the second level, we use a truncated negative binomial distribution for the points earned by the losing team. An additional Poisson-distributed inflation component is introduced to model the extra points played when the two teams have a point difference of less than two points. The number of points of the winner within each set is deterministically specified by the winner of the set and the points of the inflation component. The team-specific abilities and the home effect are used as covariates on all layers of the model (set, point, and extra inflated points). The implementation of the proposed model on Italian Superlega 2017/2018 data shows an exceptional reproducibility of the final league table and a satisfactory predictive ability.
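The generative structure of such a two-level model can be sketched as a simulator: a logistic set winner, a truncated draw for the loser's points, and a Poisson-inflated deuce extension. All parameter values here (abilities, home advantage, negative binomial and Poisson settings) are hypothetical placeholders, not the fitted Superlega estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_set(ability_home, ability_away, home_adv=0.1, target=25):
    """Generate one (home_points, away_points) set score."""
    # level 1: who wins the set (logistic in ability difference + home effect)
    p_home = 1.0 / (1.0 + np.exp(-(ability_home - ability_away + home_adv)))
    home_wins = rng.random() < p_home
    # level 2: loser's points from a (toy) negative binomial draw
    raw = int(rng.negative_binomial(20, 0.5))
    if raw >= target - 1:                 # set reaches 24-24: deuce
        extra = int(rng.poisson(1.5))     # Poisson-inflated extra points
        winner_pts, loser_pts = target + 1 + extra, target - 1 + extra
    else:                                 # winner's score is deterministic
        winner_pts, loser_pts = target, raw
    return (winner_pts, loser_pts) if home_wins else (loser_pts, winner_pts)

scores = [simulate_set(0.3, -0.2) for _ in range(1000)]
```

Every simulated score obeys the rules of the set: the winner reaches at least 25, wins by at least two, and wins by exactly two whenever the set goes past 25.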
|
statistics
|
We consider the $S^2 \times_q S^1$ supersymmetric index of a 3d $\mathcal{N}=2$ theory $T[M_3]$ when $M_3$ is a plumbed 3-manifold. We engineer an effective description of $T[M_3]$ from the expression of the homological block for plumbed 3-manifolds as a $D^2 \times_q S^1$ partition function of a 3d $\mathcal{N}=2$ theory $T[M_3]$ with a boundary condition. We check that the supersymmetric index for such a $T[M_3]$ is invariant under the 3d Kirby moves.
|
high energy physics theory
|
Hot Jupiters seem to get rarer with decreasing stellar mass. The goal of the Pan-Planets transit survey was the detection of such planets and a statistical characterization of their frequency. Here, we announce the discovery and validation of two planets found in that survey, Wendelstein-1b and Wendelstein-2b, two short-period hot Jupiters that orbit late K host stars. We validated them both by the traditional method of radial velocity measurements with the High Resolution Echelle Spectrometer (HIRES) and the Habitable-zone Planet Finder (HPF) instruments and then by their Transit Color Signature (TraCS). We observed the targets in the wavelength range of $4000 - 24000$ Angstr\"om and performed a simultaneous multiband transit fit and additionally determined their thermal emission via secondary eclipse observations. Wendelstein-1b is a hot Jupiter with a radius of $1.0314_{-0.0061}^{+0.0061}$ $R_J$ and mass of $0.592_{-0.129}^{+0.165}$ $M_J$, orbiting a K7V dwarf star at a period of $2.66$ d, and has an estimated surface temperature of about $1727_{-90}^{+78}$ K. Wendelstein-2b is a hot Jupiter with a radius of $1.1592_{-0.0210}^{+0.0204}$ $R_J$ and a mass of $0.731_{-0.311}^{+0.541}$ $M_J$, orbiting a K6V dwarf star at a period of $1.75$ d, and has an estimated surface temperature of about $1852_{-140}^{+120}$ K. With this, we demonstrate that multiband photometry is an effective way of validating transiting exoplanets, in particular for fainter targets, since radial velocity (RV) follow-up becomes more and more costly for those targets.
|
astrophysics
|
Time domain science has undergone a revolution over the past decade, with tens of thousands of new supernovae (SNe) discovered each year. However, several observational domains, including SNe within days or hours of explosion and faint, red transients, are just beginning to be explored. Here, we present the Young Supernova Experiment (YSE), a novel optical time-domain survey on the Pan-STARRS telescopes. Our survey is designed to obtain well-sampled $griz$ light curves for thousands of transient events up to $z \approx 0.2$. This large sample of transients with 4-band light curves will lay the foundation for the Vera C. Rubin Observatory and the Nancy Grace Roman Space Telescope, providing a critical training set in similar filters and a well-calibrated low-redshift anchor of cosmologically useful SNe Ia to benefit dark energy science. As the name suggests, YSE complements and extends other ongoing time-domain surveys by discovering fast-rising SNe within a few hours to days of explosion. YSE is the only current four-band time-domain survey and is able to discover transients as faint as $\sim$21.5 mag in $gri$ and $\sim$20.5 mag in $z$, depths that allow us to probe the earliest epochs of stellar explosions. YSE is currently observing approximately 750 square degrees of sky every three days, and we plan to increase the area to 1500 square degrees in the near future. When operating at full capacity, survey simulations show that YSE will find $\sim$5000 new SNe per year and at least two SNe within three days of explosion per month. To date, YSE has discovered or observed 8.3% of the transient candidates reported to the International Astronomical Union in 2020. We present an overview of YSE, including science goals, survey characteristics, and a summary of our transient discoveries to date.
|
astrophysics
|
Time- and number-resolved photon detection is crucial for photonic quantum information processing. Existing photon-number-resolving (PNR) detectors usually have limited timing and dark-count performance or require complex fabrication and operation. Here we demonstrate a PNR detector at telecommunication wavelengths based on a single superconducting nanowire with an integrated impedance-matching taper. The prototype device was able to resolve up to five absorbed photons and had 16.1 ps timing jitter, <2 c.p.s. device dark count rate, $\sim$86 ns reset time, and 5.6% system detection efficiency (without cavity) at 1550 nm. Its exceptional distinction between single- and two-photon responses is ideal for coincidence counting and allowed us to directly observe bunching of photon pairs from a single output port of a Hong-Ou-Mandel interferometer. This detector architecture may provide a practical solution to applications that require high timing resolution and few-photon discrimination.
|
physics
|
Integrating the reconfigurable intelligent surface in a cell-free (RIS-CF) network is an effective solution to improve the capacity and coverage of future wireless systems with low cost and power consumption. The reflecting coefficients of RISs can be programmed to enhance signals received at users. This letter addresses a joint design of transmit beamformers at access points and reflecting coefficients at RISs to maximize the energy efficiency (EE) of RIS-CF networks, taking into account the limited backhaul capacity constraints. Because the resulting problem is nonconvex and very computationally challenging, we develop a simple yet efficient alternating descent algorithm for its solution. Numerical results verify that the EE of RIS-CF networks is greatly improved, showing the benefit of using RISs.
|
electrical engineering and systems science
|
We make two observations regarding a recent tight example for a composition theorem for randomized query complexity: (1) it implies general randomized query-to-communication lifting is not always true if one allows relations, (2) it is in a certain sense essential that a relation is used in constructing the example.
|
computer science
|
We investigate a two Higgs doublet model with an extra flavour-dependent $U(1)_X$ gauge symmetry in which $Z'$ boson interactions can explain the Atomki anomaly by choosing an appropriate charge assignment for the SM fermions. For the parameter region explaining the Atomki anomaly, we obtain a light scalar boson with $\mathcal{O}(10)$ GeV mass, and we explore the scalar sector to search for the allowed parameter space. We then discuss the anomalous magnetic dipole moment of the muon and lepton flavour violating processes induced by the Yukawa couplings of our model.
|
high energy physics phenomenology
|
Two-dimensional (2D) lateral heterojunctions of transition metal dichalcogenides (TMDCs) have become a reality in recent years. Semiconducting TMDC layers in their common H-structure have a nonzero in-plane electric polarization, which is a topological invariant. We show by means of first-principles calculations that lateral 2D heterojunctions between TMDCs with a different polarization generate one-dimensional (1D) metallic states at the junction, even in cases where the different materials are joined epitaxially. The metallicity does not depend upon structural details, and is explained from the change in topological invariant at the junction. Nevertheless, these 1D metals are susceptible to 1D instabilities, such as charge- and spin-density waves, making 2D TMDC heterojunctions ideal systems for studying 1D physics.
|
condensed matter
|
Efficient feature selection from high-dimensional datasets is a very important challenge in many data-driven fields of science and engineering. We introduce a statistical mechanics inspired strategy that addresses the problem of sparse feature selection in the context of binary classification by leveraging a computational scheme known as expectation propagation (EP). The algorithm is used in order to train a continuous-weights perceptron learning a classification rule from a set of (possibly partly mislabeled) examples provided by a teacher perceptron with diluted continuous weights. We test the method in the Bayes optimal setting under a variety of conditions and compare it to other state-of-the-art algorithms based on message passing and on expectation maximization approximate inference schemes. Overall, our simulations show that EP is a robust and competitive algorithm in terms of variable selection properties, estimation accuracy and computational complexity, especially when the student perceptron is trained from correlated patterns that prevent other iterative methods from converging. Furthermore, our numerical tests demonstrate that the algorithm is capable of learning online the unknown values of prior parameters, such as the dilution level of the weights of the teacher perceptron and the fraction of mislabeled examples, quite accurately. This is achieved by means of a simple maximum likelihood strategy that consists in minimizing the free energy associated with the EP algorithm.
|
statistics
|
Empirical copula functions can be used to model the dependence structure of multivariate data. The Greenwald and Khanna algorithm is adapted in order to provide a space-memory efficient approximation to the empirical copula function of a bivariate stream of data. A succinct space-memory efficient summary of values seen in the stream up to a certain time is maintained and can be queried at any point to return an approximation to the empirical bivariate copula function with guaranteed error bounds. An example then illustrates how these summaries can be used as a tool to compute approximations to higher dimensional copula decompositions containing bivariate copulas. The computational benefits and approximation error of the algorithm are theoretically and numerically assessed.
|
statistics
|
Dark matter can be captured in neutron stars and heat them to observable luminosities. We study relativistic scattering of dark matter on highly degenerate electrons. We develop a Lorentz invariant formalism to calculate the capture probability of dark matter that accounts for the relativistic motion of the target particles and the Pauli exclusion principle. We find that the actual capture probability can be five orders of magnitude larger than the one estimated using a nonrelativistic approach. For dark matter masses $10~{\rm eV}\textup{--}10~{\rm PeV}$, neutron star heating complements and can be more sensitive than terrestrial direct detection searches. The projected sensitivity regions exhibit characteristic features that demonstrate a rich interplay between kinematics and Pauli blocking of the DM--electron system. Our results show that old neutron stars could be the most promising target for discovering leptophilic dark matter.
|
high energy physics phenomenology
|
The universal homogeneous triangle-free graph, constructed by Henson and denoted $\mathcal{H}_3$, is the triangle-free analogue of the Rado graph. While the Ramsey theory of the Rado graph has been completely established, beginning with Erd\H{o}s-Hajnal-Pos\'{a} and culminating in work of Sauer and Laflamme-Sauer-Vuksanovic, the Ramsey theory of $\mathcal{H}_3$ had only progressed to bounds for vertex colorings (Komj\'{a}th-R\"{o}dl) and edge colorings (Sauer). This was due to a lack of broadscale techniques. We solve this problem in general: For each finite triangle-free graph $G$, there is a finite number $T(G)$ such that for any coloring of all copies of $G$ in $\mathcal{H}_3$ into finitely many colors, there is a subgraph of $\mathcal{H}_3$ which is again universal homogeneous triangle-free in which the coloring takes no more than $T(G)$ colors. This is the first such result for a homogeneous structure omitting copies of some non-trivial finite structure. The proof entails developments of new broadscale techniques, including a flexible method for constructing trees which code $\mathcal{H}_3$ and the development of their Ramsey theory.
|
mathematics
|
The narrow-line Seyfert 1 galaxy IRAS17020+4544 is one of the few sources where both an X-ray ultra-fast outflow and a molecular outflow were observed to be consistent with energy conservation. However, IRAS17020+4544 is less massive and has a much more modest active galactic nucleus (AGN) luminosity than the other examples. Using recent CO(1-0) observations with the NOrthern Extended Millimeter Array (NOEMA), we characterised the molecular gas content of the host galaxy for the first time. We found that the molecular gas is distributed into an apparent central disc of 1.1x10^9 Msun, and a northern extension located up to 8 kpc from the centre with a molecular gas mass M_H2~10^8 Msun. The molecular gas mass and the CO dynamics in the northern extension reveal that IRAS 17020+4544 is not a standard spiral galaxy; instead it is interacting with a dwarf object corresponding to the northern extension. This interaction possibly triggers the high accretion rate onto the supermassive black hole. Within the main galaxy, which hosts the AGN, a simple analytical model predicts that the molecular gas may lie in a ring, with less molecular gas in the nuclear region. Such a distribution may be the result of the AGN activity which removes or photodissociates the molecular gas in the nuclear region (AGN feedback). Finally, we have detected a molecular outflow of mass M_H2=(0.7-1.2)x10^7 Msun in projection at the location of the northern galaxy, with a similar velocity to that of the massive outflow reported in previous millimeter data obtained by the Large Millimeter Telescope.
|
astrophysics
|
It is commonly known that the Fokker-Planck equation is exactly solvable only for some particular systems, usually with time-independent drift coefficients. To extend the class of solvable problems, we use the intertwining relations of SUSY Quantum Mechanics, but in a new, asymmetric, form. It turns out that this form is particularly useful for the solution of the Fokker-Planck equation. As usual, intertwining provides a partnership between two different systems, both described by the Fokker-Planck equation. Due to the use of an asymmetric kind of intertwining relations with a suitable ansatz, we managed to obtain a new class of analytically solvable models. Importantly, this approach allows us to deal with drift coefficients depending on both variables, $x$ and $t$. An illustrative example of the proposed construction is given explicitly.
|
quantum physics
|
MARTY is a C++ computer algebra system specialized for High Energy Physics that can calculate amplitudes, squared amplitudes and Wilson coefficients in a large variety of beyond the Standard Model scenarios up to the one-loop order. It is fully independent of any other framework and its main development guideline is generality, in order to be adapted easily to any type of model. The calculations are fully automated from the Lagrangian up to the generation of the C++ code evaluating the theoretical results (numerically, depending on the model parameters). Once a phenomenological tool chain has been set up - from a Lagrangian to observable analysis - it can be used in a model independent way leaving only model building, with MARTY, as the task to be performed by physicists. Here we present the main steps to build a general new physics model, namely gauge group, particle content, representations, replacements, rotations and symmetry breaking, using the example of a 2 Higgs Doublet Model. The sample codes that are shown for this example can be easily generalized to any Beyond the Standard Model scenario written with MARTY.
|
high energy physics phenomenology
|
In a recent paper it was shown how a matrix S-lemma can be applied to construct controllers from noisy data. The current paper complements these results by proving a matrix version of the classical Finsler's lemma. This matrix Finsler's lemma provides a tractable condition under which all matrix solutions to a quadratic equality also satisfy a quadratic inequality. We will apply this result to bridge known data-driven control design techniques for both exact and noisy data, thereby revealing a more general theory. The result is also applied to data-driven control of Lur'e systems.
|
mathematics
|
We present a mapping of correlated multi-impurity Anderson models to a cluster model coupled to a number of effective conduction bands capturing its essential low-energy physics. The major ingredient is the complex single-particle self energy matrix of the uncorrelated problem that encodes the influence of the host conduction band on the dynamics of a set of correlated orbitals. While the real part of the self-energy matrix generates an effective hopping between the cluster orbitals, the imaginary part determines the coupling to the effective conduction bands in the mapped model. The rank of the imaginary part determines the number of independent screening channels of the problem, and allows the replacement of the phenomenological exhaustion criterion by a rigorous mathematical statement. This rank provides a distinction between multi-impurity models of first kind and of second kind. For the latter, there are insufficient screening channels available, so that a singlet ground state must be driven by the inter-cluster spin correlations. This classification provides a fundamental answer to the question of why ferromagnetic correlations between local moments are irrelevant for the spin compensated ground state in dilute impurity models, whereas they compete with the Kondo scale in dense impurity arrays, without evoking a spin density wave. The low-temperature physics of three examples taken from the literature are deduced from the analytic structure of the mapped model, demonstrating the potential power of this approach. NRG calculations are presented for clusters of up to five sites. We investigate the appearance of frustration induced non-Fermi liquid fixed points in the trimer, and demonstrate the existence of several critical points of KT type at which ferromagnetic correlations suppress the screening of an additional effective spin-$1/2$ degree of freedom.
|
condensed matter
|
In this paper, thick branes generated by mimetic scalar field with Lagrange multiplier formulation are investigated. We give three typical thick brane background solutions with different asymptotic behaviors and show that all the solutions are stable under tensor perturbations. The effective potentials of the tensor perturbations exhibit as volcano potential, P\"{o}schl-Teller potential, and harmonic oscillator potential for the three background solutions, respectively. All the tensor zero modes (massless gravitons) of the three cases can be localized on the brane. We also calculate the corrections to the Newtonian potential. On a large scale, the corrections to the Newtonian potential can be ignored. While on a small scale, the correction from the volcano-like potential is more pronounced than the other two cases. Combining the latest results of short-range gravity experiments that the usual Newtonian potential $\propto1/r$ holds down to a length scale at $52\mu$m, we get the constraint on the scale parameter as $k\gtrsim 10^{-4}$eV, and constraint on the corresponding five-dimensional fundamental scale as $bM_\ast \gtrsim10^5$TeV.
|
high energy physics theory
|
In this article, we study "questionable representations" of (partial or total) orders, introduced in our previous article "A class of orders with linear? time sorting algorithm". (Later, we consider arbitrary binary functional/relational structures instead of orders.) A "question" is the first difference between two sequences (with ordinal index) of elements of orders/sets. In finite width "questionable representations" of an order O, comparison can be solved by looking at the "question" that compares elements of a finite order O'. A corollary of a theorem by Cantor (1895) is that all countable total orders have a binary (width 2) questionable representation. We find new classes of orders on which testing isomorphism or counting the number of linear extensions can be done in polynomial time. We also present a generalization of questionable-width, called balanced tree-questionable-width, and show that if a class of binary structures has bounded tree-width or clique-width, then it has bounded balanced tree-questionable-width. But there are classes of graphs of bounded balanced tree-questionable-width and unbounded tree-width or clique-width.
|
mathematics
|
Advances in high angular resolution astronomy make it conceivable that black hole dark matter could be detected via angular deviation effects. Assuming the dark matter in the galaxy is made of solar mass black holes, there is a non-trivial probability that a line-of-sight through the galaxy leads to micro-arcsecond deviations, a value that has been discussed for various astronomical projects. In cosmology the effects are magnified by an increased density at early times and an opening of angles due to redshift. If the dark matter is made of primordial black holes present at the CMB, random deflections of the CMB photons lead to a limit on the angular resolution, approximately ${3}\times 10^{-7} \sqrt{M/M_\odot}\, rad$, with $M$ the mass of the black holes. Using the resolutions of $\sim 10^{-3}\, rad$ demonstrated in observations of the "acoustic peaks" then implies the limit $(M/M_\odot)\lesssim 10^{7}$. While this large value seems uninteresting, improved resolutions would lead to significant limits or conceivably the discovery of primordial black holes.
|
astrophysics
|
For a smooth spacetime $X$, based on the timelike homotopy classes of its timelike paths, we define a topology on $X$ that refines the Alexandrov topology and always coincides with the manifold topology. We show that the space of such homotopy classes forms a semicategory which encodes enough information to reconstruct the topology and conformal structure of $X$. Its set of morphisms carries a natural topology that we prove to be locally Euclidean but, in general, not Hausdorff. Our results do not require any causality conditions on $X$ and do also hold under weaker regularity assumptions.
|
mathematics
|
Mobile edge caching (MEC) has received much attention as a promising technique to overcome the stringent latency and data hungry requirements in the future generation wireless networks. Meanwhile, full-duplex (FD) transmission can potentially double the spectral efficiency by allowing a node to receive and transmit in the same time/frequency block simultaneously. In this paper, we investigate the delivery time performance of full-duplex enabled MEC (FD-MEC) systems, in which the users are served by distributed edge nodes (ENs), which operate in FD mode and are equipped with a limited storage memory. Firstly, we analyse the FD-MEC with different levels of cooperation among the ENs and take into account a realistic model of self-interference cancellation. Secondly, we propose a framework to minimize the system delivery time of FD-MEC under both linear and optimal precoding designs. Thirdly, to deal with the non-convexity of the formulated problems, two iterative optimization algorithms are proposed based on the inner approximation method, whose convergence is analytically guaranteed. Finally, the effectiveness of the proposed designs is demonstrated via extensive numerical results. It is shown that the cooperative scheme mitigates inter-user and self interference significantly better than the distributed scheme at the expense of inter-EN cooperation. In addition, we show that the minimum mean square error (MMSE)-based precoding design achieves the best performance-complexity trade-off, compared with the zero-forcing and optimal designs.
|
computer science
|
We extend the holographic duality between 3d pure gravity and the 2d Ising CFT proposed in [Phys. Rev. D 85 (2012) 024032] to CFTs with boundaries. Besides the usual asymptotic boundary, the dual bulk spacetime now has a real cutoff, on which live branes with finite tension, giving Neumann boundary condition on the metric tensor. The strongly coupled bulk theory requires that we dress the well-known semiclassical AdS/BCFT answer with boundary gravitons, turning the partition function into the form of Virasoro characters. Using this duality, we relate the brane tensions to the modular S-matrix elements of the dual BCFT and derive the transformation between gravitational solutions with different brane tensions under modular S action.
|
high energy physics theory
|
Anelastic convection at high Rayleigh number in a plane parallel layer with no slip boundaries is considered. Energy and entropy balance equations are derived, and they are used to develop scaling laws for the heat transport and the Reynolds number. The appearance of an entropy structure consisting of a well-mixed uniform interior, bounded by thin layers with entropy jumps across them, makes it possible to derive explicit forms for these scaling laws. These are given in terms of the Rayleigh number, the Prandtl number, and the bottom to top temperature ratio, which measures how stratified the layer is. The top and bottom boundary layers are examined and they are found to be very different, unlike in the Boussinesq case. Elucidating the structure of these boundary layers plays a crucial part in determining the scaling laws. Physical arguments governing these boundary layers are presented, concentrating on the case in which the boundary layers are thin even when the stratification is large, the incompressible boundary layer case. Different scaling laws are found, depending on whether the viscous dissipation is primarily in the boundary layers or in the bulk. The cases of both high and low Prandtl number are considered. Numerical simulations of no-slip anelastic convection up to a Rayleigh number of $10^7$ have been performed and our theoretical predictions are compared with the numerical results.
|
physics
|
Motivated by recent studies of odd-parity multipole order in condensed matter physics, we theoretically study magnetoelectric responses in an electric octupole state. Investigating the Edelstein effect and spin Hall effect in a locally noncentrosymmetric bilayer Rashba model, we clarify characteristic properties due to parity violation in the electric octupole state. Furthermore, a possible realization of electric octupole order in bilayer high-Tc cuprate superconductors is proposed. Our calculation of magnetic torque is consistent with recent experimental observation of a kink above the superconducting transition temperature. We also show significant enhancement of the in-plane anisotropy in spin susceptibility due to the superconductivity, and propose an experimental test by means of the nuclear magnetic resonance in the superconducting state. A spin-orbit coupled metal state in Cd2Re2O7 is also discussed.
|
condensed matter
|
We aim to provably complete a sparse and highly-missing tensor in the presence of covariate information along tensor modes. Our motivation comes from online advertising, where users' click-through rates (CTR) on ads over various devices form a CTR tensor that has about 96% missing entries and has many zeros on non-missing entries, which makes the standalone tensor completion method unsatisfactory. Besides the CTR tensor, additional ad features or user characteristics are often available. In this paper, we propose Covariate-assisted Sparse Tensor Completion (COSTCO) to incorporate covariate information for the recovery of the sparse tensor. The key idea is to jointly extract latent components from both the tensor and the covariate matrix to learn a synthetic representation. Theoretically, we derive the error bound for the recovered tensor components and explicitly quantify the improvements on both the reveal probability condition and the tensor recovery accuracy due to covariates. Finally, we apply COSTCO to an advertisement dataset consisting of a CTR tensor and ad covariate matrix, leading to 23% accuracy improvement over the baseline. An important by-product is that ad latent components from COSTCO reveal interesting ad clusters, which are useful for better ad targeting.
|
statistics
|
This paper explores the problem good-case latency of Byzantine fault-tolerant broadcast, motivated by the real-world latency and performance of practical state machine replication protocols. The good-case latency measures the time it takes for all non-faulty parties to commit when the designated broadcaster is non-faulty. We provide a complete characterization of tight bounds on good-case latency, in the authenticated setting under synchrony, partial synchrony and asynchrony. Some of our new results may be surprising, e.g., 2-round PBFT-style partially synchronous Byzantine broadcast is possible if and only if $n\geq 5f-1$, and a tight bound for good-case latency under $n/3<f<n/2$ under synchrony is not an integer multiple of the delay bound.
|
computer science
|
Significance: Histopathological analysis of tissues is an essential tool for grading, staging, diagnosing and resecting cancers and other malignancies. Current histopathological techniques require substantial sample processing prior to staining with hematoxylin and eosin (H&E) dyes, to highlight nuclear and cellular morphology. Sample preparation and staining is resource-intensive and introduces potential for variability during sample preparation. Aim: We present a novel method for direct label-free histopathological assessment of formalin fixed paraffin embedded tissue blocks and thin tissue sections using a dual contrast photoacoustic remote sensing (PARS) microscopy system. Approach: To emulate the nuclear and cellular contrast of H&E staining, we leverage unique properties of the PARS system. Here the ultraviolet excitation of the PARS microscope takes advantage of DNA's unique optical absorption to provide nuclear contrast analogous to hematoxylin staining of cell nuclei. Concurrently, the optical scattering contrast of the PARS detection system is leveraged to provide bulk tissue contrast analogous to eosin staining of cell membranes. Results: We demonstrate the efficacy of this technique by imaging human breast tissue and human skin tissues in formalin fixed paraffin embedded tissue blocks and frozen sections respectively. Salient nuclear and extra-nuclear features, such as cancerous cells, glands and ducts, adipocytes, and stromal structures such as collagen, are resolved. Conclusions: The presented dual contrast PARS microscope enables label-free visualizations of tissue with contrast and quality analogous to the current gold standard for histopathological analysis. Thus, the proposed system is well positioned to augment existing histopathological workflows, providing histological imaging directly on unstained tissue blocks and sections.
|
physics
|
The low energy effective field theories of $(2+1)$ dimensional topological phases of matter provide powerful avenues for investigating entanglement in their ground states. In \cite{Fliss:2017wop} the entanglement between distinct Abelian topological phases was investigated through Abelian Chern-Simons theories equipped with a set of topological boundary conditions (TBCs). In the present paper we extend the notion of a TBC to non-Abelian Chern-Simons theories, providing an effective description for a class of gapped interfaces across non-Abelian topological phases. These boundary conditions furnish a defining relation for the extended Hilbert space of the quantum theory and allow the calculation of entanglement directly in the gauge theory. Because we allow for trivial interfaces, this includes a generic construction of the extended Hilbert space in any (compact) Chern-Simons theory quantized on a Riemann surface. Additionally, this provides a constructive and principled definition for the Hilbert space of effective ground states of gapped phases of matter glued along gapped interfaces. Lastly, we describe a generalized notion of surgery, adding a powerful tool from topological field theory to the gapped interface toolbox.
|
high energy physics theory
|
In this paper we find a host of boost operators for a very general choice of coproducts in AdS_3-inspired scattering theories, focusing on the massless sector, with and without an added trigonometric deformation. We find that the boost coproducts are exact symmetries of the R-matrices we construct, besides fulfilling the relations of modified Poincar\'{e}-type superalgebras. In the process, we discover an ambiguity in determining the boost coproduct which allows us to derive differential constraints on our R-matrices. In one particular case of the trigonometric deformation, we find a non-coassociative structure which satisfies the axioms of a quasi-Hopf algebra.
|
high energy physics theory
|
We present Neutron Star Interior Composition Explorer X-ray and Arcminute Microkelvin Imager Large Array radio observations of a rapid hard-to-soft state transition in the black hole X-ray transient MAXI J1820+070. During the transition from the hard state to the soft state a switch between two particular types of quasiperiodic oscillations (QPOs) was seen in the X-ray power density spectra, from type-C to type-B, along with a drop in the strength of the broadband X-ray variability and a brief flare in the 7-12 keV band. Soon after this switch (~1.5-2.5 hr) a strong radio flare was observed that corresponded to the launch of superluminal ejecta. Although hints of a connection between QPO transitions and radio flares have been seen in other black hole X-ray transients, our observations constitute the strongest observational evidence to date for a link between the appearance of type-B QPOs and the launch of discrete jet ejections.
|
astrophysics
|
Peccei-Quinn (PQ) symmetry breaking by perturbative dynamics would suffer from a hierarchy problem, just like the electroweak symmetry breaking in the standard model. The dynamics of the axion, associated with the PQ symmetry breaking, would also involve a triviality problem. We provide a paradigm to resolve those two problems potentially existing in the PQ symmetry breaking scenario, while keeping successful axion relaxation for the QCD strong CP phase. The proposed theory includes an axicolor dynamics with the axicolored fermions partially gauged by the QCD color, and is shown to be governed by an asymptotically safe (AS) fixed point: quantum scale invariance is built in. The AS axicolor is actually a "walking" dynamics, which dynamically breaks a PQ symmetry, a part of the chiral symmetry carried by the axicolored fermions. The PQ scale generation is then triggered by the nonperturbative dimensional transmutation in the "walking" dynamics. A composite axion emerges as the associated Nambu-Goldstone boson. That is, no hierarchy or triviality problem is present there. The composite axion can potentially be light due to the characteristic feature of the AS axicolor ("walking" axicolor), and the mass can reach the QCD axion limit. The axicolor theta term is unphysical, and can be rotated away by a residual chiral symmetry. Thus the QCD CP problem can unambiguously be solved and the axion relaxation is protected by the established (near) scale invariance during the "walking" regime.
|
high energy physics phenomenology
|
We provide a theoretical framework for the development of a solid-state thermal rectifier through a confinement in the available population of phonons on one side of an asymmetrically graded film stack. Using a modification of the phonon gas model to account for phonon filtering and population confinement, we demonstrate that for an ideal material, with low phonon anharmonicity, significant thermal rectification can be achieved even in the absence of ballistic phonon transport. This formalism is used to illustrate thermal rectification in a thin-film of diamond (1-5 nm) graded to dimensions > 1 {\mu}m exhibiting theoretical values of thermal rectification ratios between 0.75 and 6. Our theoretical formulation for thermal rectification is therefore expected to produce opportunities to design advanced solid-state devices that enable a variety of critical technologies.
|
condensed matter
|
Spin waves are excitations in ferromagnetic media that have been proposed as information carriers in spintronic devices with potentially much lower operation power than conventional charge-based electronics. The wave nature of spin waves can be exploited to design majority gates by coding information in their phase and using interference for computation. However, a scalable spin wave majority gate design that can be co-integrated alongside conventional Si-based electronics is still lacking. Here, we demonstrate a reconfigurable nanoscale inline spin wave majority gate with ultrasmall footprint, frequency-division multiplexing, and fan-out. Time-resolved imaging of the magnetisation dynamics by scanning transmission x-ray microscopy reveals the operation mode of the device and validates the full logic majority truth table. All-electrical spin wave spectroscopy further demonstrates spin wave majority gates with sub-micron dimensions, sub-micron spin wave wavelengths, and reconfigurable input and output ports. We also show that interference-based computation allows for frequency-division multiplexing as well as the computation of different logic functions in the same device. Such devices can thus form the foundation of a future spin-wave-based superscalar vector computing platform.
|
physics
|
Given a smooth complex toric variety we will compare real Lagerberg forms and currents on its tropicalization with invariant complex forms and currents on the toric variety. Our main result is a correspondence theorem which identifies the cone of invariant closed positive currents on the complex toric variety with closed positive currents on the tropicalization. In a subsequent paper, this correspondence will be used to develop a Bedford-Taylor theory of plurisubharmonic functions on the tropicalization.
|
mathematics
|
We investigate the physical role of various scale-similarity models in the stabilized mixed model [K. Abe, Int. J. Heat Fluid Flow, 39, 42 (2013); M. Inagaki and K. Abe, Int. J. Heat Fluid Flow, 64, 137 (2017)] and evaluate their performance in turbulent channel flows. Among various models in the present study, the original model combined with the scale-similarity model for the subgrid-scale (SGS)-Reynolds term yields the best prediction for the anisotropy of the grid-scale (GS) velocity fluctuations and the SGS stress, even in coarse grid resolutions. Moreover, it successfully predicts large intensities of the spectra close to the cut-off scale in accordance with the filtered direct numerical simulation, whereas other models predict a rapid decay of the spectra in the low-wavelength region. To investigate the behavior of the models close to the cut-off scale, we analyze the budget equation for the GS Reynolds stress spectrum. The result shows that the scale-similarity model for the SGS-Reynolds term plays a role in the enhancement of the wall-normal velocity fluctuation close to the cut-off scale. Thereby, it activates turbulence close to the cut-off scale, leading to a reproduction of the proper streak structures observed in wall-bounded turbulent flows. The reproduction of velocity fluctuations close to the cut-off scale and turbulent structures is a key element for further development of SGS models.
|
physics
|
Since $\mathrm{SO}(10)$ GUTs unify all fermions of the Standard Model plus a right-chiral neutrino in a representation $\mathbf{16}$ per family, they have the potential to be maximally predictive regarding the ratios between the masses (or Yukawa couplings) of different fermion types, i.e.~the up-type quarks, down-type quarks, charged leptons and neutrinos. We analyze the predictivity of classes of $\mathrm{SO}(10)$ (SUSY) GUT models for the fermion mass ratios, where the Yukawa couplings for each family are dominated by a single effective GUT operator of the schematic form $\mathbf{16}^2\cdot\mathbf{45}^n\cdot\mathbf{210}^{m}\cdot\mathbf{H}$, for $\mathbf{H}\in\{\mathbf{10},\mathbf{120},\mathbf{\overline{126}}\}$. This extends previous works to general vacuum expectation value directions for GUT-scale VEVs and to larger Higgs representations. In addition, we show that the location of the MSSM Higgses in the space of all doublets is a crucial aspect to consider. We discuss highly predictive cases and illustrate the predictive power in toy models consisting of masses for the 3rd and 2nd fermion family.
|
high energy physics phenomenology
|
Under the assumption that every material object can ultimately be described by quantum theory, we ask how a probe system evolves in a device prepared and kept in a superposition state of values of its classical parameter. We find that, under ideal conditions, the evolution of the system would be unitary, generated by an effective Hamiltonian. We describe also an incoherent use of the device that achieves the same effective evolution on an ensemble. The effective Hamiltonian thus generated may have qualitatively different features from that associated to a classical value of the parameter.
|
quantum physics
|
According to QBism, quantum states, unitary evolutions, and measurement operators are all understood as personal judgments of the agent using the formalism. Meanwhile, quantum measurement outcomes are understood as the personal experiences of the same agent. Wigner's conundrum of the friend, in which two agents ostensibly have different accounts of whether or not there is a measurement outcome, thus poses no paradox for QBism. Indeed the resolution of Wigner's original thought experiment was central to the development of QBist thinking. The focus of this paper concerns two very instructive modifications to Wigner's puzzle: One, a recent no-go theorem by Frauchiger and Renner, and the other a thought experiment by Baumann and Brukner. We show that the paradoxical features emphasized in these works disappear once both friend and Wigner are understood as agents on an equal footing with regard to their individual uses of quantum theory. Wigner's action on his friend then becomes, from the friend's perspective, an action the friend takes on Wigner. Our analysis rests on a kind of quantum Copernican principle: When two agents take actions on each other, each agent has a dual role as a physical system for the other agent. No user of quantum theory is more privileged than any other. In contrast to the sentiment of Wigner's original paper, neither agent should be considered as in "suspended animation." In this light, QBism brings an entirely new perspective to understanding Wigner's friend thought experiments.
|
quantum physics
|
Holter monitoring, a long-term ECG recording (24 hours or more), contains a large amount of valuable diagnostic information about the patient. Its interpretation becomes a difficult and time-consuming task for the doctor who analyzes it, because every heartbeat needs to be classified, thus requiring highly accurate methods for automatic interpretation. In this paper, we present a three-stage process for analysing Holter recordings that is robust to noisy signal. The first stage is a segmentation neural network (NN) with an encoder-decoder architecture which detects the positions of heartbeats. The second stage is a classification NN which classifies heartbeats as wide or narrow. The third stage is gradient boosting decision trees (GBDT) on top of NN features that incorporates patient-wise features and further increases the performance of our approach. As a part of this work we acquired 5095 Holter recordings of patients annotated by an experienced cardiologist. A committee of three cardiologists served as ground-truth annotators for the 291 examples in the test set. We show that the proposed method outperforms the selected baselines, including two commercial-grade software packages and some methods previously published in the literature.
|
electrical engineering and systems science
|
We investigate the wind of lambda And, a solar-mass star that has evolved off the main sequence, becoming a sub-giant. We present spectropolarimetric observations and use them to reconstruct the surface magnetic field of lambda And. Although much older than our Sun, this star exhibits a stronger (reaching up to 83 G) large-scale magnetic field, which is dominated by the poloidal component. To investigate the wind of lambda And, we use the derived magnetic map to simulate two stellar wind scenarios, namely a polytropic wind (thermally driven) and an Alfven-wave-driven wind with turbulent dissipation. From our 3D magnetohydrodynamics simulations, we calculate the wind thermal emission and compare it to previously published radio observations and more recent VLA observations, which we present here. These observations show a basal sub-mJy quiescent flux level at ~5 GHz and, at some epochs, a much larger flux density (>37 mJy), likely due to radio flares. By comparing our model results with the radio observations of lambda And, we can constrain its mass-loss rate Mdot. There are two possible conclusions. 1) Assuming the quiescent radio emission originates from the stellar wind, we conclude that lambda And has Mdot ~ 3e-9 Msun/yr, which agrees with the evolving mass-loss rate trend for evolved solar-mass stars. 2) Alternatively, if the quiescent emission does not originate from the wind, our models can only place an upper limit on mass-loss rates, indicating that Mdot <~ 3e-9 Msun/yr.
|
astrophysics
|
We conduct experiments with force-free magnetically-driven rigid helical swimmers in Newtonian and viscoelastic (Boger) fluids. By varying the sizes of the swimmer body and its helical tail, we show that the impact of viscoelasticity strongly depends on the swimmer geometry: it can lead to a significant increase of the swimming speed (up to a factor of five), a similar decrease (also up to a factor of five) or it can have approximately no impact. Analysis of our data along with theoretical modeling shows that the influence of viscoelasticity on helical propulsion is controlled by a snowman-like effect, previously reported for dumbbell swimmers, wherein the front-back asymmetry of the swimmer leads to a non-Newtonian elastic force that can either favor or hinder locomotion.
|
physics
|
This study investigates the unitary equivalence of split-step quantum walks (SSQWs). We consider a new class of quantum walks which includes all SSQWs. We show the explicit form of quantum walks in this class, and clarify their unitary equivalence classes. Unitary equivalence classes of Suzuki's SSQW are also given.
|
quantum physics
|
As the coronavirus disease 2019 (COVID-19) has shown profound effects on public health and the economy worldwide, it becomes crucial to assess the impact on the virus transmission and develop effective strategies to address the challenge. A new statistical model derived from the SIR epidemic model with functional parameters is proposed to understand the impact of weather and government interventions on the virus spread and also provide the forecasts of COVID-19 infections among eight metropolitan areas in the United States. The model uses Bayesian inference with Gaussian process priors to study the functional parameters nonparametrically, and sensitivity analysis is adopted to investigate the main and interaction effects of these factors. This analysis reveals several important results including the potential interaction effects between weather and government interventions, which shed new light on the effective strategies for policymakers to mitigate the COVID-19 outbreak.
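The proposed model extends the classic SIR compartmental equations by letting the transmission parameters be functions of weather and intervention covariates. As a baseline, the standard SIR dynamics can be sketched with a simple forward-Euler integration; the constant rates `beta` and `gamma` and all numerical values below are illustrative choices, not the paper's functional, Bayesian-inferred parameters.

```python
# Toy SIR model: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I,
# integrated with forward Euler on normalized populations (S + I + R = 1).
def sir_step(s, i, r, beta, gamma, dt):
    new_inf = beta * s * i * dt   # flow S -> I
    new_rec = gamma * i * dt      # flow I -> R
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate_sir(s0=0.99, i0=0.01, beta=0.4, gamma=0.1, days=160, dt=1.0):
    s, i, r = s0, i0, 0.0
    traj = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        traj.append((s, i, r))
    return traj
```

The paper's extension replaces the constants `beta` and `gamma` with nonparametric functions of time-varying covariates, placing Gaussian process priors on them.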
|
statistics
|
Enhancing the frequency bandwidth of seismic data has long been a pursuit of the geophysical community. High-resolution seismic data provide the key resource for extracting detailed stratigraphic knowledge. Here, a novel approach based on a deep learning model is introduced that extracts reflections from well-log data to broaden the spectral bandwidth of seismic data by boosting low and high frequencies. The corresponding improvement is observed in the enhanced resolution of the seismic data as well as the elimination of sidelobe artifacts from seismic wavelets. During the training stage of the deep learning model, geo-spatial information is fully preserved by taking multiple wells into consideration simultaneously, which ensures that lateral and vertical geological information is constrained by the well controls and remains accurate away from them during the inversion procedure. Extensive experiments prove that the enhanced seismic data are consistent with well-log information and honor rock-property relationships defined from the wells at given locations. Uncertainty analysis can also be performed quantitatively to determine the possibilities of a range of seismic responses by leveraging the outputs of the proposed approach.
|
electrical engineering and systems science
|
Acquisition and tracking systems form an important component of free-space optical communications due to the directional nature of the optical signal. Acquisition subsystems are needed in order to search for and locate the receiver terminal in an uncertainty/search region with very narrow laser beams. In this paper, we have proposed and analyzed two adaptive search schemes for acquisition systems that perform better than the spiral scanning approach when the probability of detection is low. The first of these schemes, the adaptive spiral search, provides a better acquisition time performance by dividing the search region into a number of smaller subregions, and prioritizing search in regions of higher probability mass. The second technique---the shotgun approach---searches the region in a random manner by sampling the search region according to a Gaussian distribution. The adaptive spiral scheme outperforms the shotgun approach in terms of acquisition time, especially if the number of search subregions is large enough. However, a higher pointing accuracy is required by the adaptive spiral search in order to search the region precisely. On the other hand, the shotgun scanning approach does not require such stringent pointing accuracy.
|
electrical engineering and systems science
|
The Lyapunov exponent characterizes the asymptotic behavior of long matrix products. Recognizing scenarios where the Lyapunov exponent is strictly positive is a fundamental challenge that is relevant in many applications. In this work we establish a novel tool for this task by deriving a quantitative lower bound on the Lyapunov exponent in terms of a matrix sum which is efficiently computable in ergodic situations. Our approach combines two deep results from matrix analysis --- the $n$-matrix extension of the Golden-Thompson inequality and the Avalanche-Principle. We apply these bounds to the Lyapunov exponents of Schr\"odinger cocycles with certain ergodic potentials of polymer type and arbitrary correlation structure. We also derive related quantitative stability results for the Lyapunov exponent near aligned diagonal matrices and a bound for almost-commuting matrices.
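The Lyapunov exponent of a matrix cocycle is the asymptotic exponential growth rate $\lambda = \lim_{n\to\infty} \frac{1}{n}\log\|M_n \cdots M_1\|$. A minimal numerical sketch of this limit, for an i.i.d. product of two fixed shear matrices, is below; the estimator (tracking the log-growth of a renormalized vector) and the particular matrices are illustrative, not the Golden-Thompson/Avalanche-Principle machinery of the paper.

```python
import math
import random

# Estimate the top Lyapunov exponent of a random product of 2x2 matrices
# by following a vector through the product and renormalizing each step
# (the running sum of log-norms divided by n converges to lambda).
def lyapunov_estimate(n_steps=20000, seed=0):
    rng = random.Random(seed)
    mats = [((1.0, 1.0), (0.0, 1.0)),   # upper shear
            ((1.0, 0.0), (1.0, 1.0))]   # lower shear
    v = (1.0, 1.0)
    log_growth = 0.0
    for _ in range(n_steps):
        m = mats[rng.randrange(2)]
        v = (m[0][0]*v[0] + m[0][1]*v[1],
             m[1][0]*v[0] + m[1][1]*v[1])
        norm = math.hypot(v[0], v[1])
        log_growth += math.log(norm)
        v = (v[0]/norm, v[1]/norm)      # renormalize to avoid overflow
    return log_growth / n_steps
```

For this pair of nonnegative shear matrices the exponent is strictly positive, which is exactly the kind of scenario the paper's quantitative lower bound is designed to certify.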
|
mathematics
|
The study of multiplicative noise models has a long history in control theory but is re-emerging in the context of complex networked systems and systems with learning-based control. We consider linear system identification with multiplicative noise from multiple state-input trajectory data. We propose exploratory input signals along with a least-squares algorithm to simultaneously estimate nominal system parameters and multiplicative noise covariance matrices. Identifiability of the covariance structure and asymptotic consistency of the least-squares estimator are demonstrated by analyzing first and second moment dynamics of the system. The results are illustrated by numerical simulations.
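A scalar version of the identification problem makes the least-squares step concrete. The one-dimensional system, the constant exploratory input range, and all numerical values below are illustrative assumptions, not the paper's multi-trajectory algorithm (which also estimates the noise covariance matrices).

```python
import random

# Toy 1D system with multiplicative noise: x_{t+1} = (a + w_t) x_t + b u_t,
# w_t ~ N(0, noise_std^2). Since w_t is independent of x_t, the noise term
# w_t x_t is conditionally zero-mean, so ordinary least squares over the
# regressors (x_t, u_t) consistently recovers a and b. The 2x2 normal
# equations are solved in closed form via Cramer's rule.
def identify(a=0.8, b=0.5, noise_std=0.05, steps=5000, seed=1):
    rng = random.Random(seed)
    x = 1.0
    sxx = sxu = suu = sxy = suy = 0.0
    for _ in range(steps):
        u = rng.uniform(-1.0, 1.0)             # persistently exciting input
        y = (a + rng.gauss(0.0, noise_std)) * x + b * u
        sxx += x*x; sxu += x*u; suu += u*u     # accumulate Gram matrix
        sxy += x*y; suy += u*y                 # and cross terms
        x = y
    det = sxx*suu - sxu*sxu
    a_hat = (suu*sxy - sxu*suy) / det
    b_hat = (sxx*suy - sxu*sxy) / det
    return a_hat, b_hat
```

Mean-square stability requires $a^2 + \sigma_w^2 < 1$, which holds for the values above; the paper's analysis of first and second moment dynamics generalizes this condition to the matrix case.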
|
electrical engineering and systems science
|
We rigorously prove the passage from a Lotka-Volterra reaction-diffusion system towards a cross-diffusion system at the fast reaction limit. The system models a competition of two species, where one species has a more diverse diet than the other. The resulting limit gives a cross-diffusion system of a starvation driven type. We investigate the linear stability of homogeneous equilibria of those systems and rule out the possibility of Turing instability. Numerical simulations are included which are compatible with the theoretical results.
|
mathematics
|
Multi-degree splines are smooth piecewise-polynomial functions where the pieces can have different degrees. We describe a simple algorithmic construction of a set of basis functions for the space of multi-degree splines, with similar properties to standard B-splines. These basis functions are called multi-degree B-splines (or MDB-splines). The construction relies on an extraction operator that represents all MDB-splines as linear combinations of local B-splines of different degrees. This enables the use of existing efficient algorithms for B-spline evaluations and refinements in the context of multi-degree splines. A Matlab implementation is provided to illustrate the computation and use of MDB-splines.
|
mathematics
|
In physics labs, students experience a wide range of equitable and inequitable interactions. We developed a methodology to characterize different lab groups in terms of their bid exchanges and inchargeness. An equitable group is one in which every student's bids are heard and acknowledged. Our analysis of equitable and inequitable groups raises questions about how inchargeness and gender interact to affect the functionality of a lab group.
|
physics
|
We present the G$_0$W$_0$ band structure, core levels, and deformation potential of monolayer FeSe in the paramagnetic phase based on a starting mean field of Kohn-Sham density functional theory (DFT) with the PBE functional. We find the GW correction increases the bandwidth of the states forming the $M$ pocket near the Fermi energy, while leaving the $\Gamma$ pocket roughly unchanged. We then compare the G$_0$W$_0$ quasiparticle band energies with the band structure from a simple empirical +A approach, which was recently proposed to capture the renormalization of the electron-phonon interaction going beyond DFT in FeSe, when used as a starting point in density functional perturbation theory (DFPT). We show that this empirical correction succeeds in approximating the GW non-local and dynamical self-energy in monolayer FeSe and reproduces the GW band structure near the Fermi surface, the core energy levels, and the deformation potential (electron-phonon coupling).
|
condensed matter
|
We study the problem of estimating a time dependent magnetic field by continuous optical probing of an atomic ensemble. The magnetic field is assumed to follow a stochastic Ornstein-Uhlenbeck process and it induces Larmor precession of the atomic ground state spin, which is read out by the Faraday polarization rotation of a laser field probe. The interactions and the measurement scheme are compatible with a hybrid quantum-classical Gaussian description of the unknown magnetic field, and the atomic and field variables. This casts the joint conditional quantum dynamics and classical parameter estimation problem in the form of update formulas for the first and second moments of the classical and quantum degrees of freedom. Our hybrid quantum-classical theory is equivalent with the classical theory of Kalman filtering and with the quantum theory of Gaussian states. By reference to the classical theory of smoothing and with the quantum theory of past quantum states, we show how optical probing after time $t$ improves our estimate of the value of the magnetic field at time $t$, and we present numerical simulations that analyze and explain the improvement over the conventional filtering approach.
|
quantum physics
|
In scalar-tensor Horndeski theories, non-singular cosmological models - bounce and genesis - are problematic because of potential ghost and/or gradient instabilities. One way to get around this obstacle is to send the effective Planck mass to zero in the asymptotic past ("strong gravity in the past"). One may suspect that this feature is a signal of strong coupling problem at early times. However, the classical treatment of the cosmological background is legitimate, provided that the strong coupling energy scale remains at all times much higher than the scale associated with the classical evolution. We construct various models of this sort, namely (i) bouncing Universe which proceeds through inflationary epoch to kination (expansion within GR, driven by massless scalar field); (ii) bouncing Universe with kination stage immediately after bounce; (iii) combination of genesis and bounce, with the Universe starting from flat space-time, then contracting and bouncing to the expansion epoch; (iv) "standard" genesis evading the strong coupling problem in the past. All these models are stable, and perturbations about the backgrounds are not superluminal.
|
high energy physics theory
|
Tipping elements in the climate system are large-scale subregions of the Earth that might possess threshold behavior under global warming with large potential impacts on human societies. Here, we study a subset of five tipping elements and their interactions in a conceptual and easily extendable framework: the Greenland and West Antarctic Ice Sheets, the Atlantic Meridional Overturning Circulation (AMOC), the El Niño-Southern Oscillation (ENSO) and the Amazon rainforest. In this nonlinear and multistable system, we perform a basin stability analysis to detect its stable states and their associated Earth system resilience. Using this approach, we perform a system-wide and comprehensive robustness analysis with more than 3.5 billion ensemble members. Further, we investigate dynamic regimes where some of the states lose stability and oscillations appear, using a newly developed basin bifurcation analysis methodology. Our results reveal that the state of four or five tipped elements has the largest basin volume for large levels of global warming beyond 4 {\deg}C above pre-industrial climate conditions. For lower levels of warming, states including disintegrated ice sheets on West Antarctica and Greenland have higher basin volume than other state configurations. Therefore in our model, we find that the large ice sheets are of particular importance for Earth system resilience. We also detect the emergence of limit cycles for 0.6% of all ensemble members at rare parameter combinations. Such limit cycle oscillations mainly occur between the Greenland Ice Sheet and AMOC (86%), due to their negative feedback coupling. These limit cycles point to possibly dangerous internal modes of variability in the climate system that could have played a role in paleoclimatic dynamics such as those unfolding during the Pleistocene ice age cycles.
|
physics
|
In both ${\cal N}=1$ and ${\cal N}=2$ supersymmetry, it is known that $\mathsf{Sp}(2n, {\mathbb R})$ is the maximal duality group of $n$ vector multiplets coupled to chiral scalar multiplets $\tau (x,\theta) $ that parametrise the Hermitian symmetric space $\mathsf{Sp}(2n, {\mathbb R})/ \mathsf{U}(n)$. If the coupling to $\tau$ is introduced for $n$ superconformal gauge multiplets in a supergravity background, the action is also invariant under super-Weyl transformations. Computing the path integral over the gauge prepotentials in curved superspace leads to an effective action $\Gamma [\tau, \bar \tau]$ with the following properties: (i) its logarithmically divergent part is invariant under super-Weyl and rigid $\mathsf{Sp}(2n, {\mathbb R})$ transformations; (ii) the super-Weyl transformations are anomalous upon renormalisation. In this paper we describe the ${\cal N}=1$ and ${\cal N}=2$ locally supersymmetric "induced actions" which determine the logarithmically divergent parts of the corresponding effective actions. In the ${\cal N}=1$ case, superfield heat kernel techniques are used to compute the induced action of a single vector multiplet $(n=1)$ coupled to a chiral dilaton-axion multiplet. We also describe the general structure of ${\cal N}=1$ super-Weyl anomalies that contain weight-zero chiral scalar multiplets $\Phi^I$ taking values in a K\"ahler manifold. Explicit anomaly calculations are carried out in the $n=1$ case.
|
high energy physics theory
|
Magnetic resonance imaging (MRI) is widely used for screening, diagnosis, image-guided therapy, and scientific research. A significant advantage of MRI over other imaging modalities such as computed tomography (CT) and nuclear imaging is that it clearly shows soft tissues in multi-contrasts. Compared with other medical image super-resolution (SR) methods that are in a single contrast, multi-contrast super-resolution studies can synergize multiple contrast images to achieve better super-resolution results. In this paper, we propose a one-level non-progressive neural network for low up-sampling multi-contrast super-resolution and a two-level progressive network for high up-sampling multi-contrast super-resolution. Multi-contrast information is combined in high-level feature space. Our experimental results demonstrate that the proposed networks can produce MRI super-resolution images with good image quality and outperform other multi-contrast super-resolution methods in terms of structural similarity and peak signal-to-noise ratio. Also, the progressive network produces a better SR image quality than the non-progressive network, even if the original low-resolution images were highly down-sampled.
|
electrical engineering and systems science
|
With the development of Deep Neural Networks (DNNs) and the substantial growth in demand for DNN model sharing and reuse, a security gap for backdoors remains. A backdoor can be injected into a third-party model and is extremely stealthy under normal operation, and thus has been widely discussed. Nowadays, the backdoor attack on deep neural networks has become a serious concern, prompting extensive research on attacks and defenses around backdoors in DNNs. In this paper, we propose a stealthy scapegoat backdoor attack that can escape mainstream detection schemes, which detect the backdoor either at the class level or the model level. We create a scapegoat to mislead the detection schemes at the class level and at the same time make our target model an adversarial input to the detection schemes at the model level. This reveals that although many effective backdoor defense schemes have been put forward, backdoor attacks on DNNs still need to be dealt with. The evaluation results on three benchmark datasets demonstrate that the proposed attack has an excellent performance in both aggressiveness and stealthiness against two state-of-the-art defense schemes.
|
computer science
|
We propose and study a BCJ double-copy of massive particles, showing that it is equivalent to a KLT formula with a kernel given by the inverse of a matrix of massive bi-adjoint scalar amplitudes. For models with a uniform non-zero mass spectrum we demonstrate that the resulting double-copy factors on physical poles and that up to at least 5-particle scattering, color-kinematics satisfying numerators always exist. For the scattering of 5 or more particles, the procedure generically introduces spurious singularities that must be cancelled by imposing additional constraints. When massive particles are present, color-kinematics duality is not enough to guarantee a physical double-copy. As an example, we apply the formalism to massive Yang-Mills and show that up to 4-particle scattering the double-copy construction generates physical amplitudes of a model of dRGT massive gravity coupled to a dilaton and a two-form with dilaton parity violating couplings. We show that the spurious singularities in the 5-particle double-copy do not cancel in this example, and the construction fails to generate physically sensible amplitudes. We conjecture sufficient constraints on the mass spectrum, which in addition to massive BCJ relations, guarantee the absence of spurious singularities.
|
high energy physics theory
|
We study the magnetic properties of CaFeTi$_2$O$_6$ (CFTO) by high-field magnetization and specific heat measurements. While the magnetic susceptibility data yield a vanishingly small Curie-Weiss temperature, the magnetic moments are not fully polarized in magnetic field up to 60 T, which reveals a large spin exchange energy scale. Yet, the system shows no long range magnetic order but a spin-glass-like state below 5.5 K in zero field, indicating strong magnetic frustration in this system. Applying magnetic field gradually suppresses the spin-glass-like state and gives rise to a potential quantum spin liquid state whose low-temperature specific heat exhibits a $T^{1.6}$ power-law. Crucially, conventional mechanisms for frustration do not apply to this system as it possesses neither apparent geometrical frustration nor exchange frustration. We suggest that the orbital modulation of exchange interaction is likely the source of hidden frustration in CFTO, and its full characterization may open a new route in the quest for quantum spin liquids.
|
condensed matter
|
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs, which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure. Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
|
computer science
|
Velocity-space anisotropy can significantly modify fusion reactivity. The nature and magnitude of this modification depends on the plasma temperature, as well as the details of how the anisotropy is introduced. For plasmas that are sufficiently cold compared to the peak of the fusion cross-section, anisotropic distributions tend to have higher yields than isotropic distributions with the same thermal energy. At higher temperatures, it is instead isotropic distributions that have the highest yields. However, the details of this behavior depend on exactly how the distribution differs from an isotropic Maxwellian. This paper describes the effects of anisotropy on fusion yield for the class of anisotropic distribution functions with the same energy distribution as a 3D isotropic Maxwellian, and compares those results with the yields from bi-Maxwellian distributions. In many cases, especially for plasmas somewhat below reactor-regime temperatures, the effects of anisotropy can be substantial.
|
physics
|
In this work we consider the problem of estimating a high-dimensional $p \times p$ covariance matrix $\Sigma$, given $n$ observations of confounded data with covariance $\Sigma + \Gamma \Gamma^T$, where $\Gamma$ is an unknown $p \times q$ matrix of latent factor loadings. We propose a simple and scalable estimator based on the projection on to the right singular vectors of the observed data matrix, which we call RSVP. Our theoretical analysis of this method reveals that in contrast to PCA-based approaches, RSVP is able to cope well with settings where the smallest eigenvalue of $\Gamma^T \Gamma$ is close to the largest eigenvalue of $\Sigma$, as well as settings where the eigenvalues of $\Gamma^T \Gamma$ are diverging fast. It is also able to handle data that may have heavy tails and only requires that the data has an elliptical distribution. RSVP does not require knowledge or estimation of the number of latent factors $q$, but only recovers $\Sigma$ up to an unknown positive scale factor. We argue this suffices in many applications, for example if an estimate of the correlation matrix is desired. We also show that by using subsampling, we can further improve the performance of the method. We demonstrate the favourable performance of RSVP through simulation experiments and an analysis of gene expression datasets collated by the GTEX consortium.
|
statistics
|
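The abstract above describes RSVP only at a high level. As an illustration of the general idea (not the paper's exact construction), a minimal sketch might project out the top-$q$ right singular directions of the data matrix, on the assumption that they carry the latent-factor signal; note the paper itself does not require $q$ to be known, and recovers $\Sigma$ only up to scale:

```python
import numpy as np

def rsvp_sketch(X, q):
    """Hypothetical RSVP-style covariance sketch: remove the top-q right
    singular directions of the n x p data matrix X (assumed to carry the
    latent-factor confounding) and rebuild a covariance estimate from the
    remaining directions. Only a scale-free surrogate for Sigma."""
    n, p = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = s.copy()
    s[:q] = 0.0  # project out the (assumed) confounded directions
    return (Vt.T * (s ** 2)) @ Vt / n  # V diag(s^2) V^T / n

rng = np.random.default_rng(0)
n, p, q = 200, 50, 3
Gamma = 3.0 * rng.normal(size=(p, q))     # latent factor loadings
Z = rng.normal(size=(n, q)) @ Gamma.T     # confounding component
X = rng.normal(size=(n, p)) + Z           # observed confounded data
Sigma_hat = rsvp_sketch(X, q)
```

Here the true $\Sigma$ is the identity; the sketch returns a symmetric positive semi-definite $p \times p$ matrix whose dominant confounded directions have been suppressed.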
It has been shown recently by Saad, Shenker and Stanford that the genus expansion of a certain matrix integral generates partition functions of Jackiw-Teitelboim (JT) quantum gravity on Riemann surfaces of arbitrary genus with any fixed number of boundaries. We use an extension of this integral to study a gas of baby universes, or wormholes, in JT gravity. To investigate the gas nonperturbatively we explore the generating functional of baby universes in the matrix model. The particularly simple case in which the matrix integral includes an exponential potential is discussed in some detail. We argue that there is a phase transition in the gas of baby universes.
|
high energy physics theory
|
In this work, we present the results of an observational study of 2I/Borisov carried out with the 10.4-m Gran Telescopio Canarias (GTC) and the 3.6-m Telescopio Nazionale Galileo (TNG), both telescopes located at the Roque de Los Muchachos Observatory on the island of La Palma (Spain). The study includes images in the visible and near-infrared, as well as visible spectra in the 3600 - 9200 A wavelength range. N-body simulations were also performed to explore its orbital evolution and Galactic kinematic context. The comet's dust continuum and near-infrared colours are compatible with those observed for Solar system comets. From its visible spectrum on the nights of 2019 September 24 and 26 we measured CN gas production rates Q(CN) = (2.3 +- 0.4) x 10^{24} mol/s and Q(CN) = (9.5 +- 0.2) x 10^{24} mol/s, respectively, in agreement with measurements reported by other authors on similar nights. We also obtained an upper limit for the C2 production rate of Q(C2) < (4.5 +- 0.1) x 10^{24} mol/s. Dust modelling results indicate a moderate dust production rate of about 50 kg/s at heliocentric distance r_h=2.6 au, with a differential power-law dust size distribution of index -3.4, within the range reported for many comet comae. Our simulations show that the Galactic velocity of 2I/Borisov matches well those of known stars in the solar neighbourhood and also those of more distant regions of the Galactic disc.
|
astrophysics
|
The transfer of topological concepts from the quantum world to classical mechanical and electronic systems has opened fundamentally new approaches to protected information transmission and wave guidance. A particularly promising technology is the recently discovered topolectrical circuit, which achieves robust electric signal transduction by mimicking edge currents in quantum Hall systems. In parallel, modern active matter research has shown how autonomous units driven by internal energy reservoirs can spontaneously self-organize into collective coherent dynamics. Here, we unify key ideas from these two previously disparate fields to develop design principles for active topolectrical circuits (ATCs) that can self-excite topologically protected global signal patterns. Building on a generic nonlinear oscillator representation, we demonstrate both theoretically and experimentally the emergence of self-organized protected edge states in ATCs. The good agreement between theory, simulations and experiment implies that ATCs can be realized in many different ways. Beyond topological protection, we also show how one can induce persistent localized bulk wave patterns by strategically placing defects in 2D lattice ATCs. These results lay the foundation for the practical implementation of autonomous electrical circuits with robust functionality in arbitrarily high dimensions.
|
condensed matter
|
Band selection, i.e., choosing a set of representative bands in a hyperspectral image (HSI), is an effective method to reduce redundant information without compromising the original content. Recently, various unsupervised band selection methods have been proposed, but most of them are based on approximation algorithms which can only obtain suboptimal solutions toward a specific objective function. This paper focuses on clustering-based band selection, and proposes a new framework to solve the above dilemma, claiming the following contributions: 1) An optimal clustering framework (OCF), which can obtain the optimal clustering result for a particular form of objective function under a reasonable constraint. 2) A rank-on-clusters strategy (RCS), which provides an effective criterion to select bands on an existing clustering structure. 3) An automatic method to determine the number of required bands, which can better evaluate the distinctive information produced by a certain number of bands. In experiments, the proposed algorithm is compared to some state-of-the-art competitors. According to the experimental results, the proposed algorithm is robust and significantly outperforms the other methods on various data sets.
|
electrical engineering and systems science
|
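To make the clustering-based setting above concrete, here is a toy sketch that treats each band as a vector of pixel values, runs plain k-means (not the paper's OCF/RCS algorithms), and selects the band closest to each cluster centroid as its representative:

```python
import numpy as np

def select_bands(cube, k, n_iter=50, seed=0):
    """Toy clustering-based band selection: plain Lloyd k-means over
    band signatures, then pick the member band nearest each centroid.

    cube: HSI array of shape (height, width, bands), float-valued.
    Returns a sorted list of up to k distinct band indices."""
    h, w, b = cube.shape
    bands = cube.reshape(h * w, b).T  # one row per band
    rng = np.random.default_rng(seed)
    centroids = bands[rng.choice(b, k, replace=False)]
    for _ in range(n_iter):  # standard Lloyd iterations
        d = np.linalg.norm(bands[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = bands[labels == j].mean(axis=0)
    # representative band = cluster member closest to its centroid
    d = np.linalg.norm(bands[:, None] - centroids[None], axis=2)
    labels = d.argmin(axis=1)
    selected = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if members.size:
            selected.append(int(members[d[members, j].argmin()]))
    return sorted(selected)

rng = np.random.default_rng(1)
cube = rng.normal(size=(8, 8, 20))  # tiny synthetic HSI cube
picked = select_bands(cube, 5)
```

This is only a baseline of the kind the paper improves upon; empty clusters may yield fewer than k bands.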
We consider a sparse linear regression model Y=X\beta^{*}+W where X has Gaussian entries, W is the noise vector with mean zero Gaussian entries, and \beta^{*} is a binary vector with support size (sparsity) k. Using a novel conditional second moment method we obtain an approximation, tight up to a multiplicative constant, of the optimal squared error \min_{\beta}\|Y-X\beta\|_{2}, where the minimization is over all k-sparse binary vectors \beta. The approximation reveals interesting structural properties of the underlying regression problem. In particular, a) We establish that n^*=2k\log p/\log (2k/\sigma^{2}+1) is a phase transition point with the following "all-or-nothing" property. When n exceeds n^{*}, (2k)^{-1}\|\beta_{2}-\beta^*\|_0\approx 0, and when n is below n^{*}, (2k)^{-1}\|\beta_{2}-\beta^*\|_0\approx 1, where \beta_2 is the optimal solution achieving the smallest squared error. With this we prove that n^{*} is the asymptotic threshold for recovering \beta^* information theoretically. b) We compute the squared error for an intermediate problem \min_{\beta}\|Y-X\beta\|_{2} where minimization is restricted to vectors \beta with \|\beta-\beta^{*}\|_0=2k \zeta, for \zeta\in [0,1]. We show that a lower-bound part \Gamma(\zeta) of the estimate, which corresponds to the estimate based on the first moment method, undergoes a phase transition at three different thresholds, namely n_{\text{inf,1}}=\sigma^2\log p, which is the information-theoretic bound for recovering \beta^* when k=1 and \sigma is large, then at n^{*}, and finally at n_{\text{LASSO/CS}}. c) We establish a certain Overlap Gap Property (OGP) on the space of all binary vectors \beta when n\le ck\log p for sufficiently small constant c. We conjecture that OGP is the source of algorithmic hardness of solving the minimization problem \min_{\beta}\|Y-X\beta\|_{2} in the regime n<n_{\text{LASSO/CS}}.
|
statistics
|
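The all-or-nothing threshold n^* = 2k log p / log(2k/\sigma^2 + 1) from the abstract above is easy to evaluate numerically; the following helper simply transcribes that formula:

```python
import math

def n_star(k, p, sigma2):
    """All-or-nothing sample-size threshold from the abstract:
    n* = 2k log p / log(2k / sigma^2 + 1)."""
    return 2 * k * math.log(p) / math.log(2 * k / sigma2 + 1)

# Example: sparsity k = 10, dimension p = 1000, noise variance sigma^2 = 1
print(n_star(10, 1000, 1.0))
```

As expected from the formula, the threshold grows logarithmically in the dimension p and increases with the noise level sigma^2.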
The East Asian very-long-baseline interferometry (VLBI) Network (EAVN) is a rapidly evolving international VLBI array that is currently promoted under joint efforts among China, Japan, and Korea. EAVN aims at forming a joint VLBI network by combining a large number of radio telescopes distributed over East Asian regions. After the combination of the Korean VLBI Network (KVN) and the VLBI Exploration of Radio Astrometry (VERA) into KaVA, further expansion of the joint array in East Asia is actively promoted. Here we report the first imaging results (at 22 and 43 GHz) of bright radio sources obtained with KaVA connected to the Tianma 65-m and Nanshan 26-m Radio Telescopes in China. To test the EAVN imaging performance for different sources, we observed four active galactic nuclei (AGN) having different brightness and morphology. As a result, we confirmed that the Tianma 65-m Radio Telescope (TMRT) significantly enhances the overall array sensitivity, a factor of 4 improvement in baseline sensitivity and 2 in image dynamic range compared to the case of KaVA only. The addition of the Nanshan 26-m Radio Telescope (NSRT) further doubled the east-west angular resolution. In the resulting high-dynamic-range, high-resolution EAVN (KaVA+TMRT+NSRT) images, various fine-scale structures in our targets, such as the counter-jet in M87, a kink-like morphology of the 3C273 jet, and weak emission in other sources, are successfully detected. This demonstrates the powerful capability of EAVN to study AGN jets and to achieve other science goals in general. Ongoing expansion of EAVN will further enhance the angular resolution, detection sensitivity and frequency coverage of the network.
|
astrophysics
|
This paper presents a variant of the sparse representation modeling method that shows promising performance in reconstructing delay differential equations from sampled data. In the new method, a parameterized dictionary of candidate functions is constructed in place of the traditional expanded dictionary. The parameterized dictionary uses a function with variables to represent a series of functions. It accordingly can express functions in the continuous function space, so the dimension of the dictionary can be exponentially decreased. This is particularly important when an exhaustive set of candidate functions is needed to construct an appropriate dictionary. The reconstruction of delay differential equations is such a case: each possible delay term should be considered as a basis element of the dictionary, which naturally induces the curse of dimensionality. Correspondingly, the parameterized dictionary models the delay term with a variable, so the curse disappears. Based on the parameterized dictionary, the reconstruction problem is then rewritten and treated as a mixed-integer nonlinear program with both binary and continuous variables. To the best of our knowledge, such optimization problems are hard to solve with traditional mathematical methods, while the emerging field of evolutionary computation provides competitive solutions. Hence, an evolutionary computation technique is considered, and a typical algorithm, particle swarm optimization, is adopted in this paper. Experiments are carried out on 5 test systems, including 3 well-known chaotic delay differential equations such as the Mackey-Glass system. The experimental results show the effectiveness of the new method in reconstructing delay differential equations.
|
electrical engineering and systems science
|
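Particle swarm optimization, the solver adopted in the abstract above, can be sketched in a few lines for the continuous case; the mixed-integer variant the paper needs requires extra handling of the binary variables, so this is only the generic algorithm on a test function, with the standard inertia/acceleration coefficients assumed:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization sketch (continuous variables
    only). Returns the best position found and its objective value."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))  # positions
    v = np.zeros_like(x)                        # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)      # personal bests
    g = pbest[pbest_f.argmin()].copy()          # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # standard coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Sanity check on the sphere function, whose minimum is 0 at the origin
best_x, best_f = pso_minimize(lambda z: np.sum(z ** 2), dim=3)
```

On this smooth unimodal objective the swarm converges close to the origin; the reconstruction problem in the paper is far harder, which is the motivation for using such population-based methods at all.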
This white paper describes the science case for Very Long Baseline Interferometry (VLBI) and provides suggestions towards upgrade paths for the European VLBI Network (EVN). The EVN is a distributed long-baseline radio interferometric array, that operates at the very forefront of astronomical research. Recent results, together with the new science possibilities outlined in this vision document, demonstrate the EVN's potential to generate new and exciting results that will transform our view of the cosmos. Together with e-MERLIN, the EVN provides a range of baseline lengths that permit unique studies of faint radio sources to be made over a wide range of spatial scales. The science cases are reviewed in six chapters that cover the following broad areas: cosmology, galaxy formation and evolution, innermost regions of active galactic nuclei, explosive phenomena and transients, stars and stellar masers in the Milky Way, celestial reference frames and space applications. The document concludes with identifying the synergies with other radio, as well as multi-band/multi-messenger instruments, and provide the recommendations for future improvements. The appendices briefly describe other radio VLBI arrays, the technological framework for EVN developments, and a selection of spectral lines of astrophysical interest below 100 GHz. The document includes a glossary for non-specialists, and a list of acronyms at the end.
|
astrophysics
|
Most existing neural network models for music generation explore how to generate music bars and then directly splice the bars into a song. However, these methods do not explore the relationship between the bars, and the resulting song as a whole has no musical form structure or sense of musical direction. To address this issue, we propose a Multi-model Multi-task Hierarchical Conditional VAE-GAN (Variational Autoencoder-Generative Adversarial Network), named MIDI-Sandwich, which incorporates musical knowledge such as musical form, tonic, and melodic motion. The MIDI-Sandwich has two submodels: a Hierarchical Conditional Variational Autoencoder (HCVAE) and a Hierarchical Conditional Generative Adversarial Network (HCGAN). The HCVAE uses a hierarchical structure. The lower layer of the HCVAE uses a Local Conditional Variational Autoencoder (L-CVAE) to generate a music bar that is pre-specified by its First and Last Notes (FLN). The upper layer of the HCVAE uses a Global Variational Autoencoder (G-VAE) to analyze the latent vector sequence generated by the L-CVAE encoder, exploring the musical relationship between the bars and producing the song pieced together from the music bars generated by the L-CVAE decoder, which gives the song both a musical structure and a sense of direction. At the same time, the HCVAE shares a part of itself with the HCGAN to further improve the quality of the generated music. The MIDI-Sandwich is validated on the Nottingham dataset and is able to generate a single-track melody sequence (17x8 beats), longer than the output of most generative models (8 to 32 beats). Meanwhile, following the experimental methodology of many classical works in the literature, a quality evaluation of the generated music is performed. These experiments demonstrate the validity of the model.
|
electrical engineering and systems science
|