text | label |
---|---|
Though various extensions of the Standard Model with higher gauge groups predict the existence of leptoquarks, none has yet been observed at any collider. In this paper, we study the prospects of past and future $e$-$p$ colliders such as HERA, the LHeC and the FCC-he for detecting them through radiation amplitude zero. We find that the leptoquarks showing zeros in the tree-level single-photon amplitudes at an $e$-$p$ collider lie within the complementary set of those exhibiting zeros at an $e$-$\gamma$ collider. We present a PYTHIA-based analysis for HERA, the LHeC and the FCC-he (run II) to detect leptoquarks with masses of 70 GeV, 900 GeV and 1.5 TeV (2.0 TeV) respectively through radiation amplitude zero. | high energy physics phenomenology |
Let $G$ be a finite simple graph and $I(G)$ denote the corresponding edge ideal in a polynomial ring over a field $\mathbb{K}$. In this paper, we obtain upper bounds for the Castelnuovo-Mumford regularity of symbolic powers of certain classes of edge ideals. We also prove that for several classes of graphs, the regularity of symbolic powers of their edge ideals coincides with that of their ordinary powers. | mathematics |
Shared-task campaigns such as NIST TREC select documents to judge by pooling rankings from many participant systems. Therefore, the quality of the test collection greatly depends on the number of participants and the quality of submitted runs. In this work, we investigate i) how the number of participants, coupled with other factors, affects the quality of a test collection; and ii) whether the quality of a test collection can be inferred prior to collecting relevance judgments. Experiments on six TREC collections demonstrate that the required number of participants to construct a high-quality test collection varies significantly across different test collections due to a variety of factors. Furthermore, results suggest that the quality of test collections can be predicted. | computer science |
We describe co-adjoint orbits and Casimir functions for two-step free-nilpotent Lie algebras. The symplectic foliation consists of affine subspaces of the Lie coalgebra of different dimensions. Further, we consider left-invariant time-optimal problems on two-step Carnot groups, for which the set of admissible velocities is a strictly convex compactum in the first layer of the Lie algebra containing the origin in its interior. We describe integrals for the vertical subsystem of the Hamiltonian system of Pontryagin maximum principle. Further, we describe constancy and periodicity of solutions to this subsystem and controls, and characterize its flow, for two-dimensional co-adjoint orbits. | mathematics |
Network complexity has been studied for over half a century and has found a wide range of applications. Many methods have been developed to characterize and estimate the complexity of networks. However, there has been little research with statistical guarantees. In this paper, we develop a statistical theory of graph complexity in a general model of random graphs, the so-called graphon model. Given a graphon, we endow the latent space of the nodes with the neighborhood distance that measures the propensity of two nodes to be connected with similar nodes. Our complexity index is then based on the covering number and the Minkowski dimension of (a purified version of) this metric space. Although the latent space is not identifiable, these indices turn out to be identifiable. This notion of complexity has simple interpretations on popular examples of random graphs: it matches the number of communities in stochastic block models; the dimension of the Euclidean space in random geometric graphs; the regularity of the link function in H\"older graphon models. From a single observation of the graph, we construct an estimator of the neighborhood-distance and show universal non-asymptotic bounds for its risk, matching minimax lower bounds. Based on this estimated distance, we compute the corresponding covering number and Minkowski dimension and we provide optimal non-asymptotic error bounds for these two plug-in estimators. | statistics |
With the deepening of research on SLAM systems, cooperative SLAM with multiple robots has been proposed. This paper presents a map matching and localization approach for the cooperative SLAM of an aerial-ground system. The proposed approach aims to precisely match the maps constructed by two independent systems that view the same route with large scale variance of viewpoints, and eventually enables the ground mobile robot to localize itself in the global map given by the drone. It comprises dense mapping with an elevation map and the software Metashape, map matching with a proposed template matching algorithm based on weighted normalized cross-correlation (WNCC), and localization with a particle filter. The approach enables map matching for cooperative SLAM with a variety of scene sensors, ranging from stereo cameras to lidars, and is insensitive to the synchronization of the two systems. We demonstrate the accuracy, robustness, and speed of the approach in experiments on the Aero-Ground Dataset. | computer science |
We present a new analytic technique, derived from a Poisson likelihood, to account for the statistical uncertainties inherent in simulation samples of limited size. This method has better coverage properties than other techniques, is valid for small data samples, and maintains good computational performance. | physics |
Advancements in deep generative models have made it possible to synthesize images, videos and audio signals that are difficult to distinguish from natural signals, creating opportunities for potential abuse of these capabilities. This motivates the problem of tracking the provenance of signals, i.e., being able to determine the original source of a signal. Watermarking the signal at the time of signal creation is a potential solution, but current techniques are brittle and watermark detection mechanisms can easily be bypassed by applying post-processing transformations (cropping images, shifting pitch in audio, etc.). In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations. Our detection method can be applied to domains with continuous data representations such as images, videos or sound signals. Experiments on watermarking image and audio signals show that our method can reliably detect the provenance of a signal, even if it has been through several post-processing transformations, and improve upon related work in this setting. Furthermore, we show that for specific kinds of transformations (perturbations bounded in the L2 norm), we can even obtain formal guarantees on the ability of our model to detect the watermark. We provide qualitative examples of watermarked image and audio samples at https://drive.google.com/open?id=1-yZ0WIGNu2Iez7UpXBjtjVgZu3jJjFga. | computer science |
Understanding the power of depth in feed-forward neural networks is an ongoing challenge in the field of deep learning theory. While current works account for the importance of depth for the expressive power of neural networks, it remains an open question whether these benefits are exploited during a gradient-based optimization process. In this work we explore the relation between expressivity properties of deep networks and the ability to train them efficiently using gradient-based algorithms. We give a depth separation argument for distributions with fractal structure, showing that they can be expressed efficiently by deep networks, but not by shallow ones. These distributions have a natural coarse-to-fine structure, and we show that the balance between the coarse and fine details has a crucial effect on whether the optimization process is likely to succeed. We prove that when the distribution is concentrated on the fine details, gradient-based algorithms are likely to fail. Using this result we prove that, at least for some distributions, the success of learning deep networks depends on whether the distribution can be well approximated by shallower networks, and we conjecture that this property holds in general. | computer science |
The surface orientation dependence of the hydrogen evolution reaction (HER) performance of topological crystalline insulator (TCI) SnTe thin films was studied. Their intrinsic electrochemical activities were determined by linear sweep voltammetry and cyclic voltammetry measurements, in which the obtained electrochemical surface areas agree well with the surface topology revealed by atomic force microscopy image analysis. It was found that the SnTe (001) and (111) surfaces exhibit superior intrinsic activities over those of various topological quantum and nanostructured materials such as the Pt-group-metal-based chiral crystals, while the (211) surface shows uncompetitive activity. Our density functional theory calculations reveal that while pure (001) and (111) are not good electrocatalysts, SnTe thin films with Sn vacancies or partially oxidized surfaces, the latter evidenced by X-ray photoelectron spectroscopy analysis, have high HER activity. The theoretical calculations show that the overall performance of the (001) and (111) surfaces with topological surface states (TSSs) is better than that of the (211) surface without TSSs, which is further confirmed by the weak antilocalization effect in magneto-transport measurements. We also address the effect of possible surface facets and contrast the HER activity of the available active sites among the three samples. The high HER activity of SnTe (001) and (111) is attributed to the enhanced charge transfer between hydrogen atoms and the TSSs. The absence or fragility of TSSs on the low-symmetry SnTe (211) surface explains its relatively low HER performance. Our study offers a direction for designing cost-effective electrocatalysts by tuning the TSSs and mirror symmetry of TCIs. | condensed matter |
We study the competition between stripe states with different periods and a uniform $d$-wave superconducting state in the extended 2D Hubbard model at 1/8 hole doping using infinite projected entangled-pair states (iPEPS). With increasing strength of the negative next-nearest-neighbor hopping $t'$, the preferred period of the stripe decreases. For the values of $t'$ predicted for cuprate high-T$_c$ superconductors, we find stripes with a period 4 in the charge order, in agreement with experiments. Superconductivity in the period 4 stripe is suppressed at $1/8$ doping. Only at larger doping, $0.18 \lesssim \delta < 0.25$, does the period 4 stripe exhibit coexisting $d$-wave superconducting order. The uniform $d$-wave state is only favored for sufficiently large positive $t'$. | condensed matter |
Deep convolutional neural networks trained for image object categorization have shown remarkable similarities with representations found across the primate ventral visual stream. Yet, artificial and biological networks still exhibit important differences. Here we investigate one such property: increasing invariance to identity-preserving image transformations found along the ventral stream. Despite theoretical evidence that invariance should emerge naturally from the optimization process, we present empirical evidence that the activations of convolutional neural networks trained for object categorization are not robust to identity-preserving image transformations commonly used in data augmentation. As a solution, we propose data augmentation invariance, an unsupervised learning objective that improves the robustness of the learned representations by promoting the similarity between the activations of augmented image samples. Our results show that this approach is a simple, yet effective and efficient (a 10% increase in training time) way of increasing the invariance of the models while obtaining similar categorization performance. | computer science |
Quantum technologies are built on the power of coherent superposition. Atomic coherence is typically generated from optical coherence, most often via Rabi oscillations. However, canonical coherent states of light create imperfect resources; a fully-quantized description of "$\tfrac{\pi}{2}$ pulses" shows that the atomic superpositions generated remain entangled with the light. We show that there are quantum states of light that generate coherent atomic states perfectly, with no residual atom-field entanglement. These states can be found for arbitrarily short times and approach slightly-number-squeezed $\tfrac{\pi}{2}$ pulses in the limit of large intensities; similar ideal states can be found for any $(2k+1)\tfrac{\pi}{2}$ pulses, requiring more number squeezing with increasing $k$. Moreover, these states can be repeatedly used as "quantum catalysts" to successfully generate coherent atomic states with high probability. From this perspective we have identified states that are "more coherent" than coherent states. | quantum physics |
The disc surrounding PDS 70, with two directly imaged embedded giant planets, is an ideal laboratory to study planet-disc interaction. We present three-dimensional smoothed particle hydrodynamics simulations of the system. In our simulations, planets, which are free to migrate and accrete mass, end up in a locked resonant configuration that is dynamically stable. We show that features observed at infrared (scattered light) and millimetre (thermal continuum) wavelengths are naturally explained by the accretion stream onto the outer planet, without requiring a circumplanetary disc around planet c. We post-processed our near-infrared synthetic images in order to account for observational biases known to affect high-contrast images. Our successful reproduction of the observations indicates that planet-disc dynamical interactions alone are sufficient to explain the observations of PDS 70. | astrophysics |
We study, using Mean Curvature Flow methods, 2+1 dimensional cosmologies with a positive cosmological constant and matter satisfying the dominant and the strong energy conditions. If the spatial slices are compact with non-positive Euler characteristic and are initially expanding everywhere, then we prove that the spatial slices reach infinite volume, asymptotically converge on average to de Sitter and they become, almost everywhere, physically indistinguishable from de Sitter. This holds true notwithstanding the presence of initial arbitrarily-large density fluctuations and the formation of black holes. | high energy physics theory |
Far from being conclusively understood, the reactive interaction of water with rutile still presents a challenge to atomistic modelling techniques rooted in quantum mechanics. We show that static geometries of stoichiometric TiO$_2$/water interfaces can be described well by Density Functional Tight Binding (DFTB). However, this method needs further improvements to reproduce the low dissociation propensity of H$_2$O after adsorption predicted by Density Functional Theory (DFT). A reliable description of the surface reactivity of water is fundamental to investigate the non-stoichiometric reconstruction of the (001) facet rich in Ti interstitials. Calculations based on DFT predict the transition temperature for the onset of reconstruction in remarkable agreement with experiments and suggest that this surface, in contact with liquid water, can promote spontaneous H$_2$O splitting and formation of H$_2$ molecules. | condensed matter |
In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the facial landmarks motion as curves encoded as points on a hypersphere. By proposing a conditional version of a manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, we learn the distribution of facial expression dynamics of different classes, from which we synthesize new facial expression motions. The resulting motions can be transformed to sequences of landmarks and then to image sequences by editing the texture information using another conditional Generative Adversarial Network. To the best of our knowledge, this is the first work that explores manifold-valued representations with GANs to address the problem of dynamic facial expression generation. We evaluate our proposed approach both quantitatively and qualitatively on two public datasets: Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the effectiveness of our approach in generating realistic videos with continuous motion, realistic appearance and identity preservation. We also show the efficiency of our framework for dynamic facial expression generation, dynamic facial expression transfer and data augmentation for training improved emotion recognition models. | computer science |
In this paper, we obtain various conditions on the parameters $a,\, b,\, c,\, d$ and $e$ under which the hypergeometric function $z\,{}_3F_2(a,b,c;d,e;z)$ belongs to the class of all close-to-convex functions with respect to some well-known convex functions. | mathematics |
Kulldorff's (1997) seminal paper on spatial scan statistics (SSS) has led to many methods considering different regions of interest, different statistical models, and different approximations, while also finding numerous applications in epidemiology, environmental monitoring, and homeland security. SSS provides a way to rigorously test for the existence of an anomaly and provides statistical guarantees as to how "anomalous" that anomaly is. However, these methods rely on defining specific regions in which the spatial information a point contributes is limited to a binary 0 or 1, indicating whether it lies inside or outside the region, while in reality anomalies tend to follow smooth distributions whose density decays away from an epicenter. In this work, we propose a method that addresses this shortcoming through a continuous scan statistic that generalizes SSS by allowing the point contribution to be defined by a kernel. We provide extensive experimental and theoretical results that show our methods can be computed efficiently while providing high statistical power for detecting anomalous regions. | statistics |
On-shell methods have revitalized interest in scattering amplitudes which have, in turn, shed some much needed light on the structure of quantum field theories. These developments have been warmly embraced by the particle physics community, but less so in the astrophysical and cosmological contexts. As part of an effort to address this imbalance, we illustrate these methods by revisiting two classic problems in gravity: gravitational light-bending and the vDVZ discontinuity of massive gravity. | high energy physics theory |
We present a bidirectional quantum communication system based on optical phase conjugation for achieving fully autocompensating high-dimensional quantum cryptography. We prove that perturbation-induced random phase shifts and couplings among 2N spatial and polarization optical modes, described by SU(2N) transformations, are autocompensated after a single round trip between Alice and Bob. Bob can use a source of single photons or, alternatively, coherent states that Alice then attenuates to the single-photon level, and thus unperturbed 1-qudit states are generated for high-dimensional QKD protocols of higher security, such as the BB84 one. | quantum physics |
We present an analysis of the continuous monitoring of a qudit coupled to a cavity using both phase-preserving and phase-sensitive amplification. We construct a stochastic master equation that describes the quantum trajectories of the system, and derive the corresponding Lindblad operators. We find that the measurement backaction causes spiralling in the state coordinates during collapse, which increases as the system levels become less distinguishable. We discuss two examples: a two-level system, and an N-dimensional system and meter with rotational symmetry in the quadrature space. We also provide a comparison of the effects of phase-preserving and phase-sensitive detection on the master equation, and show that the average behaviour is the same in both cases, but individual trajectories collapse at different rates depending on the measurement axis in the quadrature plane. | quantum physics |
The sensitivity of future far-infrared 10m-class space telescopes will be limited by confusion noise created by distant galaxies. Our primary goal is to create a model that allows us to estimate the confusion noise parameters of the Millimetron mission. We construct a model of the Cosmic Infrared Background (CIB) aimed at exploring methods for the prediction and reduction of the confusion noise. The model is based on the publicly available eGALICS simulation. For each simulated galaxy we construct a spectral energy distribution with the help of the GRASIL code. To put our model in the context of current CIB investigations, we compare the outputs of the model with the available observational data and with three other models. One is the well-known "backwards evolution" model of Bethermin et al. (2011) and the two others are based on a simple mass-luminosity (M-L) relation applied to simulated dark matter halo catalogs. We conclude that our model reproduces the observational data reasonably well. All four models show significant differences in their predictions of the distribution of sources on the flux-redshift plane, especially at high redshifts. We give estimates of the confusion noise based on number counts (the source density criterion, the probability of deflection criterion, etc.) and on the analysis of simulated maps. We show that resolution effects influence the number counts curves and noise estimates. | astrophysics |
We study Brill-Noether existence on a finite graph using methods from polyhedral geometry and lattices. We start by formulating analogues of the Brill-Noether conjectures (both the existence and non-existence parts) for $\mathbb{R}$-divisors, i.e. divisors with real coefficients, on a graph. We establish Brill-Noether existence for $\mathbb{R}$-divisors on graphs that are sufficiently dense and prove a weak version for arbitrary graphs. Using this, we prove an approximate version of the Brill-Noether existence conjecture for divisors on a graph. We also prove the Brill-Noether existence conjecture for degree equal to genus minus one on sufficiently dense graphs. As applications, we derive upper bounds on the gonality of a graph and its $\mathbb{R}$-divisor analogue. | mathematics |
In this note we investigate the existence of time-periodic solutions to the $p$-Navier-Stokes system in the singular case of $p\in (1, 2)$, which describes the flow of an incompressible shear-thinning fluid. In the $3D$ space-periodic setting and for $p \in [ \frac{5}{3} , 2)$ we prove the existence of a regular time-periodic solution corresponding to time-periodic force data which are assumed small in a suitable sense. As a particular case we obtain `regular' steady solutions. | mathematics |
Perturbative expansions for short-distance quantities in QCD are factorially divergent and this deficiency can be turned into a useful tool to investigate nonperturbative corrections. In this work, we use this approach to study the structure of power corrections to parton quasi-distributions and pseudo-distributions which appear in lattice calculations of parton distribution functions. As the main result, we predict the functional dependence of the leading power corrections to quasi(pseudo)-distributions on the Bjorken $x$ variable. We also show that these corrections can be strongly affected by the normalization procedure. | high energy physics phenomenology |
Unlike the entanglement of quantum states, very little is known about the entanglement of bipartite channels, called dynamical entanglement. Here we work with the partial transpose of a superchannel, and use it to define computable measures of dynamical entanglement, such as the negativity. We show that a version of it, the max-logarithmic negativity, represents the exact asymptotic dynamical entanglement cost. We discover a family of dynamical entanglement measures that provide necessary and sufficient conditions for bipartite channel simulation under local operations and classical communication and under operations with positive partial transpose. | quantum physics |
Observational evidence has been mounting for the existence of intermediate mass black holes (IMBHs, 10^2-10^5 Msun), but observing them at all, much less constraining their masses, is very challenging. In one theorized formation channel, IMBHs are the seeds for supermassive black holes in the early universe. As a result, IMBHs are predicted to exist in the local universe in dwarf galaxies, as well as wandering in more massive galaxy halos. However, these environments are not conducive to the accretion events or dynamical signatures that allow us to detect IMBHs. The Laser Interferometer Space Antenna (LISA) will demystify IMBHs by detecting the mergers of these objects out to extremely high redshifts, while measuring their masses with extremely high precision. These observations of merging IMBHs will allow us to constrain the formation mechanism and subsequent evolution of massive black holes, from the 'dark ages' to the present day, and reveal the role that IMBHs play in hierarchical galaxy evolution. | astrophysics |
The resonant interaction between x-ray photons and nuclei is one of the most exciting subjects of the burgeoning field of x-ray quantum optics. A resourceful platform used so far is thin-film x-ray cavities with embedded layers of M\"ossbauer nuclei such as $^{57}\mathrm{Fe}$. A new quantum optical model based on the classical electromagnetic Green's function is developed to investigate theoretically the nuclear response inside the x-ray cavity. The model is versatile and provides an intuitive picture of the influence of the cavity structure on the resulting spectra. We test its predictive power with the help of simulations in the semiclassical coherent scattering formalism and discuss our results for layer structures of increasing complexity. | quantum physics |
Musical expressivity is an important aspect of musical performance for humans as well as robotic musicians. We present a novel mechatronics-driven implementation of Brushless Direct Current (BLDC) motors in a robotic marimba player, named Shimon, designed to improve speed, dynamic range (loudness), and ultimately perceived musical expressivity in comparison to state-of-the-art robotic percussionist actuators. In an objective test of dynamic range, we find that our implementation provides wider and more consistent dynamic range response in comparison with solenoid-based robotic percussionists. Our implementation also outperforms both solenoid and human marimba players in striking speed. In a subjective listening test measuring musical expressivity, our system performs significantly better than a solenoid-based system and is statistically indistinguishable from human performers. | computer science |
We propose a scheme for detecting time-varying weak forces using a quantum probe consisting of a single spin and a quantum oscillator under the effect of collective dissipation. We study the force estimation in the steady-state regime, where the information on the force is extracted by measuring observables of the oscillator such as the quadrature and the mean phonon excitation. We quantify the force sensitivity in terms of the quantum Fisher information and show that it diverges approaching the critical spin-boson coupling, making the system sensitive to very small force perturbations. We show that close to the critical coupling the measurement of the oscillator quadrature is optimal in the sense that it saturates the fundamental Cramer-Rao bound. Furthermore, we study the force estimation in the presence of phonon squeezing and show that it can significantly improve the sensitivity, reaching a minimal detectable force of the order of xN ($1\,{\rm xN}=10^{-27}\,{\rm N}$). | quantum physics |
The suitability of molecular statics (MS) simulations to model the structure of 90$^\circ$ glide set partial dislocation cores in GaAs is analyzed. In the MS simulations the atomic positions are iteratively relaxed by energy minimization, for which a Tersoff potential parametrization appropriate for nanostructures has been used. We show that for the Ga-terminated partial the resulting bond lengths of the atoms in the dislocation core agree within 5-10% with those of previous density functional theory studies, whereas a significant discrepancy appears in the case of the As-terminated partial. | condensed matter |
In this paper, we consider the task of predicting travel times between two arbitrary points in an urban scenario. We view this problem from two temporal perspectives: long-term forecasting with a horizon of several days and short-term forecasting with a horizon of one hour. Both of these perspectives are relevant for planning tasks in the context of urban mobility and transportation services. We utilize tree-based ensemble methods that we train and evaluate on a dataset of taxi trip records from New York City. Through extensive data analysis, we identify relevant temporal and spatial features. We also engineer additional features based on weather and routing data. The latter is obtained via a routing solver operating on the road network. The computational results show that the addition of this routing data can be beneficial to the model performance. Moreover, employing different models for short and long-term prediction is useful as short-term models are better suited to mirror current traffic conditions. In fact, we show that accurate short-term predictions may be obtained with only little training data. | statistics |
An indium arsenide photovoltaic cell with gold front contacts is designed for use in a near-field thermophotovoltaic (NF-TPV) device consisting of millimeter-size surfaces separated by a nanosize vacuum gap. The device operates with a doped silicon radiator maintained at a temperature of 800 K. The architecture of the photovoltaic cell, including the emitter and base thicknesses, the doping level of the base, and the front contact grid parameters, is optimized for maximizing NF-TPV power output. This is accomplished by solving radiation and charge transport in the cell via fluctuational electrodynamics and the minority charge carrier continuity equations, in addition to accounting for the shading losses due to the front contacts and additional series resistance losses introduced by the front contacts and the substrate. The results reveal that these additional loss mechanisms negatively affect NF-TPV performance in a non-negligible manner, and that the maximum power output is a trade-off between shading losses and series resistance losses introduced by the front contacts. For instance, when the cell is optimized for a 1 x 1 mm$^2$ device operating at a vacuum gap of 100 nm, the losses introduced by the front contacts reduce the maximum power output by a factor of ~2.5 compared to the idealized case when no front contact grid is present. If the optimized grid for the 1 x 1 mm$^2$ device is scaled up for a 5 x 5 mm$^2$ device, the maximum power output is only increased by a factor of ~1.08 with respect to the 1 x 1 mm$^2$ case despite an increase of the surface area by a factor of 25. This work demonstrates that the photovoltaic cell in a NF-TPV device must be designed not only for a specific radiator temperature, but also for a specific gap thickness and device surface area. | physics |
A new class of representations of the Brauer algebra that centralizes the action of orthogonal and symplectic groups in tensor spaces is found. These representations make it possible to apply the technique of building primitive orthogonal idempotents of the Brauer algebra to the construction of integer-spin Behrends-Fronsdal type projectors of an arbitrary symmetry type. | high energy physics theory |
In recent years, hematite's potential as a photoanode material for solar hydrogen production has ignited renewed interest in its physical and interfacial properties, which remain an active field of research. Research on hematite photoanodes provides new insights into the correlations between electronic structure, transport properties, excited-state dynamics and charge transfer phenomena, and expands our knowledge of solar cell materials into correlated electron systems. This research news article presents a snapshot of selected theoretical and experimental developments linking the electronic structure to the photoelectrochemical performance, with particular focus on optoelectronic properties and charge carrier dynamics. | physics |
In two-dimensional traps, the theoretical study of Bose-Einstein condensation (BEC) encounters the problem of divergence, so the actual contribution of the divergent terms is often estimated in indirect ways that are accurate only to leading order. In this paper, by using an analytical continuation method to solve the divergence problem, we obtain analytical expressions for the critical temperature and condensate fraction of Bose gases in a two-dimensional anisotropic box and harmonic trap, respectively. They are consistent with or better than previous results. We then further consider a nonvanishing chemical potential, and obtain expressions for the chemical potential and a more precise condensate fraction. These results agree well with numerical calculations, especially in the case of harmonic traps. The comparison between the grand canonical and canonical ensembles shows that our calculation in the grand canonical ensemble is reliable. | condensed matter |
We introduce a simple single-system game inspired by the Clauser-Horne-Shimony-Holt (CHSH) game. For qubit systems subjected to unitary gates and projective measurements, we prove that any strategy in our game can be mapped to a strategy in the CHSH game, which implies that Tsirelson's bound also holds in our setting. More generally, we show that the optimal success probability depends on the reversible or irreversible character of the gates, the quantum or classical nature of the system and the system dimension. We analyse the bounds obtained in light of Landauer's principle, showing the entropic costs of the erasure associated with the game. This shows a connection between the reversibility in fundamental operations embodied by Landauer's principle and Tsirelson's bound, that arises from the restricted physics of a unitarily-evolving single-qubit system. | quantum physics |
We construct a proof of the second law of thermodynamics in an arbitrary diffeomorphism invariant theory of gravity, working within the approximation of linearized dynamical fluctuations around stationary black holes. We achieve this by establishing the existence of an entropy current defined on the horizon of the dynamically perturbed black hole in such theories. By construction, this entropy current has non-negative divergence, suggestive of a mechanism for the dynamical black hole to approach a final equilibrium configuration via entropy production as well as its spatial flow on the null horizon. This enables us to argue for the second law in its strongest possible form, which has a manifest locality at each space-time point. We explicitly check that the form of the entropy current constructed in this paper exactly matches previously reported expressions computed for specific four-derivative theories of higher curvature gravity. Using the same setup we also provide an alternative proof of the physical process version of the first law applicable to arbitrary higher derivative theories of gravity. | high energy physics theory |
Programmable, intelligent surfaces can manipulate electromagnetic waves impinging upon them, producing arbitrarily shaped reflection, refraction and diffraction, to the benefit of wireless users. Moreover, in their recent form of HyperSurfaces, they have acquired inter-networking capabilities, enabling the Internet of Material Properties with immense potential in wireless communications. However, as with any system with inputs and outputs, accurate sensing of the impinging wave attributes is imperative for programming HyperSurfaces to obtain a required response. Related solutions include field nano-sensors embedded within HyperSurfaces to perform minute measurements over the area of the HyperSurface, as well as external sensing systems. The present work proposes a sensing system that can operate without such additional hardware. The novel scheme programs the HyperSurface to perform compressed sensing of the impinging wave via simple one-antenna power measurements. The HyperSurface can be jointly programmed for both wave sensing and wave manipulation duties. Evaluation via simulations validates the concept and highlights its promising potential. | computer science |
Our social interactions depend mainly on the social phenomenon called trust. We evaluate our trust in a peer to decide whether to start an interaction or not. When our information about the peer is not sufficient, we use the knowledge of others; this knowledge can also be referred to as the reputation of the peer in the community. As in real-life communities, trust and reputation play a key role in virtual communities, too. These two notions help us manage the complex interactions between agents in virtual communities. In previous studies of this topic, the social aspect of trust and reputation has been partly ignored. In this paper, we review an article that we take as a starting point and compare it with another article that provides a more advanced model. Additionally, we introduce a new trust model based mainly on sociological notions. | computer science |
Stellar streams formed by tidal stripping of progenitors orbiting the Milky Way are expected to be perturbed by encounters with dark matter subhalos. Recent studies have shown that they are an excellent proxy for inferring properties of the perturbers, such as their mass. Here we present two different methodologies that make use of the fully non-Gaussian density distribution of stellar streams: a Bayesian model selection based on the probability density function (PDF) of stellar density, and a likelihood-free gradient boosting classifier. As an application, we forecast the model-selection strength of evidence for cold dark matter clusters of masses $10^3$-$10^5 M_{\odot}$, $10^5$-$10^7 M_{\odot}$ and $10^7$-$10^9 M_{\odot}$, based on a GD-1-like stellar stream and including realistic observational errors. Evidence for the smallest mass range, so far under-explored, is particularly interesting for the primordial black hole cold dark matter hypothesis. We expect moderate to strong evidence for model selection based on the PDF analysis when assuming the low and intermediate dark matter perturber mass ranges as fiducial models, but only weak evidence when the largest mass range is taken to be the fiducial model. The gradient boosting model, in contrast, is a highly efficient classifier ($F_1$-scores larger than 90%) for all mass ranges considered here. | astrophysics |
In the first part of this note we argue that ten-dimensional consistency requirements in the form of a certain tadpole cancellation condition can be satisfied by KKLT-type vacua of type IIB string theory. We explain that a new term of non-local nature is generated dynamically once supersymmetry is broken and ensures cancellation of the tadpole. It can be interpreted as the stress caused by the restoring force that the stabilization mechanism exerts on the volume modulus. In the second part, we explain that it is surprisingly difficult to engineer warped throats that are sufficiently long to prevent decompactification and also small enough in size to fit into the bulk Calabi-Yau (CY). We give arguments that achieving this with a reasonable amount of control may not be possible in generic CY compactifications, while CYs with very non-generic geometrical properties might evade our conclusion. | high energy physics theory |
Recently it was proposed that the entanglement entropy of the Hawking radiation reflects the information of a region including the interior of the event horizon, which is called the "island." This also implies that the information of the complement of the Hawking radiation, which is nothing but the black hole, is not located inside the event horizon, at least after the Page time. We study the entanglement entropy of the black hole as the complement of the Hawking radiation, for the eternal Schwarzschild black hole in four-dimensional asymptotically flat spacetime. Although the entanglement entropy of the black hole before the Page time is given by that of a region including the interior of the event horizon, it can be interpreted as a consequence of the replica trick in gravitational theories, in a similar fashion to the island for the Hawking radiation. Comparing it with the maximally extended island, we find that the information of the black hole is localized on a surface near the event horizon, which may be interpreted as the stretched horizon. This structure also resembles black holes in the AdS spacetime with an auxiliary flat spacetime, where the information of the black hole is localized at the interface between the AdS spacetime and the flat spacetime. | high energy physics theory |
The photoelectric effect has a sister process relevant in optoelectronics called internal photoemission. Here an electron is photoemitted from a metal into a semiconductor. While the photoelectric effect takes place within less than 100 attoseconds, the attosecond time scale has so far not been measured for internal photoemission. Based on the new method CHArge transfer time MEasurement via Laser pulse duration-dependent saturation fluEnce determinatiON, CHAMELEON, we show that the atomically thin semi-metal graphene coupled to bulk silicon carbide, forming a Schottky junction, allows charge transfer times as fast as (300 $\pm$ 200) attoseconds. These results are supported by a simple quantum mechanical model simulation. With the obtained cut-off bandwidth of 3.3 PHz for the charge transfer rate, this semimetal-semiconductor interface represents the first functional solid-state interface offering the speed and design space required for future light-wave signal processing. | physics |
The distribution of galaxies on a colour-magnitude diagram reveals a bimodality, featuring a passively evolving red sequence and a star-forming blue cloud. The region between these two, the Green Valley (GV), represents a fundamental transition where quenching processes operate. We exploit an alternative definition of the GV using the 4,000 Angstrom break strength, an indicator that is more resilient than colour to dust attenuation. We compare and contrast our GV definition with the traditional one, based on dust-corrected colour, making use of data from the Sloan Digital Sky Survey. Our GV selection - that does not need a dust correction and thus does not carry the inherent systematics - reveals very similar trends regarding nebular activity (star formation, AGN, quiescence) to the standard dust-corrected $^{0.1}(g-r)$. By use of high SNR stacked spectra of the quiescent GV subsample, we derive the simple stellar population (SSP) age difference across the GV, a rough proxy of the quenching timescale ($\Delta$t). We obtain an increasing trend with velocity dispersion ($\sigma$), from $\Delta$t$\sim$1.5Gyr at $\sigma$=100km/s, up to 3.5Gyr at $\sigma$=200km/s, followed by a rapid decrease in the most massive GV galaxies ($\Delta$t$\sim$1Gyr at $\sigma$=250km/s), suggesting two different modes of quenching, or the presence of an additional channel (rejuvenation). | astrophysics |
We measured the height of the chromospheric network in the 1700, 1600, and 304 A wavelength bands of the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO) from the shift of features on the disk with respect to corresponding features in SDO/Helioseismic and Magnetic Imager (HMI) images of the absolute value of the longitudinal magnetic field. We found that near the limb the 304 A network emission forms 3.60$\pm$0.24 Mm above the 1600 A emission, which, in turn, forms 0.48$\pm$0.10 Mm above the HMI (6173 A) level. At the center of the disk the corresponding height differences are 2.99$\pm$0.02 Mm and 0.39$\pm$0.06 Mm respectively. We also found that the 1600 A network emission forms 0.25$\pm$0.02 Mm above the 1700 A emission near the limb and 0.20$\pm$0.02 Mm at the disk center. Finally, we examined possible variations with the solar cycle. Our results can help to check and refine atmospheric models. | astrophysics |
The Glashow resonant scattering, i.e. $\overline{\nu}^{}_{e} + e^{-} \rightarrow W^{-} \rightarrow \text{anything}$, offers us a possibility of disentangling $\overline{\nu}^{}_{e}$ from the total astrophysical neutrino fluxes. Meanwhile, a great number of high-energy neutrino telescopes, with various detection mechanisms, are advancing towards a better understanding of one of the most energetic frontiers of the Universe. In this work, we investigate a connection between through-going muons at IceCube and the Glashow resonance signal through the channel $W^{-} \rightarrow \mu$. We find that for IceCube, muons from $\overline{\nu}^{}_{e}$ can induce a $\sim20\%$ excess of PeV events around the horizontal direction. However, the current statistics of IceCube are not enough to observe such an excess. We also address the novel possibility of $\overline{\nu}^{}_{e}$ detection via $W^{-} \rightarrow \tau$ at telescopes aiming to detect Earth-skimming and mountain-penetrating neutrinos. The subsequent hadronic decay of a tau will induce an extensive air shower which can be detected by telescopes with Cherenkov or fluorescence techniques. Similar to IceCube, it is challenging to observe the Glashow resonance excess from the Earth-skimming neutrinos. Nevertheless, we find it promising to observe Glashow resonance events with a mountain as the target. | high energy physics phenomenology |
As big spatial data becomes increasingly prevalent, classical spatiotemporal (ST) methods often do not scale well. While methods have been developed to account for high-dimensional spatial objects, the setting with exceedingly large samples of spatial observations has received less attention. The variational autoencoder (VAE), an unsupervised generative model based on deep learning and approximate Bayesian inference, fills this void using a latent variable specification that is inferred jointly across the large number of samples. In this manuscript, we compare the performance of the VAE with a more classical ST method when analyzing longitudinal visual fields from a large cohort of patients in a prospective glaucoma study. Through simulation and a case study, we demonstrate that the VAE is a scalable method for analyzing ST data when the goal is to obtain accurate predictions. R code to implement the VAE can be found on GitHub: https://github.com/berchuck/vaeST. | statistics |
We study the Carbon Monoxide (CO) excitation, mean molecular gas density and interstellar radiation field (ISRF) intensity in a comprehensive sample of 76 galaxies from local to high redshift (z~0-6), selected based on detections of their CO transitions J=2-1 and 5-4 and their optical/infrared/(sub-)millimeter spectral energy distributions (SEDs). We confirm the existence of a tight correlation between CO excitation as traced by the CO(5-4)/(2-1) line ratio (R52), and the mean ISRF intensity U as derived from infrared SED fitting using dust SED templates. By modeling the molecular gas density probability distribution function (PDF) in galaxies and predicting CO line ratios with large velocity gradient radiative transfer calculations, we present a framework linking global CO line ratios to the mean molecular hydrogen gas density nH2 and kinetic temperature Tkin. Mapping in this way observed R52 ratios to nH2 and Tkin probability distributions, we obtain positive U-nH2 and U-Tkin correlations, which imply a scenario in which the ISRF in galaxies is mainly regulated by Tkin and (non-linearly) by nH2. A small fraction of starburst galaxies showing enhanced nH2 could be due to merger-driven compaction. Our work demonstrates that ISRF and CO excitation are tightly coupled, and that density-PDF modeling is a promising tool for probing detailed ISM properties inside galaxies. | astrophysics |
Remote Direct Memory Access (RDMA) is becoming widely available in data centers. This technology allows a process to directly read and write the memory of a remote host, with a mechanism to control access permissions. In this paper, we study the fundamental power of these capabilities. We consider the well-known problem of achieving consensus despite failures, and find that RDMA can improve the inherent trade-off in distributed computing between failure resilience and performance. Specifically, we show that RDMA allows algorithms that simultaneously achieve high resilience and high performance, while traditional algorithms had to choose one or the other. With Byzantine failures, we give an algorithm that only requires $n \geq 2f_P + 1$ processes (where $f_P$ is the maximum number of faulty processes) and decides in two (network) delays in common executions. With crash failures, we give an algorithm that only requires $n \geq f_P + 1$ processes and also decides in two delays. Both algorithms tolerate a minority of memory failures inherent to RDMA, and they provide safety in asynchronous systems and liveness with standard additional assumptions. | computer science |
It is currently believed that turbulent fluctuations pervade the outermost heliosphere. Turbulence, magnetic reconnection, and their link may be responsible for magnetic energy conversion in these regions. The governing mechanisms of such anisotropic and compressible magnetic turbulence in the inner heliosheath (IHS) and in the local interstellar medium (LISM) still lack a thorough description. The present literature mainly concerns large scales, which are not representative of the inertial-cascade dynamics of turbulence. Moreover, the lack of broadband spectral analysis leaves the IHS dynamics critically understudied. Our recent study shows that 48 s magnetic-field data from the Voyager mission are appropriate for a spectral analysis over a frequency range of six decades, from $5 \times 10^{-8}$ Hz to $10^{-2}$ Hz. Here, focusing on the Voyager 2 observation interval from 2013.824 to 2016.0, we describe the structure of turbulence in a sector zone of the IHS. A spectral break around $7 \times 10^{-7}$ Hz (magnetic structures with size $l \sim 1.3$ astronomical units) separates the energy-injection regime from the inertial-cascade regime of turbulence. A second scale is observed around $6 \times 10^{-5}$ Hz ($l \sim 0.017$ AU) and corresponds to a peak of compressibility and intermittency of fluctuations. | physics |
Ram pressure stripping can remove gas from satellite galaxies in clusters via a direct interaction between the intracluster medium (ICM) and the interstellar medium. This interaction is generally thought of as a contact force per unit area; however, we point out that these gases must interact in a hydrodynamic fashion, and argue that this will lead to mixing of the galactic gas with the ICM wind. We develop an analytic framework for how mixing is related to the acceleration of stripped gas from a satellite galaxy. We then test this model using three "wind-tunnel" simulations of Milky Way-like galaxies interacting with a moving ICM, and find excellent agreement with the predictions of the analytic framework. Focusing on the dense clumps in the stripped tails, we find that they are nearly uniformly mixed with the ICM, indicating that all gas in the tail mixes with the surroundings and that dense clumps are not separate entities to be modeled differently from diffuse gas. We find that while mixing drives the acceleration of stripped gas, the density and velocity of the surrounding wind determine whether the mixing results in the heating of stripped gas into the ICM, or the cooling of the ICM into dense clouds. | astrophysics |
Numerous deep learning architectures have been developed to accommodate the diversity of time series datasets across different domains. In this article, we survey common encoder and decoder designs used in both one-step-ahead and multi-horizon time series forecasting -- describing how temporal information is incorporated into predictions by each model. Next, we highlight recent developments in hybrid deep learning models, which combine well-studied statistical models with neural network components to improve on pure methods in either category. Lastly, we outline some ways in which deep learning can also facilitate decision support with time series data. | statistics |
Parity-Time ($\mathcal{PT}$) symmetry has become an important concept in the design of synthetic optical materials, with exotic functionalities such as unidirectional transport and non-reciprocal reflection. At exceptional points, this symmetry is spontaneously broken, and solutions transition from those with conserved intensity to exponential growth or decay. Here we analyze a quantum-photonic surface formed by a single layer of atoms in an array with light mediating strong cooperative many-body interactions. We show how delocalized collective excitation eigenmodes can exhibit an effective $\mathcal{PT}$ symmetry and non-exponential decay. This effective symmetry is achieved in a passive system without gain by balancing the scattering of a bright mode with the loss from a subradiant dark mode. These modes coalesce at exceptional points, evidenced by the emergence of coherent perfect absorption where coherent incoming light is perfectly absorbed and scattered only incoherently. We also show how $\mathcal{PT}$ symmetry can be generated in total reflection and by balancing scattering and loss between different polarizations of collective modes. | physics |
Societal biases are a major issue in school students' access to and interaction with science. School engagement programmes in science run by universities, such as independent research projects, could try to tackle these problems but are often inequitable. We evaluate these concerns for one such programme, `Physics Research in School Environments' (PRiSE), which features projects in space science, astronomy, and particle physics. Comparing the schools involved with PRiSE to those of other similar schemes and to UK national statistics, we find that PRiSE has engaged a much more diverse set of schools, with significantly more disadvantaged groups, than is typical. While drop-off occurs within the protracted programme, we find no evidence of systematic biases present. The majority of schools that complete projects return for multiple years of the programme, with this repeated buy-in from schools again being unpatterned by typical societal inequalities. Therefore, schools' ability to succeed at independent research projects appears independent of background within the PRiSE framework. Qualitative feedback from teachers shows that the diversity and equity of the programme, which they attribute to the level of support offered through PRiSE's framework, is valued, and they have highlighted further ways of making the projects potentially even more accessible. Researcher involvement, uncommon in many other programmes, along with teacher engagement and communication, are found to be key elements of success in independent research projects overall. | physics |
We show that a non-zero renormalised value of the zero-point energy in $\lambda\phi^4$-theory over Minkowski spacetime is in tension with the scalar-field equation at two-loop order in perturbation theory. | high energy physics theory |
The hadronic vacuum polarization function $\Pi_h$ for two light flavors is computed on the entire domain of spacelike and timelike momenta using a framework of Dyson-Schwinger equations. The analytical continuation of the function $\Pi_h$ is based on the Gauge Technique, with the QCD Green's functions that enter determined from generalized quark spectral functions. For the first time, the light quark spectral functions are extracted from the solution of the gap equation for the quark propagator. The scale is set by the phenomenon of dynamical chiral symmetry breaking, which is a striking feature of low-energy QCD. | high energy physics phenomenology |
We present new results on approximate colourings of graphs and, more generally, approximate H-colourings and promise constraint satisfaction problems. First, we show NP-hardness of colouring $k$-colourable graphs with $\binom{k}{\lfloor k/2\rfloor}-1$ colours for every $k\geq 4$. This improves the result of Bul\'in, Krokhin, and Opr\v{s}al [STOC'19], who gave NP-hardness of colouring $k$-colourable graphs with $2k-1$ colours for $k\geq 3$, and the result of Huang [APPROX-RANDOM'13], who gave NP-hardness of colouring $k$-colourable graphs with $2^{k^{1/3}}$ colours for sufficiently large $k$. Thus, for $k\geq 4$, we improve from known linear/sub-exponential gaps to exponential gaps. Second, we show that the topology of the box complex of H alone determines whether H-colouring of G-colourable graphs is NP-hard for all (non-bipartite, H-colourable) G. This formalises the topological intuition behind the result of Krokhin and Opr\v{s}al [FOCS'19] that 3-colouring of G-colourable graphs is NP-hard for all (3-colourable, non-bipartite) G. We use this technique to establish NP-hardness of H-colouring of G-colourable graphs for H that include but go beyond $K_3$, including square-free graphs and circular cliques (leaving $K_4$ and larger cliques open). Underlying all of our proofs is a very general observation that adjoint functors give reductions between promise constraint satisfaction problems. | computer science |
Nowadays, with many e-commerce platforms conducting global business, e-commerce search systems are required to handle product retrieval under multilingual scenarios. Moreover, compared with maintaining per-country specific e-commerce search systems, having a universal system across countries can further reduce operational and computational costs, and facilitate business expansion to new countries. In this paper, we introduce a universal end-to-end multilingual retrieval system, and discuss our learnings and technical details when training and deploying the system to serve billion-scale product retrieval for e-commerce search. In particular, we propose a multilingual graph attention based retrieval network by leveraging recent advances in transformer-based multilingual language models and graph neural network architectures to capture the interactions between search queries and items in e-commerce search. Offline experiments on data from five countries show that our algorithm outperforms the state-of-the-art baselines by 35% recall and 25% mAP on average. Moreover, the proposed model shows a significant increase in conversion/revenue in online A/B experiments and has been deployed in production for multiple countries. | computer science
As the simplest model of the transition between the superhydrophobic Cassie-Baxter (CB) and Wenzel (W) states of a macroscopic droplet sitting on a microscopically rough or corrugated substrate, a substrate whose surface is covered by identical truncated or inverted truncated conical pores is considered. The free-energy landscapes of the intrusion and extrusion processes of a liquid into a single pore are analyzed when the liquid is compressed or stretched so that the liquid phase is either stable or metastable relative to the vapor phase. Therefore, this model is also relevant to the stability of superhydrophobic submerged substrates. In this study, the macroscopic classical capillary theory is adopted. Even within this simplified model, the two simple geometries of truncated and inverted truncated cones lead to completely different free-energy landscapes. A simple criterion for the stability of the CB state based on the Laplace pressure is shown not to be sufficient to understand the destruction and recovery of the CB state. The free-energy landscapes indicate that both a gradual and an abrupt destruction of the CB state are possible, depending on the orientation of the conical pore and on whether the liquid is compressed or stretched. The extensions of these theoretical results to more complex geometries are briefly discussed. | condensed matter
Nonlinear fluctuating hydrodynamics (NFHD) is a powerful framework for understanding transport, but checking its validity with molecular dynamics is still challenging. Here, we overcome this challenge by developing an effective scheme for detecting hydrodynamic modes that takes into account the role of pressure fluctuations. We show that the predictions given by NFHD for the relaxation processes of hydrodynamic modes are valid only when the pressure of the system is zero and the pressure fluctuations are weak. For nonvanishing pressure, two other regimes arise, as the hydrodynamic modes can respond to small and large pressure fluctuations and relax in two further, distinct manners. In contrast to the previous finding of two classes, our results suggest that there are at least three universality classes of transport in anharmonic chains. | condensed matter
We study the topological properties of the global supply chain network in terms of its degree distribution, hierarchical structure, and degree-degree correlation. The global supply chain data are constructed by collecting company data from the website of Standard & Poor's Capital IQ platform in 2018. The in- and out-degree distributions are characterized by a power law with in-degree exponent 2.42 and out-degree exponent 2.11. The clustering coefficient decays as a power law with an exponent 0.46. The nodal degree-degree correlation indicates the absence of assortativity. The bow-tie structure of the GWCC reveals that the OUT component is the largest, comprising 41.1% of all firms. The GSCC component comprises 16.4% of all firms. We observe that firms on the upstream or downstream sides are mostly located a few steps away from the GSCC. Furthermore, we uncover the community structure of the network and characterize the communities according to their location and industry classification. We observe that the largest community consists of the consumer discretionary sector, mainly based in the US. These firms belong to the OUT component in the bow-tie structure of the global supply chain network. Finally, we confirm the validity of propositions S1 (short path length), S2 (power-law degree distribution), S3 (high clustering coefficient), S4 ("fit-gets-richer" growth mechanism), S5 (truncation of power-law degree distribution), and S7 (community structure with overlapping boundaries) for the global supply chain network. | physics
The Lindblad form of the master equation has proven to be one of the most convenient ways to describe the impact of an environment interacting with a quantum system of interest. For single systems the jump operators characterizing these interactions usually take simple forms with a clear interpretation. However, for coupled systems these operators take significantly different forms and the full dynamics cannot be described by jump operators acting on the individual subsystems only. In this work, we investigate the differences between a common phenomenological model for the master equation and the more rigorous dressed-state master equation for optomechanical systems. We provide an analytical method to obtain the absorption spectrum of the system for both models and show the breakdown of the phenomenological model in both the bad cavity and the ultra-strong coupling limit. We present a careful discussion of the indirect dephasing of the optical cavity in both models and its role in the differences of their predicted absorption spectra. Our work provides a simple experimental test to determine whether the simpler phenomenological model can be used to describe the system and is a step forward toward a better understanding of the role of the coupling between subsystems for open-quantum-system dynamics. | quantum physics |
Contrastive learning methods have significantly narrowed the gap between supervised and unsupervised learning on computer vision tasks. In this paper, we explore their application to remote sensing, where unlabeled data is often abundant but labeled data is scarce. We first show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks. To close the gap, we propose novel training methods that exploit the spatiotemporal structure of remote sensing data. We leverage spatially aligned images over time to construct temporal positive pairs in contrastive learning and geo-location to design pre-text tasks. Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing and other geo-tagged image datasets. | computer science |
Objects gravitationally captured by the Earth-Moon system are commonly called temporarily captured orbiters (TCOs), natural Earth satellites, or minimoons. TCOs are a crucially important subpopulation of near-Earth objects (NEOs) to understand because they are the easiest targets for future sample-return, redirection, or asteroid mining missions. Only one TCO has ever been observed telescopically, 2006 RH120, and it orbited Earth for about 11 months. Additionally, only one TCO fireball had ever been observed prior to this study. We present our observations of an extremely slow fireball (codename DN160822_03) with an initial velocity of around 11.0 km s$^{-1}$ that was detected by six of the high-resolution digital fireball observatories located in the South Australian region of the Desert Fireball Network. Due to the inherent dynamics of the system, the probability of the meteoroid being temporarily captured before impact is extremely sensitive to its initial velocity. We examine the sensitivity of the fireball's orbital history to the chosen triangulation method. We use the numerical integrator REBOUND to assess particle histories and the statistical origin of DN160822_03. From our integrations we find that the most probable capture time, velocity, semimajor axis, NEO group, and capture mechanism vary annually for this event. Most particles show that there is an increased capture probability during Earth's aphelion and perihelion. In the future, events like these may be detected ahead of time using telescopes like the Large Synoptic Survey Telescope, and the pre-atmospheric trajectory can be verified. | astrophysics
Local deformations in medical modalities are common phenomena due to a multitude of factors such as metallic implants or limited field of views in magnetic resonance imaging (MRI). Completion of the missing or distorted regions is of special interest for automatic image analysis frameworks to enhance post-processing tasks such as segmentation or classification. In this work, we propose a new generative framework for medical image inpainting, titled ipA-MedGAN. It bypasses the limitations of previous frameworks by enabling inpainting of arbitrary shaped regions without a prior localization of the regions of interest. Thorough qualitative and quantitative comparisons with other inpainting and translational approaches have illustrated the superior performance of the proposed framework for the task of brain MR inpainting. | electrical engineering and systems science |
We compute the critical exponents $\nu$, $\eta$ and $\omega$ of $O(N)$ models for various values of $N$ by implementing the derivative expansion of the nonperturbative renormalization group up to next-to-next-to-leading order [usually denoted $\mathcal{O}(\partial^4)$]. We analyze the behavior of this approximation scheme at successive orders and observe an apparent convergence with a small parameter -- typically between $1/9$ and $1/4$ -- compatible with previous studies in the Ising case. This allows us to give well-grounded error bars. We obtain a determination of critical exponents with a precision which is similar or better than those obtained by most field theoretical techniques. We also reach a better precision than Monte-Carlo simulations in some physically relevant situations. In the $O(2)$ case, where there is a longstanding controversy between Monte-Carlo estimates and experiments for the specific heat exponent $\alpha$, our results are compatible with those of Monte-Carlo but clearly exclude experimental values. | condensed matter |
Quantum computing harnesses quantum laws of nature to enable new types of algorithms, not efficiently possible on traditional computers, that may lead to breakthroughs in crucial areas like materials science and chemistry. There is rapidly growing demand for a quantum workforce educated in the basics of quantum computing, in particular in quantum programming. However, there are few offerings for non-specialists and little information on best practices for training computer science and engineering students. In this report we describe our experience teaching an undergraduate course on quantum computing using a practical, software-driven approach. We centered our course around teaching quantum algorithms through hands-on programming, reducing the significance of traditional written assignments and relying instead on self-paced programming exercises ("Quantum Katas"), a variety of programming assignments, and a final project. We observed that the programming sections of the course helped students internalize theoretical material presented during the lectures. In the survey results, students indicated that the programming exercises and the final project contributed the most to their learning process. We describe the motivation for centering the course around quantum programming, discuss major artifacts used in this course, and present our lessons learned and best practices for a future improved course offering. We hope that our experience will help guide instructors who want to adopt a practical approach to teaching quantum computing and will enable more undergraduate programs to offer quantum programming as an elective. | physics |
We study a generic model in which the dark sector is composed of a Majorana dark matter $\chi_1$, its excited state $\chi_2$, both at the electroweak scale, and a light dark photon $Z'$ with $m_{Z'} \sim 10^{-4}$ eV. The light $Z'$ enhances the self-scattering elastic cross section $\chi_1 \chi_1 \to \chi_1 \chi_1$ enough to solve the small scale problems in the $N$-body simulations with the cold dark matter. The dark matter communicates with the SM via kinetic mixing parameterized by $\epsilon$. The inelastic scattering process $\chi_1 \chi_1 \to \chi_2 \chi_2$ followed by the prompt decay $\chi_2 \to \chi_1 Z'$ generates energetic $Z'$. By setting $\delta \equiv m_{\chi_2} - m_{\chi_1} \simeq 2.8$ keV and $\epsilon \sim 10^{-10}$ the excess in the electron-recoil data at the XENON1T experiment can be explained by the dark photoelectric effect. The relic abundance of the dark matter can also be accommodated by the thermal freeze-out mechanism via the annihilation $\chi_1 \chi_1 (\chi_2 \chi_2) \to Z' Z'$ with the dark gauge coupling constant $\alpha_X \sim 10^{-3}$. | high energy physics phenomenology |
In this paper, we consider decompositions of 3-manifolds with three handlebodies. We classify such decompositions of the 3-sphere and lens spaces with small genera. These decompositions admit operations called stabilizations. We also determine whether these decompositions are stabilized. | mathematics |
Visual Query Answering (VQA) is of great significance in offering people convenience: one can raise a question about details of objects, or high-level understanding of the scene, over an image. This paper proposes a novel method to address the VQA problem. In contrast to prior works, our method, which targets single-scene VQA, relies on graph-based techniques and involves reasoning. In a nutshell, our approach is centered on three graphs. The first graph, referred to as the inference graph $G_I$, is constructed via learning over labeled data. The other two graphs, referred to as the query graph $Q$ and the entity-attribute graph $G_{EA}$, are generated from the natural language query $Q_{nl}$ and the image Img, issued by users, respectively. As $G_{EA}$ often does not contain sufficient information to answer $Q$, we develop techniques to infer the missing information of $G_{EA}$ with $G_I$. Based on $G_{EA}$ and $Q$, we provide techniques to find matches of $Q$ in $G_{EA}$ as the answer to $Q_{nl}$ in Img. Unlike commonly used VQA methods that are based on end-to-end neural networks, our graph-based method shows well-designed reasoning capability, and thus is highly interpretable. We also create a dataset on soccer matches (Soccer-VQA) with rich annotations. The experimental results show that our approach outperforms the state-of-the-art method and has high potential for future investigation. | computer science
Efficiently estimating properties of large and strongly coupled quantum systems is a central focus in many-body physics and quantum information theory. While quantum computers promise speedups for many such tasks, near-term devices are prone to noise that will generally reduce the accuracy of such estimates. Here we show how to mitigate errors in the shadow estimation protocol recently proposed by Huang, Kueng, and Preskill. By adding an experimentally friendly calibration stage to the standard shadow estimation scheme, our robust shadow estimation algorithm can obtain an unbiased estimate of the classical shadow of a quantum system and hence extract many useful properties in a sample-efficient and noise-resilient manner given only minimal assumptions on the experimental conditions. We give rigorous bounds on the sample complexity of our protocol and demonstrate its performance with several numerical experiments. | quantum physics |
Neutrinos may acquire small Dirac or Majorana masses by new low-energy physics in terms of the chiral gravitational anomaly, as proposed by Dvali and Funcke (2016). This model predicts fast neutrino decays, $\nu_i\to\nu_j+\phi$ and $\nu_i\to\bar{\nu}_j+\phi$, where the gravi-majorons $\phi$ are pseudoscalar Nambu-Goldstone bosons. The final-state neutrino and antineutrino distributions differ depending on the Dirac or Majorana mass of the initial state. This opens a channel for distinguishing these cases, for example in the spectrum of high-energy astrophysical neutrinos. In particular, we put bounds on the neutrino lifetimes in the Majorana case, ${\tau_2}/{m_2}> 1.1\times 10^{-3}(6.7\times 10^{-4})~{\rm s/eV}$ and ${\tau_3}/{m_3}> 2.2\times 10^{-5}(1.3\times 10^{-4})~{\rm s/eV}$ at 90% CL for hierarchical (degenerate) masses, using data from experiments searching for antineutrino appearance from the Sun. | high energy physics phenomenology |
The kink instability of magnetohydrodynamics is believed to be fundamental to many aspects of the dynamic activity of the solar atmosphere, such as the initiation of flares and the heating of the solar corona. In this work, we investigate the importance of viscosity on the kink instability. In particular, we focus on two forms of viscosity; isotropic viscosity (independent of the magnetic field) and anisotropic viscosity (with a preferred direction following the magnetic field). Through the detailed analysis of magnetohydrodynamic simulations of the kink instability with both types of viscosity, we show that the form of viscosity has a significant effect on the nonlinear dynamics of the instability. The different viscosities allow for different flow and current structures to develop, thus affecting the behaviour of magnetic relaxation, the formation of secondary instabilities and the Ohmic and viscous heating produced. Our results have important consequences for the interpretation of solar observations of the kink instability. | astrophysics |
Active colloidal particles that are propelled by a self-diffusiophoretic mechanism are often described by Langevin equations that are either postulated on physical grounds or derived using the methods of fluctuating hydrodynamics. While these descriptions are appropriate for colloids of micrometric and larger size, they will break down for very small active particles. A fully microscopic derivation of Langevin equations for self-diffusiophoretic particles powered by chemical reactions catalyzed asymmetrically by the colloid is given in this paper. The derivation provides microscopic expressions for the translational and rotational friction tensors, as well as reaction rate coefficients appearing in the Langevin equations. The diffusiophoretic force and torque are expressed in terms of nonequilibrium averages of fluid fields that satisfy generalized transport equations. The results provide a description of active motion on small scales where descriptions in terms of coarse grained continuum fluid equations combined with boundary conditions that account for the presence of the colloid may not be appropriate. | condensed matter |
In this paper, we consider the problem of sequential binary hypothesis testing in an adversarial environment based on observations from $s$ sensors, with the caveat that a subset of $c$ sensors is compromised by an adversary, whose observations can be manipulated arbitrarily. We choose the asymptotic Average Sample Number (ASN) required to reach a certain level of error probability as the performance metric of the system. The problem is cast as a game between the detector and the adversary, where the detector aims to optimize the system performance while the adversary tries to deteriorate it. We propose a flip attack strategy and a voting hypothesis testing rule and prove that they form an equilibrium strategy pair for the game. We further investigate the performance of our proposed detection scheme with an unknown number of compromised sensors and corroborate our results with simulation. | electrical engineering and systems science
This paper is concerned with the inverse scattering problem for the three-dimensional Maxwell's equations in bi-anisotropic periodic structures. The inverse scattering problem aims to determine the shape of bi-anisotropic periodic scatterers from electromagnetic near field data at a fixed frequency. The Factorization method is studied as an analytical and numerical tool for solving the inverse problem. We provide a rigorous justification of the Factorization method which results in the unique determination and a fast imaging algorithm for the periodic scatterer. Numerical examples for imaging three-dimensional periodic structures are presented to examine the efficiency of the method. | mathematics |
Model checking is an established technique to formally verify automation systems which are required to be trusted. However, for sufficiently complex systems model checking becomes computationally infeasible. On the other hand, testing, which offers less reliability, often does not present a serious computational challenge. Searching for synergies between these two approaches, this paper proposes a framework to ensure reliability of industrial automation systems by means of hybrid use of model checking and testing. This framework represents a way to achieve a trade-off between verification reliability and computational complexity which has not yet been explored in other approaches. Instead of undergoing usual model checking, system requirements are checked only on particular system behaviors which represent a test suite achieving coverage for both the system and the requirements. Then, all stages of the framework support the case of a closed-loop model, where not only the controller, but also the plant is modeled. | computer science |
Black hole perturbation theory is a useful approach to study interactions between black holes and fundamental fields. A particular class of black hole solutions, arising from modifications of Einstein's general theory of relativity, are regular black holes (RBHs), which can be constructed using a nonlinear electrodynamic Lagrangian. Because of their importance, we are interested in studying the behavior of three kinds of such RBHs under perturbations generated by an external field. Indeed, we investigate the quasinormal modes (QNMs) of a massive scalar field propagating near the RBHs, non-minimally coupled to the Ricci scalar of the background geometry. We numerically find the low-lying quasinormal frequencies of the perturbations by using the third-order WKB approximation. The frequencies of QNMs depend only on the properties of the black hole and are complex; their imaginary parts are important in determining the stability of the black hole against the scalar perturbations. We also study the greybody factors for RBHs in the WKB approximation using a numerical analysis. | high energy physics theory
We study possible relations between the Alzheimer's disease progression and the structure of the connectome, white matter connecting different regions of the brain. Regression models in covariates including age, gender, and disease status for the extent of white matter connecting each pair of regions of the brain are proposed. Subject inhomogeneity is also incorporated in the model through random effects with an unknown distribution. As there are a large number of pairs of regions, we also adopt a dimension reduction technique through graphon (Lovasz and Szegedy (2006)) functions, which reduces functions of pairs of regions to functions of regions. The connecting graphon functions are considered unknown, but the assumed smoothness allows putting priors of low complexity on them. We pursue a nonparametric Bayesian approach by assigning a Dirichlet process scale mixture of zero-mean normal prior on the distributions of the random effects and finite random series of tensor products of B-splines priors on the underlying graphon functions. Markov chain Monte Carlo techniques for drawing samples from the posterior distributions are developed. The proposed Bayesian method overwhelmingly outperforms similar ANCOVA models in the simulation setup. The proposed Bayesian approach is applied to a dataset of 100 subjects and 83 brain regions, and key regions implicated in the changing connectome are identified. | statistics
Let $(R, \frak m)$ be a Noetherian local ring and $M$ a finitely generated $R$-module of dimension $d$. A famous result of Northcott says that if $M$ is Cohen-Macaulay, then the index of reducibility of parameter ideals on $M$ is an invariant of the module. The aim of this paper is to extend Northcott's theorem for any finitely generated $R$-module. We call this invariant the stable value of the indices of reducibility of parameter ideals of $M$. We also introduce the limit value of the indices of reducibility of parameter ideals of $M$. | mathematics |
There is growing interest in large-scale machine learning and optimization over decentralized networks, e.g. in the context of multi-agent learning and federated learning. Due to the imminent need to alleviate the communication burden, the investigation of communication-efficient distributed optimization algorithms - particularly for empirical risk minimization - has flourished in recent years. A large fraction of these algorithms have been developed for the master/slave setting, relying on a central parameter server that can communicate with all agents. This paper focuses on distributed optimization over networks, or decentralized optimization, where each agent is only allowed to aggregate information from its neighbors. By properly adjusting the global gradient estimate via local averaging in conjunction with proper correction, we develop a communication-efficient approximate Newton-type method Network-DANE, which generalizes DANE to the decentralized scenarios. Our key ideas can be applied in a systematic manner to obtain decentralized versions of other master/slave distributed algorithms. A notable development is Network-SVRG/SARAH, which employs variance reduction to further accelerate local computation. We establish linear convergence of Network-DANE and Network-SVRG for strongly convex losses, and Network-SARAH for quadratic losses, which shed light on the impacts of data homogeneity, network connectivity, and local averaging upon the rate of convergence. We further extend Network-DANE to composite optimization by allowing a nonsmooth penalty term. Numerical evidence is provided to demonstrate the appealing performance of our algorithms over competitive baselines, in terms of both communication and computation efficiency. Our work suggests that performing a certain amount of local communications and computations per iteration can substantially improve the overall efficiency. | statistics |
Modern wind turbine control algorithms typically utilize rotor effective wind speed measured from an anemometer on the turbine's nacelle. Unfortunately, the measured wind speed from such a single measurement point does not give a good representation of the effective wind speed over the blades, as it does not take the varying wind condition within the entire rotor area into account. As such, Blade Effective Wind Speed (BEWS) estimation can be seen as a more accurate alternative. This paper introduces a novel Subspace Predictive Repetitive Estimator (SPRE) approach to estimate the BEWS using blade load measurements. In detail, the azimuth-dependent cone coefficient is firstly formulated to describe the mapping between the out-of-plane blade root bending moment and the wind speed over blades. Then, the SPRE scheme, which is inspired by Subspace Predictive Repetitive Control (SPRC), is proposed to estimate the BEWS. Case studies exhibit the proposed method's effectiveness at predicting BEWS and identifying wind shear in varying wind speed conditions. Moreover, this novel technique enables complicated wind inflow conditions, where a rotor is impinged and overlapped by wake shed from an upstream turbine, to be estimated. | electrical engineering and systems science |
Respiration-induced B$_0$ fluctuation corrupts MRI images by inducing phase errors in k-space. A few approaches such as navigator have been proposed to correct for the artifacts at the expense of sequence modification. In this study, a new deep learning method, which is referred to as DeepResp, is proposed for reducing the respiration-artifacts in multi-slice gradient echo (GRE) images. DeepResp is designed to extract the respiration-induced phase errors from a complex image using deep neural networks. Then, the network-generated phase errors are applied to the k-space data, creating an artifact-corrected image. For network training, the computer-simulated images were generated using artifact-free images and respiration data. When evaluated, both simulated images and in-vivo images of two different breathing conditions (deep breathing and natural breathing) show improvements (simulation: normalized root-mean-square error (NRMSE) from 7.8% to 1.3%; structural similarity (SSIM) from 0.88 to 0.99; ghost-to-signal-ratio (GSR) from 7.9% to 0.6%; deep breathing: NRMSE from 13.9% to 5.8%; SSIM from 0.86 to 0.95; GSR 20.2% to 5.7%; natural breathing: NRMSE from 5.2% to 4.0%; SSIM from 0.94 to 0.97; GSR 5.7% to 2.8%). Our approach does not require any modification of the sequence or additional hardware, and may therefore find useful applications. Furthermore, the deep neural networks extract respiration-induced phase errors, which is more interpretable and reliable than results of end-to-end trained networks. | electrical engineering and systems science |
In this Letter, we propose a new approach to process high-dimensional quantum information encoded in a photon frequency domain. In contrast to previous approaches based on nonlinear optical processes, no active control of photon energy is required. Arbitrary unitary transformation and projection measurement can be realized with passive photonic circuits and time-resolving detection. A systematic circuit design for a quantum frequency comb with arbitrary size has been given. The criteria to verify quantum frequency correlation has been derived. By considering the practical condition of detector's finite response time, we show that high-fidelity operation can be readily realized with current device performance. This work will pave the way towards scalable and high-fidelity quantum information processing based on high-dimensional frequency encoding. | quantum physics |
Generative adversarial networks (GAN) have recently been shown to be efficient for speech enhancement. However, most, if not all, existing speech enhancement GANs (SEGAN) make use of a single generator to perform one-stage enhancement mapping. In this work, we propose to use multiple generators that are chained to perform multi-stage enhancement mapping, which gradually refines the noisy input signals in a stage-wise fashion. Furthermore, we study two scenarios: (1) the generators share their parameters and (2) the generators' parameters are independent. The former constrains the generators to learn a common mapping that is iteratively applied at all enhancement stages and results in a small model footprint. On the contrary, the latter allows the generators to flexibly learn different enhancement mappings at different stages of the network at the cost of an increased model size. We demonstrate that the proposed multi-stage enhancement approach outperforms the one-stage SEGAN baseline, where the independent generators lead to more favorable results than the tied generators. The source code is available at http://github.com/pquochuy/idsegan. | computer science |
The use of low-cost sensors in air quality monitoring networks is still a much-debated topic among practitioners: they are much cheaper than traditional air quality monitoring stations set up by public authorities (a few hundred dollars compared to a few dozens of thousands of dollars) at the cost of lower accuracy and robustness. This paper presents a case study of using low-cost sensors' measurements in an air quality prediction engine. The engine predicts jointly PM2.5 and PM10 concentrations in the United States at a very high resolution, in the range of a few dozens of meters. It is fed with the measurements provided by official air quality monitoring stations, the measurements provided by a network of more than 4000 low-cost sensors across the country, and traffic estimates. We show that the use of low-cost sensors' measurements improves the engine's accuracy very significantly. In particular, we derive a strong link between the density of low-cost sensors and the predictions' accuracy: the more low-cost sensors are in an area, the more accurate the predictions are. As an illustration, in areas with the highest density of low-cost sensors, the low-cost sensors' measurements bring a 25% and 15% improvement in PM2.5 and PM10 predictions' accuracy respectively. Another strong conclusion is that in some areas with a high density of low-cost sensors, the engine performs better when fed with low-cost sensors' measurements only than when fed with official monitoring stations' measurements only: this suggests that an air quality monitoring network composed of low-cost sensors is effective in monitoring air quality. This is a very important result, as such a monitoring network is much cheaper to set up. | electrical engineering and systems science
In this paper, we first present an arc-based algorithm for fan-beam computed tomography (CT) reconstruction, obtained by applying Katsevich's helical CT formula to 2D fan-beam CT reconstruction. Then, we propose a new weighting function to deal with redundant projection data. By extending the weighted arc-based fan-beam algorithm to circular cone-beam geometry, we also obtain a new FDK-like algorithm for circular cone-beam CT reconstruction. Experiments show that our methods achieve higher PSNR and SSIM compared to the Parker-weighted conventional fan-beam algorithm and the FDK algorithm for super-short-scan trajectories. | electrical engineering and systems science
Diffusion-driven patterns appear on curved surfaces in many settings, initiated by unstable modes of an underlying Laplacian operator. On a flat surface or perfect sphere, the patterns are degenerate, reflecting translational/rotational symmetry. Deformations, e.g. by a bulge or indentation, break symmetry and can pin a pattern. We adapt methods of conformal mapping and perturbation theory to examine how curvature inhomogeneities select and pin patterns, and confirm the results numerically. The theory provides an analogy to quantum mechanics in a geometry-dependent potential and yields intuitive implications for cell membranes, tissues, thin films, and noise-induced quasipatterns. | condensed matter |
We revisit the calculation of the strong couplings $D^*D\pi$ and $B^*B\pi$ from the QCD light-cone sum rules using the pion light-cone distribution amplitudes. The accuracy of the correlation function, calculated from the operator product expansion near the light-cone, is upgraded by taking into account the gluon radiative corrections to the twist-3 terms. The double spectral density of the correlation function, including the twist-2, 3 terms at ${\cal O} (\alpha_s)$ and the twist-4 LO terms, is presented in an analytical form for the first time. This form allows us to use various versions of the quark-hadron duality regions in the double dispersion relation underlying the sum rules. We predict $g_{D^*D\pi}=14.1^{+1.3}_{-1.2}$ and $g_{B^*B\pi}=30.0^{+2.6}_{-2.4}$ when the decay constants of heavy mesons entering the light-cone sum rule are taken from lattice QCD results. We compare our results with the experimental value for the charmed meson coupling and with the lattice QCD calculations. | high energy physics phenomenology |
We systematically construct all the tetraquark currents/operators of $J^{PC} = 1^{++}$ with the quark configurations $[cq][\bar c \bar q]$, $[\bar c q][\bar q c]$, and $[\bar c c][\bar q q]$ ($q=u/d$). Their relations are derived using the Fierz rearrangement of the Dirac and color indices, through which we study decay properties of the $X(3872)$ under both the compact tetraquark and hadronic molecule interpretations. We propose to search for the $X(3872) \rightarrow \chi_{c0} \pi$, $\eta_c \pi \pi$, and $\chi_{c1} \pi \pi$ decay processes in particle experiments. | high energy physics phenomenology |
Orbital dissimilarity, or D, criteria are often used to select members of a meteor shower from a set of meteor observations. These criteria provide a quantitative description of the degree to which two orbits differ; if the degree of dissimilarity between a shower's reference orbit and an individual meteor does not exceed a selected threshold, the meteor is considered to be a member of that shower. However, members of a meteor shower tend to disperse in longitude of the ascending node (and thus in solar longitude) while preserving a common Sun-centered ecliptic radiant. Employing dissimilarity criteria to judge shower membership may therefore make the shower appear briefer than it actually is. We demonstrate this effect for two simulated meteor showers and assess the maximum permitted deviation in solar longitude as a function of radiant and velocity measurement error. | astrophysics |
Ultrarelativistic electron beam-laser pulse scattering experiments are the workhorse for the investigation of QED and of possible signatures of new physics in the still largely unexplored strong-field regime. However, shot-to-shot fluctuations both of the electron beam and of the laser pulse parameters render it difficult to discern the dynamics of the interaction. Consequently, the possibility of benchmarking theoretical predictions against experimental results, which is essential for validating theoretical models, is severely limited. Here we show that the stochastic nature of quantum emission events provides a unique route to the on-shot diagnostic of the electron beam-laser pulse interaction, therefore paving the way for accurate measurements of strong-field QED effects. | physics |
The sample efficiency of Bayesian optimization (BO) is often boosted by Gaussian process (GP) surrogate models. However, on mixed variable spaces, surrogate models other than GPs are prevalent, mainly due to the lack of kernels which can model complex dependencies across different types of variables. In this paper, we propose the frequency modulated (FM) kernel, which flexibly models dependencies among different types of variables, so that BO can enjoy further improved sample efficiency. The FM kernel uses distances on continuous variables to modulate the graph Fourier spectrum derived from discrete variables. However, the frequency modulation does not always define a kernel with the similarity measure behavior, which returns higher values for pairs of more similar points. Therefore, we specify and prove conditions for FM kernels to be positive definite and to exhibit the similarity measure behavior. In experiments, we demonstrate the improved sample efficiency of GP BO using FM kernels (BO-FM). On synthetic problems and hyperparameter optimization problems, BO-FM outperforms competitors consistently. Also, the importance of the frequency modulation principle is empirically demonstrated on the same problems. On joint optimization of neural architectures and SGD hyperparameters, BO-FM outperforms competitors including Regularized Evolution (RE) and BOHB. Remarkably, BO-FM performs better even than RE and BOHB using three times as many evaluations. | statistics
The construction of conformal blocks for the analysis of multipoint correlation functions with $N > 4$ local field insertions is an important open problem in higher dimensional conformal field theory. This is the first in a series of papers in which we address this challenge, following and extending our short announcement in [Phys. Rev. Lett. 126, 021602]. According to Dolan and Osborn, conformal blocks can be determined from the set of differential eigenvalue equations that they satisfy. We construct a complete set of commuting differential operators that characterize multipoint conformal blocks for any number $N$ of points in any dimension and for any choice of OPE channel through the relation with Gaudin integrable models we uncovered in [Phys. Rev. Lett. 126, 021602]. For 5-point conformal blocks, there exist five such operators which are worked out smoothly in the dimension $d$. | high energy physics theory |
The James Webb Space Telescope will provide observational capabilities that far exceed those of current ground- or space-based instrumentation. In particular, the NIRSpec instrument will take highly sensitive spectroscopic data for hundreds of objects simultaneously from 0.6-5.3 microns. Current photometric observations suggest a large and increasing number of faint ($M_{UV} > -16$) galaxies at high redshift, with increasing evidence that galaxies at these redshifts have optical emission lines with extremely high equivalent widths. A simple model of their emission line fluxes and number density evolution with redshift is used to predict the number of galaxies that NIRSpec will serendipitously observe during normal observations with the microshutter array. At exposure times of ~20 hours in the low-resolution prism mode, the model predicts that, on average, every open 1x3 'microslit' will contain an un-targeted galaxy with a detectable [O III] and/or H$\alpha$ emission line; while most of these detections are predicted to be of [O III], H$\alpha$ detections alone would still number 0.56 per open 'microslit' for this exposure time. Many of these objects are spectroscopically detectable even when they are fainter than current photometric limits and/or their flux centroids lie outside of the open microshutter area. The predicted number counts for such galaxies match z ~ 2 observations of [O III] emitters from slitless grism spectroscopic surveys, as well as theoretical predictions based on sophisticated modeling of galaxy spectral energy distributions. These serendipitous detections could provide the largest numbers of z > 6 spectroscopic confirmations in the deepest NIRSpec surveys. | astrophysics
We reconsider the tensionless limit on bosonic closed string theory, where the 3d Bondi-Metzner-Sachs (BMS) algebra appears as symmetries on the worldsheet, as opposed to two copies of the Virasoro algebra in the case of the usual tensile theory. This is an ultra-relativistic limit on the worldsheet. We consider the induced representations of the BMS algebra in the oscillator basis and show that the limit takes the tensile closed string vacuum to the "induced" vacuum which is identified as a Neumann boundary state. Hence, rather remarkably, an open string emerges from closed strings in the tensionless limit. We also follow the perturbative states in the tensile theory in the limit and show that there is a Bose-Einstein like condensation of all perturbative states on this induced vacuum. This ties up nicely with the picture of the formation of a long string from a gas of strings in the Hagedorn temperature, where the effective string tension goes to zero. | high energy physics theory |
Plasmon-induced transparency (PIT) in nanostructures has been intensively investigated; however, no existing metasurface nanostructure exhibits all-optically tunable properties, where the number of transparency windows can be tuned successively and switched to an off-state. Here, we theoretically investigate and demonstrate dynamically tunable, multichannel PIT at optical frequencies. The in-plane destructive interference between bright and dark dipolar resonances in coupled plasmonic nanobar topologies is exploited to produce tunable PIT with unique characteristics. In particular, we demonstrate sequential polarization-selective multispectral operation whereby the number of PIT channels can be varied successively from '3' to '0'. The results provide a promising route for active manipulation of PIT and show potential applications for multifunctional dynamic nanophotonic devices. | physics
Multiparty quantum communication provides delightful applications including quantum cryptographic communication and quantum secret sharing. Measurement-Device-Independent (MDI) quantum communication based on the Greenberg-Horne-Zeilinger (GHZ) state measurement provides a practical way to implement multiparty quantum communication. With the standard spatially localized GHZ state measurement, however, information can be imbalanced among the communication parties that can cause significant problems in multiparty cryptographic communication. Here, we propose an equitable multiparty quantum communication where information balance among the communication parties is achieved without a trusted third party. Our scheme is based on the GHZ state measurement which is not spatially localized but implemented in a way that all the distant communication parties symmetrically participate. We also verify the feasibility of our scheme by presenting the proof-of-principle experimental demonstration of informationally balanced three-party quantum communication using weak coherent pulses. | quantum physics |
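Each row above pairs an abstract (`text`) with a subject label (`label`), the standard layout for a text-classification dataset. As a minimal, hypothetical sketch of how such a text/label table could be loaded and inspected with the Hugging Face `datasets` library — the repository id below is a placeholder, since this dataset's actual name is not given here:

```python
from collections import Counter

from datasets import load_dataset

# Hypothetical repository id -- substitute the dataset's real name.
ds = load_dataset("user/arxiv-abstracts-by-subject", split="train")

# Each record mirrors a row above: an abstract string and its subject label.
example = ds[0]
print(example["text"][:80], "->", example["label"])

# Tally examples per label, e.g. to check class balance before training a classifier.
print(Counter(ds["label"]).most_common(5))
```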