text | label |
---|---|
Quantum walk research has mainly focused on evolutions generated by repeated application of time-independent unitary coin operators. The idea of controlling the single-particle evolution using time-dependent unitary coins has nevertheless been the subject of multiple studies, as it not only hosts interesting possibilities for quantum information processing but also opens up a much richer array of phenomena, including static and dynamic localization. So far, such studies have been performed only for single quantum walkers. In multi-walker systems, time-dependent coins may generate measurable phenomena not described by the single-particle model, owing to entanglement and interaction among the walkers. In this context, we present a thorough numerical study of a one-dimensional system of two quantum walkers exhibiting rich collective dynamics controlled by the simple time-dependent unitary coins proposed in [Phys. Rev. A \textbf{80}, 042332 (2009)] and [Phys. Rev. A \textbf{73}, 062304 (2006)]. We study how the interplay of coin time-dependence, simple interaction schemes, entanglement, and the relative phase between the coin states of the particles influences the evolution of the quantum walk. The results show that the system offers a rich variety of collective dynamical behavior under the control of time-dependent coins. In particular, we find and characterize striking two-body localization phenomena with tunable quasiperiodic dynamics of correlations and entanglement, quantities of quantum origin. | quantum physics |
Photonic nanostructures simultaneously maximizing spectral and spatial overlap between fundamental and second-harmonic confined modes are highly desirable for enhancing second-order nonlinear effects in nonlinear media. These conditions have thus far remained challenging to satisfy in photonic crystal cavities because of the difficulty in designing a band gap at the second-harmonic frequency. Here, we solve this issue by using instead a bound state in the continuum at that frequency, and we design a doubly-resonant photonic crystal slab cavity with strongly improved figures of merit for nonlinear frequency conversion when compared to previous photonic crystal designs. Furthermore, we show that the far-field emission at both frequencies is highly collimated around normal incidence, which allows for simultaneously efficient pump excitation and collection of the generated nonlinear signal. Our results could lead to unprecedented conversion efficiencies in both parametric down conversion and second harmonic generation in an extremely compact architecture. | physics |
The anomalous Nernst effect, in which a temperature gradient drives a charge current, provides a probe of the topological nature of materials owing to its sensitivity to the Berry curvature near the Fermi level. Fe3GeTe2, an important member of the recently discovered two-dimensional van der Waals magnetic materials, offers a unique platform for the anomalous Nernst effect because of its metallic and topological nature. Here, we report the observation of a large anomalous Nernst effect in Fe3GeTe2. The anomalous Hall angle and anomalous Nernst angle are about 0.07 and 0.09, respectively, far larger than those in common ferromagnets. By the Mott relation, these large angles indicate a large Berry curvature near the Fermi level, consistent with the recent proposal of Fe3GeTe2 as a topological nodal-line semimetal candidate. Our work provides evidence that Fe3GeTe2 is a topological ferromagnet, and demonstrates the feasibility of using two-dimensional magnetic materials and their band topology for spin caloritronics applications. | condensed matter |
We describe 4D evaporating black holes as quantum field configurations by solving the semi-classical Einstein equation $G_{\mu\nu}=8\pi G \langle \psi|T_{\mu\nu}|\psi \rangle$ and the quantum matter fields in a self-consistent manner. As the matter fields we consider $N$ massless free scalar fields ($N$ large). We find a spherically symmetric self-consistent solution for the metric $g_{\mu\nu}$ and state $|\psi\rangle$. Here, $g_{\mu\nu}$ is locally an $AdS_2\times S^2$ geometry, and $|\psi\rangle$ provides $\langle \psi|T_{\mu\nu}|\psi \rangle=\langle0|T_{\mu\nu}|0 \rangle+T_{\mu\nu}^{(\psi)}$, where $|0\rangle$ is the ground state of the matter fields in the metric and $T_{\mu\nu}^{(\psi)}$ consists of the excitation of s-waves that describe the collapsing matter and Hawking radiation with the ingoing negative energy flow. This object is supported by a large tangential pressure $\langle0|T^\theta{}_\theta|0 \rangle$ due to the vacuum fluctuation of the bound modes with large angular momenta. This describes the interior of the black hole when the back reaction of the evaporation is taken into account. The black hole is a compact object with a surface (instead of a horizon) that looks like a conventional black hole from the outside and eventually evaporates without a singularity. If we count the number of self-consistent configurations $\{|\psi\rangle\}$, we reproduce the area law of the entropy. This indicates that the information is carried by the s-waves inside the black hole. $|\psi\rangle$ also describes the process in which the negative ingoing energy flow created with the Hawking radiation is superposed on the collapsing matter to decrease the total energy while the total energy density remains positive. As a special case, we consider conformal matter fields and show that the interior metric is determined by the matter content of the theory, which leads to a new constraint on the matter content. | high energy physics theory |
Relativistic hydrodynamic equations for particles with spin one-half are used to determine the space-time evolution of the spin polarization in a boost-invariant and transversely homogeneous background. The hydrodynamic approach uses the forms of the energy-momentum and spin tensors based on the formalism of de Groot, van Leeuwen, and van Weert. Our calculations illustrate how the formalism of hydrodynamics with spin can be used to determine physical observables related to the spin polarization and how the latter can be compared with the experimental data. | high energy physics phenomenology |
The main result of this paper is the following: if F is any field and R is any F-subalgebra of the algebra of nxn matrices over F with Lie nilpotence index m, then the F-dimension of R is less than or equal to M(m+1,n), where M(m+1,n) is the maximum of a certain expression in m+1 nonnegative integers and n. The case m=1 reduces to a classical theorem of Schur (1905), later generalized by Jacobson (1944) to all fields, which asserts that if F is an algebraically closed field of characteristic zero, and R is any commutative F-subalgebra of the full nxn matrix algebra over F, then the F-dimension of R is less than or equal to ((n^2)/4)+1. Examples constructed from block upper triangular matrices show that the upper bound M(m+1,n) cannot be lowered for any choice of m and n. An explicit formula for M(m+1,n) is also derived. | mathematics |
We discuss the thermoelectric effect of a hot and dense hadron gas within the framework of the hadron resonance gas model. Using the relativistic Boltzmann equation within the relaxation time approximation, we estimate the Seebeck coefficient of the hot and dense hadronic medium with a gradient in temperature and baryon chemical potential. The hadronic medium in this calculation is modeled by the hadron resonance gas (HRG) model with hadrons and their resonances up to a mass cutoff $\Lambda\sim 2.6$ GeV. We also extend the formalism of the thermoelectric effect to a nonvanishing magnetic field. The presence of a magnetic field also leads to a Hall-type thermoelectric coefficient (the Nernst coefficient) for the hot and dense hadronic matter, apart from a magneto-Seebeck coefficient. We find that, generically, the Seebeck coefficient decreases while the Nernst coefficient increases with the magnetic field. At higher temperature and/or baryon chemical potential these coefficients approach their values at vanishing magnetic field. | high energy physics phenomenology |
In the millimeter wave (30-300 GHz) and Terahertz (0.1-10 THz) frequency bands, high spreading loss and molecular absorption often limit the signal transmission distance and coverage range. In this paper, four directions for tackling the crucial problem of distance limitation are investigated, namely, a distance-aware physical-layer design, ultra-massive MIMO communication, reflectarrays, and intelligent surfaces. Additionally, the potential joint design of these technologies is proposed to combine their benefits and possibly further extend the communication distance. Qualitative analyses and quantitative simulations are provided to illustrate the benefits of the proposed techniques and demonstrate the feasibility of mm-wave and THz band communications up to 100 meters in both line-of-sight and non-line-of-sight areas. | electrical engineering and systems science |
We provide a robust defence against adversarial attacks on discriminative algorithms. Neural networks are naturally vulnerable to small, tailored perturbations of the input data that lead to wrong predictions. In contrast, generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations. We use Boltzmann machines for discrimination purposes as attack-resistant classifiers, and compare them against standard state-of-the-art adversarial defences. We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset. We furthermore complement the training with quantum-enhanced sampling from the D-Wave 2000Q annealer, finding results comparable with classical techniques and with marginal improvements in some cases. These results underline the relevance of probabilistic methods in constructing neural networks and demonstrate the power of quantum computers, even with limited hardware capabilities. This work is dedicated to the memory of Peter Wittek. | quantum physics |
In this article, a new perspective on obtaining the magnetic evolution of the $\pi-\pi$ scattering lengths in the framework of the linear sigma model is presented. When computing the relevant one-loop diagrams that contribute to these parameters, the sum over Landau levels --emerging from the expansion of the Schwinger propagator-- is handled in a novel way that could also be applied to the calculation of other magnetic-type corrections. Essentially, we obtain an expansion in terms of Hurwitz zeta functions. It is necessary to regularize our expressions by an appropriate physical subtraction when $|qB| \rightarrow 0$ ($q$ being the meson charge and $B$ the magnetic field strength). In this way, we are able to interpolate between the very high magnetic field strength region, usually handled in terms of the Lowest Landau Level Approximation (LLA), and the weak-field region, discussed in a previous paper by some of us, which is based on an appropriate expansion of the Schwinger propagator up to order $|qB|^{2}$. Our results for the scattering length parameters show a smooth evolution over a wide region of magnetic field strengths, reducing to the previously found expressions in both limits. | high energy physics phenomenology |
We construct a gravity dual to a system with multiple (2+1)-dimensional layers in a (3+1)-dimensional ambient theory. Following a top-down approach, we generate a geometry corresponding to the intersection of D3- and D5-branes along 2+1 dimensions. The D5-branes create a codimension-one defect in the worldvolume of the D3-branes and are homogeneously distributed along the directions orthogonal to the defect. We solve the fully backreacted ten-dimensional supergravity equations of motion with smeared D5-brane sources. The solution is supersymmetric, has an intrinsic mass scale, and exhibits anisotropy at short distances in the gauge theory directions. We illustrate the running behavior in several observables, such as Wilson loops, entanglement entropy, and the thermodynamics of probe branes. | high energy physics theory |
In this work, we propose a unified spatio-temporal model to study the temporal dynamics in continuously pumped ytterbium-doped fiber lasers (YDFLs). Unlike previously reported theories, this model is capable of obtaining the temporal evolution of a YDFL from the relaxation-oscillation region to the relatively stable region over time scales ranging from sub-nanosecond to millisecond. It reveals dual time-scale characteristics in the temporal evolution of a multi-longitudinal-mode YDFL. Specifically, the temporal evolution experiences sharp changes during one cavity round-trip while remaining relatively stable between adjacent cavity round-trips. Representative cases are simulated to study the influences of structure parameters on the temporal dynamics and the longitudinal-mode characteristics in YDFLs. Three types of temporal instabilities, i.e. sustained self-pulsing, self-mode locking, and turbulence-like pulsing, coexist in a multi-longitudinal-mode YDFL. The simulation results clarify that all three temporal instabilities reflect intrinsic characteristics of longitudinal-mode superposition in multi-longitudinal-mode YDFLs. In addition, the strength of the irregular sustained self-pulsing is the main factor affecting the macroscopic temporal fluctuations in YDFLs. | physics |
When we use simulation to evaluate the performance of a stochastic system, the simulation often contains input distributions estimated from real-world data; therefore, there is both simulation and input uncertainty in the performance estimates. Ignoring either source of uncertainty underestimates the overall statistical error. Simulation uncertainty can be reduced by additional computation (e.g., more replications). Input uncertainty can be reduced by collecting more real-world data, when feasible. This paper proposes an approach to quantify overall statistical uncertainty when the simulation is driven by independent parametric input distributions; specifically, we produce a confidence interval that accounts for both simulation and input uncertainty by using a metamodel-assisted bootstrapping approach. The input uncertainty is measured via bootstrapping, an equation-based stochastic kriging metamodel propagates the input uncertainty to the output mean, and both simulation and metamodel uncertainty are derived using properties of the metamodel. A variance decomposition is proposed to estimate the relative contribution of input to overall uncertainty; this information indicates whether the overall uncertainty can be significantly reduced through additional simulation alone. Asymptotic analysis provides theoretical support for our approach, while an empirical study demonstrates that it has good finite-sample performance. | statistics |
The question of a hidden variable interpretation of quantum contextuality in the Mermin-Peres square is considered. The Kochen-Specker theorem implies that quantum mechanics may be interpreted as a contextual hidden variable theory. It is shown that such a hidden variable description can be viewed as either contextual in the random variables mapping hidden states to observable outcomes or in the probability measure on the hidden state space. The latter view suggests that this apparent contextuality may be interpreted as a simple consequence of measurement disturbance, wherein the initial hidden state is altered through interaction with the measuring device, thereby giving rise to a possibly different final hidden variable state from which the measurement outcome is obtained. In light of this observation, a less restrictive and, arguably, more reasonable definition of noncontextuality is suggested. To prove that such a description is possible, an explicit and, in this sense, noncontextual hidden variable model is constructed which reproduces all quantum theoretic predictions for the Mermin-Peres square. A critical analysis of some recent and proposed experimental tests of contextuality is also provided. Although the discussion is restricted to a four-dimensional Hilbert space, the approach and conclusions are expected to generalize to any Hilbert space. | quantum physics |
This work introduces a linearized analytical model for studying the dynamics of satellites in near-circular orbits under the effects of atmospheric drag. This includes the evaluation of the station keeping required for each satellite subject to a control-box strategy, and also the study of the dynamics of tandem formations between two or more satellites located on the same nominal space-track. The model takes into account the orbit perturbation provoked by atmospheric drag, while the effects of the Earth's gravitational potential are included in the definition of the nominal orbits of the satellites. This makes it easy to define the maneuvering strategies for the satellites involved in the tandem formation and to study their absolute and relative dynamics. In particular, this work focuses on a master-slave scenario and the in-plane maneuvers that these satellites require, proposing two different control strategies for the formation. | astrophysics |
We present multi-wavelength observations of two gap transients followed by the Carnegie Supernova Project-II and supplemented with data obtained by a number of different programs. Here, in the first of two papers, we focus on the intermediate luminosity red transient (ILRT) designated SNhunt120, while in a companion paper we examine the luminous red nova AT 2014ej. Our data set for SNhunt120 consists of an early optical discovery, estimated to be within 3 days after outburst, the subsequent optical and near-infrared broadband follow-up extending over a $\sim$2 month period, two visual- and two near-infrared wavelength spectra, and Spitzer Space Telescope observations extending from early ($+$28 d) to late ($+$1155 d) phases. SNhunt120 resembles other ILRTs such as NGC 300-2008-OT and SN 2008S, and like these other ILRTs, SNhunt120 exhibits prevalent mid-infrared emission at both early and late phases. From the comparison of SNhunt120 and other ILRTs to electron-capture supernova simulations, we find that the current models underestimate the explosion kinetic energy and thereby produce synthetic light curves that overestimate the luminosity. Finally, examination of pre-outburst Hubble Space Telescope images yields no progenitor detection. | astrophysics |
Glassy solids may undergo a fluidization (yielding) transition upon deformation whereby the material starts to flow plastically. It has been a matter of debate whether this process is controlled by a specific time scale, from among different competing relaxation/kinetic processes. Here, two constitutive models of cage relaxation are examined within the microscopic model of nonaffine elasto-plasticity. One (widely used) constitutive model implies that the overall relaxation rate is dominated by the fastest between the structural ($\alpha$) relaxation rate and the shear-induced relaxation rate. A different model is formulated here which, instead, assumes that the slowest (global) relaxation process controls the overall relaxation. We show that the first model is not compatible with the existence of finite elastic shear modulus for quasistatic (low-frequency) deformation, while the second model is able to describe all key features of deformation of `hard' glassy solids, including the yielding transition, the nonaffine-to-affine plateau crossover, and the rate-stiffening of the modulus. The proposed framework provides an operational way to distinguish between `soft' glasses and `hard' glasses based on the shear-rate dependence of the structural relaxation time. | condensed matter |
Neuromorphic engineering is a rapidly developing field that aims to take inspiration from the biological organization of neural systems to develop novel technology for computing, sensing and actuating. The unique properties of such systems call for new signal processing and control paradigms. The article introduces the mixed feedback organization of excitable neuronal systems, consisting of interlocked positive and negative feedback loops acting in distinct timescales. The principles of biological neuromodulation suggest a methodology for designing and controlling mixed-feedback systems neuromorphically. The proposed design consists of a parallel interconnection of elementary circuit elements that mirrors the organization of biological neurons and utilizes the hardware components of neuromorphic electronic circuits. The interconnection structure endows the neuromorphic systems with a simple control methodology that reframes the neuronal control as an input-output shaping problem. The potential of neuronal control is illustrated on simple network examples that suggest the scalability of the mixed-feedback principles. | electrical engineering and systems science |
We propose a Markov chain Monte Carlo-based deconvolution method designed to estimate the number of peaks in spectral data, along with the optimal parameters of each radial basis function. Assuming cases where the number of peaks is unknown, and a sweep simulation on all candidate models is computationally unrealistic, the proposed method efficiently searches over the probable candidates via trans-dimensional moves assisted by annealing effects from replica exchange Monte Carlo moves. Through simulation using synthetic data, the proposed method demonstrates its advantages over conventional sweep simulations, particularly in model selection problems. Application to a set of olivine reflectance spectral data with varying forsterite and fayalite mixture ratios reproduced results obtained from previous mineralogical research, indicating that our method is applicable to deconvolution on real data sets. | statistics |
In recent times, the charged-current mediated semileptonic $b \to c \tau \bar \nu_\tau$ processes have attracted a lot of attention after the observation of the lepton non-universality ratios, $R_{D^{(*)}}$, $R_{J/\psi}$, and the measurements of the $D^*$ and $\tau$ longitudinal polarization fractions in $\bar B\to D^* \tau \bar \nu_\tau$ processes. We present a model-independent analysis of $ \bar B\to D^{(*)} \tau \bar \nu_\tau$, $ B_s\to D_s^{(*)} \tau \bar \nu_\tau$, $ B_c^+ \to (\eta_c, J/\psi) \tau^+ \nu_\tau$, $\Lambda_b \to \Lambda_c \tau \bar \nu_\tau$ and $\bar B \to D^{**} \tau \bar \nu_\tau$ (where $D^{**} = \{D^*_0, D_1^*, D_1, D_2^*\}$ are the four lightest excited charm mesons) processes involving $b \to c \tau \bar \nu$ quark-level transitions by considering the most general effective Lagrangian in the presence of new physics. We perform a global fit to various sets of new coefficients, including the measurements of $R_{D^{(*)}}$, $R_{J/\psi}$ and the upper limit on Br($B_c^+ \to \tau^+ \bar \nu_\tau$). We then show the implications of the constrained new couplings on the branching fractions, lepton non-universality ratios and various angular observables of these decay modes in four different bins of $q^2$. | high energy physics phenomenology |
Star clusters are ideal platforms for categorising X-ray emitting stars and for studying X-ray emission as a function of stellar age and activity. We present a comprehensive study of an open star cluster, NGC2527, by combining data from XMM-UVOT-Gaia. Cluster membership of stars and their photometry are taken from Gaia and cross-matched with XMM and UVOT detections. We estimate the age of NGC2527 as ~630 Myr, the reddening as E(B-V)=0.13 mag, and the distance as 642+/-30 pc using PARSEC isochrones. We detect 5 sub-subgiants and 5 bandgap stars, which defy single-star evolution. We estimate the temperature, mass, radius, and luminosity of 53 single stars and 10 potential binary stars using a Python code that fits single and composite Kurucz spectra to the broad-band spectral energy distribution. Among the 12 X-ray emitting members, we find 5 are potential RS CVn-type binaries, 2 are potential FK Comae-type RGB stars, and 5 are main sequence (MS) stars with high coronal activity. Members with strong UV emission comprise 1 RGB star and several MS stars with UV excess suggestive of chromospheric activity. Based on comparison with other clusters, we tentatively suggest that the X-ray luminosity of both RS CVn and contact binaries increases with age, indicating that more active binaries are present in older clusters than in younger ones. This study suggests the possible presence of W UMa and FK Comae type stars in younger (age~630 Myr) clusters. | astrophysics |
We present and analyze spatially-resolved maps for the observed $V$- and $g$-band to 3.6$\mu$m flux ratios and the inferred dust extinction values, $A_V$, for a sample of 257 nearby NGC and IC galaxies. Flux ratio maps are constructed using PSF-matched mosaics of SDSS $g$- and $r$-band images and Spitzer/IRAC 3.6$\mu$m mosaics, with all pixels contaminated by foreground stars or background objects masked out. By applying the $\beta_V$ method (Tamura et al. 2009, 2010), which was recently calibrated as a function of redshift and morphological type by Kim, Jansen, & Windhorst (2017), dust extinction maps were created for each galaxy. The typical 1-$\sigma$ scatter in $\beta_V$ around the average, both within a galaxy and in each morphological type bin, is $\sim$20%. Combined, these result in a $\sim$0.4 mag scatter in $A_V$. $\beta_V$ becomes insensitive to small-scale variations in stellar populations once resolution elements subtend an angle larger than that of a typical giant molecular cloud ($\sim$200pc). We find noticeably redder $V$$-$3.6$\mu$m colors in the center of star-forming galaxies and galaxies with a weak AGN. The derived intrinsic $V$$-$3.6$\mu$m colors for each Hubble type are generally consistent with the model predictions of Kim et al. (2017). Finally, we discuss the applicability of the $\beta_V$ dust-correction method to more distant galaxies, for which well-matched HST rest-frame visible and JWST rest-frame $\sim$3.5$\mu$m images will become available in the near-future. | astrophysics |
There has lately been increased interest in describing complex systems not merely as single networks but rather as collections of networks that are coupled to one another. We introduce an analytically tractable model that enables one to connect two layers in a multilayer network by controlling the locality of the coupling. In particular, we introduce a tractable model for embedding one network (A) into another (B), focusing on the case where network A has many more nodes than network B. In our model, nodes in network A are assigned, or embedded, to the nodes in network B using an assignment rule in which the extent of node localization is controlled by a single parameter. We start by mapping an unassigned `source' node in network A to a randomly chosen `target' node in network B. We then assign the neighbors of the source node to the neighborhood of the target node using a random walk starting at the target node and with a per-step stopping probability $q$. By varying the parameter $q$, we are able to produce a range of embeddings from local ($q = 1$) to global ($q \to 0$). The simplicity of the model allows us to calculate key quantities, making it a useful starting point for more realistic models. | physics |
In model-based reinforcement learning, the agent interleaves between model learning and planning. These two components are inextricably intertwined. If the model is not able to provide sensible long-term prediction, the executed planner would exploit model flaws, which can yield catastrophic failures. This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration. To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference. We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions. Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid. An exploration strategy can be devised by searching for unlikely trajectories under the model. Our method achieves higher reward faster compared to baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings. | statistics |
This paper investigates the experimental performance of a discrete portfolio optimization problem relevant to the financial services industry on the gate-model of quantum computing. We implement and evaluate a portfolio rebalancing use case on an idealized simulator of a gate-model quantum computer. The characteristics of this exemplar application include trading in discrete lots, non-linear trading costs, and the investment constraint. We design a novel problem encoding and hard constraint mixers for the Quantum Alternating Operator Ansatz, and compare to its predecessor the Quantum Approximate Optimization Algorithm. Experimental analysis demonstrates the potential tractability of this application on Noisy Intermediate-Scale Quantum (NISQ) hardware, identifying portfolios within 5% of the optimal adjusted returns and with the optimal risk for a small eight-stock portfolio. | quantum physics |
Knowledge distillation (KD), an efficient and effective model compression technique, has been receiving considerable attention in deep learning. The key to its success is transferring knowledge from a large teacher network to a small student one. However, most of the existing knowledge distillation methods consider only one type of knowledge, learned from either instance features or instance relations via a specific distillation strategy in teacher-student learning. Few works explore the idea of transferring different types of knowledge with different distillation strategies in a unified framework. Moreover, the frequently used offline distillation suffers from a limited learning capacity due to the fixed teacher-student architecture. In this paper, we propose collaborative teacher-student learning via multiple knowledge transfer (CTSL-MKT), which promotes both self-learning and collaborative learning. It allows multiple students to learn knowledge from both individual instances and instance relations in a collaborative way. While learning from themselves via self-distillation, the students can also guide each other via online distillation. The experiments and ablation studies on four image datasets demonstrate that the proposed CTSL-MKT significantly outperforms the state-of-the-art KD methods. | computer science |
We construct an effective action for "soft" gluons by integrating out hard thermal modes of topologically massive vector bosons at one-loop order. Loops carrying hard gluons (momentum $\sim T$) are known as hard thermal loops (HTL). The gluons are massive in the non-Abelian topologically massive model (TMM) due to a $B\wedge F$ coupling, in which a 2-form field $B$ is coupled quadratically to the field strength $F$ of the Yang-Mills (YM) field. The mass of the gluons plays an important role in the perturbative analysis of thermal field theory. Due to the presence of this infrared cut-off in the model, the color diffusion constant and conductivity can be analyzed in the perturbative regime. | high energy physics theory |
Planar nitrogen-incorporated ultrananocrystalline diamond, (N)UNCD, has emerged as a unique field emission source attractive for accelerator applications because of its capability to generate high charge beam and handle moderate vacuum conditions. Most importantly, (N)UNCD sources are simple to produce: conventional high aspect ratio isolated emitters are not required to be formed on the surface, and the actual emitter surface roughness is on the order of only 100~nm. Careful reliability assessment of (N)UNCD is required before it may find routine application in accelerator systems. In the present study using an L-band normal conducting single-cell rf gun, a (N)UNCD cathode has been conditioned to $\sim$42~MV/m in a well-controlled manner. It reached a maximum output charge of 15~nC corresponding to an average current of 6~mA during an emission period of 2.5~$\mu$s. Imaging of emission current revealed a large number of isolated emitters (density over 100/cm$^{2}$) distributed on the cathode, which is consistent with previous tests in dc environments. The performance metrics, the emission imaging, and the systematic study of emission properties during rf conditioning in a wide gradient range assert (N)UNCD as an enabling electron source for rf injector designs serving industrial and scientific applications. These studies also improve the fundamental knowledge of the practical conditioning procedure via better understanding of emission mechanisms. | physics |
In this work we propose and analyze an abstract parameter-dependent model written as a mixed variational formulation based on Volterra integrals of the second kind. For the analysis, we consider a suitable adaptation of the classic mixed theory to the Volterra equations setting, and prove the well-posedness of the resulting mixed viscoelastic formulation. Error estimates are derived using the available results for Volterra equations, where all the estimates are independent of the perturbation parameter. We consider an application of the developed theory to a viscoelastic Timoshenko beam, and report numerical experiments in order to assess the independence of the perturbation parameter. | mathematics |
We present results on the properties of neon emission in $z\sim2$ star-forming galaxies drawn from the MOSFIRE Deep Evolution Field (MOSDEF) survey. Doubly-ionized neon ([NeIII]3869) is detected at $\geq3\sigma$ in 61 galaxies, representing $\sim$25% of the MOSDEF sample with H$\alpha$, H$\beta$, and [OIII]$5007$ detections at similar redshifts. We consider the neon emission-line properties of both individual galaxies with [NeIII]3869 detections and composite $z\sim2$ spectra binned by stellar mass. With no requirement of [NeIII]3869 detection, the latter provide a more representative picture of neon emission-line properties in the MOSDEF sample. The [NeIII]3869/[OII]3727 ratio (Ne3O2) is anti-correlated with stellar mass in $z\sim2$ galaxies, as expected based on the mass-metallicity relation. It is also positively correlated with the [OIII]$5007$/[OII]$3727$ ratio (O32), but $z\sim2$ line ratios are offset towards higher Ne3O2 at fixed O32, compared with both local star-forming galaxies and individual H II regions. Despite the offset towards higher Ne3O2 at fixed O32 at $z\sim2$, biases in inferred Ne3O2-based metallicity are small. Accordingly, Ne3O2 may serve as an important metallicity indicator deep into the reionization epoch. Analyzing additional rest-optical line ratios including [NeIII]$3869$/[OIII]$5007$ (Ne3O3) and [OIII]$5007$/H$\beta$ (O3H$\beta$), we conclude that the nebular emission-line ratios of $z\sim2$ star-forming galaxies suggest a harder ionizing spectrum (lower stellar metallicity, i.e., Fe/H) at fixed gas-phase oxygen abundance, compared to systems at $z\sim0$. These new results based on neon lend support to the physical picture painted by oxygen, nitrogen, hydrogen, and sulfur emission, of an ionized ISM in high-redshift star-forming galaxies irradiated by chemically young, $\alpha$-enhanced massive stars. | astrophysics |
Processes of electron-positron annihilation into a pair of fermions are considered. Forward-backward and left-right asymmetries are studied, taking into account the polarization of the initial and final particles. Complete 1-loop electroweak radiative corrections are included. A wide energy range, including the $Z$ boson peak and higher energies relevant for future $e^+e^-$ colliders, is covered. The sensitivity of the observable asymmetries to the electroweak mixing angle and the fermion weak couplings is discussed. | high energy physics phenomenology |
We explore a few-body mixture of two bosonic species confined in quasi-one-dimensional parabolic traps of different length scales. The ground state phase diagrams in the three-dimensional parameter space spanned by the harmonic length scale ratio, inter-species coupling strength and particle number ratio are investigated. As a first case study we use the mean-field ansatz (MF) to perform a detailed analysis of the separation mechanism. It allows us to derive a simple and intuitive rule predicting which of the immiscible phases is energetically more favorable at the miscible-immiscible phase boundary. We estimate the critical coupling strength for the miscible-immiscible transition and perform a comparison to correlated many-body results obtained by means of the Multi-Layer Multi-Configuration Time Dependent Hartree method for bosonic mixtures (ML-X). At a critical ratio of the trap frequencies, determined solely by the particle number ratio, the deviations between MF and ML-X are very pronounced and can be attributed to a high degree of entanglement between the components. As a result, we evidence the breakdown of the effective one-body picture. Additionally, when many-body correlations play a substantial role, the one-body density is in general not sufficient for deciding upon the phase at hand which we demonstrate exemplarily. | condensed matter |
Fredholm integral equations of the first kind are the prototypical example of ill-posed linear inverse problems. They model, among other things, reconstruction of distorted noisy observations and indirect density estimation and also appear in instrumental variable regression. However, their numerical solution remains a challenging problem. Many techniques currently available require a preliminary discretization of the domain of the solution and make strong assumptions about its regularity. For example, the popular expectation maximization smoothing (EMS) scheme requires the assumption of piecewise constant solutions which is inappropriate for most applications. We propose here a novel particle method that circumvents these two issues. This algorithm can be thought of as a Monte Carlo approximation of the EMS scheme which not only performs an adaptive stochastic discretization of the domain but also results in smooth approximate solutions. We analyze the theoretical properties of the EMS iteration and of the corresponding particle algorithm. Compared to standard EMS, we show experimentally that our novel particle method provides state-of-the-art performance for realistic systems, including motion deblurring and reconstruction of cross-section images of the brain from positron emission tomography. | statistics |
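The EMS-style iteration underlying the particle method above can be sketched on a toy discretized problem, h = K f, with a Gaussian blur kernel. The grid size, kernel width, smoothing radius, and the "true" signal below are illustrative choices, not taken from the paper:

```python
import math

# Toy discretized Fredholm equation of the first kind, h = K f.
n = 50
grid = [i / (n - 1) for i in range(n)]

K = [[math.exp(-0.5 * ((x - y) / 0.05) ** 2) for y in grid] for x in grid]
for j in range(n):  # normalise columns so K maps mass to mass
    s = sum(K[i][j] for i in range(n))
    for i in range(n):
        K[i][j] /= s

# "True" solution (two bumps) and the resulting noiseless observation h
f_true = [math.exp(-0.5 * ((x - 0.3) / 0.04) ** 2)
          + math.exp(-0.5 * ((x - 0.7) / 0.04) ** 2) for x in grid]
h = [sum(K[i][j] * f_true[j] for j in range(n)) for i in range(n)]

def ems_step(f, w=1):
    # E and M steps: the multiplicative EM update for h = K f
    Kf = [sum(K[i][j] * f[j] for j in range(n)) for i in range(n)]
    f_em = [f[j] * sum(K[i][j] * h[i] / Kf[i] for i in range(n)) for j in range(n)]
    # S step: a small moving-average smoother
    return [sum(f_em[max(0, j - w):j + w + 1])
            / (min(n, j + w + 1) - max(0, j - w)) for j in range(n)]

f = [1.0 / n] * n  # flat initial guess
for _ in range(200):
    f = ems_step(f)
```

The multiplicative update keeps the iterates non-negative, and the smoothing step is what distinguishes EMS from plain EM; the particle method in the paper replaces this fixed grid with an adaptive stochastic discretization.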
In 2+1 dimensions, QED becomes exactly solvable for all values of the fermion charge $e$ in the limit of many fermions $N_f\gg 1$. We present results for the free energy density at finite temperature $T$ to next-to-leading-order in large $N_f$. In the naive large $N_f$ limit, we uncover an apparently UV-divergent contribution to the vacuum energy at order ${\cal O}(e^6 N_f^3)$, which we argue to become a finite contribution of order ${\cal O}(N_f^4 e^6)$ when resumming formally higher-order $1/N_f$ contributions. We find the finite-temperature free energy to be well-behaved for all values of the dimensionless coupling $e^2N_f/T$, and to be bounded by the free energy of $N_f$ free fermions and non-interacting QED3, respectively. We invite follow-up studies from finite-temperature lattice gauge theory at large but fixed $N_f$ to test our results in the regime $e^2N_f/T\gg 1$. | high energy physics theory |
We consider the so-called simplest correlation function of four infinitely heavy half-BPS operators in planar N=4 SYM in the limit when the operators are light-like separated in a sequential manner. We find a closed-form expression for the correlation function in this limit as a function of the 't Hooft coupling and residual cross ratios. Our analysis heavily relies on the factorization of the correlation function into the product of null octagons and on the recently established determinant representation for the latter. We show that the null octagon is given by a Fredholm determinant of a certain integral operator which has a striking similarity to those previously encountered in the study of two-point correlation functions in exactly solvable models at finite temperature and of level spacing distributions for random matrices. This allows us to compute the null octagon exactly by employing a method of differential equations. | high energy physics theory |
The search for compressed supersymmetry at the multi-TeV scale, in the presence of a light gravitino dark matter, can receive a sizable uplift by looking at the associated fat-jets with missing transverse momentum as a signature of the boson produced in the decay of the much heavier next-to-lightest sparticle. We focus on the hadronic decay of the ensuing Higgs and/or $Z$ boson, giving rise to at least two fat-jets and $\slashed{E}_T$ in the final state. We perform a detailed background study, adopting a multivariate analysis using a boosted decision tree, to provide a robust investigation of the discovery potential for such a signal at the 14 TeV LHC, considering different benchmark points satisfying all the theoretical and experimental constraints. This channel provides the best discovery prospects, with most of the benchmarks discoverable within an integrated luminosity of $\mathcal{L}=200$ fb$^{-1}$. Kinematic observables are investigated in order to distinguish between compressed and uncompressed spectra having similar event yields. | high energy physics phenomenology |
A determination of the mass function (MF) of stellar clusters can be quite dependent on the range of measured masses, the fitting technique, and the analytic function that is being fit to the data. Here, we use HST/WFPC2 data of NGC 1711, a stellar cluster in the Large Magellanic Cloud, as a test case to explore a range of possible determinations of the MF from a single dataset. We employ the analytic modified lognormal power-law (MLP) distribution, a hybrid function that has a peaked lognormal-like body and a power-law tail at intermediate and high masses. A fit with the MLP has the advantage that the resulting best-fit function can be either a hybrid function, a pure lognormal, or a pure power law, in different limits of the function. The completeness limit for the observations means that the data contains masses above $\sim 0.90\,M_{\odot}$. In this case, the MLP fits yield essentially a pure power-law MF. We demonstrate that the nonlinear regression/least-squares approach is not justified since the underlying assumptions are not satisfied. By using maximum likelihood estimation, which is independent of binning, we find a best-fit functional form $dN/d\ln m \propto m^{-\alpha}$, where $\alpha = 1.72 \pm 0.05$ or $1.75 \pm 0.05$ for two different theoretical isochrone models, respectively. Furthermore, we explore the possibility of systematic errors in the determination of the power-law index due to the depth of the observations. When we combine the observational data with artificially generated data from the lognormal Chabrier IMF for masses below $0.90\, M_{\odot}$, the best fit MLP is a hybrid function but with a steeper asymptotic slope i.e., $\alpha = 2.04 \pm 0.07$. This illustrates the systematic uncertainties in commonly used MF parameters that can depend on the range of data that is fitted. | astrophysics |
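The binning-independent maximum likelihood estimation mentioned above has a simple closed form for a pure power law: for $dN/d\ln m \propto m^{-\alpha}$ above a completeness limit $m_{\min}$, the estimator is $\hat\alpha = n / \sum_i \ln(m_i/m_{\min})$. A minimal sketch, where the sample size, seed, and true slope are invented for illustration:

```python
import math
import random

random.seed(0)
m_min, alpha_true, n = 0.90, 1.75, 20000  # illustrative values only

# Draw masses from dN/dm ∝ m^{-(alpha+1)} above m_min via inverse-CDF sampling
masses = [m_min * (1.0 - random.random()) ** (-1.0 / alpha_true)
          for _ in range(n)]

# Closed-form MLE for the slope of dN/dln m ∝ m^{-alpha}
alpha_hat = n / sum(math.log(m / m_min) for m in masses)
```

Unlike least-squares fits to binned counts, this estimator uses every mass directly, which is why it avoids the regression assumptions criticized in the abstract.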
A model of a partial current-carrying torus loop anchored to the photosphere is analyzed. Conditions for the catastrophic loss of equilibrium are considered, and the corresponding value of the critical decay index of the external magnetic field is found. Taking line-tying conditions into account leads to a non-monotonic dependence of the critical decay index on the height of the apex and the length of the flux rope (its endpoint separation). For relatively short flux ropes, the critical decay index is significantly lower than unity, in contrast to widespread models with a typical critical decay index above unity. The steep decrease of the critical index with height at low heights is due to the sharp increase of the curvature of the flux-rope axis, which transforms from a nearly straight line to a crescent. | astrophysics |
Indoor intrusion detection technology has been widely utilized in network security monitoring, smart cities, entertainment games, and other fields. Most existing indoor intrusion detection methods directly exploit the Received Signal Strength (RSS) data collected by Monitor Points (MPs) and do not consider the instability of WLAN signals in complex indoor environments. In response to this problem, this paper proposes a novel WLAN indoor intrusion detection method based on deep signal feature fusion and Minimized Multiple Kernel Maximum Mean Discrepancy (Minimized-MKMMD). Firstly, a multi-branch deep convolutional neural network is used to conduct dimensionality reduction and feature fusion of the RSS data, and labels are obtained from the offline and online RSS fusion features corresponding to the silence and intrusion states; based on these, the source and target domains are constructed, respectively. Secondly, the optimal transfer matrix is constructed by minimizing the MKMMD. Thirdly, the transferred RSS data in the source domain are utilized to train the classifiers, which are then applied to classify the RSS fusion features in the target domain in the same shared subspace. Finally, intrusion detection in the target environment is realized by iterating the process above until the algorithm converges. The experimental results show that the proposed method can effectively improve the accuracy and robustness of the intrusion detection system. | electrical engineering and systems science |
3D Convolution Neural Networks (CNNs) have been widely applied to 3D scene understanding, such as video analysis and volumetric image recognition. However, 3D networks can easily lead to over-parameterization, which incurs a high computational cost. In this paper, we propose Channel-wise Automatic KErnel Shrinking (CAKES), to enable efficient 3D learning by shrinking standard 3D convolutions into a set of economical operations, e.g., 1D and 2D convolutions. Unlike previous methods, CAKES performs channel-wise kernel shrinkage, which enjoys the following benefits: 1) enabling operations deployed in every layer to be heterogeneous, so that they can extract diverse and complementary information to benefit the learning process; and 2) allowing for an efficient and flexible replacement design, which can be generalized to both spatial-temporal and volumetric data. Further, we propose a new search space based on CAKES, so that the replacement configuration can be determined automatically for simplifying 3D networks. CAKES shows superior performance to other methods with a similar model size, and it also achieves performance comparable to the state-of-the-art with much fewer parameters and computational costs on tasks including 3D medical image segmentation and video action recognition. Codes and models are available at https://github.com/yucornetto/CAKES | computer science |
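A rough illustration of why shrinking 3D kernels saves parameters: count the weights of a standard $k^3$ convolution against a hypothetical channel-wise split into 1D temporal and 2D spatial kernels. The layer sizes and the 50/50 split below are invented for illustration, not the configuration found by the paper's search:

```python
c_in, c_out, k = 64, 64, 3  # hypothetical layer dimensions

# Standard 3D convolution: a k*k*k kernel per (input, output) channel pair
full_3d = c_in * c_out * k ** 3

# Channel-wise shrinkage: half the output channels use a temporal 1D kernel
# (k x 1 x 1), the other half a spatial 2D kernel (1 x k x k)
split = (c_out // 2) * c_in * k + (c_out // 2) * c_in * k ** 2
```

For these numbers the split layer uses roughly 4.5x fewer weights, which conveys the scale of savings such replacements target; the actual method searches over which operation each channel group receives.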
In this paper, a siamese DNN model is proposed to learn the characteristics of the audio dynamic range compressor (DRC). This facilitates an intelligent control system that uses audio examples to configure the DRC, a widely used non-linear audio signal conditioning technique in the areas of music production, speech communication and broadcasting. Several alternative siamese DNN architectures are proposed to learn feature embeddings that can characterise subtle effects due to dynamic range compression. These models are compared with each other, as well as with handcrafted features proposed in previous work. An evaluation of the relations between the DNN hyperparameters and the DRC parameters is also provided. The best model is able to produce a universal feature embedding that is capable of predicting multiple DRC parameters simultaneously, which is a significant improvement over our previous research. The feature embedding shows better performance than handcrafted audio features when predicting DRC parameters for both mono-instrument audio loops and polyphonic music pieces. | electrical engineering and systems science |
Coherent scattering of an elliptically polarised tweezer into a cavity mode provides a promising platform for cooling levitated nanoparticles into their combined rotational and translational quantum regime [Phys. Rev. Lett. 126, 163603 (2021)]. This article presents the theory of how aspherical nanoparticles are affected by elliptically polarised laser beams, how two orthogonal cavity modes enable rotational and translational cooling, and how the resulting power spectra contain signatures of rotational non-linearities. We provide analytic expressions for the resulting trapping frequencies, opto-mechanical coupling strengths, cooling rates, and steady-state occupations and we study their dependence on the tweezer ellipticity. | quantum physics |
Pezzini et al. reported an unconventional mass enhancement in the topological nodal-line semimetal ZrSiS (Nat. Phys. 14, 178 (2018)), whose origin remains puzzling. In this material, strong short-range interactions might induce excitonic particle-hole pairs. Here we study the renormalization of fermion velocities and find that the mass enhancement in ZrSiS can be well understood if we suppose that ZrSiS is close to the quantum critical point between a semimetal and an excitonic insulator. Near this quantum critical point, the fermion velocities are considerably reduced by excitonic quantum fluctuations, leading to fermion mass enhancement. The quasiparticle residue is suppressed as the energy decreases but is finite at zero energy. This indicates that ZrSiS is a strongly correlated Fermi liquid, and explains why the mass enhancement is weaker than in non-Fermi liquids. Our results suggest that ZrSiS is a rare example of a 3D topological semimetal exhibiting unusual quantum criticality. | condensed matter |
Measurements of transverse profiles using Ionization Profile Monitors (IPMs) for high-brightness beams are affected by the electromagnetic field of the beam. This interaction may cause a distortion of the measured profile shape despite the strong external magnetic field applied to impose limits on the transverse movement of electrons. The mechanisms leading to this distortion are discussed in detail. The distortion itself is described by means of analytic calculations for simplified beam distributions and a full simulation model for realistic distributions. A simple relation for the minimum magnetic field scaling with beam parameters needed to avoid profile distortions is presented. Further, the application of machine learning algorithms to the problem of reconstructing the actual beam profile from the distorted measured profile is presented. The obtained results show good agreement for tests on simulation data. The performance of these algorithms indicates that they could be very useful for the operation of IPMs on high-brightness beams or IPMs with a weak magnetic field. | physics |
Dependency analysis is recognized as an important field of software engineering due to a variety of reasons. There exists a large pool of tools providing assistance to software developers and architects. Analysis of inter- and intra-project dependencies can help provide various insights about the entire development process. There is, however, currently a lack of tools that would support researchers by extracting intra-project dependencies data in a format most suited for further analysis. In this paper we introduce DepMiner - an open source, language-agnostic tool for mining detailed dependencies data from source code, based on extensive static analysis capabilities of an industry standard IDE. DepMiner can be easily integrated into arbitrary mining pipelines to conduct large-scale source code processing jobs involving intra-project dependencies. It is easily extensible to support other languages of source code, different granularities of analysis, and other use-specific needs. | computer science |
We use photometric and kinematic data from Gaia DR2 to explore the structure of the star-forming region associated with the molecular cloud of Perseus. Apart from the two well-known clusters, IC 348 and NGC 1333, we present five new clustered groups of young stars, which contain between 30 and 300 members, named Autochthe, Alcaeus, Heleus, Electryon and Mestor. We demonstrate that these are co-moving groups of young stars, based on how the candidate members are distributed in position, proper motion, parallax and colour-magnitude space. By comparing their colour-magnitude diagrams to isochrones we show that they have ages between 1 and 5 Myr. Using 2MASS and WISE colours we find that the fraction of stars with discs in each group ranges from 10 to 50 percent. The youngest of the new groups is also associated with a reservoir of cold dust, according to the Planck map at 353 GHz. We compare the ages and proper motions of the five new groups to those of IC 348 and NGC 1333. Autochthe is clearly linked with NGC 1333 and may have formed in the same star formation event. The seven groups separate roughly into two sets which share proper motion, parallax and age: Heleus, Electryon and Mestor as the older set, and NGC 1333 and Autochthe as the younger set. Alcaeus is kinematically related to the younger set, but at a more advanced age, while the properties of IC 348 overlap with both sets. All older groups in this star-forming region are located at higher galactic latitude. | astrophysics |
We study the symbol and the alphabet for two-loop NMHV amplitudes in planar ${\cal N}=4$ super-Yang-Mills from the $\bar{Q}$ equations, which provide a first-principle method for computing multi-loop amplitudes. Starting from one-loop N${}^2$MHV ratio functions, we explain in detail how to use $\bar{Q}$ equations to obtain the total differential of two-loop $n$-point NMHV amplitudes, whose symbol contains letters that are algebraic functions of kinematics for $n\geq 8$. We present an explicit formula with nice patterns for the part of the symbol involving algebraic letters for all multiplicities, and we find $17-2m$ multiplicative-independent letters for a given square root of a Gram determinant, with $0\leq m\leq 4$ depending on the number of particles involved in the square root. We also observe that these algebraic letters can be found as poles of one-loop four-mass leading singularities with MHV or NMHV trees. As a byproduct of our algebraic results, we find a large class of components of two-loop NMHV amplitudes which can be written as differences of two double-pentagon integrals; these are particularly simple and free of square roots. As an example, we present the complete symbol for $n=9$, whose alphabet contains $59\times 9$ rational letters, in addition to the $11 \times 9$ independent algebraic ones. We also give all-loop NMHV last-entry conditions for all multiplicities. | high energy physics theory |
Astronomers have come to recognize the benefits of photonics, often in combination with optical systems, in solving longstanding experimental problems in Earth-based astronomy. Here, we explore some of the recent advances made possible by integrated photonics. We also look to the future with a view to entirely new kinds of astronomy, particularly in an era of the extremely large telescopes. | astrophysics |
We construct a range of supersymmetric cubic vertices for three massless higher-spin supermultiplets in four-dimensional space. We use the frame-like multispinor formalism, which allows us to avoid most of the technical difficulties and provides a uniform description for bosons and fermions. Our work is based on the so-called Fradkin-Vasiliev formalism for the construction of cubic vertices, which requires a non-zero cosmological constant. Thus we first construct the vertices in AdS space and then consider the flat limit. We show that the AdS supersymmetric vertex is a sum of four elementary vertices for the supermultiplet components, while one of the vertices vanishes in the flat limit, in agreement with Metsaev's classification. | high energy physics theory |
For many years, the Simplified Refined Instrumental Variable method for Continuous-time systems (SRIVC) has been widely used for identification. The intersample behaviour of the input plays an important role in this method, and it has been shown recently that the SRIVC estimator is not consistent if an incorrect assumption on the intersample behaviour is considered. In this paper, we present an extension of the SRIVC algorithm that is able to deal with continuous-time multisine signals, which cannot be interpolated exactly through hold reconstructions. The proposed estimator is generically consistent for any input reconstructed through zero or first-order-hold devices, and we show that it is generically consistent for continuous-time multisine inputs as well. The statistical performance of the proposed estimator is compared to the standard SRIVC estimator through extensive simulations. | electrical engineering and systems science |
A weighted Shiryaev-Roberts change detection procedure is shown to approximately minimize the expected delay to detection as well as higher moments of the detection delay among all change-point detection procedures with the given low maximal local probability of a false alarm within a window of a fixed length in pointwise and minimax settings for general non-i.i.d. data models and for the composite post-change hypothesis when the post-change parameter is unknown. We establish very general conditions for the models under which the weighted Shiryaev-Roberts procedure is asymptotically optimal. These conditions are formulated in terms of the rate of convergence in the strong law of large numbers for the log-likelihood ratios between the "change" and "no-change" hypotheses, and we also provide sufficient conditions for a large class of ergodic Markov processes. Examples, where these conditions hold, are given. | mathematics |
The spin of the supermassive black hole that resides at the Galactic Centre can in principle be measured by accurate measurements of the orbits of stars that are much closer to SgrA* than S2, the orbit of which recently provided the measurement of the gravitational redshift and the Schwarzschild precession. The GRAVITY near-infrared interferometric instrument combining the four 8m telescopes of the VLT provides a spatial resolution of 2-4 mas, breaking the confusion barrier for adaptive-optics-assisted imaging with a single 8-10m telescope. We used GRAVITY to observe SgrA* over a period of six months in 2019 and employed interferometric reconstruction methods developed in radio astronomy to search for faint objects near SgrA*. This revealed a slowly moving star of magnitude 18.9 in K band within 30mas of SgrA*. The position and proper motion of the star are consistent with the previously known star S62, which is at a substantially larger physical distance, but in projection passes close to SgrA*. Observations in August and September 2019 easily detected S29, with K-magnitude of 16.6, at approximately 130 mas from SgrA*. The planned upgrades of GRAVITY, and further improvements in the calibration, hold the promise of finding stars fainter than magnitude 19 at K. | astrophysics |
Axion-like particles $a$ (ALPs) that couple to the Standard Model (SM) gauge fields could be observed in the high-energy photon scattering $\gamma N\to N a$ off nuclei followed by the $a\to \gamma\gamma$ decay. In the present paper we describe the calculation of the ALP production cross-section and the properties of this production. The cross section formulas are implemented in the program for the simulation of events in the NA64 experiment, the active electron beam dump facility at the CERN SPS. We study the prospects of the NA64 experiment to search for ALP in the $10\, \mbox{MeV} \lesssim m_a\lesssim 100$ MeV mass range for the statistics corresponding to up to $5\times 10^{12}$ electrons on target (EOT). | high energy physics phenomenology |
Single phonon excitations are sensitive probes of light dark matter in the keV-GeV mass window. For anisotropic target materials, the signal depends on the direction of the incoming dark matter wind and exhibits a daily modulation. We discuss in detail the various sources of anisotropy, and carry out a comparative study of 26 crystal targets, focused on sub-MeV dark matter benchmarks. We compute the modulation reach for the most promising targets, corresponding to the cross section where the daily modulation can be observed for a given exposure, which allows us to combine the strength of DM-phonon couplings and the amplitude of daily modulation. We highlight Al$_2$O$_3$ (sapphire), CaWO$_4$ and h-BN (hexagonal boron nitride) as the best polar materials for recovering a daily modulation signal, which feature $\mathcal{O}(1 - 100)\%$ variations of detection rates throughout the day, depending on the dark matter mass and interaction. The directional nature of single phonon excitations offers a useful handle to mitigate backgrounds, which is crucial for fully realizing the discovery potential of near future experiments. | high energy physics phenomenology |
In this paper we construct spectral triples $(A,H,D)$ on the symbolic space when the alphabet is finite. We describe some new results for the associated Dixmier trace representations for Gibbs probabilities (for potentials with less regularity than H\"older) and for a certain class of functions. The Dixmier trace representation can be expressed as the limit of a certain zeta function obtained from high order iterations of the Ruelle operator. Among other things we consider a class of examples where we can exhibit the explicit expression for the zeta function. We are also able to apply our reasoning for some parameters of the Dyson model (a potential on the symbolic space $\{-1,1\}^\mathbb{N}$) and for a certain class of observables. Nice results by R. Sharp, M.~Kesseb\"ohmer and T.~Samuel for Dixmier trace representations of Gibbs probabilities considered the case where the potential is of H\"older class. We also analyze a particular case of a pathological continuous potential where the Dixmier trace representation - via the associated zeta function - is not true. | mathematics |
The task of developing high-performing parallel software must be made easier and more cost-effective in order to fully exploit existing and emerging large-scale computer systems for the advancement of science. The Super Instruction Architecture (SIA) is a parallel programming platform geared towards applications that need to manage large amounts of data stored in potentially sparse multidimensional arrays during calculations. The SIA platform was originally designed for the Quantum Chemistry software package ACESIII. More recently, the SIA was reimplemented to overcome limitations in the original ACESIII program. It has now been successfully employed in the new Aces4 Quantum Chemistry software package and in the development of the atmospheric transport application MATLOC, thus demonstrating the versatility of the SIA approach. MATLOC calculates transport and dispersion of mass over regions in the range of 100-1000s of square kilometers and is a significant improvement over existing community codes. This paper describes results from both the transport and dispersion application as well as some difficult Quantum Chemistry open-shell coupled-cluster benchmark calculations using Aces4. | physics |
The instantaneous quantum polynomial time model (or the IQP model) is one of the promising models to demonstrate a quantum computational advantage over classical computers. If the IQP model can be efficiently simulated by a classical computer, an unlikely consequence in computer science can be obtained (under some unproven conjectures). In order to experimentally demonstrate the advantage using medium- or large-scale IQP circuits, it is inevitable to efficiently verify whether the constructed IQP circuits work faithfully. There exist two types of IQP models, based on sampling from hypergraph states or from weighted graph states, respectively. For the first type of IQP model, polynomial-time verification protocols have already been proposed. In this paper, we propose verification protocols for the second type of IQP model. To this end, we propose polynomial-time fidelity estimation protocols for weighted graph states for each of the following four situations, in which a verifier can (i) choose any measurement basis and perform adaptive measurements, (ii) only choose restricted measurement bases and perform adaptive measurements, (iii) choose any measurement basis and only perform non-adaptive measurements, and (iv) only choose restricted measurement bases and only perform non-adaptive measurements. In all of our verification protocols, the verifier's quantum operations are only single-qubit measurements. Since we assume no i.i.d. property of the quantum states, our protocols work in any situation. | quantum physics |
Reachability analysis is a fundamental method that supports formally correct synthesis, robust model predictive control, set-based observers, fault detection, invariant computation, and conformance checking, to name but a few applications. In many of these applications, one needs to compute a reachable set starting within a previously computed reachable set. While it was previously necessary to re-compute the entire reachable set, we demonstrate that one can leverage the dependencies of states within the previously computed set. As a result, we almost instantly obtain an over-approximative subset of a previously computed reachable set by evaluating analytical maps. The advantages of our novel method are demonstrated for the falsification of systems, optimization over reachable sets, and the synthesis of safe maneuver automata. In all of these applications, the computation time is reduced significantly. | electrical engineering and systems science
We reformulate the twistor construction for hyper- and quaternion-K\"ahler manifolds, introducing new sigma models that compute scalar potentials for the geometry. These sigma models have the twistor space of the quaternionic manifold as their target and encode finite non-linear perturbations of the flat structures. In the hyperk\"ahler case our twistor sigma models compute both Plebanski fundamental forms (including the K\"ahler potential), while in the quaternion-K\"ahler setting the twistor sigma model computes the K\"ahler potential for the hyperk\"ahler structure on non-projective twistor space. In four dimensions, one of the models provides the generating functional of tree-level MHV graviton scattering amplitudes, with perturbations of the hyperk\"ahler structure corresponding to positive-helicity gravitons. The sigma model's perturbation theory gives rise to a sum of tree diagrams observed previously in the literature, and their summation via a matrix tree theorem gives a first-principles derivation of Hodges' formula for MHV graviton amplitudes directly from general relativity. We generalise the twistor sigma model to higher degree (defined in the first case with a cosmological constant), giving a new generating principle for the full tree-level graviton S-matrix. | high energy physics theory
The unplaced Fragment D of the Antikythera Mechanism, whose operation was unknown, has been a mystery since its discovery. The gear r1, which was detected on the Fragment radiographies by C. Karakalos, is preserved in excellent condition, but this was not enough to correlate it to the existing gear trains of the Mechanism. Analysis of the AMRP tomographies of Fragment D and of its mechanical characteristics revealed that it could be a part of the Draconic gearing. Although the Draconic cycle was well known during the Mechanism's era, as it represents the fourth Lunar cycle, it seems to be missing from the Antikythera Mechanism. The study of Fragment D was supported by the authors' bronze reconstruction of the Draconic gearing. The adaptation of the Draconic gearing on the Antikythera Mechanism improves its functionality and answers several questions. | physics
In dynamic magnetic resonance (MR) imaging, low-rank plus sparse (L+S) decomposition, or robust principal component analysis (PCA), has achieved stunning performance. However, the selection of the parameters of L+S is empirical, and the acceleration rate is limited, which are common failings of iterative compressed sensing MR imaging (CS-MRI) reconstruction methods. Many deep learning approaches have been proposed to address these issues, but few of them use a low-rank prior. In this paper, a model-based low-rank plus sparse network, dubbed L+S-Net, is proposed for dynamic MR reconstruction. In particular, we use an alternating linearized minimization method to solve the optimization problem with low-rank and sparse regularization. Learned soft singular value thresholding is introduced to ensure the clear separation of the L component and S component. Then, the iterative steps are unrolled into a network in which the regularization parameters are learnable. We prove that the proposed L+S-Net achieves global convergence under two standard assumptions. Experiments on retrospective and prospective cardiac cine datasets show that the proposed model outperforms state-of-the-art CS and existing deep learning methods and has great potential for extremely high acceleration factors (up to 24x). | electrical engineering and systems science |
This paper investigates the channel aging problem of light-fidelity (LiFi) systems. In the LiFi physical layer, the majority of the optimization problems for mobile users are nonconvex and require the use of dual decomposition or heuristic techniques. Such techniques are based on iterative algorithms and often cause a high processing time at the physical layer. Hence, the obtained solutions are no longer optimal, since the LiFi channels keep evolving. In this paper, a proactive optimization (PO) approach that can alleviate the LiFi channel aging problem is proposed. The core idea is to design a long short-term memory (LSTM) network that is capable of predicting the posterior positions and orientations of mobile users, which can then be used to predict their channel coefficients. Consequently, the obtained channel coefficients can be exploited to derive near-optimal transmission schemes prior to the intended service time, which enables real-time service. Through various simulations, the performance of the designed LSTM model is evaluated in terms of prediction accuracy and time. Finally, the performance of the proposed PO approach is investigated in the sum-rate maximization problem of multiuser cell-free LiFi systems with quality-of-service constraints, where a performance gap of less than 7% is achieved while eliminating up to 100% of the online processing time. | electrical engineering and systems science
We discuss a model where a spontaneous quantum collapse is induced by the gravitational interaction, treated classically. Its dynamics couples the standard wave function of a system with the Bohmian positions of its particles, which are considered as the only source of the gravitational attraction. The collapse is obtained by adding a small imaginary component to the gravitational coupling. It predicts extremely small perturbations of microscopic systems, but very fast collapse of QSMDS (quantum superpositions of macroscopically distinct states) of a solid object, varying as the fifth power of its size. The model does not require adding any dimensional constant to those of standard physics. | quantum physics
We present the first high-resolution, simultaneous observations of the solar chromosphere in the optical and millimeter wavelength ranges, obtained with ALMA and the IBIS instrument at the Dunn Solar Telescope. In this paper we concentrate on the comparison between the brightness temperature observed in ALMA Band 3 (3 mm; 100 GHz) and the core width of the H$\alpha$ 656.3 nm line, previously identified as a possible diagnostic of the chromospheric temperature. We find that in the area of plage, network and fibrils covered by our FOV the two diagnostics are well correlated, with similar spatial structures observed in both. The strength of the correlation is remarkable, given that the source function of the mm-radiation obeys local thermodynamic equilibrium, while the H$\alpha$ line has a source function that deviates significantly from the local Planck function. The observed range of ALMA brightness temperatures is noticeably smaller than the temperature range that was previously invoked to explain the observed width variations in H$\alpha$. We employ analysis from forward modeling with the RH code to argue that the strong correlation between H$\alpha$ width and ALMA brightness temperature is caused by their shared dependence on the population number $n_2$ of the first excited level of hydrogen. This population number drives millimeter opacity through hydrogen ionization via the Balmer continuum, and H$\alpha$ width through a curve-of-growth-like opacity effect. Ultimately, the $n_2$ population is regulated by the enhancement or lack of downward Ly$\alpha$ flux, which coherently shifts the formation height of both diagnostics to regions with different temperatures. | astrophysics
Temporal action detection is a fundamental yet challenging task in video understanding. Video context is a critical cue to effectively detect actions, but current works mainly focus on temporal context, while neglecting semantic context as well as other important context properties. In this work, we propose a graph convolutional network (GCN) model, named G-TAD, to adaptively incorporate multi-level semantic context into video features and cast temporal action detection as a sub-graph localization problem. Specifically, we formulate video snippets as graph nodes, snippet-snippet correlations as edges, and actions associated with context as target sub-graphs. With graph convolution as the basic operation, we design a GCN block called GCNeXt, which learns the features of each node by aggregating its context and dynamically updates the edges in the graph. To localize each sub-graph, we also design an SGAlign layer to embed each sub-graph into the Euclidean space. Extensive experiments show that G-TAD is capable of finding effective video context without extra supervision and achieves state-of-the-art performance on two detection benchmarks. On ActivityNet-1.3, it obtains an average mAP of 34.09%; on THUMOS14, it reaches 51.6% at [email protected] when combined with a proposal processing method. G-TAD code is publicly available at https://github.com/frostinassiky/gtad. | computer science
In this work, we study the $CP$ asymmetry in the angular distribution of $\tau\to K_S\pi\nu_\tau$ decays, taking into account the known $CP$ violation in $K^0-\bar{K}^0$ mixing. It is pointed out for the first time that, once the well-measured $CP$ violation in the neutral kaon system is invoked, a non-zero $CP$ asymmetry would appear in the angular observable of the decays considered, even within the Standard Model. By employing the reciprocal basis, which is most convenient when a $K_{S(L)}$ is involved in the final state, the $CP$-violating angular observable is derived to be two times the product of the time-dependent $CP$ asymmetry in $K\to \pi^+\pi^-$ and the mean value of the angular distribution in $\tau^\pm\to K^0(\bar{K}^0)\pi^\pm\bar{\nu}_\tau(\nu_\tau)$ decays. Compared with the Belle results measured in four different bins of the $K\pi$ invariant mass, our predictions lie within the margins of these measurements, except for a $1.7~\sigma$ deviation for the lowest mass bin. While being below the current Belle detection sensitivity that is of $\mathcal{O}(10^{-3})$, our predictions are expected to be detectable at the Belle II experiment, where $\sqrt{70}$ times more sensitive results will be obtained with a $50~\text{ab}^{-1}$ data sample. | high energy physics phenomenology |
Stochastic gradient methods (SGMs) have been widely used for solving stochastic optimization problems. A majority of existing works assume no constraints or easy-to-project constraints. In this paper, we consider convex stochastic optimization problems with expectation constraints. For these problems, it is often extremely expensive to perform projection onto the feasible set. Several SGMs in the literature can be applied to solve the expectation-constrained stochastic problems. We propose a novel primal-dual type SGM based on the Lagrangian function. Different from existing methods, our method incorporates an adaptiveness technique to speed up convergence. At each iteration, our method queries an unbiased stochastic subgradient of the Lagrangian function, and then it updates the primal variables by an adaptive-SGM step and the dual variables by a vanilla-SGM step. We show that the proposed method has a convergence rate of $O(1/\sqrt{k})$ in terms of the objective error and the constraint violation. Although the convergence rate is the same as that of existing SGMs, we observe its significantly faster convergence than an existing non-adaptive primal-dual SGM and a primal SGM on solving the Neyman-Pearson classification and quadratically constrained quadratic programs. Furthermore, we modify the proposed method to solve convex-concave stochastic minimax problems, for which we perform adaptive-SGM updates to both primal and dual variables. A convergence rate of $O(1/\sqrt{k})$ is also established for the modified method for solving minimax problems in terms of the primal-dual gap. | mathematics
Wireless all-analog biosensor design for concurrent microfluidic and physiological signal monitoring is presented in this work. The key component is an all-analog circuit capable of compressing two analog sources into one analog signal by Analog Joint Source-Channel Coding (AJSCC). Two circuit designs are discussed, including the stacked-Voltage Controlled Voltage Source (VCVS) design with the fixed number of levels, and an improved design, which supports a flexible number of AJSCC levels. Experimental results are presented on the wireless biosensor prototype, composed of Printed Circuit Board (PCB) realizations of the stacked-VCVS design. Furthermore, circuit simulation and wireless link simulation results are presented on the improved design. Results indicate that the proposed wireless biosensor is well suited for sensing two biological signals simultaneously with high accuracy, and can be applied to a wide variety of low-power and low-cost wireless continuous health monitoring applications. | electrical engineering and systems science |
We use the jackknife to bias correct the log-periodogram regression (LPR) estimator of the fractional parameter in a stationary fractionally integrated model. The weights for the jackknife estimator are chosen in such a way that bias reduction is achieved without the usual increase in asymptotic variance, with the estimator viewed as `optimal' in this sense. The theoretical results are valid under both the non-overlapping and moving-block sub-sampling schemes that can be used in the jackknife technique, and do not require the assumption of Gaussianity for the data generating process. A Monte Carlo study explores the finite sample performance of different versions of the jackknife estimator, under a variety of scenarios. The simulation experiments reveal that when the weights are constructed using the parameter values of the true data generating process, a version of the optimal jackknife estimator almost always out-performs alternative semi-parametric bias-corrected estimators. A feasible version of the jackknife estimator, in which the weights are constructed using estimates of the unknown parameters, whilst not dominant overall, is still the least biased estimator in some cases. Even when misspecified short run dynamics are assumed in the construction of the weights, the feasible jackknife still shows significant reduction in bias under certain designs. As is not surprising, parametric maximum likelihood estimation out-performs all semi-parametric methods when the true values of the short memory parameters are known, but is dominated by the semi-parametric methods (in terms of bias) when the short memory parameters need to be estimated, and in particular when the model is misspecified. | statistics
Despite the promises of data-driven artificial intelligence (AI), little is known about how we can bridge the gulf between traditional physician-driven diagnosis and a plausible future of medicine automated by AI. Specifically, how can we involve AI usefully in physicians' diagnosis workflow given that most AI is still nascent and error-prone (e.g., in digital pathology)? To explore this question, we first propose a series of collaborative techniques to engage human pathologists with AI given AI's capabilities and limitations, based on which we prototype Impetus - a tool where an AI takes various degrees of initiatives to provide various forms of assistance to a pathologist in detecting tumors from histological slides. We summarize observations and lessons learned from a study with eight pathologists and discuss recommendations for future work on human-centered medical AI systems. | computer science |
We review the development of generative modeling techniques in machine learning for the purpose of reconstructing real, noisy, many-qubit quantum states. Motivated by its interpretability and utility, we discuss in detail the theory of the restricted Boltzmann machine. We demonstrate its practical use for state reconstruction, starting from a classical thermal distribution of Ising spins, then moving systematically through increasingly complex pure and mixed quantum states. Intended for use on experimental noisy intermediate-scale quantum (NISQ) devices, we review recent efforts in reconstruction of a cold atom wavefunction. Finally, we discuss the outlook for future experimental state reconstruction using machine learning, in the NISQ era and beyond. | quantum physics |
The paper is devoted to the study of pro-solvable Lie algebras whose maximal pro-nilpotent ideal is either $\mathfrak{m}_0$ or $\mathfrak{m}_2$. Namely, we describe such Lie algebras and establish their completeness. Triviality of the second cohomology group for one of the obtained algebras is established. | mathematics
A common-sense perception of a physical system is that it is inseparable from its physical properties. The notion of the Quantum Cheshire Cat challenges this, as far as quantum systems are concerned. It shows that a quantum system can be decoupled from its physical property under suitable pre- and postselections. However, in the Quantum Cheshire Cat setup, the decoupling is not permanent. The photon, for example, and its circular polarization are separated and then recombined. In this paper, we present a thought experiment where we decouple two photons from their respective polarizations and then interchange them during recombination. Thus, our proposal shows that the belongingness of a property to a physical system is very volatile in the quantum world. This raises the question of the reality of an observable at a much deeper level. | quantum physics
In this review, we discuss disruptive decoupled water splitting schemes, in which the concurrent production of hydrogen and oxygen in close proximity to each other in conventional electrolysis is replaced by time- or space-separated hydrogen and oxygen production steps. We present the main decoupling strategies, including electrolytic and electrochemical chemical water splitting cycles, and the redox materials that facilitate them by mediating the ion exchange between the hydrogen and oxygen evolution reactions. Decoupled water splitting offers increased flexibility and robustness and provides new opportunities for hydrogen production from renewable sources. | physics |
Sequence classification is the task of predicting a class label given a sequence of observations. In many applications such as healthcare monitoring or intrusion detection, early classification is crucial to prompt intervention. In this work, we learn sequence classifiers that favour early classification from an evolving observation trace. While many state-of-the-art sequence classifiers are neural networks, and in particular LSTMs, our classifiers take the form of finite state automata and are learned via discrete optimization. Our automata-based classifiers are interpretable---supporting explanation, counterfactual reasoning, and human-in-the-loop modification---and have strong empirical performance. Experiments over a suite of goal recognition and behaviour classification datasets show our learned automata-based classifiers to have comparable test performance to LSTM-based classifiers, with the added advantage of being interpretable. | computer science |
Increasingly large electronic health records (EHRs) provide an opportunity to algorithmically learn medical knowledge. In one prominent example, a causal health knowledge graph could learn relationships between diseases and symptoms and then serve as a diagnostic tool to be refined with additional clinical input. Prior research has demonstrated the ability to construct such a graph from over 270,000 emergency department patient visits. In this work, we describe methods to evaluate a health knowledge graph for robustness. Moving beyond precision and recall, we analyze for which diseases and for which patients the graph is most accurate. We identify sample size and unmeasured confounders as major sources of error in the health knowledge graph. We introduce a method to leverage non-linear functions in building the causal graph to better understand existing model assumptions. Finally, to assess model generalizability, we extend to a larger set of complete patient visits within a hospital system. We conclude with a discussion on how to robustly extract medical knowledge from EHRs. | statistics |
We revisit the problem of performing conformal block decomposition of exchange Witten diagrams in the crossed channel. Using properties of conformal blocks and Witten diagrams, we discover infinitely many linear relations among the crossed channel decomposition coefficients. These relations allow us to formulate a recursive algorithm that solves the decomposition coefficients in terms of certain seed coefficients. In one dimensional CFTs, the seed coefficient is the decomposition coefficient of the double-trace operator with the lowest conformal dimension. In higher dimensions, the seed coefficients are the coefficients of the double-trace operators with the minimal conformal twist. We also discuss the conformal block decomposition of a generic contact Witten diagram with any number of derivatives. As a byproduct of our analysis, we obtain a similar recursive algorithm for decomposing conformal partial waves in the crossed channel. | high energy physics theory |
We investigate the two-dimensional conformal field theories (CFTs) of $c=\frac{47}{2}$, $c=\frac{116}{5}$ and $c=23$ `dual' to the critical Ising model, the three state Potts model and the tensor product of two Ising models, respectively. We argue that these CFTs exhibit moonshines for the double covering of the baby Monster group, $2\cdot \mathbb{B}$, the triple covering of the largest Fischer group, $3\cdot \text{Fi}_{24}'$ and a multiple covering of the second largest Conway group, $2\cdot 2^{1+22} \cdot \text{Co}_2$. Various twined characters are shown to satisfy generalized bilinear relations involving McKay-Thompson series. We also rediscover that the `self-dual' two-dimensional bosonic conformal field theory of $c=12$ has the Conway group $\text{Co}_{0}\simeq2\cdot\text{Co}_1$ as an automorphism group. | high energy physics theory
A heralded single-photon source (HSPS) with competitive single-photon purity and indistinguishability has become an essential resource for photonic quantum information processing. Here, for the first time, we propose a theoretical scheme to enhance heralded single-photon generation by multiplexing the orbital angular momentum (OAM) degree of freedom of down-converted entangled photon pairs emitted from a nonlinear crystal. Experimentally, a proof-of-principle demonstration has been performed by multiplexing three OAM modes. We achieve a 47$\%$ enhancement in the single-photon rate. A second-order autocorrelation function $g^{(2)}(0)<0.5$ ensures that our multiplexed heralded single photons have good single-photon purity. We further indicate that a high-quality OAM-multiplexed HSPS can be constructed by generating higher-dimensional entangled states and sorting them with high efficiency in OAM space. Our approach paves the way toward a near-deterministic HSPS. | quantum physics
Quantum hypothesis testing has shown the advantages that quantum resources can offer in the discrimination of competing hypotheses. Here, we apply this framework to optomechanical systems and fundamental physics questions. In particular, we focus on an optomechanical system composed of two cavities employed to perform quantum channel discrimination. We show that input squeezed optical noise, and feasible measurement schemes on the output cavity modes, allow one to obtain an advantage with respect to any comparable classical scheme. We apply these results to the discrimination of models of spontaneous collapse of the wavefunction, highlighting the possibilities offered by this scheme for fundamental physics searches. | quantum physics
Surface phonon polaritons (SPhPs) in polar dielectrics offer new opportunities for infrared nanophotonics due to sub-diffraction confinement with low optical losses. Though the polaritonic field confinement can be significantly improved by modifying the dielectric environment, it is challenging to break the fundamental limits in photon confinement and propagation behavior of SPhP modes. In particular, as SPhPs inherently propagate isotropically in these bulk polar dielectrics, how to collectively realize ultra-large field confinement, in-plane hyperbolicity and unidirectional propagation remains elusive. Here, we report an approach that solves the aforementioned issues of SPhPs in bulk polar dielectrics at one go, by constructing a heterostructural interface between a biaxial van der Waals material (e.g., MoO3) and a bulk polar dielectric (e.g., SiC, AlN, and GaN). Due to anisotropy-oriented mode couplings at the interface, the hybridized SPhPs with a large confinement factor (>100) show in-plane hyperbolicity that has been switched to the orthogonal direction as compared to that in natural MoO3. More interestingly, this proof of concept allows steerable, angle-dependent and unidirectional polariton excitation by suspending MoO3 on patterned SiC air cavities. Our finding exemplifies a generalizable framework to manipulate the flow of nano-light and engineer unusual polaritonic responses in many other hybrid systems consisting of van der Waals materials and bulk polar dielectrics. | physics
Let $P=A_1\ldots A_n$ be a generic polygon in three-dimensional space and let $v_1,v_2,\ldots,v_n$ be the vectors $\overline{A_1A_2},\overline{A_2A_3},\ldots,\overline{A_nA_1}$, respectively. $P$ will be called \emph{regular} if there exist vectors $u_1,\ldots,u_n$ such that the cross products $[u_1,u_2],[u_2,u_3],\ldots,[u_n,u_1]$ are equal to the vectors $v_2,v_3,\ldots,v_1$, respectively. In this case the polygon $P'$, defined by the vectors $u_2-u_1,u_3-u_2,\ldots,u_1-u_n$, will be called the \emph{derived polygon} or the \emph{derivative} of the polygon $P$. In this work we formulate conditions for regularity and discuss geometric properties of derived polygons for $n=4,5,6$. | mathematics
High-pressure phases of Mg$_2$GeO$_4$ have been explored as an analogue system for the ultra-high pressure behavior of Mg$_2$SiO$_4$. Using a laser-heated diamond anvil cell combined with in situ synchrotron X-ray diffraction, we have identified a novel phase in which germanium adopts eight-fold coordination with oxygen. The cubic Th$_3$P$_4$-type phase was synthesized upon heating Mg$_2$GeO$_4$ to ~1600 K at 162 GPa and it remained stable up to ~275 GPa and 2020 K. While the Th$_3$P$_4$-type phase is commonly observed in chalcogenides, it has not been reported before in any oxide composition. If applicable to silicates, the formation of this highly coordinated and intrinsically disordered phase or a closely related structure as suggested by theoretical calculations would have important implications for the interior mineralogy of large, rocky extrasolar planets. | condensed matter
We present numerical simulations of dust clumping and planetesimal formation initiated by the streaming instability with self-gravity. We examine the variability in the planetesimal formation process by employing simulation domains with large radial and azimuthal extents and a novel approach of re-running otherwise identical simulations with different random initializations of the dust density field. We find that the planetesimal mass distribution and the total mass of dust that is converted to planetesimals can vary substantially between individual small simulations and within the domains of larger simulations. Our results show that the non-linear nature of the developed streaming instability introduces substantial variability in the planetesimal formation process that has not been previously considered, and suggest that larger-scale dynamics may affect the process. | astrophysics
The purpose of this paper is to articulate a coherent and easy-to-understand way of doing quantum mechanics in any finite-dimensional Liouville space, based on the use of Kronecker product and what we have termed the `bra-flipper' operator. One of the greater strengths of the formalism expatiated on here is the striking similarities it bears with Dirac's bra-ket notation. For the purpose of illustrating how the formalism can be effectively employed, we use it to solve a quantum optical master equation for a two-level quantum system and find its Kraus operator sum representation. The paper is addressed to students and researchers with some basic knowledge of linear algebra who want to acquire a deeper understanding of the Liouville space formalism. The concepts are conveyed so as to make the application of the formalism to more complex problems in quantum physics straightforward and unencumbered. | quantum physics |
Interactions between transportation networks and territories are the subject of open scientific debates, in particular regarding the possible existence of structuring effects of networks, and are linked to crucial practical issues of territorial development. We propose an approach to these questions through co-evolution, and more particularly through the modeling of co-evolution processes between transportation networks and territories. We construct a multi-disciplinary definition of co-evolution that is specific to territorial systems and can be tested empirically. We then develop the lessons learnt from the development of two types of models: macroscopic interaction models in systems of cities and mesoscopic morphogenesis models through co-evolution. This research opens the perspective of multi-scale models that could be applied to territorial prospective studies. | physics
We examine thermal Green's functions of fermionic operators in quantum field theories with gravity duals. The calculations are performed on the gravity side using ingoing Eddington-Finkelstein coordinates. We find that at negative imaginary Matsubara frequencies and special values of the wavenumber, there are multiple solutions to the bulk equations of motion that are ingoing at the horizon and thus the boundary Green's function is not uniquely defined. At these points in Fourier space a line of poles and a line of zeros of the correlator intersect. We analyze these `pole-skipping' points in three-dimensional asymptotically anti-de Sitter spacetimes where exact Green's functions are known. We then generalize the procedure to higher-dimensional spacetimes. We also discuss the special case of a fermion with half-integer mass in the BTZ background. We discuss the implications and possible generalizations of the results. | high energy physics theory |
The elliptic curve discrete logarithm problem is considered a secure cryptographic primitive. The purpose of this paper is to propose a paradigm shift in attacking the elliptic curve discrete logarithm problem. In this paper, we will argue that initial minors are a viable way to solve this problem. This paper will present necessary algorithms for this attack. We have written a code to verify the conjecture of initial minors using Schur complements. We were able to solve the problem for groups of order up to $2^{50}$. | computer science |
We present a completely new approach to quantum circuit optimisation, based on the ZX-calculus. We first interpret quantum circuits as ZX-diagrams, which provide a flexible, lower-level language for describing quantum computations graphically. Then, using the rules of the ZX-calculus, we give a simplification strategy for ZX-diagrams based on the two graph transformations of local complementation and pivoting and show that the resulting reduced diagram can be transformed back into a quantum circuit. While little is known about extracting circuits from arbitrary ZX-diagrams, we show that the underlying graph of our simplified ZX-diagram always has a graph-theoretic property called generalised flow, which in turn yields a deterministic circuit extraction procedure. For Clifford circuits, this extraction procedure yields a new normal form that is both asymptotically optimal in size and gives a new, smaller upper bound on gate depth for nearest-neighbour architectures. For Clifford+T and more general circuits, our technique enables us to 'see around' gates that obstruct the Clifford structure and produce smaller circuits than naive 'cut-and-resynthesise' methods. | quantum physics
Heat transfer between two surfaces separated by a nanometre gap is important for a number of applications, ranging from head-disk systems and scanning thermal microscopy to thermal transport in aerogels. At these separation distances, near-field radiative heat transfer competes with heat transfer mediated by phonons. Here we quantify the contribution of phonon-assisted heat transfer between apolar solids using lattice dynamics combined with ab initio calculations. We clearly demonstrate that phonons dominate heat transfer for subnanometre gaps. Strikingly, we conclude that even when the gap is filled with air molecules, phonons provide the dominant energy channel between the two solids nearly in contact. Our results predict phonon heat transfer enhanced by orders of magnitude compared to previous works and bring forward a methodology to analyse phonon transmission across nanoscale vacuum gaps between apolar materials. | condensed matter
Hyperparameters play a critical role in the performance of many machine learning methods. Determining their best settings, or Hyperparameter Optimization (HPO), faces difficulties presented by the large number of hyperparameters as well as the excessive training time. In this paper, we develop an effective approach to reducing the total training time required for HPO. In the initialization, a nested Latin hypercube design is used to select hyperparameter configurations for two types of training: heavy training and light training. We propose a truncated additive Gaussian process model to calibrate the approximate performance measurements generated by light training, using accurate performance measurements generated by heavy training. Based on the model, a sequential model-based algorithm is developed to generate the performance profile of the configuration space as well as find optimal configurations. Our proposed approach demonstrates competitive performance when applied to optimize synthetic examples, support vector machines, fully connected networks and convolutional neural networks. | statistics
For paired comparison experiments involving competing options described by two-level attributes, several different methods of constructing designs with blocked paired observations under the main-effects model are presented. These designs are compared to alternative designs available in the literature. | statistics
We present a general framework encompassing a number of continuous-variable quantum key distribution protocols, including standard one-way protocols, measurement-device-independent protocols as well as some two-way protocols, or any other continuous-variable protocol involving only a Gaussian modulation of coherent states and heterodyne detection. The main interest of this framework is that the corresponding protocols are all covariant with respect to the action of the unitary group $U(n)$, implying that their security can be established thanks to a Gaussian de Finetti reduction. In particular, we give a composable security proof of two-way continuous-variable quantum key distribution against general attacks. We also prove that no active symmetrization procedure is required for these protocols, which would otherwise make them prohibitively costly to implement. | quantum physics |
The use of low- and no-code modeling tools is today an established way in practice to give non-programmers an opportunity to master their digital challenges independently, using the means of model-driven software development. However, the existing tools are limited to a very small number of domains such as mobile app development, which can be attributed to the enormous demands that users place on such a tool today. These demands exceed the mere use of a modeling environment as such and require cross-cutting concerns such as easy access, direct usability and simultaneous collaboration, which result in additional effort in the realization of such tools. Our solution is based on the idea of supporting and simplifying the creation of new domain-specific holistic tools by generating them entirely from a declarative specification with a domain-specific meta-tool. The meta-tool Pyro demonstrated and analyzed here focuses on graph-based graphical languages and fully generates a complete, directly executable tool starting from a meta-model, in order to meet all cross-cutting requirements. | computer science
In this paper, a theoretical journey from the electronic structure to the magnetocaloric effect is presented through the magnetic properties of aluminium-doped yttrium chromate. The ground-state electronic band structure and density of states have been studied using first-principles calculations under GGA+U schemes. From the energy minimization, the ferromagnetic structure is more stable than the antiferromagnetic one. The interaction constant and the magnetic moment have been determined from mean-field theory and DFT, respectively. Monte Carlo simulations under the Metropolis algorithm have been employed to determine the critical temperature ($T_C$), which is nearly the same as the experimental value. The temperature-dependent magnetization shows that these materials exhibit a paramagnetic-to-ferromagnetic phase transition at ~136 K, 130 K, 110 K, and 75 K, respectively. The two inherent properties, the isothermal entropy change ($\Delta S_M$) and the adiabatic temperature change ($\Delta T_{ad}$), have been determined as functions of temperature for different applied magnetic fields to measure the magnetocaloric efficiency of these materials. The relative cooling power (RCP), calculated around $T_C$, decreases from 4.7 J/kg to 2.5 J/kg with decreasing Cr concentration. | condensed matter
We present a new loss function called Distribution-Balanced Loss for the multi-label recognition problems that exhibit long-tailed class distributions. Compared to conventional single-label classification problem, multi-label recognition problems are often more challenging due to two significant issues, namely the co-occurrence of labels and the dominance of negative labels (when treated as multiple binary classification problems). The Distribution-Balanced Loss tackles these issues through two key modifications to the standard binary cross-entropy loss: 1) a new way to re-balance the weights that takes into account the impact caused by label co-occurrence, and 2) a negative tolerant regularization to mitigate the over-suppression of negative labels. Experiments on both Pascal VOC and COCO show that the models trained with this new loss function achieve significant performance gains over existing methods. Code and models are available at: https://github.com/wutong16/DistributionBalancedLoss . | computer science |
We consider a new inflationary model in which an antisymmetric tensor field $A_{\nu\rho\sigma}$ and its four-form field strength $F_{\mu\nu\rho\sigma}=4\partial_{[\mu} A_{\nu\rho\sigma]}$ are coupled to the scalar sector of the standard model and to the Ricci scalar $\mathcal{R}$. The four-form field induces modifications to the Higgs self-coupling constant, the cosmological constant, and the non-minimal coupling constant, which result in a modification of the inflaton potential. We also show that there is no need for the Higgs-gravity coupling in the presence of the four-form-gravity interaction, while the model can still produce the right amount of density perturbation for inflation. | high energy physics phenomenology
Ancillaries have become a major source of revenue and profitability in the travel industry. Yet, conventional pricing strategies are based on business rules that are poorly optimized and do not respond to changing market conditions. This paper describes the dynamic pricing model developed by Deepair solutions, an AI technology provider for travel suppliers. We present a pricing model that provides dynamic pricing recommendations specific to each customer interaction and optimizes expected revenue per customer. The unique nature of personalized pricing provides the opportunity to search over the market space to find the optimal price-point of each ancillary for each customer, without violating customer privacy. In this paper, we present and compare three approaches for dynamic pricing of ancillaries, with increasing levels of sophistication: (1) a two-stage forecasting and optimization model using a logistic mapping function; (2) a two-stage model that uses a deep neural network for forecasting, coupled with a revenue maximization technique using discrete exhaustive search; (3) a single-stage end-to-end deep neural network that recommends the optimal price. We describe the performance of these models based on both offline and online evaluations. We also measure the real-world business impact of these approaches by deploying them in an A/B test on an airline's internet booking website. We show that traditional machine learning techniques outperform human rule-based approaches in an online setting by improving conversion by 36% and revenue per offer by 10%. We also provide results for our offline experiments which show that deep learning algorithms outperform traditional machine learning techniques for this problem. Our end-to-end deep learning model is currently being deployed by the airline in their booking system. | statistics |
Visible and infrared image fusion is one of the most important areas in image processing due to its numerous applications. While much progress has been made in recent years with efforts on developing fusion algorithms, there is a lack of code library and benchmark which can gauge the state-of-the-art. In this paper, after briefly reviewing recent advances of visible and infrared image fusion, we present a visible and infrared image fusion benchmark (VIFB) which consists of 21 image pairs, a code library of 20 fusion algorithms and 13 evaluation metrics. We also carry out large scale experiments within the benchmark to understand the performance of these algorithms. By analyzing qualitative and quantitative results, we identify effective algorithms for robust image fusion and give some observations on the status and future prospects of this field. | computer science |
The forward-backward (FB) asymmetry of $b$ quarks in $e^+e^-$ collisions at the Z pole measured at LEP, $A_{FB}^{0,b} = 0.0992\pm0.0016$, remains today the electroweak precision observable with the largest disagreement (2.4$\sigma$) with respect to the Standard Model prediction, $(A_{FB}^{0,b})_{_{\rm th}} = 0.1030 \pm 0.0002$. Beyond the dominant statistical uncertainties, QCD effects, such as $b$-quark showering and hadronization, are the leading sources of $A_{FB}^{0,b}$ systematic uncertainty, and have not been revised in the last twenty years. We reassess the QCD uncertainties of the eight original $A_{FB}^{0,b}$ LEP measurements, using modern parton shower PYTHIA-8 and PYTHIA-8 + VINCIA simulations with nine different implementations of soft and collinear radiation as well as of parton fragmentation. Our analysis, combined with NNLO massive $b$-quark corrections independently computed recently, indicates total propagated QCD uncertainties of $\sim$0.7% and $\sim$0.3% for the lepton-charge and jet-charge analyses, respectively, that are about a factor of two smaller than those of the original LEP results. Accounting for such updated QCD effects leads to a new $A_{FB}^{0,b} = 0.0996\pm0.0016$ average, with a data-theory tension slightly reduced from 2.4$\sigma$ to 2.1$\sigma$. Confirmation or resolution of this long-term discrepancy requires a new high-luminosity $e^+e^-$ collider collecting orders-of-magnitude more data at the Z pole to significantly reduce the $A_{FB}^{0,b}$ statistical uncertainties. | high energy physics phenomenology |