High-resolution coherent Raman spectroscopic measurements of all three tritium-containing molecular hydrogen isotopologues T$_2$, DT and HT were performed to determine the ground electronic state fundamental Q-branch ($v=0 \rightarrow 1, \Delta J = 0$) transition frequencies at accuracies of $0.0005$ cm$^{-1}$. A more than hundred-fold improvement in accuracy over previous experiments allows comparison with the latest ab initio calculations in the framework of Non-Adiabatic Perturbation Theory (NAPT), including nonrelativistic, relativistic and QED contributions. Excellent agreement is found between experiment and theory, thus providing a verification of the validity of the NAPT framework for these tritiated species. While the transition frequencies were corrected for ac-Stark shifts, the contributions of the non-resonant background as well as quantum interference effects between resonant features in the nonlinear spectroscopy were quantitatively investigated, also leading to corrections to the transition frequencies. Methods of saturated CARS with the observation of Lamb dips, as well as the use of continuous-wave radiation for the Stokes frequency, were explored; these might pave the way for future higher-accuracy CARS measurements.
physics
The development of quantum control methods is an essential task for emerging quantum technologies. In general, the process of optimizing quantum controls scales very unfavorably with system size due to the exponential growth of the Hilbert space dimension. Here, I present a scalable subsystem-based method for quantum optimal control on a large quantum system. The basic idea is to cast the original problem as a robust control problem on subsystems, with a requirement of robustness with respect to inter-subsystem couplings, thus enabling a drastic reduction of the problem size. The method is particularly suitable for target quantum operations that are local, e.g., elementary quantum gates. As an illustrative example, I employ it to tackle the demanding task of pulse searching on a 12-spin coupled system, achieving a substantial reduction of memory and time costs. This work has significant implications for coherent quantum engineering on near-term intermediate-scale quantum devices.
quantum physics
CTR (Click-Through Rate) prediction plays a central role in the domain of computational advertising and recommender systems. Several kinds of methods have been proposed in this field, such as Logistic Regression (LR), Factorization Machines (FM) and deep learning based methods like Wide&Deep, Neural Factorization Machines (NFM) and DeepFM. However, such approaches generally use the vector product of each pair of features, which ignores the different semantic spaces of the feature interactions. In this paper, we propose a novel Tensor-based Feature interaction Network (TFNet) model, which introduces an operating tensor to elaborate feature interactions via multi-slice matrices in multiple semantic spaces. Extensive offline and online experiments show that TFNet: 1) outperforms the competitive methods compared on the typical Criteo and Avazu datasets; 2) achieves large improvements in revenue and click rate in online A/B tests in the largest Chinese App recommender system, Tencent MyApp.
computer science
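As a rough illustration of the multi-slice tensor interaction idea described in the TFNet abstract above (not the authors' implementation; all shapes and names are illustrative), each slice of an operating tensor yields a bilinear interaction score between two feature embeddings in its own semantic space:

```python
import torch

def tensor_interaction(vi, vj, T):
    """Multi-slice bilinear interaction: for each semantic slice s,
    score[b, s] = vi[b]^T @ T[s] @ vj[b].  T has shape (n_slices, d, d)."""
    return torch.einsum("bd,sde,be->bs", vi, T, vj)

d, n_slices, batch = 8, 4, 16
T = torch.randn(n_slices, d, d)                 # learnable operating tensor
vi, vj = torch.randn(batch, d), torch.randn(batch, d)
print(tensor_interaction(vi, vj, T).shape)      # torch.Size([16, 4])
```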
It is known that covalently bonded materials undergo nonthermal structure transformations upon ultrafast excitation of an electronic system, whereas metals exhibit phonon hardening. Here we study how ionic bonds react to electronic excitation. Density-functional molecular dynamics predicts that ionic crystals may also melt nonthermally; however, they melt into an electronically insulating state, in contrast to covalent materials. We demonstrate that the band gap behavior during nonthermal transitions depends on the bonding type: it is harder to collapse the band gap in more ionic compounds, which is illustrated by transformations in Y$_2$O$_3$ vs. NaCl, LiF and KBr.
condensed matter
At second order in perturbation theory, we find the ground states of the $\phi^4$ double-well quantum field theory in 1+1 dimensions. The operators which create these ground states from the free vacuum are constructed explicitly at this order, as is the operator which interpolates between the ground states. As a warm-up we perform the analogous calculation in quantum mechanics, where the true ground state is unique but in perturbation theory there are also two ground states.
high energy physics theory
The Pr-rich end of the alloy series Pr$_{1-x}$Nd$_x$Os$_4$Sb$_{12}$ has been studied using muon spin rotation and relaxation. The end compound PrOs$_4$Sb$_{12}$ is an unconventional heavy-fermion superconductor, which exhibits a spontaneous magnetic field in the superconducting phase associated with broken time-reversal symmetry. No spontaneous field is observed in the Nd-doped alloys for $x > 0.05$. The superfluid density is insensitive to Nd concentration, and no Nd$^{3+}$ static magnetism is found down to the lowest temperatures of measurement. Together with the slow suppression of the superconducting transition temperature with Nd doping, these results suggest anomalously weak coupling between Nd spins and conduction-band states.
condensed matter
This paper proposes sparse and easy-to-interpret proximate factors to approximate statistical latent factors. Latent factors in a large-dimensional factor model can be estimated by principal component analysis (PCA), but are usually hard to interpret. We obtain proximate factors that are easier to interpret by shrinking the PCA factor weights and setting them to zero except for the largest absolute ones. We show that proximate factors constructed with only 5-10% of the data are usually sufficient to almost perfectly replicate the population and PCA factors without actually assuming a sparse structure in the weights or loadings. Using extreme value theory we explain why sparse proximate factors can be substitutes for non-sparse PCA factors. We derive analytical asymptotic bounds for the correlation of appropriately rotated proximate factors with the population factors. These bounds provide guidance on how to construct the proximate factors. In simulations and empirical analyses of financial portfolio and macroeconomic data we illustrate that sparse proximate factors are close substitutes for PCA factors with average correlations of around 97.5% while being interpretable.
statistics
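A minimal numerical sketch of the shrinkage idea described in the abstract above, assuming a plain PCA on a demeaned panel and a hypothetical keep-fraction of 10%; the paper's asymptotic theory and rotation arguments are not reproduced here:

```python
import numpy as np

def proximate_factors(X, n_factors=3, keep_frac=0.10):
    """Sparse proximate factors: keep only the largest-magnitude PCA weights
    (e.g. 5-10% of the entries per factor) and rebuild the factors from them."""
    X = X - X.mean(axis=0)                      # demeaned T x N panel
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_factors].T                        # N x k PCA weight matrix
    k_keep = max(1, int(keep_frac * W.shape[0]))
    W_sparse = np.zeros_like(W)
    for j in range(n_factors):                  # zero all but the largest |weights|
        idx = np.argsort(np.abs(W[:, j]))[-k_keep:]
        W_sparse[idx, j] = W[idx, j]
    return X @ W_sparse, X @ W                  # proximate factors, PCA factors

# toy one-factor panel: common factor plus idiosyncratic noise
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100)) + rng.standard_normal((200, 1))
F_prox, F_pca = proximate_factors(X, n_factors=1)
print(np.corrcoef(F_prox[:, 0], F_pca[:, 0])[0, 1])   # high correlation expected
```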
L\'evy Flights are paradigmatic generalised random walk processes, in which the independent stationary increments---the "jump lengths"---are drawn from an $\alpha$-stable jump length distribution with long-tailed, power-law asymptote. As a result, the variance of L\'evy Flights diverges and the trajectory is characterised by occasional extremely long jumps. Such long jumps significantly decrease the probability of revisiting previous points of visitation, rendering L\'evy Flights efficient search processes in one and two dimensions. To further quantify their properties as random search strategies we here study the first-passage time properties of L\'evy Flights in one-dimensional semi-infinite and bounded domains for symmetric and asymmetric jump length distributions. To obtain the full probability density function of first-passage times for these cases we employ two complementary methods. One approach is based on the space-fractional diffusion equation for the probability density function, from which the survival probability is obtained for different values of the stable index $\alpha$ and the skewness (asymmetry) parameter $\beta$. The other approach is based on the stochastic Langevin equation with $\alpha$-stable driving noise. Both methods have their advantages and disadvantages for explicit calculations and numerical evaluation, and the complementary approach involving both methods will be profitable for concrete applications. We also make use of the Skorokhod theorem for processes with independent increments and demonstrate that the numerical results are in good agreement with the analytical expressions for the probability density function of the first-passage times.
condensed matter
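A minimal simulation sketch of the Langevin route mentioned in the abstract above: an Euler scheme driven by $\alpha$-stable noise, recording first-passage times out of the semi-infinite domain $x>0$. The step size, path counts and the symmetric choice $\beta=0$ are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.stats import levy_stable

def first_passage_times(alpha=1.5, beta=0.0, x0=1.0, dt=1e-2,
                        n_paths=500, n_steps=2000, seed=0):
    """Euler scheme x_{n+1} = x_n + dt**(1/alpha) * xi_n with alpha-stable
    noise xi_n; records the first time each path leaves the domain x > 0."""
    rng = np.random.default_rng(seed)
    xi = levy_stable.rvs(alpha, beta, size=(n_steps, n_paths), random_state=rng)
    x = np.full(n_paths, x0)
    fpt = np.full(n_paths, np.nan)
    for n in range(n_steps):
        x = x + dt ** (1.0 / alpha) * xi[n]
        crossed = np.isnan(fpt) & (x <= 0.0)
        fpt[crossed] = (n + 1) * dt
    return fpt[~np.isnan(fpt)]

times = first_passage_times()
print(f"{times.size} of 500 paths crossed; median first-passage time {np.median(times):.2f}")
```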
The extreme Reissner-Nordstr\"om solution has a discrete conformal isometry that maps the future event horizon to future null infinity and vice versa, the Couch-Torrence inversion isometry. We study the dynamics of a probe Maxwell field on the extreme Reissner-Nordstr\"om solution in light of this symmetry. We present a gauge fixing that is compatible with the inversion symmetry. The gauge fixing allows us to relate the gauge parameter at the future horizon to future null infinity, which further allows us to study global charges for large gauge symmetries in the exterior of the extreme Reissner-Nordstr\"om black hole. Along the way, we construct Newman-Penrose and Aretakis like conserved quantities along future null infinity and the future event horizon, respectively, and relate them via the Couch-Torrence inversion symmetry.
high energy physics theory
In this work we address disentanglement of style and content in speech signals. We propose a fully convolutional variational autoencoder employing two encoders: a content encoder and a style encoder. To foster disentanglement, we propose adversarial contrastive predictive coding. This new disentanglement method needs neither parallel data nor any supervision. We show that the proposed technique is capable of separating speaker and content traits into the two different representations and show competitive speaker-content disentanglement performance compared to other unsupervised approaches. We further demonstrate an increased robustness of the content representation against a train-test mismatch compared to spectral features, when used for phone recognition.
electrical engineering and systems science
The European XFEL is an extremely brilliant Free Electron Laser source with a very demanding pulse structure: trains of 2700 X-ray pulses are repeated at 10 Hz. The pulses inside the train are spaced by 220 ns and each one contains up to $10^{12}$ photons of 12.4 keV, while being $\le 100$ fs in length. AGIPD, the Adaptive Gain Integrating Pixel Detector, is a hybrid pixel detector developed by DESY, PSI, and the Universities of Bonn and Hamburg to cope with these properties. It is a fast, low-noise integrating detector with single-photon sensitivity (for $\text{E}_{\gamma} \ge 6$ keV) and a large dynamic range, up to $10^4$ photons at 12.4 keV. This is achieved with a charge-sensitive amplifier with 3 adaptively selected gains per pixel. 352 images can be recorded at up to 6.5 MHz, stored in the in-pixel analogue memory and read out between pulse trains. The core component of this detector is the AGIPD ASIC, which consists of $64 \times 64$ pixels of $200 {\mu}\text{m} \times 200 {\mu}\text{m}$. Control of the ASIC's image acquisition and analogue readout is via a command-based interface. FPGA-based electronic boards, which control ASIC operation, image digitisation and 10 GE data transmission, interface AGIPD detectors to the DAQ and control systems. An AGIPD 1 Mpixel detector was installed at the SPB experimental station in August 2017, while a second one is currently being commissioned for the MID endstation. A larger (4 Mpixel) AGIPD detector and one employing Hi-Z sensor material to efficiently register photons up to $\text{E}_{\gamma} \approx 25$ keV are currently under construction.
physics
When we are interested in high-dimensional systems and focus on classification performance, $\ell_{1}$-penalized logistic regression becomes important and popular. However, Lasso estimates can be problematic when the penalties of different coefficients are all the same and not related to the data. We propose two types of weighted Lasso estimates, with weights depending on the covariates, derived via the McDiarmid inequality. Given sample size $n$ and dimension of covariates $p$, the finite-sample behavior of our proposed methods with a diverging number of predictors is characterized by non-asymptotic oracle inequalities for the $\ell_{1}$-estimation error and the squared prediction error of the unknown parameters. We compare the performance of our methods with former weighted estimates on simulated data, and then apply these methods to real data analysis.
statistics
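A small sketch of how covariate-dependent penalty weights can be handled in practice, assuming the standard rescaling trick (penalising $\sum_j w_j|\beta_j|$ is equivalent to a plain $\ell_1$ fit on the column-rescaled design). The placeholder weights below are just per-column scales; the paper derives its weights from a McDiarmid-type concentration bound:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_l1_logistic(X, y, weights, C=1.0):
    """Weighted Lasso via rescaling: fit a plain l1 logistic regression on
    X / weights and map the coefficients back to the original scale."""
    clf = LogisticRegression(penalty="l1", C=C, solver="liblinear")
    clf.fit(X / weights, y)
    return clf.coef_.ravel() / weights

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(200) > 0).astype(int)
w = X.std(axis=0)               # placeholder data-dependent weights
beta = weighted_l1_logistic(X, y, w)
print(np.flatnonzero(beta))     # indices of selected covariates
```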
We consider a production-inventory control model with finite capacity and two different production rates, assuming that the cumulative process of customer demand is given by a compound Poisson process. It is possible at any time to switch between the different production rates, but it is mandatory to switch off when the inventory process reaches the maximum storage capacity. We consider holding, production, shortage penalty and switching costs. This model was introduced by Doshi, Van Der Duyn Schouten and Talman in 1978. Our aim is to minimize the expected discounted cumulative costs up to infinity over all admissible switching strategies. We show that the optimal cost functions for the different production rates satisfy the corresponding Hamilton-Jacobi-Bellman system of equations in a viscosity sense and prove a verification theorem. The way in which the optimal cost functions solve the different variational inequalities gives the switching regions of the optimal strategy; hence the strategy is stationary in the sense that it depends only on the current production rate and inventory level. We define the notion of finite band strategies and derive, using scale functions, the formulas for the different costs of band strategies with one or two bands. We also show that there are examples where the switching strategy presented by Doshi et al. is not the optimal strategy.
mathematics
The Atlantic Meridional Overturning Circulation (AMOC) transports substantial amounts of heat into the North Atlantic sector, and hence is of very high importance in regional climate projections. The AMOC has been observed to show multi-stability across a range of models of different complexity. The simplest models find that the bifurcation associated with the AMOC `on' state losing stability is a saddle node. Here we study a physically derived global oceanic model of Wood {\em et al.} with five boxes, that is calibrated to runs of the FAMOUS coupled atmosphere-ocean general circulation model. We find that the loss of stability of the `on' state is due to a subcritical Hopf bifurcation for parameters from both pre-industrial and doubled CO${}_2$ atmospheres. This loss of stability via a subcritical Hopf bifurcation has important consequences for the behaviour of the basin of attraction close to the bifurcation. We consider various time-dependent profiles of freshwater forcing to the system, and find that rate-induced thresholds for tipping can appear, even for perturbations that do not cross the bifurcation. Understanding how such state transitions occur is important in determining allowable safe climate change mitigation pathways to avoid collapse of the AMOC.
physics
This work introduces a sticky-charge wall model as a simple and intuitive representation of charge regulation. Implemented at the mean-field level of description, the model modifies the boundary conditions without affecting the underlying Poisson-Boltzmann (PB) equation of an electrolyte. Employing several modified PB equations, we assess how various structural details of an electrolyte influence charge regulation.
condensed matter
We introduce a general approach for modeling the dynamics of multivariate time series when the data are of mixed type (binary/count/continuous). Our method is quite flexible: conditionally on past values, each coordinate at time $t$ can have a distribution compatible with a standard univariate time series model such as GARCH, ARMA, INGARCH or logistic models, whereas past values of the other coordinates play the role of exogenous covariates in the dynamics. The simultaneous dependence in the multivariate time series can be modeled with a copula. Additional exogenous covariates are also allowed in the dynamics. We first study the usual stability properties of these models and then show that the autoregressive parameters can be consistently estimated equation-by-equation using a pseudo-maximum likelihood method, leading to a fast implementation even when the number of time series is large. Moreover, we prove consistency results when a parametric copula model is fitted to the time series and, in the case of Gaussian copulas, we show that the likelihood estimator of the correlation matrix is strongly consistent. We carefully check all our assumptions for two prototypical examples: a GARCH/INGARCH model and a logistic/log-linear INGARCH model. Our results are illustrated with numerical experiments as well as two real data sets.
statistics
In this article, we consider the problem of periodic homogenization of a Feller process generated by a pseudo-differential operator, the so-called L\'evy-type process. Under the assumptions that the generator has rapidly periodically oscillating coefficients, and that it admits "small jumps" only (that is, the jump kernel has finite second moment), we prove that the appropriately centered and scaled process converges weakly to a Brownian motion with covariance matrix given in terms of the coefficients of the generator. The presented results generalize the classical and well-known results related to periodic homogenization of a diffusion process.
mathematics
Email breaches are commonplace, and they expose a wealth of personal, business, and political data that may have devastating consequences. The current email system allows any attacker who gains access to your email to prove the authenticity of the stolen messages to third parties -- a property arising from a necessary anti-spam / anti-spoofing protocol called DKIM. This exacerbates the problem of email breaches by greatly increasing the potential for attackers to damage the users' reputation, blackmail them, or sell the stolen information to third parties. In this paper, we introduce "non-attributable email", which guarantees that a wide class of adversaries are unable to convince any third party of the authenticity of stolen emails. We formally define non-attributability, and present two practical system proposals -- KeyForge and TimeForge -- that provably achieve non-attributability while maintaining the important protection against spam and spoofing that is currently provided by DKIM. Moreover, we implement KeyForge and demonstrate that the scheme is practical, achieving competitive verification and signing speed while also requiring 42% less bandwidth per email than RSA2048.
computer science
Let $F$ be a finite field with $\operatorname{char} F = p$ and size $|F| = q$. Let $E$ be the unitary infinite-dimensional Grassmann algebra. In this short note, we describe the $\mathbb{Z}_2 \times \mathbb{Z}_2$-graded identities of $E_{k^{*}}\otimes E$, where $E_{k^{*}}$ is the Grassmann algebra with a specific $\mathbb{Z}_{2}$-grading. In the end, we discuss the $\mathbb{Z}_2 \times \mathbb{Z}_2$-graded GK-dimension of $E_{k^*}\otimes E$ in $m$ variables.
mathematics
Five-dimensional braneworld constructions in anti-de Sitter space naturally lead to dark sector scenarios in which parts of the dark sector vanish at high 4d momentum or temperature. In the language of modified gravity, such a feature implies a new mechanism for hiding light scalars, as well as the possibility of UV-completing chameleon-like effective theories. In the language of dark matter phenomenology, the high-energy behaviour of the mediator sector changes dark matter observational complementarity. A multitude of signatures---including exotic ones---are present from laboratory to cosmological scales, including long-range forces with non-integer behaviour, periodic signals at colliders, `soft bomb' events well known from conformal theories, as well as a dark phase transition and a typically small amount of dark radiation.
high energy physics phenomenology
Monte Carlo dropout may effectively capture model uncertainty in deep learning, where a measure of uncertainty is obtained by using multiple instances of dropout at test time. However, Monte Carlo dropout is applied across the whole network and thus significantly increases the computational complexity, proportional to the number of instances. To reduce the computational complexity, at test time we enable dropout layers only near the output of the neural network and reuse the computation from prior layers while keeping, if any, other dropout layers disabled. Additionally, we leverage the side information about the ideal distributions for various input samples to do `error correction' on the predictions. We apply these techniques to the radio frequency (RF) transmitter classification problem and show that the proposed algorithm is able to provide better prediction uncertainty than the simple ensemble average algorithm and can be used to effectively identify transmitters that are not in the training data set while correctly classifying transmitters it has been trained on.
electrical engineering and systems science
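A structural sketch (in PyTorch, with illustrative layer sizes and a hypothetical dropout rate) of the key idea in the abstract above: dropout layers are kept only near the output, so the backbone is computed once and only the small head is resampled at test time:

```python
import torch
import torch.nn as nn

class BackboneThenMCHead(nn.Module):
    """Backbone without dropout, followed by a small head with dropout that
    is resampled several times at test time to estimate uncertainty."""
    def __init__(self, in_dim=256, hidden=128, n_classes=10, p=0.3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Dropout(p), nn.Linear(hidden, n_classes))

    @torch.no_grad()
    def mc_predict(self, x, n_samples=30):
        h = self.backbone(x)                 # expensive part, computed once
        self.head.train()                    # keep dropout active at test time
        probs = torch.stack([self.head(h).softmax(-1) for _ in range(n_samples)])
        return probs.mean(0), probs.std(0)   # predictive mean and spread

model = BackboneThenMCHead()
mean, spread = model.mc_predict(torch.randn(4, 256))
print(mean.shape, spread.shape)
```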
We introduce Topic Grouper as a complementary approach in the field of probabilistic topic modeling. Topic Grouper creates a disjunctive partitioning of the training vocabulary in a stepwise manner such that the resulting partitions represent topics. It is governed by a simple generative model, where the likelihood to generate the training documents via topics is optimized. The algorithm starts with one-word topics and joins two topics at every step. It therefore generates a solution for every desired number of topics, ranging between the size of the training vocabulary and one. The process represents an agglomerative clustering that corresponds to a binary tree of topics. A resulting tree may act as a containment hierarchy, typically with more general topics towards the root of the tree and more specific topics towards the leaves. Topic Grouper is not governed by a background distribution such as the Dirichlet and avoids hyperparameter optimizations. We show that Topic Grouper has reasonable predictive power and also a reasonable theoretical and practical complexity. Topic Grouper can deal well with stop words and function words and tends to push them into their own topics. Also, it can handle topic distributions where some topics are more frequent than others. We present typical examples of computed topics from evaluation datasets, where topics appear conclusive and coherent. In this context, the fact that each word belongs to exactly one topic is not a major limitation; in some scenarios this can even be a genuine advantage, e.g.~a related shopping basket analysis may aid in optimizing groupings of articles in sales catalogs.
computer science
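A toy sketch of the agglomerative structure described above: start with one topic per word and greedily merge the pair of topics with the largest gain under a score. The score below (per-document topic-count concentration) is a simple placeholder, not the paper's actual likelihood, and all names are illustrative:

```python
import numpy as np
from itertools import combinations

def topic_grouper_sketch(counts, n_topics=3):
    """Greedy agglomerative grouping of vocabulary words into disjoint topics.
    counts: documents x vocabulary matrix of word counts."""
    topics = [[w] for w in range(counts.shape[1])]   # start with one-word topics
    doc_len = counts.sum(axis=1, keepdims=True)

    def score(partition):                            # placeholder merge score
        total = 0.0
        for t in partition:
            n_dt = counts[:, t].sum(axis=1, keepdims=True)
            with np.errstate(divide="ignore", invalid="ignore"):
                total += np.nansum(n_dt * np.log(n_dt / doc_len))
        return total

    while len(topics) > n_topics:
        i, j = max(combinations(range(len(topics)), 2),
                   key=lambda p: score([t for k, t in enumerate(topics) if k not in p]
                                       + [topics[p[0]] + topics[p[1]]]))
        topics = [t for k, t in enumerate(topics) if k not in (i, j)] + [topics[i] + topics[j]]
    return topics

rng = np.random.default_rng(0)
# toy corpus: words 0-2 co-occur in the first ten docs, words 3-5 in the last ten
counts = np.vstack([np.hstack([rng.integers(1, 5, (10, 3)), np.zeros((10, 3), int)]),
                    np.hstack([np.zeros((10, 3), int), rng.integers(1, 5, (10, 3))])])
print(topic_grouper_sketch(counts, n_topics=2))      # recovers the two word blocks
```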
We compute the one-loop beta-functions for renormalisable quantum gravity coupled to scalars using the co-ordinate space approach and the generalised Schwinger-DeWitt technique. We resolve apparent contradictions with the corresponding momentum space calculations, and indicate how our results also resolve similar inconsistencies in the fermion case.
high energy physics theory
Isolated millisecond pulsars (IMSPs) are a topic of academic contention, and there are various models to explain their formation. We explore the formation of IMSPs via quark novae (QN). During this formation process, low-mass X-ray binaries (LMXBs) are disrupted when the mass of the neutron star (NS) reaches 1.8 $\rm M_\odot$. Using population synthesis, this work estimates that the Galactic birthrate of QN-produced IMSPs lies between $\sim 9.5\times10^{-6}$ and $\sim1.7\times10^{-4}$ $\rm yr^{-1}$. The uncertainty in our model is due to the QN kick velocity. Furthermore, our findings not only show that QN-produced IMSPs are statistically more significant than those produced by mergers, but also that millisecond pulsar binaries with a high eccentricity may originate from LMXBs that have been involved in, yet not disrupted by, a QN.
astrophysics
In this paper, we investigate the combined effects of the cloud of strings and quintessence on the thermodynamics of a Reissner-Nordstr\"om-de Sitter black hole. Based on the equivalent thermodynamic quantities considering the correlation between the black hole horizon and the cosmological horizon, we extensively discuss the phase transitions of the space-time. Our analysis proves that, similar to the case in AdS space-time, second-order phase transitions can take place under certain conditions, while first-order phase transitions are absent in the charged de Sitter black holes with a cloud of strings and quintessence. The effects of different thermodynamic quantities on the phase transitions are also quantitatively discussed, which provides a new approach to study the thermodynamic properties of unstable dS space-time. Focusing on the entropy force generated by the interaction between the black hole horizon and the cosmological horizon, as well as the Lennard-Jones force between two particles, our results demonstrate a strong degeneracy between the entropy force of the two horizons and the ratio of the horizon positions, which follows a law surprisingly similar to the relation between the Lennard-Jones force and the ratio of the two particle positions. Therefore, the study of the entropy force between the two horizons is not only beneficial to the deep exploration of the three modes of cosmic evolution, but also helpful to understand the correlation between the microstates of particles in black holes and those in ordinary thermodynamic systems.
high energy physics theory
We study the infrared phases of $\mathcal{N}\,=\,1$ QCD$_3$ with gauge group $SU(N_c)$ and with a superpotential containing the mass term and the baryon operator. We restrict ourselves to the cases when the number of flavors is equal to the number of colours, such that the baryon operator is neutral under the global $SU(N_F)$. We also focus on the cases with two, three and four colours, such that the theory is perturbatively renormalizable. On the $SU(2)$ phase diagram we find two distinct CFTs with $\mathcal{N}\,=\,2$ supersymmetry, already known from the phase diagram of the $SU(2)$ theory with one flavor. As for the $SU(3)$ phase diagram, consistency arguments lead us to conjecture that supersymmetry enhancement to $\mathcal{N}\,=\,2$ must occur there as well. Finally, similar arguments lead us to conclude that for the $SU(4)$ case we should also observe supersymmetry enhancement at the fixed point. Yet, we find that this conclusion would be in contradiction with the renormalization group flow analysis. We relate this inconsistency between the two kinds of arguments to the fact that the theory is not UV complete.
high energy physics theory
In this work, we develop Gaussian process regression (GPR) models of hyperelastic material behavior. First, we consider the direct approach of modeling the components of the Cauchy stress tensor as a function of the components of the Finger stretch tensor in a Gaussian process. We then consider an improvement on this approach that embeds rotational invariance of the stress-stretch constitutive relation in the GPR representation. This approach requires fewer training examples and achieves higher accuracy while maintaining invariance to rotations exactly. Finally, we consider an approach that recovers the strain-energy density function and derives the stress tensor from this potential. Although the error of this model for predicting the stress tensor is higher, the strain-energy density is recovered with high accuracy from limited training data. The approaches presented here are examples of physics-informed machine learning. They go beyond purely data-driven approaches by embedding the physical system constraints directly into the Gaussian process representation of materials models.
statistics
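A small illustration of the invariance idea in the abstract above, assuming the simplest way to embed rotational invariance: feed the Gaussian process invariants of the Finger tensor $B = FF^T$ rather than raw components. A known neo-Hookean energy is used only to generate synthetic training data; this is not the paper's model or dataset:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def invariants(F):
    """Rotation-invariant inputs: invariants of the Finger tensor B = F F^T."""
    B = F @ F.T
    I1 = np.trace(B)
    I2 = 0.5 * (np.trace(B) ** 2 - np.trace(B @ B))
    return np.array([I1, I2, np.linalg.det(F)])

def neo_hookean_energy(F, mu=1.0, lam=1.0):
    """Synthetic ground-truth strain-energy density used only for training data."""
    I1, _, J = invariants(F)
    return 0.5 * mu * (I1 - 3.0 - 2.0 * np.log(J)) + 0.5 * lam * np.log(J) ** 2

rng = np.random.default_rng(0)
Fs = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(80)]
X = np.array([invariants(F) for F in Fs])
y = np.array([neo_hookean_energy(F) for F in Fs])
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0] * 3),
                               normalize_y=True).fit(X, y)

R, _ = np.linalg.qr(rng.standard_normal((3, 3)))     # random rotation
if np.linalg.det(R) < 0:
    R[:, 0] *= -1
F_test = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
print(gpr.predict(invariants(F_test)[None, :]),      # identical predictions:
      gpr.predict(invariants(R @ F_test)[None, :]))  # inputs are invariant to R
```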
Heteroepitaxy on nanopatterned substrates is a means of defect reduction at semiconductor heterointerfaces by exploiting substrate compliance and enhanced elastic lattice relaxation resulting from reduced dimensions. We explore this possibility in the InAs/GaAs(111)A system using a combination of nanosphere lithography and reactive ion etching of the GaAs(111)A substrate for nano-patterning of the substrate, yielding pillars with honeycomb and hexagonal arrangements and varied nearest neighbor distances. Substrate patterning is followed by MBE growth of InAs at temperatures of 150-350 °C and growth rates of 0.011 nm/s and 0.11 nm/s. InAs growth in the form of nano-islands on the pillar tops is achieved by lowering the adatom migration length by choosing a low growth temperature of 150 °C at a growth rate of 0.011 nm/s. The choice of a higher growth rate of 0.11 nm/s results in higher InAs island nucleation and the formation of hillocks concentrated at the pillar bases due to a further reduction of the adatom migration length. A common feature of the growth morphology for all other explored conditions is the formation of merged hillocks or pyramids with well-defined facets due to the presence of a concave surface curvature at the pillar bases acting as adatom sinks.
condensed matter
Experimental developments in neutrino telescopes are drastically improving their ability to constrain the annihilation cross-section of dark matter. In this paper, we employ an angular power spectrum analysis method to probe the galactic and extra-galactic dark matter signals with neutrino telescopes. We first derive projections for a next-generation neutrino telescope inspired by KM3NeT. We emphasise that such an analysis is much less sensitive to the choice of dark matter density profile. Remarkably, the projected sensitivity is improved by more than an order of magnitude with respect to the existing limits obtained by assuming the Burkert dark matter density profile describing the galactic halo. Second, we analyse minimal extensions to the Standard Model that will be maximally probed by the next generation of neutrino telescopes. As benchmark scenarios, we consider Dirac dark matter in $s$- and $t$-channel models with vector and scalar mediators. We follow a global approach by examining all relevant complementary experimental constraints. We find that neutrino telescopes will be able to competitively probe significant portions of parameter space. Interestingly, the anomaly-free $L_{\mu}-L_{\tau}$ model can potentially be explored in regions where the relic abundance is achieved through the freeze-out mechanism.
high energy physics phenomenology
We have studied electrical transport as a function of carrier density, temperature and bias in multi-terminal devices consisting of hexagonal boron nitride (h-BN) encapsulated titanium trisulfide (TiS$_3$) sheets. Through the encapsulation with h-BN, we observe metallic behavior and high electron mobilities. Below $\sim$60 K, an increase in the resistance and non-linear transport with plateau-like features in the differential resistance are present, in line with the expected charge density wave (CDW) formation. Importantly, the critical temperature and the threshold field of the CDW phase can be controlled through the back-gate.
condensed matter
We propose a novel algorithm for solving convex, constrained and distributed optimization problems defined on multi-agent networks, where each agent has exclusive access to a part of the global objective function. The agents are able to exchange information over a directed, weighted communication graph, which can be represented as a column-stochastic matrix. The algorithm combines an adjusted push-sum consensus protocol for information diffusion and a gradient descent-ascent on the local cost functions, providing convergence to the optimum of their sum. We provide results on a reformulation of the push-sum into single matrix updates and prove convergence of the proposed algorithm to an optimal solution, given standard assumptions in distributed optimization. The algorithm is applied to a distributed economic dispatch problem, in which the constraints can be expressed in local and global subsets.
electrical engineering and systems science
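For orientation, a minimal sketch of a classic push-sum-based distributed gradient method (subgradient-push style) over a directed graph with a column-stochastic mixing matrix; this is not the paper's primal-dual scheme and handles no constraints, and the graph, local costs and step sizes are illustrative:

```python
import numpy as np

def subgradient_push(A, b, n_iters=2000):
    """Each agent i minimizes f_i(z) = 0.5*(z - b[i])**2, so the optimum of the
    sum is mean(b).  A is column-stochastic: A[i, j] is the weight that agent j
    sends to agent i along the directed edge j -> i."""
    x = np.zeros(len(b))              # push-sum numerators
    y = np.ones(len(b))               # push-sum weights
    for t in range(1, n_iters + 1):
        x, y = A @ x, A @ y           # push-sum mixing step
        z = x / y                     # de-biased local estimates
        x = x - (1.0 / t) * (z - b)   # local gradient step, diminishing step size
    return x / y

# directed ring on 4 agents (each column sums to one)
A = np.array([[0.5, 0.0, 0.0, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])
b = np.array([1.0, 2.0, 3.0, 6.0])
print(subgradient_push(A, b), "target:", b.mean())
```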
We study the capabilities of a muon collider experiment to detect disappearing tracks originating when a heavy and electrically charged long-lived particle decays via $X^+ \to Y^+ Z^0$, where $X^+$ and $Z^0$ are two almost mass-degenerate new states and $Y^+$ is a charged Standard Model particle. The backgrounds induced by the in-flight decays of the muon beams (BIB) can create detector hit combinations that mimic long-lived particle signatures, making the search a daunting task. We design a simple strategy to tame the BIB, based on a detector-hit-level selection exploiting timing information and hit-to-hit correlations, followed by simple requirements on the quality of reconstructed tracks. Our strategy allows us to reduce the number of tracks from BIB to an average of 0.08 per event, enabling a cut-and-count analysis which shows that it is possible to cover weak doublets and triplets with masses close to $\sqrt{s}/2$ in the 0.1-10 ns range. In particular, this implies that a 10 TeV muon collider is able to probe thermal MSSM higgsinos and thermal MSSM winos, thus rivaling the FCC-hh in that respect, and further enlarging the physics program of the muon collider into the territory of WIMP dark matter and long-lived signatures. We also provide parton-to-reconstructed level efficiency maps, allowing an estimation of the coverage of disappearing tracks at muon colliders for arbitrary models.
high energy physics phenomenology
Displaced vertices at colliders, arising from the production and decay of long-lived particles, probe dark matter candidates produced via freeze-in. If one assumes a standard cosmological history, these decays happen inside the detector only if the dark matter is very light because of the relic density constraint. Here, we argue how displaced events could very well point to freeze-in within a non-standard early universe history. Focusing on the cosmology of inflationary reheating, we explore the interplay between the reheating temperature and collider signatures for minimal freeze-in scenarios. Observing displaced events at the LHC would allow us to set an upper bound on the reheating temperature and, in general, to gather indirect information on the early history of the universe.
high energy physics phenomenology
Privacy-preserving machine learning is an active area of research usually relying on techniques such as homomorphic encryption or secure multiparty computation. Novel encryption techniques for performing machine learning using deep neural nets on images have recently been proposed by Tanaka and by Sirichotedumrong, Kinoshita, and Kiya. We present new chosen-plaintext and ciphertext-only attacks against both of these proposed image encryption schemes and demonstrate the attacks' effectiveness on several examples.
computer science
A rotation-vibration line list for the electronic ground state ($\tilde{X}^{1}A_{1}$) of SiH$_2$ is presented. The line list, named CATS, is suitable for temperatures up to 2000 K and covers the wavenumber range 0-10,000 cm$^{-1}$ (wavelengths $>1.0$ $\mu$m) for states with rotational excitation up to $J=52$. Over 310 million transitions between 593 804 energy levels have been computed variationally with a new empirically refined potential energy surface, determined by refining to 75 empirical term values with $J\leq 5$, and a newly computed high-level ab initio dipole moment surface. This is the first comprehensive high-temperature line list to be reported for SiH$_2$ and it is expected to aid the study of silylene in plasma physics, industrial processes and possible astronomical detection. Furthermore, we investigate the phenomenon of rotational energy level clustering in the spectrum of SiH$_2$. The CATS line list is available from the ExoMol database (www.exomol.com) and the CDS database.
physics
We derive two conditional expectation bounds, which we use to simplify cryptographic security proofs. The first bound relates the expectation of a bounded random variable and the average of its conditional expectations with respect to a set of i.i.d. random objects. It shows, under certain conditions, that the conditional expectation average has a small tail probability when the expectation of the random variable is sufficiently large. It is used to simplify the proof that the existence of weakly one-way functions implies the existence of strongly one-way functions. The second bound relaxes the independence requirement on the random objects to give a result that has applications to expander graph constructions in cryptography. It is used to simplify the proof that there is a security preserving reduction from weakly one-way functions to strongly one-way functions. To satisfy the hypothesis for this bound, we prove a hitting property for directed graphs that are expander-permutation hybrids.
mathematics
We present the first experimental demonstration of learned time-domain digital back-propagation (DBP), in 64-GBd dual-polarization 64-QAM signal transmission over 1014 km. Performance gains were comparable to those obtained with conventional, higher complexity, frequency-domain DBP.
electrical engineering and systems science
We study the transverse single-spin asymmetry (TSSA) in the $p^{\uparrow}p \to J/\psi X$ reaction, incorporating both transverse-momentum and spin effects. To predict the production cross section of prompt $J/\psi$ we use two different approaches, the non-relativistic QCD (NRQCD) factorization approach and the Improved Color Evaporation Model (ICEM), and show how the predicted results for TSSAs depend on the choice of hadronization model. For the initial-state factorization we consider two models: the standard Generalized Parton Model (GPM) and the Colour Gauge-Invariant version of it (CGI-GPM). We demonstrate that PHENIX collaboration data on the TSSA in the process $p^{\uparrow}p \to J/\psi X$ constrain the gluon Sivers function of the proton and rule out one of the existing parameterizations. Estimates for the TSSAs in the $p^{\uparrow}p \to J/\psi X$ process for the conditions of the future SPD NICA experiment are presented for the first time.
high energy physics phenomenology
One of the most important reasons for the existence of different types of media (audio or video) file formats is achieving compression and smaller size while preserving quality. For fast transfer of files between devices and networks and for reducing the required storage space, compression has always received attention and effort. In general, the concept of compression can be divided into two classes, lossy and lossless. In the lossy methods, a part of the data is omitted, while in the lossless methods no data is omitted during compression. In this article, a lossless compression method based on the Imperialist Competitive Algorithm is presented. The proposed algorithm tries to achieve a more optimized color-map for the image, so that it increases the compression rate. The simulation results indicate that the proposed algorithm can achieve a compression of 43 percent, showing its superiority compared to other similar methods.
electrical engineering and systems science
We consider some classical and quantum approximate optimization algorithms with bounded depth. First, we define a class of "local" classical optimization algorithms and show that a single step version of these algorithms can achieve the same performance as the single step QAOA on MAX-3-LIN-2. Second, we show that this class of classical algorithms generalizes a class previously considered in the literature, and also that a single step of the classical algorithm will outperform the single-step QAOA on all triangle-free MAX-CUT instances. In fact, for all but $4$ choices of degree, existing single-step classical algorithms already outperform the QAOA on these graphs, while for the remaining $4$ choices we show that the generalization here outperforms it. Finally, we consider the QAOA and provide strong evidence that, for any fixed number of steps, its performance on MAX-3-LIN-2 on bounded degree graphs cannot achieve the same scaling as can be done by a class of "global" classical algorithms. These results suggest that such local classical algorithms are likely to be at least as promising as the QAOA for approximate optimization.
quantum physics
Quantum computing is a disruptive paradigm widely believed to be capable of solving classically intractable problems. However, the route toward full-scale quantum computers is obstructed by immense challenges associated with the scalability of the platform, the connectivity of qubits, and the required fidelity of various components. One-way quantum computing is an appealing approach that shifts the burden from high-fidelity quantum gates and quantum memories to the generation of high-quality entangled resource states and high-fidelity measurements. Cluster states are an important ingredient for one-way quantum computing, and a compact, portable, and mass-producible platform for large-scale cluster states will be essential for the widespread deployment of one-way quantum computing. Here, we bridge two distinct fields---Kerr microcombs and continuous-variable (CV) quantum information---to formulate a one-way quantum computing architecture based on programmable large-scale CV cluster states. The architecture can accommodate hundreds of simultaneously addressable entangled optical modes multiplexed in the frequency domain and an unlimited number of sequentially addressable entangled optical modes in the time domain. One-dimensional, two-dimensional, and three-dimensional CV cluster states can be deterministically produced. We note that cluster states of at least three dimensions are required for fault-tolerant one-way quantum computing with known error-correction strategies. This architecture can be readily implemented with silicon photonics, opening a promising avenue for quantum computing at a large scale.
quantum physics
We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data. Based on this view, we design a deep causal manipulation augmented model (deep CAMA) which explicitly models possible manipulations on certain causes leading to changes in the observed effect. We further develop data augmentation and test-time fine-tuning methods to improve deep CAMA's robustness. When compared with discriminative deep neural networks, our proposed model shows superior robustness against unseen manipulations. As a by-product, our model achieves disentangled representation which separates the representation of manipulations from those of other latent causes.
computer science
Any observation of charged lepton flavor violation (CLFV) implies the existence of new physics beyond the SM in the charged lepton sector. The muon magnetic moment anomaly between the SM prediction and the recent muon $g-2$ precision measurement at Fermilab provides an opportunity to reveal the nature of underlying CLFV. We consider the most general SM gauge-invariant Lagrangian of $\Delta L=0$ bileptons with CLFV couplings to bilinear leptonic fields. The muon $g-2$ anomaly constrains the CLFV coupling $y^{\ell\ell'} (\ell\neq \ell')$, which induces the leptonic transition with $|\Delta(L_\ell-L_{\ell'})|=4$ at tree level. We investigate the implications of the muon magnetic moment anomaly for the CLFV couplings and other relevant constraints from low-energy precision experiments. The projected sensitivities of the future muonium-antimuonium conversion experiment and of colliders to the individual CLFV couplings satisfying these constraints are evaluated.
high energy physics phenomenology
Immersive audio-visual perception relies on the spatial integration of both auditory and visual information which are heterogeneous sensing modalities with different fields of reception and spatial resolution. This study investigates the perceived coherence of audiovisual object events presented either centrally or peripherally with horizontally aligned/misaligned sound. Various object events were selected to represent three acoustic feature classes. Subjective test results in a simulated virtual environment from 18 participants indicate a wider capture region in the periphery, with an outward bias favoring more lateral sounds. Centered stimulus results support previous findings for simpler scenes.
electrical engineering and systems science
The type-I seesaw represents one of the most popular extensions of the Standard Model. Previous studies of this model have mostly focused on its ability to explain neutrino oscillations as well as on the generation of the baryon asymmetry via leptogenesis. Recently, it has been pointed out that the type-I seesaw can also account for the origin of the electroweak scale due to heavy-neutrino threshold corrections to the Higgs potential. In this paper, we show for the first time that all of these features of the type-I seesaw are compatible with each other. Integrating out a set of heavy Majorana neutrinos results in small masses for the Standard Model neutrinos; baryogenesis is accomplished by resonant leptogenesis; and the Higgs mass is entirely induced by heavy-neutrino one-loop diagrams, provided that the tree-level Higgs potential satisfies scale-invariant boundary conditions in the ultraviolet. The viable parameter space is characterized by a heavy-neutrino mass scale roughly in the range $10^{6.5\cdots7.0}$ GeV and a mass splitting among the nearly degenerate heavy-neutrino states up to a few TeV. Our findings have interesting implications for high-energy flavor models and low-energy neutrino observables. We conclude that the type-I seesaw sector might be the root cause behind the masses and cosmological abundances of all known particles. This statement might even extend to dark matter in the presence of a keV-scale sterile neutrino.
high energy physics phenomenology
We study effects of disorder on eigenstates of 1D two-component fermions with infinitely strong Hubbard repulsion. We demonstrate that spin-independent (potential) disorder reduces the problem to one-particle Anderson localization, which takes place at arbitrarily weak disorder. In contrast, a random magnetic field can cause reentrant many-body localization-delocalization transitions. A surprisingly weak magnetic field destroys one-particle localization caused by not-too-strong potential disorder, whereas at much stronger fields the states are many-body localized. We present numerical support of these conclusions.
condensed matter
In this paper we complete the computation of the two-loop master integrals relevant for Higgs plus one jet production initiated in arXiv:1609.06685, arXiv:1907.13156, arXiv:1907.13234. We compute the integrals by defining differential equations along contours in the kinematic space, and by solving them in terms of one-dimensional generalized power series. This method allows for the efficient evaluation of the integrals in all kinematic regions, with high numerical precision. We show the generality of our approach by considering both the top- and the bottom-quark contributions. This work along with arXiv:1609.06685, arXiv:1907.13156, arXiv:1907.13234 provides the full set of master integrals relevant for the NLO corrections to Higgs plus one jet production, and for the real-virtual contributions to the NNLO corrections to inclusive Higgs production in QCD in the full theory.
high energy physics phenomenology
Barium stars are one of the important probes to understand the origin and evolution of slow neutron-capture process elements in the Galaxy. These are extrinsic stars, where the observed s-process element abundances are believed to have an origin in the now invisible companions that produced these elements during their Asymptotic Giant Branch phase of evolution. We have attempted to understand the s-process nucleosynthesis, as well as the physical properties of the companion stars, through a detailed comparison of the observed elemental abundances of 10 barium stars with the predictions from the AGB nucleosynthesis models FRUITY. For these stars, we have presented estimates of abundances of several elements: C, N, O, Na, Al, $\alpha$-elements, Fe-peak elements and the neutron-capture elements Rb, Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm and Eu. The abundance estimates are based on high-resolution spectral analysis. Observations of Rb in four of these stars have allowed us to put a limit on the mass of the companion AGB stars. Our analysis clearly shows that the former companions responsible for the surface abundance peculiarities of these stars are low-mass AGB stars. Kinematic analysis has shown the stars to be members of the Galactic disk population.
astrophysics
We propose a protocol for quantum state tomography of nonclassical states in optomechanical systems. Using a parametric drive, the procedure overcomes the challenges of weak optomechanical coupling, poor detection efficiency, and thermal noise to enable high efficiency homodyne measurement. Our analysis is based on the analytic description of the generalized measurement that is performed when optomechanical position measurement competes with thermal noise and a parametric drive. The proposed experimental procedure is numerically simulated in realistic parameter regimes, which allows us to show that tomographic reconstruction of otherwise unverifiable nonclassical states is made possible.
quantum physics
We diagonalize the second-quantized Hamiltonian of a one-dimensional Bose gas with a nonpoint repulsive interatomic potential and zero boundary conditions. At weak coupling the solutions for the ground-state energy $E_{0}$ and the dispersion law $E(k)$ coincide with the Bogoliubov solutions for a periodic system. In this case, the single-particle density matrix $F_{1}(x,x^{\prime})$ at $T=0$ is close to the solution for a periodic system and, at $T>0$, is significantly different from it. We also find that the wave function $\langle \hat{\psi}(x,t) \rangle$ of the effective condensate is close to a constant $\sqrt{N_{0}/L}$ inside the system and vanishes on the boundaries (here, $N_{0}$ is the number of atoms in the effective condensate, and $L$ is the size of the system). We find a criterion of applicability of the method, according to which the method works for a finite system at very low temperature and with a weak coupling (a weak interaction or a large concentration).
condensed matter
Bayesian parameter inference depends on a choice of prior probability distribution for the parameters in question. The prior which makes the posterior distribution maximally sensitive to data is called the Jeffreys prior, and it is completely determined by the response of the likelihood to changes in parameters. Under the assumption that the likelihood is a Gaussian distribution, the Jeffreys prior is a constant, i.e. flat. However, if one parameter is constrained by physical considerations, the Gaussian approximation fails and the flat prior is no longer the Jeffreys prior. In this paper we compute the correct Jeffreys prior for a multivariate normal distribution constrained in one dimension, and we apply it to the sum of neutrino masses $\Sigma m_\nu$ and the tensor-to-scalar ratio $r$. We find that the one-dimensional marginalised posteriors for these two parameters change considerably and that the 68% and 95% Bayesian upper limits increase by 9% and 4% respectively for $\Sigma m_\nu$ and by 22% and 3% for $r$. Adding the prior to an existing chain can be done as a trivial importance sampling in the final step of the analysis process.
astrophysics
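For reference, the definition the abstract above relies on: the Jeffreys prior is built from the Fisher information of the likelihood, $\pi_J(\theta) \propto \sqrt{\det \mathcal{I}(\theta)}$ with $\mathcal{I}_{ij}(\theta) = -\mathbb{E}\left[\partial^2 \ln \mathcal{L}(x\mid\theta)/\partial\theta_i\,\partial\theta_j\right]$. For a Gaussian likelihood with mean linear in $\theta$ and a $\theta$-independent covariance $C$, the Fisher information is the constant matrix $\mathcal{I} = C^{-1}$, so $\pi_J$ is flat; once a physical boundary (such as $\Sigma m_\nu \ge 0$ or $r \ge 0$) invalidates the Gaussian approximation, $\mathcal{I}(\theta)$ acquires a parameter dependence and the flat prior is no longer the Jeffreys prior, which is the situation analysed above.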
A multipole expansion analysis is applied to 1420 MHz radio continuum images of supernova remnants (SNRs) in order to compare Type Ia and core collapse (CC) SNRs. Because the radio synchrotron emission is produced at the outer shock between the SNR and the ISM, we are investigating whether the ISM interaction of SNRs is different between Type Ia and CC SNRs. This is in contrast to previous investigations, which have shown that Type Ia and CC SNRs have different asymmetries in the X-ray emission from their ejecta. The sample consists of 19 SNRs which have been classified as either Type Ia or CC. The quadrupole and octupole moments normalized to their monopole moments (total emission) are used as a measure of asymmetry of the emission. A broad range (by a factor of ~1000) is found for both quadrupole and octupole normalized moments. The strongest correlation we find is that large quadrupole moments are associated with large octupole moments, indicating that both serve as similar indicators of asymmetry. The other correlation we find is that both moments increase with SNR age or radius. This indicates that interstellar medium structure is a strong contributor to asymmetries in the radio emission from SNRs. This does not seem to apply to molecular clouds, because we find that association of a SNR with a molecular cloud is not correlated with larger quadrupole or octupole moments.
astrophysics
This paper presents a real-time non-probabilistic detection mechanism to detect load-redistribution (LR) attacks against energy management systems (EMSs). Prior studies have shown that certain LR attacks can bypass conventional bad data detectors (BDDs) and remain undetectable, which implies that the presence of a reliable and intelligent detection mechanism to flag LR attacks is imperative. Therefore, in this study a detection mechanism to enhance the existing BDDs is proposed based on fundamental knowledge of the physical laws of the electric grid. A greedy algorithm, which can optimize the core LR attack problems, is presented to enable a fast mechanism to identify the most sensitive locations for critical assets. The main contribution of this detection mechanism is the leveraging of power-systems domain insight to identify an underlying exploitable structure for the core problem of LR attacks, which enables the prediction of the attackers' behavior. An additional contribution is the ability to combine this approach with other detection mechanisms to increase their likelihood of detection. The proposed approach is applied to the 2383-bus Polish test system to demonstrate the scalability of the greedy algorithm, and it solves the attacker's problem more than 10x faster than a traditional linear optimization approach.
electrical engineering and systems science
A planning domain, as any model, is never complete and inevitably makes assumptions about the environment's dynamics. By allowing the specification of just one domain model, the knowledge engineer is only able to make one set of assumptions, and to specify a single objective-goal. Borrowing from work in Software Engineering, we propose a multi-tier framework for planning that allows the specification of different sets of assumptions, and of different corresponding objectives. The framework aims to support the synthesis of adaptive behavior so as to mitigate the intrinsic risk in any planning modeling task. After defining the multi-tier planning task and its solution concept, we show how to solve problem instances by a succinct compilation to a form of non-deterministic planning. In doing so, our technique justifies the applicability of planning with both fair and unfair actions, and the need for more efforts in developing planning systems supporting dual fairness assumptions.
computer science
Macroscopic realism is a classical worldview that a macroscopic system is always determinately in one of the two or more macroscopically distinguishable states available to it, and so is never in a superposition of these states. The question of whether there is a fundamental limitation on the possibility of observing quantum phenomena at the macroscopic scale remains unclear. Here we implement a strict and simple protocol to test macroscopic realism in a light-matter interfaced system. We create a micro-macro entanglement with two macroscopically distinguishable solid-state components and rule out those theories which would deny coherent superpositions of up to 76 atomic excitations shared by $10^{10}$ ions in two separated solids. These results provide a general method to enhance the size of superposition states of atoms by utilizing quantum memory techniques and to push the envelope of macroscopicity to higher levels.
quantum physics
For decades, a lot of work has been devoted to the problem of constructing a non-trivial quantum field theory in four-dimensional space-time. This letter addresses the attempts to construct an algebraic quantum field theory in the framework of non-standard theories like hyperfunction or ultra-hyperfunction quantum field theory. For this purpose, model theories of formally interacting neutral scalar fields are constructed and some of their characteristic properties like two-point functions are discussed. The formal self-couplings are obtained from local normally-ordered analytic redefinitions of the free scalar quantum field, mimicking a non-trivial structure of the resulting Lagrangians and equations of motion.
high energy physics theory
HAGAR is a system of seven Non-imaging Atmospheric Cherenkov Telescopes located at Hanle in the Ladakh region of the Indian Himalayas at an altitude of 4270 meters {\it amsl}. Since 2008, we have observed the Crab Nebula to assess the performance of the HAGAR telescopes. We describe the analysis technique for the estimation of the $\gamma$-ray signal amidst the cosmic-ray background. The consolidated results, spanning nine years of Crab Nebula observations, demonstrate the long-term performance of the HAGAR telescopes. Based on about 219 hours of data, we report the detection of $\gamma$-rays from the Crab Nebula at a significance level of about 20$\sigma$, corresponding to a time-averaged flux of (1.64$\pm$0.09) $\times10^{-10}$ photons cm$^{-2}$ sec$^{-1}$ above 230 GeV. We also perform a detailed study of possible systematic effects in our analysis method on data taken with the HAGAR telescopes.
astrophysics
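As a hedged illustration of the final step in such an on/off counting analysis, the sketch below evaluates the Li & Ma (1983, Eq. 17) significance that is conventionally quoted for atmospheric Cherenkov detections; the counts and exposure ratio are toy numbers, not HAGAR data, and whether HAGAR uses exactly this estimator is an assumption.

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) significance for an on/off counting measurement.

    n_on  : counts in the on-source region
    n_off : counts in the off-source (background) region
    alpha : ratio of on-source to off-source exposure
    """
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

# Toy numbers (not HAGAR's actual counts): a modest gamma-ray excess.
print(li_ma_significance(n_on=1200, n_off=10000, alpha=0.1))  # about 5.8 sigma
```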
We propose a model of a massive spinning particle traveling in four-dimensional Minkowski space. The equations of motion of the particle follow from the fact that all the classical paths of the particle lie on a cylinder whose position in Minkowski space is determined by the particle's linear momentum and total angular momentum. All the paths on one and the same cylinder are gauge equivalent. The equations of motion are found in implicit form for general time-like paths, and they are non-Lagrangian. The explicit equations of motion are found for trajectories with small curvature and for helices. In all cases, the momentum and total angular momentum are expressed in terms of characteristics of the path. The constructed model of the spinning particle has a geometrical character, with no additional variables in the space of spin states being introduced.
high energy physics theory
Visual attention has proven effective in improving the performance of person re-identification. Most existing methods apply visual attention heuristically by learning an additional attention map to re-weight the feature maps for person re-identification. However, such methods inevitably increase model complexity and inference time. In this paper, we propose to incorporate attention learning as additional objectives in a person ReID network without changing the original structure, thus maintaining the same inference time and model size. Two kinds of attention are considered to make the learned feature maps aware of the person and the related body parts, respectively. Globally, a holistic attention branch (HAB) makes the feature maps produced by the backbone focus on persons so as to alleviate the influence of the background. Locally, a partial attention branch (PAB) decouples the extracted features into several groups that are separately responsible for different body parts (i.e., keypoints), thus increasing robustness to pose variation and partial occlusion. These two kinds of attention are universal and can be incorporated into existing ReID networks. We have tested the approach on two typical networks (TriNet and Bag of Tricks) and observed significant performance improvements on five widely used datasets.
computer science
Non-negative Matrix Factorization (NMF) is one of the most popular techniques for data representation and clustering, and has been widely used in machine learning and data analysis. NMF concentrates the features of each sample into a vector and approximates it by a linear combination of basis vectors, so that low-dimensional representations are achieved. However, in real-world applications the features usually have different importances. To exploit the discriminative features, some methods project the samples into a subspace with a transformation matrix, which disturbs the original feature attributes and neglects the diversity of samples. To alleviate these problems, we propose Feature-weighted Non-negative Matrix Factorization (FNMF) in this paper. The salient properties of FNMF are threefold: 1) it learns the weights of features adaptively according to their importance; 2) it utilizes multiple feature-weighting components to preserve diversity; 3) it can be solved efficiently with the suggested optimization algorithm. Experiments on synthetic and real-world datasets demonstrate that the proposed method achieves state-of-the-art performance.
electrical engineering and systems science
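To make the factorization model concrete, here is a minimal sketch of multiplicative-update NMF with a fixed per-feature weight vector. It only illustrates where feature weights enter the objective; FNMF itself learns the weights adaptively and uses multiple weighting components, which this toy version does not attempt.

```python
import numpy as np

def weighted_nmf(V, k, d, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative-update NMF with fixed per-feature weights.

    Minimises || D^(1/2) (V - W H) ||_F^2 for D = diag(d).
    V is (features x samples), W is (features x k), H is (k x samples).
    Note: FNMF learns the weights adaptively; here they are a fixed input.
    """
    rng = np.random.default_rng(seed)
    n_feat, n_samp = V.shape
    W = rng.random((n_feat, k)) + eps
    H = rng.random((k, n_samp)) + eps
    D = d[:, None]                     # broadcastable diagonal weights
    for _ in range(n_iter):
        H *= (W.T @ (D * V)) / (W.T @ (D * (W @ H)) + eps)
        W *= ((D * V) @ H.T) / ((D * (W @ H)) @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(20, 100)))
d = np.linspace(1.0, 2.0, 20)          # up-weight the later features
W, H = weighted_nmf(V, k=5, d=d)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```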
In circular accelerators, crossing the linear coupling resonance induces the exchange of the transverse emittances, provided the process is adiabatic. This has been considered in some previous works, where the description of the phenomenon has been laid down, and, more recently, where a possible explanation of the numerical results has been proposed. In this paper, we introduce a theoretical framework to analyze the crossing process, based on the theory of adiabatic invariance for Hamiltonian mechanics, which explains in detail various features of the emittance exchange process.
physics
In this article we employ the matching method to analytically investigate the properties of holographic superconductors in the framework of Maxwell electrodynamics taking into account the effects of back reaction on spacetime. The relationship between the critical temperature ($T_{c}$) and the charge density ($\rho$) has been obtained first. The influence of back reaction on Meissner like effect in this holographic superconductor is then studied. The results for the critical temperature indicate that the condensation gets harder to form when we include the effect of back reaction. The expression for the critical magnetic field ($B_{c}$) above which the superconducting phase vanishes is next obtained. It is observed from our investigation that the ratio of $B_{c}$ and $T_{c}^{2}$ increases with the increase in the back reaction parameter. However, the critical magnetic field $B_c$ decreases with increase in the back reaction parameter.
high energy physics theory
Quantum batteries harness the unique properties of quantum mechanics to enhance energy storage compared to conventional batteries. In particular, they are predicted to undergo superextensive charging, where batteries with larger capacity actually take less time to charge. Up until now however, they have not been experimentally demonstrated, due to the challenges in quantum coherent control. Here we implement an array of two-level systems coupled to a photonic mode to realise a Dicke quantum battery. Our quantum battery is constructed with a microcavity formed by two dielectric mirrors enclosing a thin film of a fluorescent molecular dye in a polymer matrix. We use ultrafast optical spectroscopy to time resolve the charging dynamics of the quantum battery at femtosecond resolution. We experimentally demonstrate superextensive increases in both charging power and storage capacity, in agreement with our theoretical modelling. We find that decoherence plays an important role in stabilising energy storage, analogous to the role that dissipation plays in photosynthesis. This experimental proof-of-concept is a major milestone towards the practical application of quantum batteries in quantum and conventional devices. Our work opens new opportunities for harnessing collective effects in light-matter coupling for nanoscale energy capture, storage, and transport technologies, including the enhancement of solar cell efficiencies.
quantum physics
This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby generalizing and enhancing current object detection methodology, which typically relies on only local evidence. On the COCO dataset, HoughNet's best model achieves $46.4$ $AP$ (and $65.1$ $AP_{50}$), performing on par with the state-of-the-art in bottom-up object detection and outperforming most major one-stage and two-stage methods. We further validate the effectiveness of our proposal in other visual detection tasks, namely, video object detection, instance segmentation, 3D object detection and keypoint detection for human pose estimation, and an additional ``labels to photo`` image generation task, where the integration of our voting module consistently improves performance in all cases. Code is available at \url{https://github.com/nerminsamet/houghnet}.
computer science
This work presents a simulation framework to generate human micro-Dopplers in WiFi-based passive radar scenarios, wherein we simulate IEEE 802.11g-compliant WiFi transmissions using MATLAB's WLAN toolbox and human animation models derived from a marker-based motion capture system. We integrate the WiFi transmission signals with the human animation data to generate micro-Doppler features that incorporate the diversity of human motion characteristics and the sensor parameters. In this paper, we consider five human activities. We uniformly benchmark the classification performance of multiple machine learning and deep learning models against a common dataset. Further, we validate the classification performance using real radar data captured simultaneously with the motion capture system. We present experimental results using simulations and measurements, demonstrating good classification accuracies of $\geq$ 95\% and $\approx$ 90\%, respectively.
electrical engineering and systems science
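A minimal sketch of the time-frequency step that turns a simulated radar return into a micro-Doppler signature, using a synthetic sinusoidally modulated Doppler tone in place of the WLAN-toolbox and motion-capture pipeline described above; all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

# Toy radar return whose Doppler frequency oscillates sinusoidally,
# as it would for a swinging limb.  Parameters are illustrative only.
fs = 1000.0                              # slow-time sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
f_body, f_md, f_swing = 60.0, 40.0, 1.5  # bulk Doppler, micro-Doppler amplitude, gait rate
phase = 2 * np.pi * f_body * t + (f_md / f_swing) * np.sin(2 * np.pi * f_swing * t)
signal = np.exp(1j * phase) + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Short-time Fourier analysis: the ridges of this map trace the micro-Doppler.
f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=224,
                         return_onesided=False)
print(Sxx.shape)
```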
We present the first systematic study on how distant weak interactions impact the dynamical evolution of merging binary black holes (BBHs) in dense stellar clusters. Recent studies indicate that dense clusters are likely to significantly contribute to the rate of merging BBHs observable through gravitational waves (GWs), and that many of these mergers will appear with notable eccentricities measurable in the LISA and LIGO sensitivity bands. This is highly interesting, as eccentricity can be used to distinguish between different astrophysical merger channels. However, all of these recent studies are based on various Monte Carlo (MC) techniques that only include strong interactions for the dynamical evolution of BBHs, whereas any binary generally undergoes orders-of-magnitude more weak interactions than strong. It is well known that weak interactions primarily lead to a change in the binary's eccentricity, which for BBHs implies that weak interactions can change their GW inspiral time and thereby their merger probability. With this motivation, we perform MC simulations of BBHs evolving in dense clusters under the influence of both weak and strong interactions. We find that including weak interactions leads to a notable increase in the number of BBHs that merge inside their cluster, which correspondingly leads to a higher number of eccentric LISA sources. These preliminary results illustrate the importance of including weak interactions for accurately modeling how BBHs merge in clusters, and how to link their emitted GW signals to their astrophysical environment.
astrophysics
The functions of the Takagi exponential class are similar in construction to the continuous, nowhere differentiable Takagi function described in 1901. They have one real parameter $v\in (-1;1)$ and at points $x\in{\mathbb R}$ are defined by the series $T_v(x) = \sum_{n=0}^\infty v^n T_0(2^nx)$, where $T_0(x)$ is the distance between $x$ and the nearest integer point. If $v=1/2$, then $T_v$ coincides with Takagi's function. In this paper, for different values of the parameter $v$, we study the global extrema of the functions $T_v$, as well as the sets of extreme points. All the functions $T_v$ have period $1$, so they are investigated only on the segment $[0;1]$. The study is based on the properties of consistent and anti-consistent polynomials and series, to which the first half of the work is devoted.
mathematics
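The defining series is straightforward to evaluate numerically. The sketch below sums the first terms of $T_v(x)$; sixty terms suffice at double precision for moderate $|v|$, and the $v=1/2$ case reproduces the classical Takagi function, whose maximum on $[0,1]$ is $2/3$.

```python
import numpy as np

def T0(x):
    """Distance from x to the nearest integer."""
    return np.abs(x - np.round(x))

def T_v(x, v, n_terms=60):
    """Partial sum of T_v(x) = sum_{n>=0} v^n T0(2^n x); since |T0| <= 1/2 and
    |v| < 1, the series converges geometrically, so 60 terms are ample."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for n in range(n_terms):
        total += v**n * T0(2**n * x)
    return total

x = np.linspace(0.0, 1.0, 2001)
print(T_v(x, 0.5).max())   # close to 2/3, the maximum of the Takagi function on [0, 1]
```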
Image dehazing without paired haze-free images is of immense importance, as acquiring paired images often entails significant cost. However, we observe that previous unpaired image dehazing approaches tend to suffer from performance degradation near depth borders, where depth tends to vary abruptly. Hence, we propose to anneal the depth border degradation in unpaired image dehazing with cyclic perceptual-depth supervision. Coupled with the dual-path feature re-using backbones of the generators and discriminators, our model achieves $\mathbf{20.36}$ Peak Signal-to-Noise Ratio (PSNR) on NYU Depth V2 dataset, significantly outperforming its predecessors with reduced Floating Point Operations (FLOPs).
electrical engineering and systems science
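For reference, the PSNR figure quoted in the preceding abstract is computed from the mean squared error between the dehazed output and the ground-truth haze-free image; a minimal sketch (with synthetic arrays standing in for real images) follows.

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.random.default_rng(0).random((256, 256))
noisy = np.clip(clean + 0.02 * np.random.default_rng(1).normal(size=clean.shape), 0, 1)
print(psnr(clean, noisy))   # roughly 34 dB for this noise level
```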
Black hole superradiance is a powerful tool in the search for ultra-light bosons. Constraints on the existence of such particles have been derived from the observation of highly spinning black holes, absence of continuous gravitational-wave signals, and of the associated stochastic background. However, these constraints are only strictly speaking valid in the limit where the boson's interactions can be neglected. In this work we investigate the extent to which the superradiant growth of an ultra-light dark photon can be quenched via scattering processes with ambient electrons. For dark photon masses $m_{\gamma^\prime} \gtrsim 10^{-17}\,{\rm eV}$, and for reasonable values of the ambient electron number density, we find superradiance can be quenched prior to extracting a significant fraction of the black-hole spin. For sufficiently large $m_{\gamma^\prime}$ and small electron number densities, the in-medium suppression of the kinetic mixing can be efficiently removed, and quenching occurs for mixings $\chi_0 \gtrsim \mathcal{O}(10^{-8})$; at low masses, however, in-medium effects strongly inhibit otherwise efficient scattering processes from dissipating energy. Intriguingly, this quenching leads to a time- and energy-oscillating electromagnetic signature, with luminosities potentially extending up to $\sim 10^{57}\,{\rm erg / s}$, suggesting that such events should be detectable with existing telescopes. As a byproduct we also show that superradiance cannot be used to constrain a small mass for the Standard Model photon.
high energy physics phenomenology
Face alignment algorithms locate a set of landmark points in images of faces taken in unrestricted situations. State-of-the-art approaches typically fail or lose accuracy in the presence of occlusions, strong deformations, large pose variations and ambiguous configurations. In this paper we present 3DDE, a robust and efficient face alignment algorithm based on a coarse-to-fine cascade of ensembles of regression trees. It is initialized by robustly fitting a 3D face model to the probability maps produced by a convolutional neural network. With this initialization we address self-occlusions and large face rotations. Further, the regressor implicitly imposes a prior face shape on the solution, addressing occlusions and ambiguous face configurations. Its coarse-to-fine structure tackles the combinatorial explosion of parts deformation. In the experiments performed, 3DDE improves the state-of-the-art in 300W, COFW, AFLW and WFLW data sets. Finally, we perform cross-dataset experiments that reveal the existence of a significant data set bias in these benchmarks.
computer science
In this paper we consider relativistic effects of rotation in the magnetospheres of $\gamma$-ray pulsars. The paper reviews the progress achieved in this field during the last three decades. For this purpose we examine direct centrifugal acceleration of particles and the corresponding limiting factors: constraints due to curvature radiation and the inverse Compton scattering of electrons against soft photons. Based on the obtained results, the generation of parametrically excited Langmuir waves and the corresponding Landau-Langmuir-centrifugal drive is studied.
astrophysics
The position and momentum probability densities of a multidimensional quantum system are fully characterized by means of the radial expectation values $\langle r^\alpha \rangle$ and $\left\langle p^\alpha \right\rangle$, respectively. These quantities, which describe and/or are closely related to various fundamental properties of realistic systems, have not been calculated in an analytical and effective manner up until now, except for a number of three-dimensional hydrogenic states. In this work we give these expectation values explicitly for all discrete stationary $D$-dimensional hydrogenic states in terms of the dimensionality $D$, the strength of the Coulomb potential (i.e., the nuclear charge) and the hyperquantum numbers of the state. Emphasis is placed on the momentum expectation values (mostly unknown, especially those of odd order), which are obtained in a closed compact form. Applications are made to circular, $S$-wave, high-energy (Rydberg) and high-dimensional (pseudo-classical) states of three- and multidimensional hydrogenic atoms. This has been possible because of the analytical algebraic and asymptotic properties of the special functions (orthogonal polynomials, hyperspherical harmonics) which control the states' wavefunctions. Finally, some Heisenberg-like uncertainty inequalities satisfied by these dispersion quantities are also given and discussed.
quantum physics
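As a numerical cross-check of the simplest special case covered by such formulas, the sketch below evaluates $\langle r^\alpha\rangle$ for the three-dimensional hydrogen ground state ($Z=1$, atomic units) by direct quadrature and compares with the exact values $\langle r^{-1}\rangle = 1$, $\langle r\rangle = 3/2$, $\langle r^2\rangle = 3$; the general $D$-dimensional expressions of the paper are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

def r_moment(alpha):
    """<r^alpha> for the 3D hydrogen ground state (Z = 1, atomic units),
    using the radial density 4 r^2 exp(-2r) of the 1s wavefunction."""
    integrand = lambda r: 4.0 * r ** (alpha + 2) * np.exp(-2.0 * r)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for alpha, exact in [(-1, 1.0), (1, 1.5), (2, 3.0)]:
    print(alpha, r_moment(alpha), exact)
```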
How to implement a computational task efficiently is the central problem in quantum computation science. In a quantum circuit, multi-control unitary operations are very important components. We present an extremely efficient approach to implement multiple multi-control unitary operations directly, without any decomposition into CNOT gates and single-photon gates. The number of necessary two-photon operations can be reduced from $\mathcal{O}(n^3)$ with the traditional decomposition approach to $\mathcal{O}(n)$ with the present approach, which greatly relaxes the resource requirements and makes this approach much more feasible for large-scale quantum computation. Moreover, a potential application to ($n$-$k$)-uniform hypergraph state generation is proposed.
quantum physics
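For orientation, the dense matrix of an $n$-control single-qubit unitary is simply the identity with the target unitary placed in the block addressed by all controls being $|1\rangle$; the sketch below constructs it in NumPy. This only illustrates the target operation whose circuit cost is discussed above, not the proposed photonic implementation.

```python
import numpy as np

def multi_controlled(U, n_controls):
    """Dense matrix of an n-control single-qubit gate: apply the 2x2 unitary U
    to the target only when all control qubits are |1>.  With the controls as
    the most-significant bits, this is the identity except for the last 2x2 block."""
    dim = 2 ** (n_controls + 1)
    M = np.eye(dim, dtype=complex)
    M[dim - 2:, dim - 2:] = U
    return M

X = np.array([[0, 1], [1, 0]], dtype=complex)
toffoli = multi_controlled(X, n_controls=2)      # the 8x8 Toffoli (CCX) gate
print(np.allclose(toffoli @ toffoli.conj().T, np.eye(8)))   # unitarity check
```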
Hadron production at low transverse momenta in semi-inclusive deep inelastic scattering can be described by transverse momentum dependent (TMD) factorization. This formalism has also been widely used to study the Drell-Yan process and back-to-back hadron pair production in $e^+e^-$ collisions. These processes are the main ones for extractions of TMD parton distribution functions and TMD fragmentation functions, which encode important information about nucleon structure and hadronization. One of the most widely used TMD factorization formalisms in phenomenology formulates TMD observables in coordinate $b_\perp$-space, the conjugate space of the transverse momentum. The Fourier transform from $b_\perp$-space back into transverse momentum space is complicated by oscillatory integrands and requires a careful and computationally intensive numerical treatment in order to avoid potentially large numerical errors. Within the TMD formalism, the azimuthal angular dependence is integrated analytically and the two-dimensional $b_\perp$ integration reduces to a one-dimensional integration over the magnitude $b_\perp$. In this paper we develop a fast numerical Hankel transform algorithm for this $b_\perp$ integration that improves the numerical accuracy of TMD calculations in all standard processes. Libraries implementing this algorithm are provided for Python 2.7 and 3, C++, and FORTRAN77. All packages are made available open source.
high energy physics phenomenology
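As a slow but transparent baseline for the one-dimensional $b_\perp$ integral discussed above, the sketch below evaluates the order-0 Hankel transform by adaptive quadrature and checks it against the analytic Gaussian pair $e^{-b^2/2} \to e^{-q^2/2}$; the paper's fast algorithm and its library interfaces are not reproduced here.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def hankel0(f_b, q, b_max=50.0):
    """Order-0 Hankel transform F(q) = integral over b from 0 to infinity of
    b * J0(q b) * f(b), evaluated by brute-force adaptive quadrature.  This is
    the slow baseline that fast quadrature schemes are designed to replace."""
    integrand = lambda b: b * j0(q * b) * f_b(b)
    value, _ = quad(integrand, 0.0, b_max, limit=500)
    return value

# Analytic check: the Gaussian exp(-b^2/2) transforms to exp(-q^2/2).
for q in (0.5, 1.0, 3.0):
    print(q, hankel0(lambda b: np.exp(-b * b / 2.0), q), np.exp(-q * q / 2.0))
```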
A black hole is superentropic if it violates the reverse isoperimetric inequality. Recently, some studies have indicated that certain four-dimensional superentropic black holes possess null hypersurface caustics (NHC) outside their event horizons. In this paper, we explore whether the NHC also exists for higher-dimensional superentropic black holes. We consider singly rotating Kerr-AdS superentropic black holes in arbitrary dimensions and the singly rotating charged superentropic black hole in five-dimensional minimal gauged supergravity, and find that the NHC exists outside the event horizons of these superentropic black holes. Furthermore, the spacetime dimension and other black hole parameters, such as the electric charge, have an important impact on the NHC inside the horizon. Our results indicate that when the superentropic black hole has a Cauchy horizon, the NHC also exists inside the Cauchy horizon.
high energy physics theory
The entangled Schrodinger cat state obtained immediately upon measurement of a superposed two-state quantum system is often considered paradoxical because it appears to predict two macroscopically different outcomes, such as an alive and dead cat. However, nonlocal interferometry experiments testing momentum-entangled photon pairs over all phases demonstrate that the cat state does not fit this description and is not paradoxical. Both experiment and theory imply that it instead represents a superposition of two nonlocally coherent (i.e. phase-dependent) statistical correlations between its sub-systems. This is not paradoxical. Standard quantum theory rigorously predicts the experimentally-observed outcomes of this state. Neither sub-system is superposed; rather, the correlations between the states of the subsystems are superposed. This resolves the problem of definite outcomes. The nonlocal properties of entanglement then ensure that only one outcome occurs while the other outcome simultaneously does not occur, resolving a problem posed by Einstein in 1927. The single outcome that occurs then triggers an irreversible process leading to macroscopic registration of the outcome. This resolves the quantum measurement problem. Collapse occurs because of entanglement and does not require a special collapse postulate. Collapse is a consequence of standard quantum physics and the irreversible nature of the macroscopic registration.
quantum physics
Ischemic stroke lesion segmentation from Computed Tomography Perfusion (CTP) images is important for accurate diagnosis of stroke in acute care units. However, it is challenged by low image contrast and resolution of the perfusion parameter maps, in addition to the complex appearance of the lesion. To deal with this problem, we propose a novel framework based on synthesized pseudo Diffusion-Weighted Imaging (DWI) from perfusion parameter maps to obtain better image quality for more accurate segmentation. Our framework consists of three components based on Convolutional Neural Networks (CNNs) and is trained end-to-end. First, a feature extractor is used to obtain both a low-level and high-level compact representation of the raw spatiotemporal Computed Tomography Angiography (CTA) images. Second, a pseudo DWI generator takes as input the concatenation of CTP perfusion parameter maps and our extracted features to obtain the synthesized pseudo DWI. To achieve better synthesis quality, we propose a hybrid loss function that pays more attention to lesion regions and encourages high-level contextual consistency. Finally, we segment the lesion region from the synthesized pseudo DWI, where the segmentation network is based on switchable normalization and channel calibration for better performance. Experimental results showed that our framework achieved the top performance on ISLES 2018 challenge and: 1) our method using synthesized pseudo DWI outperformed methods segmenting the lesion from perfusion parameter maps directly; 2) the feature extractor exploiting additional spatiotemporal CTA images led to better synthesized pseudo DWI quality and higher segmentation accuracy; and 3) the proposed loss functions and network structure improved the pseudo DWI synthesis and lesion segmentation performance.
electrical engineering and systems science
In this paper we investigate the power-suppressed contributions from two-particle and three-particle twist-4 light-cone distribution amplitudes (LCDAs) of the photon within the framework of light-cone sum rules. Compared with the leading-twist LCDA result, the contribution from three-particle twist-4 LCDAs is not suppressed in the $1/Q^2$ expansion, so the power corrections considered in this work can give rise to a sizable contribution, especially in the low-$Q^2$ region. According to our result, the power-suppressed contributions should be included in the determination of the Gegenbauer moments of the pion LCDAs from the pion transition form factor.
high energy physics phenomenology
The recent development of online recommender systems has focused on collaborative ranking from implicit feedback, such as user clicks and purchases. Unlike explicit ratings, which reflect graded user preferences, implicit feedback only generates positive and unobserved labels. While considerable efforts have been made in this direction, the well-known pairwise and listwise approaches are still limited by various challenges. Specifically, for the pairwise approaches, the assumption of independent pairwise preferences does not always hold in practice. Also, the listwise approaches cannot efficiently accommodate "ties" due to the precondition of a full-list permutation. To this end, in this paper we propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to inherently accommodate the characteristics of implicit feedback in recommender systems. Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons and can be implemented with matrix factorization and neural networks. We also present a theoretical analysis of SetRank showing that the bound on the excess risk can be proportional to $\sqrt{M/N}$, where $M$ and $N$ are the numbers of items and users, respectively. Finally, extensive experiments on four real-world datasets clearly validate the superiority of SetRank over various state-of-the-art baselines.
computer science
This paper leverages a graph-to-sequence method in neural text-to-speech (GraphTTS), which maps a graph embedding of the input sequence to spectrograms. The graphical inputs consist of node and edge representations constructed from the input texts. The encoding of these graphical inputs incorporates syntax information through a GNN encoder module. In addition, applying the encoder of GraphTTS as a graph auxiliary encoder (GAE) allows prosody information to be extracted from the semantic structure of texts. This removes the manual selection of reference audios and makes prosody modelling an end-to-end procedure. Experimental analysis shows that GraphTTS outperforms state-of-the-art sequence-to-sequence models by 0.24 in Mean Opinion Score (MOS). GAE can adjust the pauses, ventilation and tones of synthesised audios automatically. These conclusions may offer some inspiration to researchers working on improving speech synthesis prosody.
electrical engineering and systems science
It has become increasingly challenging to distinguish real faces from their visually realistic fake counterparts, owing to the great advances in deep-learning-based face manipulation techniques in recent years. In this paper, we introduce a deep learning method to detect face manipulation. It consists of two stages: feature extraction and binary classification. To better distinguish fake faces from real faces, we resort to the triplet loss function in the first stage. We then design a simple linear classification network to bridge the learned contrastive features with the real/fake faces. Experimental results on public benchmark datasets demonstrate the effectiveness of this method and show that it achieves better performance than state-of-the-art techniques in most cases.
computer science
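The first-stage objective mentioned above is the standard triplet loss; a minimal NumPy sketch is given below (deep-learning frameworks provide equivalent built-ins, such as a triplet margin loss). The embeddings and margin are toy assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: push the anchor-negative distance to exceed the
    anchor-positive distance by at least `margin` in embedding space."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

rng = np.random.default_rng(0)
anchor, positive, negative = rng.normal(size=(3, 32, 128))   # toy batch of 128-d embeddings
print(triplet_loss(anchor, positive, negative))
```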
Both continuous and discontinuous precipitation are known to occur in CuAg alloys. The precipitation of the Ag-rich phase has been investigated experimentally by atom probe tomography and transmission electron microscopy after ageing Cu-5 wt% Ag at 440$^\circ$C for 30 min. Both continuously and discontinuously formed precipitates have been observed. The precipitates located inside the grains exhibit two different faceted shapes: tetrahedral and platelet-shaped precipitates. Dislocations accommodating the high misfit at the interface between the two phases have also been observed. Based on these experimental observations, we examine the thermodynamic effect of these dislocations on the nucleation barrier and show that the peculiar shapes are due to the interfacial anisotropy. An appropriate number of misfit dislocations relaxes the elastic stress and leads to energetically favorable precipitates. However, due to the large misfit between the parent and precipitate phases, the discontinuous precipitation that is often reported for CuAg alloys can be a lower-energy path to transform the supersaturated solid solution. We suggest that the presence of vacancy clusters may assist intragranular nucleation and decrease the nucleation barrier.
condensed matter
The glass transition temperature (Tg) is the temperature below which a supercooled liquid undergoes dynamical arrest. Usually, glass network modifiers (e.g., Na2O) affect the behavior of Tg. However, in aluminosilicate glasses, the effect of different modifiers on Tg is still unclear and shows anomalous behavior. Here, based on molecular dynamics simulations, we show that the glass transition temperature decreases with increasing field strength (FS) of the charge-balancing cations in aluminosilicate glasses, which is anomalous compared to other oxide glasses. The results show that the origin of this anomaly lies in the dynamics of the supercooled liquid above Tg, which in turn is correlated with the pair excess entropy. Our results deepen our understanding of the effect of different modifiers on the properties of aluminosilicate glasses.
condensed matter
Urban morphology and socioeconomic aspects of cities have been explored by analysing urban street networks. To analyse such networks, several variations of centrality indices are often used. However, their nature has not yet been widely studied, leading to an absence of robust methods for visualising the characteristics of urban road networks. To fill this gap, we propose to use a set of local betweenness centralities together with a new, simple and robust visualisation method. By analysing 30 European cities, we find that our method illustrates common structures of the cities: road segments important for long-distance transportation are concentrated along larger streets, while those for short-range transportation form clusters around CBD, historical, or residential districts. Quantitative analysis corroborates these findings. Our findings are useful for urban planners and decision-makers who need to understand the current situation of a city and make informed decisions.
physics
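One plausible, hedged reading of "local betweenness" is ordinary betweenness centrality restricted to a fixed-radius neighbourhood of each node; the NetworkX sketch below contrasts such a short-range score with the global value on a toy grid standing in for a street network. The paper's exact definition and visualisation method are not reproduced.

```python
import networkx as nx

def local_betweenness(G, radius):
    """A simple notion of local betweenness: for each node, compute ordinary
    betweenness centrality restricted to the subgraph of nodes within `radius`
    hops.  This is one plausible reading of a distance-limited centrality set."""
    scores = {}
    for node in G:
        ego = nx.ego_graph(G, node, radius=radius)
        scores[node] = nx.betweenness_centrality(ego)[node]
    return scores

G = nx.grid_2d_graph(20, 20)               # toy stand-in for a street network
short_range = local_betweenness(G, radius=3)
long_range = nx.betweenness_centrality(G)  # global, "long-distance" importance
print(max(short_range.values()), max(long_range.values()))
```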
The aim of this paper is to introduce a new design-of-experiments method for A/B tests that balances the covariate information across all treatment groups. A/B tests (or "A/B/n tests") refer to experiments, and the corresponding inference on the treatment effect(s), of a two-level or multi-level controllable experimental factor. The common practice is to use a randomized design and perform hypothesis tests on the estimates. However, such estimation and inference are not always accurate when covariate imbalance exists among the treatment groups. To overcome this issue, we propose a discrepancy-based criterion and show that the design minimizing this criterion significantly improves the accuracy of the treatment effect estimates. The discrepancy-based criterion is model-free and thus makes the estimation of the treatment effect(s) robust to model assumptions. More importantly, the proposed design is applicable to both continuous and categorical response measurements. We develop two efficient algorithms to construct the designs by optimizing the criterion for both offline and online A/B tests. Through a simulation study and a real example, we show that the proposed design approach achieves good covariate balance and accurate estimation.
statistics
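To illustrate the general idea of covariate-balanced assignment (though not the discrepancy criterion or the algorithms of the paper), the sketch below greedily assigns units to treatment groups so that the running group means stay close to the overall covariate mean; data and group sizes are toy assumptions.

```python
import numpy as np

def greedy_balanced_assignment(X, n_groups):
    """Assign units one by one to the treatment group whose running covariate
    mean would stay closest to the overall mean -- a simple mean-balance
    heuristic, not the paper's discrepancy-based criterion."""
    n, p = X.shape
    overall = X.mean(axis=0)
    sums = np.zeros((n_groups, p))
    counts = np.zeros(n_groups, dtype=int)
    groups = np.empty(n, dtype=int)
    cap = int(np.ceil(n / n_groups))          # keep group sizes roughly equal
    for i in np.random.default_rng(0).permutation(n):
        best, best_score = None, np.inf
        for g in range(n_groups):
            if counts[g] >= cap:
                continue
            new_mean = (sums[g] + X[i]) / (counts[g] + 1)
            score = np.linalg.norm(new_mean - overall)
            if score < best_score:
                best, best_score = g, score
        groups[i] = best
        sums[best] += X[i]
        counts[best] += 1
    return groups

X = np.random.default_rng(1).normal(size=(300, 4))
g = greedy_balanced_assignment(X, n_groups=3)
for k in range(3):
    print(k, np.round(X[g == k].mean(axis=0), 3))   # near-identical group means
```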
The star formation activity of the host galaxies of active galactic nuclei (AGNs) provides valuable insights into the complex interconnections between black hole growth and galaxy evolution. A major obstacle arises from the difficulty of estimating accurate star formation rates in the presence of a strong AGN. Analyzing the $1-500\, \mu m$ spectral energy distributions and high-resolution mid-infrared spectra of low-redshift ($z < 0.5$) Palomar-Green quasars with bolometric luminosity $\sim 10^{44.5}-10^{47.5}\rm\,erg\,s^{-1}$, we find, from comparison with an independent star formation rate indicator based on [Ne II] 12.81$\, \mu m$ and [Ne III] 15.56$\, \mu m$, that the torus-subtracted, total infrared ($8-1000\, \mu m$) emission yields robust star formation rates in the range $\sim 1-250\,M_\odot\,{\rm yr^{-1}}$. Combined with available stellar mass estimates, the vast majority ($\sim 75\%-90\%$) of the quasars lie on or above the main sequence of local star-forming galaxies, including a significant fraction ($\sim 50\%-70\%$) that would qualify as starburst systems. This is further supported by the high star formation efficiencies derived from the gas content inferred from the dust masses. Inspection of high-resolution Hubble Space Telescope images reveals a wide diversity of morphological types, including a number of starbursting hosts that have not experienced significant recent dynamical perturbations. The origin of the high star formation efficiency is unknown.
astrophysics
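For context on the infrared star formation rates quoted above, a common calibration is Kennicutt (1998), ${\rm SFR}\,[M_\odot\,{\rm yr^{-1}}] = 4.5\times10^{-44}\,L_{\rm IR}\,[{\rm erg\,s^{-1}}]$, applied to the torus-subtracted $8-1000\,\mu m$ luminosity; whether this exact calibration is used in the paper is an assumption. A short sketch:

```python
# Kennicutt (1998) calibration relating total (8-1000 micron) infrared
# luminosity to star formation rate.  It is assumed here purely for illustration.
L_SUN_ERG_S = 3.828e33          # IAU nominal solar luminosity in erg/s

def sfr_from_lir(l_ir_solar):
    """SFR in solar masses per year from L_IR given in solar luminosities."""
    return 4.5e-44 * l_ir_solar * L_SUN_ERG_S

for l_ir in (1e10, 1e11, 1e12):
    print(f"L_IR = {l_ir:.0e} L_sun  ->  SFR = {sfr_from_lir(l_ir):.1f} M_sun/yr")
```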
In this paper, we give a comparison version of the Pythagorean theorem for judging lower or upper bounds on the curvature of Alexandrov spaces (including Riemannian manifolds).
mathematics
Cosmological evolution and particle creation in $R^2$-modified gravity are considered for the case in which the dominant decay of the scalaron is into a pair of gauge bosons due to the conformal anomaly. It is shown that in the process of thermalization, superheavy dark matter with a coupling strength typical of GUT SUSY can be created. Such dark matter would have the proper cosmological density if the particle mass is close to $10^{12}$ GeV.
high energy physics phenomenology
We compare light-front quantization and instant-time quantization both at the level of operators and at the level of their Feynman diagram matrix elements. At the level of operators light-front quantization and instant-time quantization lead to equal light-front time commutation (or anticommutation) relations that appear to be quite different from equal instant-time commutation (or anticommutation) relations. Despite this we show that at unequal times instant-time and light-front commutation (or anticommutation) relations actually can be transformed into each other, with it only being the restriction to equal times that makes the commutation (or anticommutation) relations appear to be so different. While our results are valid for both bosons and fermions, for fermions there are subtleties associated with tip of the light cone contributions that need to be taken care of. At the level of Feynman diagrams we show for non-vacuum Feynman diagrams that the pole terms in four-dimensional light-front Feynman diagrams reproduce the three-dimensional light-front on-shell Hamiltonian Fock space formulation in which the light-front energy and light-front momentum are on shell. However, because of circle at infinity contributions we show that this equivalence fails for four-dimensional light-front vacuum tadpole diagrams. Then, precisely because of these circle at infinity contributions, light-front vacuum tadpole diagrams are not only nonzero, they are actually equal to instant-time vacuum tadpole diagrams. Light-front vacuum diagrams are not correctly describable by the on-shell Hamiltonian formalism, and thus not by the closely related infinite momentum frame prescription either. With the transformation from instant-time fields to light-front fields being a spacetime translation, not only are instant-time quantization and light-front quantization equivalent, they are unitarily equivalent.
high energy physics phenomenology
Let g be a G-invariant Einstein metric on a compact homogeneous space M=G/K. We use a formula for the Lichnerowicz Laplacian of g at G-invariant TT-tensors to study the stability type of g as a critical point of the scalar curvature function. The case when g is naturally reductive is studied in special detail.
mathematics
The multiplicity properties of massive stars are one of the important outstanding issues in stellar evolution. Quantifying the binary statistics of all evolutionary phases is essential to paint a complete picture of how and when massive stars interact with their companions, and to determine the consequences of these interactions. We investigate the multiplicity of an almost complete census of red supergiant stars (RSGs) in NGC 330, a young massive cluster in the SMC. Using a combination of multi-epoch HARPS and MUSE spectroscopy, we estimate radial velocities and assess the kinematic and multiplicity properties of 15 RSGs in NGC 330. Radial velocities are estimated to better than $\pm$100 m/s for the HARPS data. The line-of-sight velocity dispersion for the cluster is estimated as $3.20^{+0.69}_{-0.52}$ km/s. When virial equilibrium is assumed, the dynamical mass of the cluster is $\log (M_{\rm dyn}/M_\odot) = 5.20\pm0.17$, in good agreement with previous upper limits. We detect significant radial velocity variability in our multi-epoch observations and distinguish between variations caused by atmospheric activity and those caused by binarity. The binary fraction of NGC 330 RSGs is estimated by comparisons with simulated observations of systems with a range of input binary fractions. In this way, we account for observational biases and estimate the intrinsic binary fraction for RSGs in NGC 330 as $f_{\rm RSG} = 0.3\pm0.1$ for orbital periods in the range $2.3 < \log P\,{\rm [days]} < 4.3$, with $q > 0.1$. Using the distribution of the luminosities of the RSG population, we estimate the age of NGC 330 to be $45\pm5$ Myr and estimate a red straggler fraction of 50%. We estimate the binary fraction of RSGs in NGC 330 and conclude that it appears to be lower than that of main-sequence massive stars, which is expected because interactions between an RSG and a companion are assumed to effectively strip the RSG envelope.
astrophysics
We employ the rank-2 {\em contour} Minkowski Tensor in two dimensions to probe length and time scales of ionized bubbles during the epoch of reionization. We demonstrate that the eigenvalues of this tensor provide excellent probes of the distribution of the sizes of ionized bubbles, and from it the characteristic bubble sizes, at different redshifts. We show that ionized bubbles are not circular, and hence not spherical in three dimensions, as is often assumed for simplified analytic arguments. We quantify their shape anisotropy by using the ratio of the two eigenvalues. The shape parameter provides the characteristic time epochs when bubble mergers begin and end. Our method will be very useful to reconstruct the reionization history using data of the brightness temperature field.
astrophysics
In order to drive effectively, a driver must be aware of how other vehicles' behaviour can be expected to be affected by their decisions, and also of how they are expected to behave by other drivers. One common family of methods for addressing this problem of interaction is that based on Game Theory. Such approaches often make assumptions about leaders and followers in an interaction, which can result in conflicts when vehicles do not agree on the hierarchy, leading to sub-optimal behaviour. In this work we define a measure of the incidence of conflicts, the Area of Conflict (AoC), for a given interactive decision-making model. Furthermore, we propose a novel decision-making method that reduces this value compared to an existing approach for incorporating altruistic behaviour. We verify our theoretical analysis empirically using a simulated lane-change scenario.
computer science
We consider a D5-brane solution in an AdS black hole spacetime. This is a defect solution moving in a subspace of $AdS_5 \times S^5$. This non-local object is realized by a probe D5-brane moving in the black hole spacetime. We find that this probe brane does not penetrate the black hole horizon. We also find that the solution does not depend on the motion in the $S^5$ subspace.
high energy physics theory
Research on two-dimensional materials has expanded over the past two decades to become a central theme in condensed matter research today. Significant advances have been made in the synthesis and subsequent reassembly of these materials using mechanical methods into a vast array of hybrid structures with novel properties and ever-increasing potential applications. The key hurdles in realizing this potential are the challenges in controlling the atomic structure of these layered hybrid materials and the difficulties in harnessing their unique functionality with existing semiconductor nanofabrication techniques. Here we report on high-quality van der Waals epitaxial growth and characterization of a layered topological insulator on freestanding monolayer graphene transferred to different mechanical supports. This templated synthesis approach enables direct interrogation of interfacial atomic structure of these as-grown hybrid structures and opens a route towards creating device structures with more traditional semiconductor nanofabrication techniques.
condensed matter
How to contain the spread of the COVID-19 virus is a major concern for most countries. As the situation continues to change, various countries are making efforts to reopen their economies by lifting some restrictions and enforcing new measures to prevent the spread. In this work, we review some approaches that have been adopted to contain the COVID-19 virus, such as contact tracing, cluster identification, movement restrictions, and status validation. Specifically, we classify the available techniques based on characteristics such as technology, architecture, trade-offs (privacy vs. utility), and phase of adoption. We present a novel approach for evaluating privacy, using both qualitative and quantitative measures of the privacy-utility trade-off of contact tracing applications. In this new method, we classify utility at three distinct levels: no privacy, 100% privacy, and a level k, where k is set by the system providing the utility or privacy.
computer science
HAWC has developed new energy algorithms using an artificial neural network for event-by-event reconstruction of Very High Energy (VHE) primary gamma ray energies. Unlike previous estimation methods for HAWC photons, these estimate photon energies with good energy precision and accuracy in a range from 1 TeV to greater than 100 TeV. Photon emission at the highest energies is of interest in understanding acceleration mechanisms of astrophysical sources and where the acceleration might cut off. We apply the new HAWC reconstruction to present the preliminary measurement of the highest energies at which photons are emitted by the Crab Nebula and by six additional sources in the galactic plane which emit above 50 TeV. We have observed photons above 200 TeV at 95% confidence. We also compare fits to the HAWC Crab spectrum with other measurements and theoretical models of the Crab spectrum.
astrophysics
We investigate the combined effects of boundaries and topology on the vacuum expectation values (VEVs) of the charge and current densities for a massive 2D fermionic field confined on a conical ring threaded by a magnetic flux. Different types of boundary conditions on the ring edges are considered for fields realizing two inequivalent irreducible representations of the Clifford algebra. The related bound states and zero energy fermionic modes are discussed. The edge contributions to the VEVs of the charge and azimuthal current densities are explicitly extracted and their behavior in various asymptotic limits is considered. On the ring edges the azimuthal current density is equal to the charge density or has an opposite sign. We show that the absolute values of the charge and current densities increase with increasing planar angle deficit. Depending on the boundary conditions, the VEVs are continuous or discontinuous at half-integer values of the ratio of the effective magnetic flux to the flux quantum. The discontinuity is related to the presence of the zero energy mode. By combining the results for the fields realizing the irreducible representations of the Clifford algebra, the charge and current densities are studied in parity and time-reversal symmetric fermionic models. If the boundary conditions and the phases in quasiperiodicity conditions for separate fields are the same the total charge density vanishes. Applications are given to graphitic cones with edges (conical ribbons).
high energy physics theory
We analyze a cavity optomechanical setup in which the position of an oscillator modulates the optical loss. We show that in such a setup quantum-limited position measurements can be performed if the external cavity coupling rate matches the optical loss rate, a condition known as "critical coupling". Additionally, under this condition the setup exhibits a number of potential benefits for practical operation, including the complete absence of dynamical backaction, and hence of optomechanical instability, and the rejection of classical laser noise and thermal fluctuations of the cavity frequency from the measurement record. We propose two implementations of this scheme: one based on a signal-recycled Michelson-type interferometer and the other on a tilted membrane inside a Fabry-Perot cavity.
quantum physics
Renormalization group techniques are widely used in modern physics to describe the low-energy relevant aspects of systems involving a large number of degrees of freedom. These techniques are thus expected to be a powerful tool for addressing open issues in data analysis when the data sets are also very large. Signal detection and recognition for covariance matrices having nearly continuous spectra is currently one of these open issues. First investigations in this direction have been proposed in Journal of Statistical Physics, 167, Issue 3-4, pp 462-475 (2017) and arXiv:2002.10574, based on an analogy between coarse-graining and principal component analysis (PCA), which regards the separation of sampling noise modes as a UV cut-off for the small eigenvalues of the covariance matrix. The field-theoretical framework proposed in this paper is a synthesis of these complementary points of view, aiming to be a general and operational framework both for theoretical investigations and for experimental detection. Our investigations focus on signal detection, and we exhibit experimental evidence in favor of a connection between symmetry breaking and the existence of an intrinsic detection threshold.
high energy physics theory
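A standard random-matrix baseline for the detection threshold discussed above is the Marchenko-Pastur upper edge of a pure-noise covariance spectrum: eigenvalues above it are candidate signal modes. The sketch below checks this on synthetic data with a single weak spike; it is only a conventional benchmark, not the field-theoretic construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 2000, 400, 1.0
gamma = p / n

# Pure-noise data: covariance eigenvalues should stay (approximately) below the
# Marchenko-Pastur upper edge, which acts as a detection threshold.
X = sigma * rng.normal(size=(n, p))
eig_noise = np.linalg.eigvalsh(X.T @ X / n)
mp_edge = sigma**2 * (1 + np.sqrt(gamma)) ** 2
print(eig_noise.max(), mp_edge)

# Add a weak rank-one signal along a fixed direction; its eigenvalue should
# now pop above the edge once the spike is strong enough.
u = rng.normal(size=p)
u /= np.linalg.norm(u)
X_sig = X + 1.5 * rng.normal(size=(n, 1)) * u   # spike of variance 1.5^2
eig_sig = np.linalg.eigvalsh(X_sig.T @ X_sig / n)
print(eig_sig.max(), mp_edge)
```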