The Matrix Element Method is a promising multi-variate analysis tool which offers an optimal approach to compare theory and experiment according to the Neyman-Pearson lemma. However, until recently its usage has been limited by the fact that only leading-order predictions could be employed. The imperfect approximation of the underlying probability distribution can introduce a significant bias into the analysis, which requires a major calibration of the method when applied to parameter determination. Moreover, estimating theoretical uncertainties by scale variation may yield unreliable results. We present the extension of the Matrix Element Method to next-to-leading order in QCD, applicable to LHC data defined by common jet algorithms. The accuracy gain is illustrated by simulating a top-quark mass determination from single top-quark events generated with POWHEG+PYTHIA. Additionally, the method's potential for BSM parameter determination is demonstrated by simulating the extraction of a CP-violating top-Yukawa coupling from single top-quark events produced in association with a Higgs boson.
high energy physics phenomenology
The leading account of several salient observable features of our universe today is provided by the theory of cosmic inflation. But an important and thus far intractable question is whether inflation is generic, or whether it is finely tuned---requiring very precisely specified initial conditions. In this paper I argue that a recent, model-independent characterization of inflation, known as the 'effective field theory (EFT) of inflation', promises to address this question in a thoroughly modern and significantly more comprehensive way than in the existing literature. To motivate and provide context for this claim, I distill three core problems with the theory of inflation, which I dub the permissiveness problem, the initial conditions problem, and the multiverse problem. I argue that the initial conditions problem lies within the scope of EFTs of inflation as they are currently conceived, whereas the other two problems remain largely intractable: their solution must await a more complete description of the very early universe. I highlight recent work that addresses the initial conditions problem within the context of a dynamical systems analysis of a specific (state-of-the-art) EFT of inflation, and conclude with a roadmap for how such work might be extended to realize the promise claimed above.
physics
Negative screening is one method to avoid interactions with inappropriate entities. For example, financial institutions keep investment exclusion lists of inappropriate firms that have environmental, social, and governance (ESG) problems. They create their investment exclusion lists by gathering information from various news sources to keep their portfolios profitable as well as green. International organizations also maintain smart sanctions lists that are used to prohibit trade with entities that are involved in illegal activities. In the present paper, we focus on the prediction of investment exclusion lists in the finance domain. We construct a vast heterogeneous information network that covers the necessary information surrounding each firm, assembled from seven professionally curated datasets and two open datasets, resulting in approximately 50 million nodes and 400 million edges in total. Exploiting these vast datasets, and motivated by how professional investigators and journalists undertake their daily investigations, we propose a model that can learn to predict firms that are more likely to be added to an investment exclusion list in the near future. Our approach is tested using the negative news investment exclusion list data of more than 35,000 firms worldwide from January 2012 to May 2018. Compared with state-of-the-art methods with and without using the network, we show that the predictive accuracy is substantially improved when using the vast information stored in the heterogeneous information network. This work suggests new ways to consolidate the diffuse information contained in big data to monitor dominant firms on a global scale for better risk management and more socially responsible investment.
computer science
In this article we develop a tractable procedure for testing strict stationarity in a double autoregressive model and formulate the problem as testing whether the top Lyapunov exponent is negative. Without the strict stationarity assumption, we construct a consistent estimator of the associated top Lyapunov exponent and employ a random weighting approach for its variance estimation, which are in turn used in a t-type test. We also propose a GLAD estimation for parameters of interest, relaxing key assumptions on the commonly used QMLE. All estimators, except for the intercept, are shown to be consistent and asymptotically normal in both stationary and explosive situations. The finite-sample performance of the proposed procedures is evaluated via Monte Carlo simulation studies, and a real dataset of interest rates is analyzed.
mathematics
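For the first-order double autoregressive model $y_t=\phi y_{t-1}+\eta_t\sqrt{\omega+\alpha y_{t-1}^2}$ with standard normal innovations, the top Lyapunov exponent reduces to $\gamma=E\log|\phi+\sqrt{\alpha}\,\eta|$, and strict stationarity corresponds to $\gamma<0$. Below is a minimal Monte Carlo sketch of that sign check; unlike the paper's estimator, which works from data without assuming stationarity, this toy version assumes the parameters are known.

```python
import numpy as np

rng = np.random.default_rng(0)

def lyapunov_dar1(phi, alpha, n=200_000):
    """Monte Carlo estimate of gamma = E log|phi + sqrt(alpha) * eta|,
    eta ~ N(0, 1), whose sign decides strict stationarity of the DAR(1)
    model y_t = phi * y_{t-1} + eta_t * sqrt(omega + alpha * y_{t-1}^2)."""
    eta = rng.standard_normal(n)
    return np.mean(np.log(np.abs(phi + np.sqrt(alpha) * eta)))

# gamma < 0 means strictly stationary even though |phi| may reach 1
print(lyapunov_dar1(phi=1.0, alpha=0.5))   # negative -> stationary
print(lyapunov_dar1(phi=1.5, alpha=0.5))   # positive -> explosive
```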
In this work we provide a triple master action interpolating among three self-dual descriptions of massive spin-3/2 particles in $D=2+1$ dimensions. This result generalizes a master action previously suggested in the literature. We also show that, surprisingly, a shorthand notation in terms of differential operators, applied in the bosonic cases of spins 2 and 3, can also be defined for the fermionic case. With the help of projection operators, we have also obtained the propagator and analyzed unitarity in $D$ dimensions of a second order spin-3/2 doublet model. Once we demonstrate that this doublet model is free of ghosts, we provide a master action interpolating between this model and a fourth order theory which has several similarities with the spin-2 linearized New Massive Gravity theory.
high energy physics theory
A software vulnerability could be exploited without any visible symptoms. When no source code is available, although such silent program executions could cause very serious damage, the general problem of analyzing silent yet harmful executions is still an open problem. In this work, we propose a graph neural network (GNN) assisted data flow analysis method for spotting silent buffer overflows in execution traces. The new method combines a novel graph structure (denoted DFG+) beyond data-flow graphs, a tool to extract {\tt DFG+} from execution traces, and a modified Relational Graph Convolutional Network as the GNN model to be trained. The evaluation results show that a well-trained model can be used to analyze vulnerabilities in execution traces (of previously-unseen programs) without the support of any source code. Our model achieves 94.39\% accuracy on the test data and successfully locates 29 out of 30 real-world silent buffer overflow vulnerabilities. Leveraging deep learning, the proposed method is, to the best of our knowledge, the first general-purpose analysis method for silent buffer overflows. It is also the first method to spot silent buffer overflows in global variables, stack variables, or heap variables without crossing the boundary of allocated chunks.
computer science
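The paper's DFG+ structure, node features, and modified R-GCN layers are not specified in the abstract, so the following is only a generic sketch of a relational GCN graph classifier in PyTorch Geometric; the feature sizes and relation count are invented for illustration.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv, global_mean_pool

class TraceGraphRGCN(torch.nn.Module):
    """Generic relational GCN for binary graph classification: each
    execution-trace graph is pooled into one vector and scored as
    overflow / benign. A sketch only, not the paper's architecture."""
    def __init__(self, in_dim=32, hidden=64, num_relations=4):
        super().__init__()
        self.conv1 = RGCNConv(in_dim, hidden, num_relations)
        self.conv2 = RGCNConv(hidden, hidden, num_relations)
        self.out = torch.nn.Linear(hidden, 2)

    def forward(self, x, edge_index, edge_type, batch):
        x = F.relu(self.conv1(x, edge_index, edge_type))
        x = F.relu(self.conv2(x, edge_index, edge_type))
        x = global_mean_pool(x, batch)   # one vector per trace graph
        return self.out(x)
```

Training on labeled trace graphs would then proceed with a standard cross-entropy loss over mini-batches of graphs.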
In the realm of games research, Artificial General Intelligence algorithms often use score as the main reward signal for learning or playing actions. However, this has shown severe limitations when the point rewards are very rare or absent until the end of the game. This paper proposes a new approach based on event logging: the game state triggers an event every time one of its features changes. These events are processed by an Event-value Function (EF) that assigns a value to a single action or a sequence. The experiments have shown that such an approach can mitigate the problem of scarce point rewards and improve the AI performance. Furthermore, this represents a step forward in controlling the strategy adopted by the artificial agent, by describing a much richer and more controllable behavioural space through the EF. Tuned EFs are able to neatly synthesise the relevance of the events in the game. Agents using an EF are more robust when playing games with several opponents.
computer science
Recent advances in quantum computing hardware have pushed quantum computing to the verge of quantum supremacy. Random quantum circuits are outstanding candidates to demonstrate quantum supremacy, as they can be implemented on a quantum device that supports nearest-neighbour gate operations on a two-dimensional configuration. Here we show that using the Projected Entangled-Pair States algorithm, a tool to study two-dimensional strongly interacting many-body quantum systems, we can realize an effective general-purpose simulator of quantum algorithms. This technique allows us to quantify precisely the memory usage and the time requirements of random quantum circuits, thus showing the frontier of quantum supremacy. With this approach we can compute the full wave-function of the system, from which single amplitudes can be sampled with unit fidelity. Applying this general quantum circuit simulator, we measured amplitudes for a $7\times 7$ lattice of qubits with depth $1+40+1$ and double-precision numbers in 31 minutes using less than $93$ TB of memory on the Tianhe-2 supercomputer.
quantum physics
Named Entity Recognition has been extensively investigated in many fields. However, the application of sensitive entity detection for production systems in financial institutions has not been well explored due to the lack of publicly available, labeled datasets. In this paper, we use internal and synthetic datasets to evaluate various methods of detecting NPI (Nonpublic Personally Identifiable) information commonly found within financial institutions, in both unstructured and structured data formats. Character-level neural network models including CNN, LSTM, BiLSTM-CRF, and CNN-CRF are investigated on two prediction tasks: (i) entity detection on multiple data formats, and (ii) column-wise entity prediction on tabular datasets. We compare these models with other standard approaches on both real and synthetic data, with respect to F1-score, precision, recall, and throughput. The real datasets include internal structured data and public email data with manually tagged labels. Our experimental results show that the CNN model is simple yet effective with respect to accuracy and throughput and thus, is the most suitable candidate model to be deployed in the production environment(s). Finally, we provide several lessons learned on data limitations, data labelling and the intrinsic overlap of data entities.
computer science
The spectral properties of a set of local gauge-invariant composite operators are investigated in the $U(1)$ Higgs model quantized in the 't Hooft $R_{\xi}$ gauge. These operators enable us to give a gauge-invariant description of the spectrum of the theory, thereby avoiding certain difficulties that arise when using the standard elementary fields. The corresponding two-point correlation functions are evaluated at one-loop order and their spectral functions are obtained explicitly. As expected, the above mentioned correlation functions are independent of the gauge parameter $\xi$, while exhibiting positive spectral densities as well as gauge-invariant pole masses corresponding to the massive photon and Higgs physical excitations.
high energy physics theory
This paper presents the use of spike-and-slab (SS) priors for discovering governing differential equations of motion of nonlinear structural dynamic systems. The problem of discovering governing equations is cast as that of selecting relevant variables from a predetermined dictionary of basis variables and solved via sparse Bayesian linear regression. The SS priors, which belong to a class of discrete-mixture priors and are known for their strong sparsifying (or shrinkage) properties, are employed to induce sparse solutions and select relevant variables. Three different variants of SS priors are explored for performing Bayesian equation discovery. As the posteriors with SS priors are analytically intractable, a Markov chain Monte Carlo (MCMC)-based Gibbs sampler is employed for drawing posterior samples of the model parameters; the posterior samples are used for variable selection and parameter estimation in equation discovery. The proposed algorithm has been applied to four systems of engineering interest, which include a baseline linear system, and systems with cubic stiffness, quadratic viscous damping, and Coulomb damping. The results demonstrate the effectiveness of the SS priors in identifying the presence and type of nonlinearity in the system. Additionally, comparisons with the Relevance Vector Machine (RVM), which uses a Student's t prior, indicate that the SS priors can achieve better model selection consistency, reduce false discoveries, and derive models that have superior predictive accuracy. Finally, the Silverbox experimental benchmark is used to validate the proposed methodology.
statistics
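As a rough illustration of the sampler described above, here is a minimal Gibbs sampler for sparse Bayesian linear regression with a continuous (George-McCulloch style) spike-and-slab prior. The noise variance is held fixed and the dictionary is a toy one, both simplifications relative to the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ss_gibbs(X, y, n_iter=2000, v0=1e-4, v1=1.0, pi=0.2, sigma2=1.0):
    """Gibbs sampler with beta_j ~ N(0, v1) if z_j = 1 (slab),
    beta_j ~ N(0, v0) if z_j = 0 (spike), z_j ~ Bernoulli(pi);
    sigma2 is held fixed to keep the sketch short."""
    n, p = X.shape
    z = np.ones(p, dtype=int)
    XtX, Xty = X.T @ X, X.T @ y
    betas, zs = [], []
    for _ in range(n_iter):
        # beta | z, y is Gaussian (conjugate update)
        Dinv = np.diag(1.0 / np.where(z == 1, v1, v0))
        A = np.linalg.inv(XtX / sigma2 + Dinv)
        beta = rng.multivariate_normal(A @ Xty / sigma2, A)
        # z_j | beta_j via the ratio of the two Gaussian densities
        log_slab = -0.5 * beta**2 / v1 - 0.5 * np.log(v1) + np.log(pi)
        log_spike = -0.5 * beta**2 / v0 - 0.5 * np.log(v0) + np.log(1 - pi)
        prob1 = 1.0 / (1.0 + np.exp(log_spike - log_slab))
        z = (rng.random(p) < prob1).astype(int)
        betas.append(beta); zs.append(z)
    return np.array(betas), np.array(zs)

# toy equation-discovery setup: y depends on 2 of 6 dictionary terms
X = rng.standard_normal((200, 6))
y = X @ np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(200)
betas, zs = ss_gibbs(X, y)
print("posterior inclusion probabilities:", zs[500:].mean(axis=0).round(2))
```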
A droplet bouncing on the surface of a vertically vibrating liquid bath can walk horizontally, guided by the waves it generates on each impact. This results in a self-propelled classical particle-wave entity. By using a one-dimensional theoretical pilot-wave model with a generalized wave form, we investigate the dynamics of this particle-wave entity. We employ different spatial wave forms to understand the role played by both wave oscillations and spatial wave decay in the walking dynamics. We observe steady walking motion as well as unsteady motions such as oscillating walking, self-trapped oscillations and irregular walking. We explore the dynamical and statistical aspects of irregular walking and show an equivalence between the droplet dynamics and the Lorenz system, as well as making connections with the Langevin equation and deterministic diffusion.
physics
We consider the problem of optimizing an unknown (typically non-convex) function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), based on noisy bandit feedback. We consider a novel variant of this problem in which the point evaluations are corrupted not only by random noise but also by adversarial corruptions. We introduce an algorithm, Fast-Slow GP-UCB, based on Gaussian process methods, randomized selection between two instances labeled "fast" (but non-robust) and "slow" (but robust), enlarged confidence bounds, and the principle of optimism under uncertainty. We present a novel theoretical analysis upper bounding the cumulative regret in terms of the corruption level, the time horizon, and the underlying kernel, and we argue that certain dependencies cannot be improved. We observe that distinct algorithmic ideas are required depending on whether one is required to perform well in both the corrupted and non-corrupted settings, and whether the corruption level is known or not.
statistics
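Fast-Slow GP-UCB itself is not spelled out in the abstract; the sketch below is only vanilla GP-UCB on a toy 1-d objective (scikit-learn GP, fixed confidence width), to make the "optimism under uncertainty" loop concrete. The paper's enlarged confidence bounds and fast/slow randomization would modify the `beta_t` line and the choice of GP instance.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def f(x):                       # unknown RKHS-bounded objective (toy)
    return np.sin(3 * x) + 0.5 * np.cos(5 * x)

grid = np.linspace(0, 2, 200).reshape(-1, 1)
X, y = [np.array([[1.0]])], [f(1.0) + 0.1 * rng.standard_normal()]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=0.1**2)
for t in range(30):
    gp.fit(np.vstack(X), np.array(y))
    mu, sd = gp.predict(grid, return_std=True)
    beta_t = 2.0                # confidence width; robust variants would
                                # enlarge this to absorb corruptions
    x_next = grid[np.argmax(mu + beta_t * sd)]
    X.append(x_next.reshape(1, -1))
    y.append(f(x_next[0]) + 0.1 * rng.standard_normal())

print("best queried point:", np.vstack(X)[np.argmax(y)])
```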
The nitrogen-vacancy (N-V) center in diamond is a widely-used platform for quantum information processing and metrology. The electron-spin state of the N-V center can be initialized and read out optically, and manipulated by resonant microwave fields. In this work, we analyze the dependence of electron-spin initialization on the width of the laser pulses. We build a numerical model to simulate this process and verify the simulation results in experiment. Both simulations and experiments reveal that shorter laser pulses are helpful for electron-spin polarization. We therefore propose to use extremely short laser pulses for electron-spin initialization. In this new scheme, the spin-state contrast could be improved by about 10% in experiment by using laser pulses as short as 4 ns in width. Furthermore, we provide a mechanism to explain this effect, attributing it to the occupation time in the meta-stable spin-singlet states of the N-V center. Our new scheme is applicable to a broad range of NV-based applications in the future.
quantum physics
Our goal is to enable machine learning systems to be trained interactively. This requires models that perform well and train quickly, without large amounts of hand-labeled data. We take a step forward in this direction by borrowing from weak supervision (WS), wherein models can be trained with noisy sources of signal instead of hand-labeled data. But WS relies on training downstream deep networks to extrapolate to unseen data points, which can take hours or days. Pre-trained embeddings can remove this requirement. We do not use the embeddings as features as in transfer learning (TL), which requires fine-tuning for high performance, but instead use them to define a distance function on the data and extend WS source votes to nearby points. Theoretically, we provide a series of results studying how performance scales with changes in source coverage, source accuracy, and the Lipschitzness of label distributions in the embedding space, and compare this rate to standard WS without extension and TL without fine-tuning. On six benchmark NLP and video tasks, our method outperforms WS without extension by 4.1 points, TL without fine-tuning by 12.8 points, and traditionally-supervised deep networks by 13.1 points, and comes within 0.7 points of state-of-the-art weakly-supervised deep networks, all while training in less than half a second.
statistics
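A minimal sketch of the vote-extension idea: given pre-trained embeddings and one weak source covering part of the data, propagate the source's votes to nearby uncovered points within a distance threshold. All sizes and the threshold rule below are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)

# pre-trained embeddings for 1000 points; a weak source labels only 200
emb = rng.standard_normal((1000, 128))
covered = rng.choice(1000, size=200, replace=False)
votes = rng.integers(0, 2, size=200)          # the source's noisy votes

# extend each vote to points that are close in embedding space, trusting
# the Lipschitzness of the label distribution
nn = NearestNeighbors(n_neighbors=1).fit(emb[covered])
dist, idx = nn.kneighbors(emb)

threshold = np.quantile(dist, 0.8)            # only extend within a radius
extended = np.where(dist[:, 0] <= threshold, votes[idx[:, 0]], -1)  # -1 = abstain
print("coverage before:", 200 / 1000, "after:", np.mean(extended >= 0))
```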
Considering the strong field approximation, we compute the hard thermal loop pressure of hot and dense deconfined QCD matter at finite temperature and chemical potential in the lowest Landau level at one-loop order. We consider the anisotropic pressure in the presence of the strong magnetic field, i.e., the longitudinal and transverse pressures, parallel and perpendicular to the magnetic field direction, respectively. As a first effort, we compute and discuss the anisotropic quark number susceptibility of deconfined QCD matter in the lowest Landau level. The longitudinal quark number susceptibility is found to increase with the temperature whereas the transverse one decreases with the temperature. We also compute the quark number susceptibility in the weak field approximation. We find that the thermomagnetic correction to the quark number susceptibility is very marginal in the weak field approximation.
high energy physics phenomenology
Mid- to far-infrared (IR) lines are suited to study dust obscured regions in galaxies, because IR spectroscopy allows us to explore the most hidden regions where heavily obscured star formation as well as accretion onto supermassive black-holes occur. This is most important at redshifts of 1<z<3, when most of the baryonic mass in galaxies has been assembled. We provide reliable calibrations of the mid- to far-IR ionic fine structure lines, the brightest H2 pure rotational lines and the Polycyclic Aromatic Hydrocarbons (PAHs) features, that will be used to analyse current and future observations in the mm/submm range from the ground, as well as mid-IR spectroscopy from the upcoming James Webb Space Telescope. We use three samples of galaxies observed in the local Universe: star forming galaxies, AGN and low-metallicity dwarf galaxies. For each population we derive different calibrations of the observed line luminosities versus the total IR luminosities. We derive spectroscopic measurements of the star formation rate (SFR) and the black hole accretion rate (BHAR) using mid- and far-IR fine structure lines, H2 pure rotational lines and PAH features. We derive robust star-formation tracers based on the [CII]158 $\mu$m line; the sum of the [OI]63$\mu$m and [OIII]88$\mu$m lines; a combination of the neon and sulfur mid-IR lines; the bright PAH features at 6.2 and 11.3 $\mu$m; and the H2 rotational lines at 9.7, 12.3 and 17 $\mu$m. We propose the [CII]158$\mu$m line, the combination of two neon lines and, for solar-like metallicity galaxies that may harbor an AGN, the PAH11.3$\mu$m feature as the best SFR tracers. A reliable measure of the BHAR can be obtained using the [OIV]25.9 $\mu$m and the [NeV]14.3 and 24.3 $\mu$m lines. For the most commonly observed fine-structure lines in the far-IR we compare our calibration with existing ALMA observations of high redshift galaxies, finding overall good agreement with the local results.
astrophysics
Causal mediation analysis has historically been limited in two important ways: (i) a focus has traditionally been placed on binary treatments and static interventions, and (ii) direct and indirect effect decompositions have been pursued that are only identifiable in the absence of intermediate confounders affected by treatment. We present a theoretical study of an (in)direct effect decomposition of the population intervention effect, defined by stochastic interventions jointly applied to the treatment and mediators. In contrast to existing proposals, our causal effects can be evaluated regardless of whether a treatment is categorical or continuous and remain well-defined even in the presence of intermediate confounders affected by treatment. Our (in)direct effects are identifiable without a restrictive assumption on cross-world counterfactual independencies, allowing for substantive conclusions drawn from them to be validated in randomized controlled trials. Beyond the novel effects introduced, we provide a careful study of nonparametric efficiency theory relevant for the construction of flexible, multiply robust estimators of our (in)direct effects, while avoiding undue restrictions induced by assuming parametric models of nuisance parameter functionals. To complement our nonparametric estimation strategy, we introduce inferential techniques for constructing confidence intervals and hypothesis tests, and discuss open source software implementing the proposed methodology.
statistics
We determine the escape velocity from the Milky Way (MW) at a range of Galactocentric radii in the context of Modified Newtonian Dynamics (MOND). Due to its non-linear nature, escape is possible if the MW is considered embedded in a constant external gravitational field (EF) from distant objects. We model this situation using a fully self-consistent method based on a direct solution of the governing equations out to several thousand disk scale lengths. We try out a range of EF strengths and mass models for the MW in an attempt to match the escape velocity measurements of Williams et al. (2017). A reasonable match is found if the EF on the MW is $\sim 0.03\,a_0$, towards the higher end of the range considered. Our models include a hot gas corona surrounding the MW, but our results suggest that this should have a very low mass of $\sim 2 \times 10^{10}\,M_\odot$ to avoid pushing the escape velocity too high. Our analysis favours a slightly lower baryonic disk mass than the $\sim 7 \times 10^{10}\,M_\odot$ required to explain its rotation curve in MOND. However, given the uncertainties, MOND is consistent with both the locally measured amplitude of the MW rotation curve and its escape velocity over Galactocentric distances of 8$-$50 kpc.
astrophysics
The Standard Quantum Limit (SQL) of classical mechanical force detection results from the quantum back action imposed by the meter on a probe mechanical transducer perturbed by the force of interest. In this paper we introduce a technique of continuous broadband back-action-avoiding measurement for the case of a resonant signal force acting on a linear mechanical oscillator supporting one of the mirrors of an optical Michelson-Sagnac Interferometer (MSI). The interferometer with the movable mirror is an opto-mechanical transducer able to support a polychromatic probe field. The method involves a dichromatic optical probe resonant with the MSI modes and having a frequency separation equal to the mechanical frequency. We show that analyzing each of the harmonics of the probe reflected from the mechanical system separately and postprocessing the measurement results allows excluding the back action in a broad frequency band and measuring the force with a sensitivity better than the SQL.
quantum physics
Measurement-device-independent quantum key distribution (MDI-QKD) removes all detector side channels and enables secure QKD with an untrusted relay. It is suitable for building a star-type quantum access network, where the complicated and expensive measurement devices are placed in the central untrusted relay and each user requires only a low-cost transmitter, such as an integrated photonic chip. Here, we experimentally demonstrate a 1.25 GHz silicon photonic chip-based MDI-QKD system using polarization encoding. The photonic chip transmitters integrate the necessary encoding components for a standard QKD source. We implement random modulations of polarization states and decoy intensities, and demonstrate a finite-key secret rate of 31 bps over 36 dB channel loss (or 180 km of standard fiber). This key rate is higher than those of state-of-the-art MDI-QKD experiments. The results show that silicon photonic chip-based MDI-QKD, benefiting from miniaturization, low-cost manufacture and compatibility with CMOS microelectronics, is a promising solution for future quantum secure networks.
quantum physics
With the advent of autonomous driving technologies, traffic control at intersections is expected to experience revolutionary changes. Various novel intersection control methods have been proposed in the existing literature, and they can be roughly divided into two categories: vehicle-based traffic control and phase-based traffic control. Phase-based traffic control can be treated as an updated version of current intersection signal control that incorporates the capabilities of autonomous vehicle functions. Meanwhile, vehicle-based traffic control utilizes some brand-new methods, mostly in real-time fashion, to organize traffic at intersections for safe and efficient vehicle passages. However, to date, no systematic comparison between these two control categories has been performed to suggest their advantages and disadvantages. This paper conducts a series of numerical simulations under various traffic scenarios to perform a fair comparison of their performances. Specifically, we allow trajectory adjustments of incoming vehicles under phase-based traffic control, while for its vehicle-based counterpart, we implement two strategies, i.e., the first-come-first-serve strategy and the conflict-point based rolling-horizon optimization strategy. Overall, the simulation results show that vehicle-based traffic control generally incurs a negligible delay when traffic demand is low but leads to excessive queuing times as the traffic volume becomes high. However, the performance of vehicle-based traffic control may benefit from a reduction in the number of conflicting vehicle pairs. We also find that when autonomous driving technologies are not mature, the advantages of phase-based traffic control are much more distinct.
electrical engineering and systems science
Parameterized quantum evolution is the main ingredient in variational quantum algorithms for near-term quantum devices. In digital quantum computing, it has been shown that random parameterized quantum circuits are able to express complex distributions intractable by a classical computer, leading to the demonstration of quantum supremacy. However, their chaotic nature makes parameter optimization challenging in variational approaches. Evidence of similar classically-intractable expressibility has been recently demonstrated in analog quantum computing with driven many-body systems. A thorough investigation of the trainability of such analog systems is yet to be performed. In this work, we investigate how the interplay between external driving and disorder in the system dictates the trainability and expressibility of interacting quantum systems. We show that if the system thermalizes, the training fails at the expense of large expressibility, while the opposite happens when the system enters the many-body localized (MBL) phase. From this observation, we devise a protocol using quenched MBL dynamics which allows accurate trainability while keeping the overall dynamics in the quantum supremacy regime. Our work shows the fundamental connection between quantum many-body physics and its application in machine learning. We conclude our work with an example application in generative modeling employing a well studied analog many-body model of a driven Ising spin chain. Our approach can be implemented with a variety of available quantum platforms including cold ions, atoms and superconducting circuits.
quantum physics
Primitive inflation tilings of the real line with finitely many tiles of natural length and a Pisot--Vijayaraghavan unit as inflation factor are considered. We present an approach to the pure point part of their diffraction spectrum on the basis of a Fourier matrix cocycle in internal space. This cocycle leads to a transfer matrix equation and thus to a closed expression of matrix Riesz product type for the Fourier transforms of the windows for the covering model sets. In general, these windows are complicated Rauzy fractals and thus difficult to handle. Equivalently, this approach permits a construction of the (always continuously representable) eigenfunctions for the translation dynamical system induced by the inflation rule. We review and further develop the underlying theory, and illustrate it with the family of Pisa substitutions, with special emphasis on the Tribonacci case.
mathematics
Maximum-likelihood estimation (MLE) is arguably the most important tool for statisticians, and many methods have been developed to find the MLE. We present a new inequality involving posterior distributions of a latent variable that holds under very general conditions. It is related to the EM algorithm and has a clear potential for being used in a similar fashion.
mathematics
Rigorous electrodynamical simulations based on the nonlinear Drude model are performed to investigate the influence of strong coupling on high harmonic generation by periodic metal gratings. It is shown that a thin dispersive material with a third order nonlinearity strongly coupled to surface plasmon-polaritons significantly affects even harmonics generated solely by the metal. The physical nature of this effect is explained using a simple analytical model and further supported by numerical simulations. Furthermore, the behavior of the second and third harmonics is investigated as a function of various physical parameters of the model material system, revealing highly complex dynamics. The nonlinear optical response of 2D few-layer WS2 with both second and third order susceptibilities coupled to a periodic plasmonic grating is shown to have a significant effect on the second harmonic generation of the metal.
condensed matter
In this paper we introduce a new algorithm for solving perturbed nonlinear functional equations which admit a right-invertible linearization, but with an inverse that loses derivatives and may blow up when the perturbation parameter $\epsilon$ goes to zero. These equations are of the form $F_\epsilon(u)=v$ with $F_\epsilon(0)=0$, $v$ small and given, $u$ small and unknown. The main difference with the by now classical Nash-Moser algorithm is that, instead of using a regularized Newton scheme, we solve a sequence of Galerkin problems thanks to a topological argument. As a consequence, in our estimates there are no quadratic terms. For problems without perturbation parameter, our results require weaker regularity assumptions on $F$ and $v$ than earlier ones, such as those of H\"ormander. For singularly perturbed functionals, we allow $v$ to be larger than in previous works. To illustrate this, we apply our method to a nonlinear Schr\"odinger Cauchy problem with concentrated initial data studied by Texier-Zumbrun, and we show that our result improves significantly on theirs.
mathematics
Extra virgin olive oil (EVOO) is the highest quality of olive oil and is characterized by highly beneficial nutritional properties. The large increase in both consumption and fraud, for example through adulteration, creates new challenges and an increasing demand for developing new quality assessment methodologies that are easier and cheaper to perform. As of today, the determination of olive oil quality is performed by producers through chemical analysis and organoleptic evaluation. The chemical analysis requires the advanced equipment and chemical knowledge of certified laboratories, and therefore has limited accessibility. In this work a minimalist, portable and low-cost sensor is presented, which can perform olive oil quality assessment using fluorescence spectroscopy. The potential of the proposed technology is explored by analyzing several olive oils of different quality levels: EVOO, virgin olive oil (VOO), and lampante olive oil (LOO). The spectral data were analyzed using a large number of machine learning methods, including artificial neural networks. The analysis performed in this work demonstrates the possibility of classifying olive oil into the three mentioned classes with an accuracy of 100$\%$. These results confirm that this minimalist low-cost sensor has the potential to replace expensive and complex chemical analysis.
electrical engineering and systems science
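As a toy stand-in for the pipeline above (whose real inputs are measured fluorescence spectra and whose descriptors are not given in the abstract), the following sketch classifies synthetic three-class "spectra" with a standard classifier; the near-perfect accuracy here only reflects the easy synthetic data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# synthetic stand-in: each class is a shifted Gaussian emission peak
# plus noise over 300 wavelength bins
wavelengths = np.linspace(300, 800, 300)
def spectrum(peak):
    return np.exp(-0.5 * ((wavelengths - peak) / 40.0) ** 2)

peaks = rng.choice([450, 500, 550], size=180)
X = np.vstack([spectrum(p) + 0.05 * rng.standard_normal(300) for p in peaks])
y = np.searchsorted([450, 500, 550], peaks)   # 0=EVOO, 1=VOO, 2=LOO (toy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```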
The hybrid optical pumping spin exchange relaxation free (HOPSERF) atomic co-magnetometers make ultrahigh-sensitivity measurements of inertia achievable. The wall relaxation rate has a large effect on the polarization and fundamental sensitivity of the co-magnetometer, but it is often neglected in experiments. Moreover, there has been almost no systematic analysis of the factors influencing the polarization and the fundamental sensitivity of the HOPSERF co-magnetometers. Here, we systematically study the polarization and the fundamental sensitivity of 39K-85Rb-21Ne and 133Cs-85Rb-21Ne HOPSERF co-magnetometers in the low-polarization limit, taking the wall relaxation rate into account. The 21Ne number density and the power density and wavelength of the pump beam strongly affect the polarization through the pumping rate of the pump beam. We obtain a general formula for the shot-noise-limited fundamental sensitivity of the HOPSERF co-magnetometers; the fundamental sensitivity changes with multiple system parameters, and suitable number densities of the buffer gas and quench gas make the fundamental sensitivity highest. The fundamental sensitivity $7.5355\times10^{-11}$ $\rm rad/s/Hz^{1/2}$ of the 133Cs-85Rb-21Ne co-magnetometer is higher than the ultimate theoretical sensitivity $2\times10^{-10}$ $\rm rad/s/Hz^{1/2}$ of the K-21Ne co-magnetometer.
physics
Bosonization in curved spacetime maps the massive Thirring model (self-interacting Dirac fermions) to a generalized sine-Gordon model, both living in $1+1$-dimensional curved spacetime. Applying this duality we have shown that the Thirring model fermion, in the non-relativistic limit, gets identified with the soliton of the non-linear Schr\"odinger model with Kerr form of non-linearity. We discuss one particular optical soliton in the latter model and relate it to the Thirring model fermion.
high energy physics theory
Graph-based semi-supervised learning is one of the most popular methods in machine learning. Some of its theoretical properties, such as bounds for the generalization error and the convergence of the graph Laplacian regularizer, have been studied in the computer science and statistics literatures. However, a fundamental statistical property, the consistency of the estimator from this method, has not been proved. In this article, we study the consistency problem under a non-parametric framework. We prove the consistency of graph-based learning in the case that the estimated scores are enforced to be equal to the observed responses for the labeled data. The sample sizes of both labeled and unlabeled data are allowed to grow in this result. When the estimated scores are not required to be equal to the observed responses, a tuning parameter is used to balance the loss function and the graph Laplacian regularizer. We give a counterexample demonstrating that the estimator for this case can be inconsistent. The theoretical findings are supported by numerical studies.
statistics
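The hard-constraint estimator studied above (scores forced to equal the observed responses on labeled points) is the classical harmonic solution of a graph Laplacian system. A small numerical sketch on a toy 1-d problem, assuming a kNN graph:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(5)

# toy 1-d regression: 20 labeled and 480 unlabeled points
x = np.sort(rng.uniform(0, 1, 500)).reshape(-1, 1)
f = np.sin(2 * np.pi * x[:, 0])
labeled = rng.choice(500, 20, replace=False)
unlabeled = np.setdiff1d(np.arange(500), labeled)
y_l = f[labeled] + 0.1 * rng.standard_normal(20)

# symmetric kNN graph and its (unnormalized) Laplacian L = D - W
W = kneighbors_graph(x, n_neighbors=10, mode="connectivity").toarray()
W = np.maximum(W, W.T)
L = np.diag(W.sum(axis=1)) - W

# hard-constraint estimator: scores equal responses on the labeled set,
# harmonic elsewhere: f_U = -L_UU^{-1} L_UL y_L
L_UU = L[np.ix_(unlabeled, unlabeled)]
L_UL = L[np.ix_(unlabeled, labeled)]
f_U = -np.linalg.solve(L_UU, L_UL @ y_l)
print("RMSE on unlabeled:", np.sqrt(np.mean((f_U - f[unlabeled]) ** 2)))
```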
We provide a novel method for large volatility matrix prediction with high-frequency data by applying eigen-decomposition to daily realized volatility matrix estimators and capturing eigenvalue dynamics with ARMA models. Given a sequence of daily volatility matrix estimators, we compute the aggregated eigenvectors and obtain the corresponding eigenvalues. Eigenvalues of the same relative magnitude form a time series, and ARMA models are employed to model the dynamics within each eigenvalue time series and produce a predictor. We predict the future large volatility matrix based on the predicted eigenvalues and the aggregated eigenvectors, and demonstrate the advantages of the proposed method in volatility prediction and portfolio allocation problems.
statistics
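A compact sketch of the prediction recipe described above, with simulated daily volatility matrices; the eigenvectors are aggregated here simply from the time-averaged matrix (a simplification of the paper's aggregation step), and the ARMA(1,1) forecasts come from statsmodels.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)

# stand-in for daily realized volatility matrix estimators: a common
# factor structure with slowly varying eigenvalues
p, T = 5, 250
V, _ = np.linalg.qr(rng.standard_normal((p, p)))       # true eigenvectors
lam = np.abs(np.cumsum(rng.standard_normal((T, p)) * 0.05, axis=0)) + 1.0
Sigmas = np.array([V @ np.diag(lam[t]) @ V.T for t in range(T)])

# aggregated eigenvectors taken from the time-averaged matrix
w, Vhat = np.linalg.eigh(Sigmas.mean(axis=0))
Vhat = Vhat[:, ::-1]                                    # descending order

# eigenvalue time series and one-step ARMA(1,1) forecasts
lam_series = np.einsum("ji,tjk,ki->ti", Vhat, Sigmas, Vhat)
lam_next = [ARIMA(lam_series[:, i], order=(1, 0, 1)).fit().forecast(1)[0]
            for i in range(p)]

Sigma_pred = Vhat @ np.diag(lam_next) @ Vhat.T          # predicted matrix
print(np.round(Sigma_pred, 3))
```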
PolarLight is a space-borne X-ray polarimeter that measures X-ray polarization via electron tracking in an ionization chamber. It is a collimated instrument and thus suffers from background on the whole detector plane. The majority of background events are induced by high energy charged particles and show ionization morphologies distinct from those produced by X-rays of interest. Comparing on-source and off-source observations, we find that the two datasets display different distributions of image properties. The boundaries between the source and background distributions are obtained and can be used for background discrimination. Such a method can remove over 70% of the background events measured with PolarLight. This approaches the theoretical upper limit of the removable background fraction, justifying its effectiveness. For observations of the Crab nebula, the background contamination decreases from 25% to 8% after discrimination, indicative of a polarimetric sensitivity of around 0.2 Crab for PolarLight. This work also provides insights into future X-ray polarimetric telescopes.
astrophysics
Convergence theory is an extension of general topology. In contrast with topology, it is closed under some important operations, like exponentiation. With all its advantages, convergence theory remains rather unknown. It is an aim of this paper to make it more familiar to the mathematical community.
mathematics
Crystal orientation mapping experiments typically measure orientations that are similar within grains and misorientations that are similar along grain boundaries. Such (mis)orientation data will cluster in (mis)orientation space, and clusters are more pronounced if preferred orientations or special orientation relationships are present. Here, cluster analysis of (mis)orientation data is described and demonstrated using distance metrics incorporating crystal symmetry and the density-based clustering algorithm DBSCAN. Frequently measured (mis)orientations are identified as corresponding to grains, grain boundaries or orientation relationships, which are visualised both spatially and in three-dimensional (mis)orientation spaces. A new open-source Python library, orix, is also reported.
condensed matter
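A minimal illustration of density-based clustering with a symmetry-aware precomputed distance. Real misorientation data requires full quaternion and crystal-symmetry handling (as in orix); this toy instead uses scalar in-plane angles with an assumed 4-fold symmetry, so orientations differing by 90 degrees are equivalent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)

# three apparent groups of in-plane rotation angles (degrees); the groups
# at 10 and 100 degrees are crystallographically equivalent under the
# assumed 4-fold symmetry and should merge into one cluster
angles = np.concatenate([rng.normal(10, 1, 100),
                         rng.normal(100, 1, 100),
                         rng.normal(40, 1, 100)])

def sym_dist(a, b):
    d = np.abs(a - b) % 90.0
    return np.minimum(d, 90.0 - d)            # distance in the fundamental zone

D = sym_dist(angles[:, None], angles[None, :])          # pairwise matrix
labels = DBSCAN(eps=3.0, min_samples=5, metric="precomputed").fit_predict(D)
print("clusters found:", np.unique(labels))             # two, not three
```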
Let $X$ be a compact K\"ahler manifold of dimension $n$ and $\omega$ a K\"ahler form on $X$. We consider the complex Monge-Amp\`ere equation $(dd^c u+\omega)^n=\mu$, where $\mu$ is a given positive measure on $X$ of suitable mass and $u$ is an $\omega$-plurisubharmonic function. We show that the equation admits a H\"older continuous solution {\it if and only if} the measure $\mu$, seen as a functional on a complex Sobolev space $W^*(X)$, is H\"older continuous. A similar result is also obtained for the complex Monge-Amp\`ere equations on domains of $\mathbb{C}^n$.
mathematics
The compression of the resolvent of a non-self-adjoint Schr\"odinger operator $-\Delta+V$ onto a subdomain $\Omega\subset\mathbb R^n$ is expressed in a Krein-Naimark type formula, where the Dirichlet realization on $\Omega$, the Dirichlet-to-Neumann maps, and certain solution operators of closely related boundary value problems on $\Omega$ and $\mathbb R^n\setminus\overline\Omega$ are being used. In a more abstract operator theory framework this topic is closely connected and very much inspired by the so-called coupling method that has been developed for the self-adjoint case by Henk de Snoo and his coauthors.
mathematics
CMOS sensors were successfully implemented in the STAR tracker [1]. LHC experiments have shown that efficient b tagging, reconstruction of displaced vertices and identification of disappearing tracks are necessary. An improved vertex detector is justified for the ILC. To achieve a point-to-point (spatial single layer) resolution below the one-$\mu$m range while improving other characteristics (radiation tolerance and eventually time resolution), we will need 1-micron pitch pixels. We therefore propose a single MOS transistor that acts as an amplifying device and a detector with a buried charge-collecting gate. Device simulations, both classical and quantum, have led to the proposed DoTPiX structure. With the evolution of silicon processes, with line features well below 100 nm, this pixel should be feasible. We present this pixel detector and the current status of its development both in our institution (IRFU) and in other collaborating labs (CNRS/C2N).
physics
Network analysis is becoming increasingly relevant in the historical investigation of scientific communities and their knowledge circulation process, because it offers the opportunity to explore and visualize connections among scientific actors on a scale qualitatively different from traditional historical methods. Temporal networks are especially suitable for this task, as they allow us to investigate the evolution of scientific communities over time. In this paper we rely on the analytical tools provided by temporal networks to examine the technical commission on agriculture (1913-1947) established by the International Meteorological Organization (IMO). By using the membership data available for this commission, we investigate how this scientific community evolved over the decades, who its key members were, which national groups were represented, and how historical events, such as the two world wars, impacted the work of this organization. This gives us an insight into the knowledge circulation process of this scientific body, as the IMO was an international organization based on voluntary cooperation and its work was first and foremost the immediate consequence of the interaction among its members. In our paper we rely on centrality measures (eigenvector, joint, and conditional centrality) to understand the structure of the commission's network, and we constantly point out the strengths and weaknesses of temporal networks in the analysis of historical data.
physics
Quantum curves arise from Seiberg-Witten curves associated to 4d $\mathcal{N}=2$ gauge theories by promoting coordinates to non-commutative operators. In this way the algebraic equation of the curve is interpreted as an operator equation where a Hamiltonian acts on a wave-function with zero eigenvalue. We find that this structure generalises when one considers torus-compactified 6d $\mathcal{N}=(1,0)$ SCFTs. The corresponding quantum curves are elliptic in nature and hence the associated eigenvectors/eigenvalues can be expressed in terms of Jacobi forms. In this paper we focus on the class of 6d SCFTs arising from M5 branes transverse to a $\mathbb{C}^2/\mathbb{Z}_k$ singularity. In the limit where the compactified 2-torus has zero size, the corresponding 4d $\mathcal{N}=2$ theories are known as class $\mathcal{S}_k$. We explicitly show that the eigenvectors associated to the quantum curve are expectation values of codimension 2 surface operators, while the corresponding eigenvalues are codimension 4 Wilson surface expectation values.
high energy physics theory
We investigate the production of the fully-heavy tetraquark states $T_{4Q}$ in the $\gamma \gamma$ interactions present in proton-proton, proton-nucleus and nucleus-nucleus collisions at the CERN Large Hadron Collider (LHC). We focus on the $\gamma \gamma \rightarrow {\cal{Q}}{\cal{Q}}$ (${\cal{Q}} = J/\psi,\, \Upsilon$) subprocess, mediated by the $T_{4Q}$ resonance in the $s$-channel, and present predictions for the hadronic cross sections considering the kinematical ranges probed by the ALICE and LHCb Collaborations. Our results demonstrate that the experimental study of this process is feasible and can be used to investigate the existence and properties of the $T_{4c}(6900)$ and $T_{4b}(19000)$ states.
high energy physics phenomenology
In this work we study kinklike structures, which are localized solutions that appear in models described by real scalar fields. The model to be considered is characterized by two real scalar fields and includes a function of one of the two fields that modifies the kinematics associated with the other field. The investigation brings to light a first order framework that minimizes the energy of the solutions by introducing an auxiliary function that directly contributes to describing the system. We explore an interesting route, in which one field acts independently, entrapping the other field and inducing important modifications in the profile of the localized structure. The procedure may make the solution spring up as a kinklike configuration with internal structure, a feature directly connected with issues of current interest at the nanometric scale, in particular electronic transport in molecules in the presence of vibrational degrees of freedom.
high energy physics theory
Quadratic unconstrained binary optimization (QUBO) is the mathematical formalism for phrasing and solving a class of optimization problems that are combinatorial in nature. Due to their natural equivalence with the two-dimensional Ising model for ferromagnetism in statistical mechanics, problems from the QUBO class can be solved on quantum annealing hardware. In this paper, we report a QUBO formulation of the problem of optimally controlling time-dependent traffic signals on an artificial grid-structured road network so as to ease the flow of traffic, and the use of D-Wave Systems' quantum annealer to solve it. Since current-generation D-Wave annealers have a limited number of qubits and limited inter-qubit connectivity, we adopt a hybrid (classical/quantum) approach to this problem. As traffic flow is a continuous and evolving phenomenon, we address this time-dependent problem by adopting a workflow that generates and solves multiple problem instances periodically.
quantum physics
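A toy version of such a QUBO, for two coordinated signals, solvable with D-Wave's open-source dimod package. The paper's actual objective, network and decomposition are not given in the abstract; the queue costs and coordination weight below are invented for illustration.

```python
import dimod

# binary x_i = 1 means east-west green at intersection i, x_i = 0 means
# north-south green, for the next phase
q_ns = [4, 1]                  # waiting north-south vehicles
q_ew = [1, 5]                  # waiting east-west vehicles
mu = 2.0                       # weight of the coordination ("green wave") term

Q = {}
for i in range(2):
    # unserved-queue cost q_ns[i]*x_i + q_ew[i]*(1 - x_i) contributes
    # (q_ns[i] - q_ew[i]) to the linear coefficient (constants dropped);
    # the penalty (x_0 - x_1)^2 = x_0 + x_1 - 2 x_0 x_1 adds mu per variable
    Q[(i, i)] = (q_ns[i] - q_ew[i]) + mu
Q[(0, 1)] = -2.0 * mu          # quadratic part of the mismatch penalty

bqm = dimod.BinaryQuadraticModel.from_qubo(Q)
best = dimod.ExactSolver().sample(bqm).first       # brute force, classical
print(best.sample, best.energy)
# here each signal serves its longer queue, since the queue costs dominate
# the coordination weight; on hardware a D-Wave sampler replaces ExactSolver
```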
The Maximum Mean Discrepancy (MMD) has found numerous applications in statistics and machine learning, most recently as a penalty in the Wasserstein Auto-Encoder (WAE). In this paper we compute closed-form expressions for estimating the Gaussian-kernel-based MMD between a given distribution and the standard multivariate normal distribution. This formula reveals a connection to the Baringhaus-Henze-Epps-Pulley (BHEP) statistic of the Henze-Zirkler test and provides further insights about the MMD. We introduce the standardized version of MMD as a penalty for the WAE training objective, allowing for a better interpretability of MMD values and more compatibility across different hyperparameter settings. Next, we propose using a version of batch normalization at the code layer; this has the benefits of making the kernel width selection easier, reducing the training effort, and preventing outliers in the aggregate code distribution. Our experiments on synthetic and real data show that the analytic formulation improves over the commonly used stochastic approximation of the MMD, and demonstrate that code normalization provides significant benefits when training WAEs.
statistics
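For the Gaussian kernel $k(x,y)=\exp(-\|x-y\|^2/(2\sigma^2))$, the expectations against $N(0,I_d)$ have standard closed forms, which yields a closed-form (biased, V-statistic) estimate of the squared MMD between a sample and the standard normal. The sketch below uses those standard integrals and is consistent with, though not necessarily identical to, the paper's estimator.

```python
import numpy as np

def mmd2_to_standard_normal(X, sigma2=1.0):
    """Closed-form squared MMD between the empirical distribution of X
    (n x d) and N(0, I_d), for k(x, y) = exp(-||x-y||^2 / (2*sigma2)).
    Uses the Gaussian integrals (s = sigma2):
      E_Y k(x, Y)       = (s/(s+1))^{d/2} * exp(-||x||^2 / (2(s+1)))
      E_{Y,Y'} k(Y, Y') = (s/(s+2))^{d/2}."""
    n, d = X.shape
    s = sigma2
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    term_xx = np.exp(-sq / (2 * s)).mean()     # includes i = j (V-statistic)
    term_xy = ((s / (s + 1)) ** (d / 2)
               * np.exp(-np.sum(X ** 2, axis=1) / (2 * (s + 1)))).mean()
    term_yy = (s / (s + 2)) ** (d / 2)
    return term_xx - 2 * term_xy + term_yy

rng = np.random.default_rng(8)
print(mmd2_to_standard_normal(rng.standard_normal((500, 2))))        # ~0
print(mmd2_to_standard_normal(rng.standard_normal((500, 2)) + 1.0))  # > 0
```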
We describe the implementation of sophisticated numerical techniques for general-relativistic magnetohydrodynamics simulations in the Athena++ code framework. Improvements over many existing codes include the use of advanced Riemann solvers and of staggered-mesh constrained transport. Combined with considerations for computational performance and parallel scalability, these allow us to investigate black hole accretion flows with unprecedented accuracy. The capability of the code is demonstrated by exploring magnetically arrested disks.
astrophysics
This paper considers a high dimensional linear regression model with correlated variables. A variety of methods have been developed in recent years, yet it is still challenging to maintain accurate estimation when there are complex correlation structures among the predictors and the response. We propose an adaptive and "reversed" penalty for regularization to solve this problem. This penalty does not shrink variables but instead focuses on removing the shrinkage bias and encouraging a grouping effect. Combining the $\ell_1$ penalty and the Minimax Concave Penalty (MCP), we propose two methods called Smooth Adjustment for Correlated Effects (SACE) and Generalized Smooth Adjustment for Correlated Effects (GSACE). Compared with the traditional adaptive estimator, the proposed methods are less influenced by the initial estimator and can reduce the false negatives of the initial estimation. The proposed methods can be seen as linear functions of the new penalty's tuning parameter, and are shown to estimate the coefficients accurately in both the extremely highly correlated variables situation and the weakly correlated variables situation. Under mild regularity conditions we prove that the methods satisfy a certain oracle property. We show by simulations and applications that the proposed methods often outperform other methods.
statistics
Relief-based algorithms have often been claimed to uncover feature interactions. However, it is still unclear whether and how interaction terms can be differentiated from marginal effects. In this paper, we propose the IMMIGRATE algorithm, which includes and trains weights for interaction terms. Besides applying the large margin principle, we focus on the robustness of the contributors of the margin and consider local and global information simultaneously. Moreover, IMMIGRATE has been shown to enjoy attractive properties, such as robustness and compatibility with Boosting. We evaluate our proposed method on several tasks, on which it achieves state-of-the-art results.
statistics
We consider the swampland distance and de Sitter conjectures, of respective order one parameters $\lambda$ and $c$. Inspired by the recent Trans-Planckian Censorship conjecture (TCC), we propose a generalization of the distance conjecture, which bounds $\lambda$ to be a half of the TCC bound for $c$, i.e. $\lambda \geq \frac{1}{2}\sqrt{\frac{2}{3}}$ in 4d. In addition, we propose a correspondence between the two conjectures, relating the tower mass $m$ on the one side to the scalar potential $V$ on the other side schematically as $m\sim |V|^{\frac{1}{2}}$, in the large distance limit. These proposals suggest a generalization of the scalar weak gravity conjecture, and are supported by a variety of examples. The lower bound on $\lambda$ is verified explicitly in many cases in the literature. The TCC bound on $c$ is checked as well on ten different no-go theorems, which are worked-out in detail, and $V$ is analysed in the asymptotic limit. In particular, new results on 4d scalar potentials from type II compactifications are obtained.
high energy physics theory
We consider confining strings in pure gluodynamics and its extensions with adjoint (s)quarks. We argue that there is a direct map between the set of bulk fields and the worldsheet degrees of freedom. This suggests a close link between the worldsheet $S$-matrix and parton scattering amplitudes. We report an amusing relation between the Polchinski--Strominger amplitude responsible for the breakdown of integrability on the string worldsheet and the Yang--Mills $\beta$-function \[ b_0={D_{cr}-D_{ph}\over 6}\;. \] Here $b_0=11/3$ is the one-loop $\beta$-function coefficient in the pure Yang--Mills theory, $D_{cr}=26$ is the critical dimension of bosonic strings and $D_{ph}=4$ is the dimensionality of the physical space-time we live in. A natural extension of this relation continues to hold in the presence of adjoint (s)quarks, connecting two of the most celebrated anomalies---the scale anomaly in quantum chromodynamics (QCD) and the Weyl anomaly in string theory.
high energy physics theory
Prior work has demonstrated that recurrent neural network architectures show promising improvements over other machine learning architectures when processing temporally correlated inputs, such as wireless communication signals. Additionally, recurrent neural networks typically process data on a sequential basis, enabling the potential for near real-time results. In this work, we investigate the novel usage of "just enough" decision making metrics for making decisions during inference based on a variable number of input symbols. Since some signals are more complex than others, due to channel conditions, transmitter/receiver effects, etc., being able to dynamically utilize just enough of the received symbols to make a reliable decision allows for more efficient decision making in applications such as electronic warfare and dynamic spectrum sharing. To demonstrate the validity of this concept, four approaches to making "just enough" decisions are considered in this work and each are analyzed for their applicability to wireless communication machine learning applications.
electrical engineering and systems science
Taking into account that glaucoma is the leading cause of blindness worldwide, we propose in this paper three different learning methodologies for glaucoma detection, in order to show that traditional machine-learning techniques can outperform deep-learning algorithms, especially when the image data set is small. The experiments were performed on a private database composed of 194 glaucomatous and 198 normal B-scans diagnosed by expert ophthalmologists. As a novelty, we only considered raw circumpapillary OCT images to build the predictive models, without using other expensive tests such as visual field and intraocular pressure measures. The results confirm that the proposed hand-driven learning model, based on novel descriptors, outperforms the automatic learning. Additionally, the hybrid approach consisting of a combination of both strategies reports the best performance, with an area under the ROC curve of 0.85 and an accuracy of 0.82 during the prediction stage.
electrical engineering and systems science
This note is a critical examination of the argument of Frauchiger and Renner (Nature Communications 9:3711 (2018)), in which they claim to show that three reasonable assumptions about the use of quantum mechanics jointly lead to a contradiction. It is shown that further assumptions are needed to establish the contradiction, and that each of these assumptions is invalid in some version of quantum mechanics.
quantum physics
This paper develops a data-driven toolkit for traffic forecasting using high-resolution (a.k.a. event-based) traffic data. This is the raw data obtained from fixed sensors in urban roads. Time series of such raw data exhibit heavy fluctuations from one time step to the next (typically on the order of 0.1-1 second). Short-term forecasts (10-30 seconds into the future) of traffic conditions are critical for traffic operations applications (e.g., adaptive signal control). But traffic forecasting tools in the literature deal predominantly with 3-5 minute aggregated data, where the typical signal cycle is on the order of 2 minutes. This renders such forecasts useless at the operations level. To this end, we model the traffic forecasting problem as a matrix completion problem, where the forecasting inputs are mapped to a higher dimensional space using kernels. The formulation allows us to capture not only nonlinear dependencies between forecasting inputs and outputs but also dependencies among the inputs. These dependencies correspond to correlations between different locations in the network. We further employ adaptive boosting to enhance the training accuracy and capture historical patterns in the data. The performance of the proposed methods is verified using high-resolution data obtained from a real-world traffic network in Abu Dhabi, UAE. Our experimental results show that the proposed method outperforms other state-of-the-art algorithms.
electrical engineering and systems science
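A bare-bones sketch of the kernelized forecasting idea (kernel ridge regression on lagged multi-location inputs); the matrix-completion formulation and the adaptive boosting described above are omitted, and the series below are synthetic.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(9)

# high-resolution occupancy-like series (one value per 0.5 s), two "locations"
t = np.arange(4000) * 0.5
series = np.vstack([np.sin(2 * np.pi * t / 120.0),
                    np.sin(2 * np.pi * (t - 10) / 120.0)]).T
series += 0.2 * rng.standard_normal(series.shape)

lag, horizon = 40, 20          # 20 s of joint history -> value 10 s ahead
Xf, yf = [], []
for i in range(lag, len(series) - horizon):
    Xf.append(series[i - lag:i].ravel())   # history of both locations,
                                           # capturing cross-location dependence
    yf.append(series[i + horizon, 0])      # future value at location 0
Xf, yf = np.array(Xf), np.array(yf)

split = 3000
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3)
model.fit(Xf[:split], yf[:split])
pred = model.predict(Xf[split:])
print("RMSE:", np.sqrt(np.mean((pred - yf[split:]) ** 2)))
```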
Tidal interactions can play an important role as compact white dwarf (WD) binaries are driven together by gravitational waves (GWs). This will modify the strain evolution measured by future space-based GW detectors and impact the potential outcome of the mergers. Surveys now and in the near future will generate an unprecedented population of detached WD binaries to constrain tidal interactions. Motivated by this, I summarize the deviations between a binary evolving under the influence of only GW emission and a binary that is also experiencing some degree of tidal locking. I present analytic relations for the first and second derivatives of the orbital period and the braking index. Measurements of these quantities will allow the inference of tidal interactions, even when the masses of the component WDs are not well constrained. Finally, I discuss tidal heating and how it can provide complementary information.
astrophysics
In this paper, the recurrence equations of an Ising model with three coupling constants on a third-order Cayley tree are obtained. Paramagnetic and ferromagnetic phases associated with the Ising model are characterized. Types of phases and partition functions corresponding to the model are rigorously studied. Exact solutions of the mentioned model are compared with the numerical results given in Ganikhodjaev et al. [J. Concr. Appl. Math., 2011, 9, No. 1, 26-34].
condensed matter
Variational quantum eigensolver (VQE) is promising to show quantum advantage on near-term noisy intermediate-scale quantum (NISQ) computers. One central problem of VQE is the effect of noise, especially the physical noise on realistic quantum computers. We study systematically the effect of noise on the VQE algorithm by performing numerical simulations with various local noise models, including amplitude damping, dephasing, and depolarizing noise. We show that the ground-state energy deviates from the exact value as the noise probability increases and that noise generally accumulates as the circuit depth increases. We build a noise model to capture the noise in a real quantum computer. Our numerical simulation is consistent with quantum experimental results on IBM Quantum computers accessed through the cloud. Our work sheds new light on practical research into noisy VQE. A deeper understanding of the noise effects in VQE may help to develop quantum error mitigation techniques on near-term quantum computers.
quantum physics
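To make the noise mechanism in the preceding abstract concrete, here is a self-contained toy: a one-parameter ansatz minimised over a single-qubit Hamiltonian, with a depolarizing channel applied to the ansatz state. The Hamiltonian and channel are illustrative stand-ins, not the models or devices studied in the paper.

```python
# Toy VQE with a depolarizing channel: the minimised energy drifts away from
# the exact ground energy as the noise probability p grows.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], float)
Z = np.array([[1, 0], [0, -1]], float)
H = Z + 0.5 * X                         # toy single-qubit Hamiltonian
E_exact = np.linalg.eigvalsh(H)[0]

def vqe_energy(p):
    best = np.inf
    for th in np.linspace(0, 2 * np.pi, 721):   # Ry(theta)|0> ansatz
        psi = np.array([np.cos(th / 2), np.sin(th / 2)])
        rho = np.outer(psi, psi)
        rho = (1 - p) * rho + p * I2 / 2        # depolarizing channel
        best = min(best, np.trace(rho @ H).real)
    return best

for p in (0.0, 0.05, 0.1, 0.2):
    print(f"p={p:.2f}  E_VQE={vqe_energy(p):+.4f}  exact={E_exact:+.4f}")
```

For this traceless Hamiltonian the minimised energy is simply $(1-p)E_{\rm exact}$, so the deviation grows linearly in the noise probability, in line with the trend the abstract describes.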
Maker-Breaker games are played on a hypergraph $(X,\mathcal{F})$, where $\mathcal{F} \subseteq 2^X$ denotes the family of winning sets. Both players alternately claim a predefined number of elements (called the bias) of the board $X$, and Maker wins the game if she is able to occupy any winning set $F \in \mathcal{F}$. These games are well studied when played on the complete graph $K_n$ or on a random graph $G_{n,p}$. In this paper we consider Maker-Breaker games played on randomly perturbed graphs instead. These graphs consist of the union of a deterministic graph $G_\alpha$ with minimum degree at least $\alpha n$ and a binomial random graph $G_{n,p}$. Depending on $\alpha$ and Breaker's bias $b$ we determine the order of the threshold probability for winning the Hamiltonicity game and the $k$-connectivity game on $G_{\alpha}\cup G_{n,p}$, and we discuss the $H$-game when $b=1$. Furthermore, we give optimal results for the Waiter-Client versions of all mentioned games.
mathematics
We present a highly efficient proximal Markov chain Monte Carlo methodology to perform Bayesian computation in imaging problems. Similarly to previous proximal Monte Carlo approaches, the proposed method is derived from an approximation of the Langevin diffusion. However, instead of the conventional Euler-Maruyama approximation that underpins existing proximal Monte Carlo methods, here we use a state-of-the-art orthogonal Runge-Kutta-Chebyshev stochastic approximation that combines several gradient evaluations to significantly accelerate its convergence speed, similarly to accelerated gradient optimisation methods. The proposed methodology is demonstrated via a range of numerical experiments, including non-blind image deconvolution, hyperspectral unmixing, and tomographic reconstruction, with total-variation and $\ell_1$-type priors. Comparisons with Euler-type proximal Monte Carlo methods confirm that the Markov chains generated with our method exhibit significantly faster convergence speeds, achieve larger effective sample sizes, and produce lower mean square estimation errors at equal computational budget.
statistics
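As background for the preceding abstract, the sketch below implements the conventional Euler-Maruyama-type proximal Langevin step (a MYULA-style sampler) that the paper's Runge-Kutta-Chebyshev scheme accelerates. The target, a Gaussian likelihood with an $\ell_1$ prior, and all step-size choices are illustrative assumptions.

```python
# Baseline proximal unadjusted Langevin (MYULA-type) sampler for
# pi(x) ~ exp(-f(x) - g(x)), f(x) = 0.5*||x - y||^2, g(x) = lam*||x||_1.
# This is the Euler-Maruyama variant that the paper improves upon.
import numpy as np

rng = np.random.default_rng(1)
d = 50
y = rng.normal(0, 1, d)              # hypothetical noisy observation
lam, mu = 0.5, 0.1                   # l1 weight, Moreau-Yosida parameter

grad_f = lambda x: x - y
prox_g = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

delta = 0.2 * mu                     # step size, small relative to mu
x = y.copy()
samples = []
for k in range(20000):
    drift = grad_f(x) + (x - prox_g(x, mu)) / mu   # grad of smoothed target
    x = x - delta * drift + np.sqrt(2 * delta) * rng.normal(size=d)
    if k > 5000:                     # discard burn-in
        samples.append(x.copy())

print("posterior mean, first 5 coords:", np.round(np.mean(samples, 0)[:5], 3))
```

The Runge-Kutta-Chebyshev idea is to replace the single gradient evaluation per step with several chained ones, enlarging the stability region and hence the usable step size.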
Partition functions of a canonical ensemble of non-interacting bound electrons are a key ingredient of the super-transition-array approach to the computation of radiative opacity. A few years ago, we published a robust and stable recursion relation for the calculation of such partition functions. In this paper, we propose an optimization of the latter method and explain how to implement it in practice. The formalism relies on the evaluation of elementary symmetric polynomials, which opens the way to further improvements.
condensed matter
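For readers unfamiliar with the object in the preceding abstract, the canonical partition function of non-interacting fermions is an elementary symmetric polynomial of the one-electron Boltzmann factors, computable by a textbook Newton-type recursion. The sketch below shows that generic recursion on a toy shell structure; it is not the stabilised, optimised variant that the paper develops (indeed, the alternating signs below are the source of the instability the authors address).

```python
# Canonical partition function Z_N of non-interacting bound electrons via the
# generic recursion Z_N = (1/N) * sum_k (-1)^(k+1) p_k Z_{N-k}.
import numpy as np

def canonical_Z(energies, degeneracies, beta, N):
    x = np.exp(-beta * np.asarray(energies, float))
    g = np.asarray(degeneracies, float)
    # power sums of the one-electron Boltzmann factors, degeneracy-weighted
    p = [np.sum(g * x**k) for k in range(N + 1)]
    Z = [1.0]                                        # Z_0 = 1
    for n in range(1, N + 1):
        Z.append(sum((-1) ** (k + 1) * p[k] * Z[n - k]
                     for k in range(1, n + 1)) / n)
    return Z[N]

# toy shell structure: energies in eV, degeneracies 2(2l+1), T = 20 eV
print(canonical_Z([0.0, 10.0, 25.0], [2, 6, 10], beta=1 / 20.0, N=4))
```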
This work has been carried out to simulate a Resistive Plate Chamber and corroborate the simulation with experimental measurements, in order to develop a numerical tool for studying the performance of the device for any gas mixture. This will allow us to explore the feasibility of operating these chambers in avalanche mode within the Iron Calorimeter setup at the India-based Neutrino Observatory with any eco-friendly substitute. The simulation considers a hydrodynamic model of charge transport to emulate the electronic and ionic growths in the device as a function of the applied voltage, which determines its working mode as either avalanche or streamer. For validation, the simulation results have been compared with compatible experimental data available in the literature.
physics
One of the last unexplored windows to the cosmos, the Dark Ages and Cosmic Dawn, can be opened using a simple low frequency radio telescope from the stable, quiet lunar farside to measure the Global 21-cm spectrum. This frontier remains an enormous gap in our knowledge of the Universe. Standard models of physics and cosmology are untested during this critical epoch. The messenger of information about this period is the 1420 MHz (21-cm) radiation from the hyperfine transition of neutral hydrogen, Doppler-shifted to low radio astronomy frequencies by the expansion of the Universe. The Global 21-cm spectrum uniquely probes the cosmological model during the Dark Ages plus the evolving astrophysics during Cosmic Dawn, yielding constraints on the first stars, on accreting black holes, and on exotic physics such as dark matter-baryon interactions. A single low frequency radio telescope can measure the Global spectrum between ~10-110 MHz because of the ubiquity of neutral hydrogen. Precise characterizations of the telescope and its surroundings are required to detect this weak, isotropic emission of hydrogen amidst the bright foreground Galactic radiation. We describe how two antennas will permit observations over the full frequency band: a pair of orthogonal wire antennas and a 0.3-m$^3$ patch antenna. A four-channel correlation spectropolarimeter forms the core of the detector electronics. Technology challenges include advanced calibration techniques to disentangle covariances between a bright foreground and a weak 21-cm signal, using techniques similar to those for the CMB, thermal management for temperature swings of >250C, and efficient power to allow operations through a two-week lunar night. This simple telescope sets the stage for a lunar farside interferometric array to measure the Dark Ages power spectrum.
astrophysics
The need for resiliency of electricity supply is increasing due to the increasing frequency of natural disasters---such as hurricanes---that disrupt supply from the power grid. Rooftop solar photovoltaic (PV) panels together with batteries can provide resiliency in many scenarios. Without intelligent and automated decision making that can trade off conflicting requirements, a large PV system and a large battery are needed to provide meaningful resiliency. By using forecasts of solar generation and household demand, an intelligent decision maker can operate the equipment (battery and critical loads) to ensure that the critical loads are serviced for the maximum duration possible. With the aid of such an intelligent control system, a smaller (and thus lower-cost) system can service the primary loads for the same duration that a much larger system would otherwise be needed to service. In this paper we propose such an intelligent control system. A model predictive control (MPC) architecture is used that exploits available measurements and forecasts to make optimal decisions for batteries and critical loads in real time. The optimization problem is formulated as a MILP (mixed integer linear program) due to the on/off decisions for the loads. Performance is compared with a non-intelligent baseline controller, for a PV-battery system sized carefully for a single-family house in Florida. Simulations are conducted for a one-week period during hurricane Irma in 2017. Simulations show that the cost of the PV+battery system needed to provide a given resiliency performance (the duration for which the primary load can be serviced successfully) can be halved by the proposed control system.
electrical engineering and systems science
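The sketch below illustrates the kind of MILP that the preceding abstract formulates: binary on/off decisions for a critical load coupled to battery state-of-charge dynamics over a forecast horizon. It assumes the PuLP package with its bundled CBC solver; the horizon, forecasts, efficiencies, and capacities are invented for illustration and are far simpler than the paper's model.

```python
# Toy MILP: maximise the hours a critical load is served from PV + battery
# during an outage. A sketch of the MPC stage problem, not the paper's model.
import pulp

T = 24
pv = [0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 5, 6, 6, 5, 4, 3, 2, 1, 0, 0, 0, 0, 0, 0]
load = [1.5] * T                       # critical load demand (kW)
cap, eta = 10.0, 0.95                  # battery capacity (kWh), efficiency

m = pulp.LpProblem("resilience", pulp.LpMaximize)
serve = [pulp.LpVariable(f"on_{t}", cat="Binary") for t in range(T)]
soc = [pulp.LpVariable(f"soc_{t}", lowBound=0, upBound=cap) for t in range(T + 1)]
spill = [pulp.LpVariable(f"spill_{t}", lowBound=0) for t in range(T)]

m += pulp.lpSum(serve)                 # objective: hours of service
m += soc[0] == 0.5 * cap
for t in range(T):
    # battery absorbs PV (with losses) and supplies the load when it is on;
    # 'spill' curtails PV that would overfill the battery
    m += soc[t + 1] == soc[t] + eta * pv[t] - load[t] * serve[t] - spill[t]

m.solve(pulp.PULP_CBC_CMD(msg=0))
print("hours served:", int(pulp.value(m.objective)))
print("schedule:", [int(s.value()) for s in serve])
```

In a receding-horizon MPC loop, only the first decision would be applied before re-solving with updated forecasts.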
Many research questions involve time-to-event outcomes that can be prevented from occurring due to competing events. In these settings, we must be careful about the causal interpretation of classical statistical estimands. In particular, estimands on the hazard scale, such as ratios of cause-specific or subdistribution hazards, are fundamentally hard to interpret causally. Estimands on the risk scale, such as contrasts of cumulative incidence functions, do have a causal interpretation, but they only capture the total effect of the treatment on the event of interest; that is, effects both through and outside of the competing event. To disentangle causal treatment effects on the event of interest and competing events, the separable direct and indirect effects were recently introduced. Here we provide new results on the estimation of direct and indirect separable effects in continuous time. In particular, we derive the nonparametric influence function in continuous time and use it to construct an estimator that has certain robustness properties. We also propose a simple estimator based on semiparametric models for the two cause-specific hazard functions. We describe the asymptotic properties of these estimators, and present results from simulation studies, suggesting that the estimators behave satisfactorily in finite samples. Finally, we re-analyze the prostate cancer trial from Stensrud et al. (2020).
statistics
Medical errors are a major public health concern and a leading cause of death worldwide. Many healthcare centers and hospitals use reporting systems where medical practitioners write a preliminary medical report and the report is later reviewed, revised, and finalized by a more experienced physician. The revisions range from stylistic to corrections of critical errors or misinterpretations of the case. Due to the large quantity of reports written daily, it is often difficult to manually and thoroughly review all the finalized reports to find such errors and learn from them. To address this challenge, we propose a novel ranking approach, consisting of textual and ontological overlaps between the preliminary and final versions of reports. The approach learns to rank the reports based on the degree of discrepancy between the versions. This allows medical practitioners to easily identify and learn from the reports in which their interpretation most substantially differed from that of the attending physician (who finalized the report). This is a crucial step towards uncovering potential errors and helping medical practitioners to learn from such errors, thus improving patient care in the long run. We evaluate our model on a dataset of radiology reports and show that our approach outperforms both previously proposed approaches and more recent language models by 4.5% to 15.4%.
computer science
This work studies an explicit embedding of the set of probability measures into a Hilbert space, defined using optimal transport maps from a reference probability density. This embedding linearizes to some extent the 2-Wasserstein space, and enables the direct use of generic supervised and unsupervised learning algorithms on measure data. Our main result is that the embedding is (bi-)H\"older continuous when the reference density is uniform over a convex set, and can be equivalently phrased as a dimension-independent H\"older-stability result for optimal transport maps.
statistics
The nanoscale interaction between single emitters and plasmonic structures is traditionally studied by relying on near-perfect, deterministic nanoscale control. This approach has ultra-low throughput, rendering systematic studies difficult or impossible. Here, we show that super-resolution microscopy in combination with data-driven statistical analysis allows studying near-field interactions of single molecules with resonant nanoantennas. We systematically tune the antennas' spectral resonances and show that emitters can be separated according to their coupling strength with said structures, which ultimately allows the reconstruction of 2D interaction maps around individual nanoantennas.
physics
A low-amplitude periodic signal in the radial velocity (RV) time series of Barnard's Star was recently attributed to a planetary companion with a minimum mass of ${\sim}$3.2 $M_\oplus$ at an orbital period of $\sim$233 days. The relatively long orbital period and the proximity of Barnard's Star to the Sun raise the question of whether the true mass of the planet can be constrained by accurate astrometric measurements. By combining the assumption of an isotropic probability distribution of the orbital orientation with the RV-analysis results, we calculated the probability density function of the astrometric signature of the planet. In addition, we reviewed the astrometric capabilities and limitations of current and upcoming astrometric instruments. We conclude that Gaia and the Hubble Space Telescope (HST) are currently the best-suited instruments to perform the astrometric follow-up observations. Taking the optimistic estimate of their single-epoch accuracy to be $\sim$30 $\mu$as, we find a probability of $\sim$10% to detect the astrometric signature of Barnard's Star b with $\sim$50 individual-epoch observations. In the case of no detection, the implied mass upper limit would be $\sim$8 $M_\oplus$, which would place the planet in the super-Earth mass range. In the next decade, observations with the Wide-Field Infrared Space Telescope (WFIRST) may increase the prospects of measuring the true mass of the planet to $\sim$99%.
astrophysics
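The probability calculation in the preceding abstract is easy to reproduce in rough outline: draw orbital inclinations isotropically, scale the minimum mass by $1/\sin i$, and convert to an astrometric signature. The parameter values below are approximate figures quoted in or implied by the abstract, and the 30 $\mu$as threshold is used only as a crude proxy for single-epoch detectability.

```python
# Monte Carlo of the astrometric signature of Barnard's Star b assuming an
# isotropic orbit orientation. All parameter values are approximate.
import numpy as np

rng = np.random.default_rng(2)
Mstar, d_pc = 0.16, 1.83                        # stellar mass (Msun), distance (pc)
a_au = (Mstar * (233 / 365.25) ** 2) ** (1 / 3) # Kepler's third law
msini = 3.2 * 3.003e-6                          # 3.2 Earth masses, in Msun

cos_i = rng.uniform(0, 1, 100_000)              # isotropic orientation
sin_i = np.sqrt(1 - cos_i**2)
alpha_uas = (msini / sin_i / Mstar) * (a_au / d_pc) * 1e6   # micro-arcsec

print(f"a = {a_au:.2f} au, median signature = {np.median(alpha_uas):.1f} uas")
print("fraction above 30 uas:", np.mean(alpha_uas > 30))
```

With these inputs the median signature lands near 15 $\mu$as and roughly a tenth of the draws exceed 30 $\mu$as, consistent in spirit with the $\sim$10% detection probability quoted above.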
Cell-cell adhesion is an inherently nonlocal phenomenon. Numerous partial differential equation models with nonlocal terms have recently been presented to describe this phenomenon, yet the mathematical properties of nonlocal adhesion models are not well understood. Here we consider a model with two kinds of nonlocal cell-cell adhesion, satisfying no-flux conditions in a multidimensional bounded domain. We show global-in-time well-posedness of the solution to this model and obtain the uniform boundedness of the solution.
mathematics
A notable challenge of planet formation is to find a path to directly form planetesimals from small particles. We aim to understand how drifting pebbles pile up in a protoplanetary disk with a non-uniform turbulence structure. We consider a disk structure in which the midplane turbulence viscosity is increasing with radius in protoplanetary disks as in the outer region of a dead zone. We perform 1D diffusion-advection simulations of pebbles that include back-reaction (the inertia) to radial drift and vertical/radial diffusion of pebbles for a given pebble-to-gas mass flux. We report a new mechanism, the "no-drift" runaway pile-up, leading to a runaway accumulation of pebbles in disks, thus favoring the formation of planetesimals by streaming and/or gravitational instabilities. This occurs when pebbles drifting in from the outer disk and entering a dead zone experience a decrease in vertical turbulence. The scale height of the pebble subdisk then decreases, and for small enough values of the turbulence in the dead zone and high values of the pebble to gas flux ratio, the back-reaction of pebbles on gas leads to a significant decrease in their drift velocity and thus their progressive accumulation. This occurs when the ratio of the flux of pebbles to that of the gas is large enough so that the effect dominates over any Kelvin-Helmholtz shear instability. This process is independent of the existence of a pressure bump.
astrophysics
We explore the chemodynamical properties of a sample of barred galaxies in the Auriga magneto-hydrodynamical cosmological zoom-in simulations, which form boxy/peanut (b/p) bulges, and compare these to the Milky Way (MW). We show that the Auriga galaxies which best reproduce the chemodynamical properties of stellar populations in the MW bulge have quiescent merger histories since redshift $z\sim3.5$: their last major merger occurs at $t_{\rm lookback}>12\,\rm Gyrs$, while subsequent mergers have a stellar mass ratio of $\leq$1:20, suggesting an upper limit of a few percent for the mass ratio of the recently proposed Gaia Sausage/Enceladus merger. These Auriga MW-analogues have a negligible fraction of ex-situ stars in the b/p region ($<1\%$), with flattened, thick disc-like metal-poor stellar populations. The average fraction of ex-situ stars in the central regions of all Auriga galaxies with b/p's is 3% -- significantly lower than in those which do not host a b/p or a bar. While the central regions of these barred galaxies contain the oldest populations, they also have stars younger than 5Gyrs (>30%) and exhibit X-shaped age and abundance distributions. Examining the discs in our sample, we find that in some cases a star-forming ring forms around the bar, which alters the metallicity of the inner regions of the galaxy. Further out in the disc, bar-induced resonances lead to metal-rich ridges in the $V_{\phi}-r$ plane -- the longest of which is due to the Outer Lindblad Resonance. Our results suggest the Milky Way has an uncommonly quiet merger history, which leads to an essentially in-situ bulge, and highlight the significant effects the bar can have on the surrounding disc.
astrophysics
The recent observations of the interstellar objects 1I/Oumuamua and 2I/Borisov crossing the solar system have opened new opportunities for planetary science and planetary defense. As the first confirmed objects originating outside of the solar system, they raise myriad questions about their origin, including where they came from, how they got here, and what they are composed of. There is also a need to be cognizant of the potential danger of impact, especially when such interstellar objects pass close to the Earth. Specifically, Oumuamua, which was detected only after its perihelion passage, passed by the Earth at around 0.2 AU with an estimated excess speed of 60 km/s relative to the Earth. Without enough forewarning time, a collision with such a high-speed object could pose a catastrophic danger to all life on Earth. Such challenges underscore the importance of detection and exploration systems to study these interstellar visitors. The detection system can include a spacecraft constellation with zenith-pointing telescope spacecraft. After an event is detected, a spacecraft swarm can be deployed from Earth to fly by the visitor. The flyby can then be designed to perform a proximity operation of interest. This work aims to develop algorithms to design these swarm missions through the IDEAS (Integrated Design Engineering & Automation of Swarms) architecture. Specifically, we develop automated algorithms to design an Earth-based detection constellation and a spacecraft swarm that generates detailed surface maps of the visitor during the rendezvous, along with their heliocentric cruise trajectories.
astrophysics
In this paper, we study the dynamics of fluids in porous media governed by Darcy's law: the Muskat problem. We consider the setting of two immiscible fluids of different densities and viscosities under the influence of gravity in which one fluid is completely surrounded by the other. This setting is gravity unstable because along a portion of the interface, the denser fluid must be above the other. Surprisingly, even without capillarity, the circle-shaped bubble is a steady state solution moving with vertical constant velocity determined by the density jump between the fluids. Taking advantage of our discovery of this steady state, we are able to prove global in time existence and uniqueness of dynamic bubbles of nearly circular shapes under the influence of surface tension. We prove this global existence result for low regularity initial data. Moreover, we prove that these solutions are instantly analytic and decay exponentially fast in time to the circle.
mathematics
Bi-periodic patterns of waves that propagate in the x direction with amplitude variation in the y direction are generated in a laboratory. The amplitude variation in the y direction is studied within the framework of the vector (vNLSE) and scalar (sNLSE) nonlinear Schrodinger equations using the uniform-amplitude, Stokes-like solution of the vNLSE and the Jacobi elliptic sine function solution of the sNLSE. The wavetrains are generated using the Stokes-like solution of vNLSE; however, a comparison of both predictions shows that while they both do a reasonably good job of predicting the observed amplitude variation in y, the comparison with the elliptic function solution of the sNLSE has significantly less error. Additionally, for agreement with the vNLSE solution, a third harmonic in y term from a Stokes-type expansion of interacting, symmetric wavetrains must be included. There is no evidence of instability growth in the x-direction, consistent with the work of Segur and colleagues, who showed that dissipation stabilizes the modulational instability. There is some extra amplitude variation in y, which is examined via a qualitative stability calculation that allows symmetry breaking in that direction.
physics
We review some recent work by Carone, Erlich and Vaman on composite gravitons in metric-independent quantum field theories, with the aim of clarifying a number of basic issues. Focusing on a theory of scalar fields presented previously in the literature, we clarify the meaning of the tunings required to obtain a massless graviton. We argue that this formulation can be interpreted as the massless limit of a theory of massive composite gravitons in which the graviton mass term is not of Pauli-Fierz form. We then suggest closely related theories that can be defined without such a limiting procedure (and hence without worry about possible ghosts). Finally, we comment on the importance of finding a compelling ultraviolet completion for models of this type, and discuss some possibilities.
high energy physics theory
Machine learning models are increasingly used in many engineering fields thanks to widespread digital data, growing computing power, and advanced algorithms. Artificial neural networks (ANNs) are the most popular machine learning models of recent years. Although many ANN models have been used in the design and analysis of composite materials and structures, there are still some unsolved issues that hinder the acceptance of ANN models in the practical design and analysis of composite materials and structures. Moreover, emerging machine learning techniques are posing new opportunities and challenges for the data-based design paradigm. This paper aims to give a state-of-the-art literature review of ANN models in the nonlinear constitutive modeling, multiscale surrogate modeling, and design optimization of composite materials and structures. The review focuses on the general frameworks and the benefits of ANN models for the above problems. Moreover, challenges and opportunities in each key problem are identified and discussed. This paper is expected to open the discussion of future research scope and new directions to enable efficient, robust, and accurate data-driven design and analysis of composite materials and structures.
condensed matter
We introduce a 3-Higgs Doublet Model (3HDM) with two Inert (or dark) scalar doublets and an active Higgs one, hence termed I(2+1)HDM, in the presence of a discrete $Z_3$ symmetry acting upon the three doublet fields. We show that such a construct yields a Dark Matter (DM) sector with two mass-degenerate states of opposite CP parity, both of which contribute to DM dynamics, which we call \textit{Hermaphrodite DM}, distinguishable from a (single) complex DM candidate. We show that the relic density contributions of both states are equal, saturating the observed relic density compliant with (in)direct searches for DM as well as other experimental data impinging on both the dark and Higgs sectors of the model, chiefly, in the form of Electro-Weak Precision Observables (EWPOs), Standard Model (SM)-like Higgs boson measurements at the Large Hadron Collider (LHC) and void searches for additional (pseudo)scalar states at the CERN machine and previous colliders.
high energy physics phenomenology
Recently, Chi Xu et al. predicted phase-filling singularities (PFS) in the optical dielectric function (ODF) of highly doped $n$-type Ge and confirmed in experiment the PFS-associated $E_{1}+\Delta_{1}$ transition by advanced \textit{in situ} doping technology [Phys. Rev. Lett. 118, 267402 (2017)], but the strong overlap between the $E_{1}$ and $E_{1}+\Delta_{1}$ optical transitions made the PFS-associated $E_{1}$ transition, which occurs at high doping concentration, unobservable in their measurement. In this work, we investigate the PFS of highly doped $n$-type Ge in the presence of uniaxial and biaxial tensile strain along the [100], [110] and [111] crystal orientations. Compared with relaxed bulk Ge, tensile strain along [100] increases the energy separation between the $E_{1}$ and $E_{1}+\Delta_{1}$ transitions, making it possible to reveal the PFS-associated $E_{1}$ transition in optical measurements. Besides, the application of tensile strain along [110] and [111] offers the possibility of lowering the doping concentration required for the PFS to be observed, resulting in new additional features associated with the $E_{1}+\Delta_{1}$ transition at inequivalent $L$-valleys. These theoretical predictions, with more distinguishable optical transition features in the presence of uniaxial and biaxial tensile strain, can be conveniently tested in experiment, providing new insights into the excited states in heavily doped semiconductors.
condensed matter
Gas Electron Multipliers (GEMs) can be produced in large foils and molded into different shapes. The possibility of creating cylindrical layers has opened the opportunity to use such detectors as internal trackers at collider experiments. One crucial requirement is a low material budget in the active area, so the supporting structures of the anode and cathode must be light. The KLOE2 collaboration built the first cylindrical GEM detector, using honeycomb material with carbon-fiber skins produced at high temperature. BESIII is developing an innovative CGEM detector with charge and time readout. Among several innovative features, its mechanical structure was designed as a sandwich of Rohacell, a PMI foam, and Kapton. After the transportation of a first production batch of detectors from the construction site in Italy to the Institute of High Energy Physics in Beijing, malfunctions compatible with deformation of the GEM foils inside the detector were observed in some of them. We performed a detailed study by means of an industrial CT scanner available in the IHEP laboratory and an autopsy of the damaged detectors. In this talk, we review the construction process, the shipment, and the findings of the investigation. A new supporting structure of carbon fiber and honeycomb, assembled at room temperature, has been designed and developed. The thickness of the carbon fiber is small enough to keep the material budget of a single detector layer below 0.5$\%$ of a radiation length, while the mechanical robustness exceeds the requirements of a detector for HEP. A first detector with such a mechanical structure has been built and shipped to IHEP; preliminary results from the operation of the detectors (e.g. current stability, discharges, temperature and humidity correlations) will also be presented in this talk.
physics
Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation. However, most of these works give few (or no) guarantees for the LP solutions on instances that do not satisfy the relatively strict perturbation stability definitions. In this work, we go beyond these stability results by showing that the LP approximately recovers the MAP solution of a stable instance even after the instance is corrupted by noise. This "noisy stable" model realistically fits with practical MAP inference problems: we design an algorithm for finding "close" stable instances, and show that several real-world instances from computer vision have nearby instances that are perturbation stable. These results suggest a new theoretical explanation for the excellent performance of this LP relaxation in practice.
statistics
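To make the object in the preceding abstract concrete, here is the standard local-polytope LP relaxation for MAP inference in a tiny Potts model, solved with scipy. The instance is a frustrated 3-node cycle with antiferromagnetic Potts potentials, deliberately chosen so the relaxation is loose: exactly the kind of non-stable instance on which LP guarantees fail. This is the generic relaxation, not the authors' stability analysis or algorithm.

```python
# Local-polytope LP relaxation of MAP inference for a 2-label Potts model on
# a frustrated triangle. Every 2-colouring of a triangle has cost >= 1, but
# the LP attains 0 with fractional (1/2, 1/2) marginals.
import numpy as np
from scipy.optimize import linprog

K, nodes, edges = 2, [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
theta_u = np.zeros((3, K))               # unary costs (none)
theta_p = np.eye(K)                      # Potts penalty for equal labels

nu = 3 * K                               # layout: node then edge marginals
uvar = lambda i, a: i * K + a
pvar = lambda e, a, b: nu + e * K * K + a * K + b
c = np.concatenate([theta_u.ravel()] + [theta_p.ravel() for _ in edges])

A_eq, b_eq = [], []
for i in nodes:                          # node marginals sum to 1
    row = np.zeros(c.size); row[[uvar(i, a) for a in range(K)]] = 1
    A_eq.append(row); b_eq.append(1.0)
for e, (i, j) in enumerate(edges):       # edge marginals marginalise correctly
    for a in range(K):
        row = np.zeros(c.size)
        row[[pvar(e, a, b) for b in range(K)]] = 1; row[uvar(i, a)] = -1
        A_eq.append(row); b_eq.append(0.0)
    for b in range(K):
        row = np.zeros(c.size)
        row[[pvar(e, a, b) for a in range(K)]] = 1; row[uvar(j, b)] = -1
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1))
print("LP value:", round(res.fun, 6), "(integer optimum is 1.0)")
print("node marginals:", res.x[:nu].round(3))
```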
Quantum many-body physics simulations with Matrix Product States can often be accelerated if the quantum symmetries present in the system are explicitly taken into account. Conventionally, quantum symmetries have to be determined beforehand when constructing the tensors for the Matrix Product States algorithm. In this work, we present a Matrix Product States algorithm with an adaptive $U(1)$ symmetry. This algorithm can take into account, or benefit from, $U(1)$ or $Z_2$ symmetries when they are present, or analyze the non-symmetric scenario when the symmetries are broken, without any external alteration of the code. To give some concrete examples, we consider an XYZ model and show the insight that can be gained by (i) searching for the ground state and (ii) evolving in time after a symmetry-changing quench. To show the generality of the method, we also consider an interacting bosonic system under the effect of a symmetry-breaking dissipation.
quantum physics
With the explosive growth of transaction activity in online payment systems, effective and real-time regulation becomes a critical problem for payment service providers. Thanks to the rapid development of artificial intelligence (AI), AI-enabled regulation emerges as a promising solution. One main challenge of AI-enabled regulation is how to utilize multimedia information, i.e., multimodal signals, in Financial Technology (FinTech). Inspired by the attention mechanism in natural language processing, we propose a novel cross-modal and intra-modal attention network (CIAN) to investigate the relation between text and transactions. More specifically, we integrate the text and transaction information to enhance text-trade joint-embedding learning, which clusters positive pairs and pushes negative pairs away from each other. Another challenge of intelligent regulation is the interpretability of complicated machine learning models. To meet the requirements of financial regulation, we design a CIAN-Explainer to interpret how the attention mechanism interacts with the original features, which is formulated as a low-rank matrix approximation problem. With real datasets from the largest online payment system, WeChat Pay of Tencent, we conduct experiments to validate the practical application value of CIAN, where our method outperforms the state-of-the-art methods.
electrical engineering and systems science
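A minimal sketch of the joint-embedding objective described in the preceding abstract: an InfoNCE-style contrastive loss that pulls matched text-transaction pairs together and pushes mismatched pairs apart. The encoders are stubbed with random linear projections; CIAN's cross-modal and intra-modal attention, and its explainer, are not reproduced.

```python
# Contrastive text-transaction embedding loss: row i of each batch is a
# matched pair; all other rows serve as negatives.
import numpy as np

rng = np.random.default_rng(3)
n, d_text, d_txn, d_emb = 8, 32, 16, 10

text_feats = rng.normal(size=(n, d_text))    # hypothetical text features
txn_feats = rng.normal(size=(n, d_txn))      # matching transaction features
W_text = rng.normal(size=(d_text, d_emb)) / np.sqrt(d_text)
W_txn = rng.normal(size=(d_txn, d_emb)) / np.sqrt(d_txn)

def norm_rows(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def info_nce(u, v, tau=0.1):
    logits = norm_rows(u) @ norm_rows(v).T / tau
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_sm = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_sm))                # matches on the diagonal

loss = info_nce(text_feats @ W_text, txn_feats @ W_txn)
print(f"loss on random embeddings: {loss:.3f} (chance level ~ ln {n} = {np.log(n):.3f})")
```

Training would minimise this loss over the encoder parameters, clustering genuine text-transaction pairs in the shared space.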
Context: The instrumental profile (IP) is a basic property of a spectrograph. Accurate IP characterisation is a prerequisite for an accurate wavelength solution. It also facilitates new spectral acquisition methods such as forward modeling and deconvolution. Aims: We investigate an IP modeling method for fibre-fed echelle spectrographs using the emission lines of a ThAr lamp, and explore a method to evaluate the accuracy of IP characterisation. Methods: The backbone-residual (BR) model, defined as the sum of a backbone function and a residual function, is put forward and tested on the fibre-fed High Resolution Spectrograph (HRS) at the Chinese Xinglong 2.16-m Telescope. The backbone function is a bell-shaped function describing the main component and the spatial variation of the IP. The residual function, expressed as a cubic spline, accounts for the difference between the bell-shaped function and the actual IP. The method of evaluating the accuracy of IP characterisation is based on spectral reconstruction and Monte Carlo simulation. Results: The IP of HRS is characterised with the BR model, and the accuracy of the characterised IP reaches 0.006 of the peak value of the backbone function. This result demonstrates that accurate IP characterisation has been achieved on HRS with the BR model, and that the BR model is an excellent choice for accurate IP characterisation of fibre-fed echelle spectrographs.
astrophysics
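The backbone-plus-residual decomposition in the preceding abstract can be illustrated on a single synthetic emission line. Below, a Gaussian stands in for the bell-shaped backbone (the paper's backbone is more general and spatially varying) and a smoothing cubic spline absorbs the residual; the line shape and noise level are invented.

```python
# Schematic backbone-residual (BR) fit of an instrumental profile from one
# emission line: Gaussian backbone + cubic-spline residual.
import numpy as np
from scipy.optimize import curve_fit
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
x = np.linspace(-5, 5, 201)                   # pixels around the line centre
true_ip = np.exp(-0.5 * x**2) * (1 + 0.05 * np.sin(2 * x))  # asymmetric IP
line = true_ip + rng.normal(0, 0.003, x.size) # observed ThAr line

gauss = lambda x, a, mu, sig: a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
popt, _ = curve_fit(gauss, x, line, p0=[1, 0, 1])
backbone = gauss(x, *popt)

# smoothing parameter set near the expected sum of squared noise
resid = UnivariateSpline(x, line - backbone, k=3, s=x.size * 0.003**2)
ip_model = backbone + resid(x)

rms = np.sqrt(np.mean((ip_model - true_ip) ** 2))
print(f"IP model RMS error: {rms:.4f} of peak {popt[0]:.3f}")
```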
The Ireland and Northern Ireland power system is in a period of rapid transition from conventional generation to renewable generation and has seen a rapid increase in large demand sites requiring connection into the backbone transmission system. The role of EirGrid and SONI as Transmission System Operators in Ireland and Northern Ireland is to operate, maintain and develop the electricity transmission network. EirGrid ensures that new transmission projects are developed in a way that balances technical, economic, community and other stakeholder considerations. This has resulted in much more detailed evaluation of planning options to maximize utilisation of the existing network, which may include, but is not limited to, uprating the capacity of the existing transmission system, mainly through thermal or voltage uprates of existing circuits. Another method to increase network utilisation is to strategically deploy Power Flow Control devices to relieve system overloads and maximise network transfer capacities. EirGrid has developed a new grid development strategy that places particular emphasis on identifying technologies to help resolve network issues. This paper presents the study findings for the application of a power flow controller (PFC) to relieve system issues.
electrical engineering and systems science
This paper addresses the so-called inverse problem which consists in searching for (possibly multiple) parent target Hamiltonian(s), given a single quantum state as input. Starting from $\Psi_0$, an eigenstate of a given local Hamiltonian $\mathcal{H}_0$, we ask whether or not there exists another parent Hamiltonian $\mathcal{H}_\mathrm{P}$ for $\Psi_0$, with the same local form as $\mathcal{H}_0$. Focusing on one-dimensional quantum disordered systems, we extend the recent results obtained for Bose-glass ground states [M. Dupont and N. Laflorencie, Phys. Rev. B 99, 020202(R) (2019)] to Anderson localization, and the many-body localization (MBL) physics occurring at high energy. We generically find that any localized eigenstate is a very good approximation for an eigenstate of a distinct parent Hamiltonian, with an energy variance $\sigma_\mathrm{P}^2(L)=\langle\mathcal{H}_\mathrm{P}^2\rangle_{\Psi_0}-\langle\mathcal{H}_\mathrm{P}\rangle_{\Psi_0}^2$ vanishing as a power law of system size $L$. This decay is microscopically related to a chain-breaking mechanism, also signalled by bottlenecks of vanishing entanglement entropy. A similar phenomenology is observed for both Anderson and MBL. In contrast, delocalized ergodic many-body eigenstates uniquely encode the Hamiltonian in the sense that $\sigma_\mathrm{P}^2(L)$ remains finite at the thermodynamic limit, i.e., $L\to+\infty$. As a direct consequence, the ergodic-MBL transition can be very well captured from the scaling of $\sigma_\mathrm{P}^2(L)$.
condensed matter
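The central diagnostic in the preceding abstract is the energy variance of a fixed state with respect to a candidate parent Hamiltonian. The sketch below computes it by exact diagonalization for a small random-field Heisenberg chain: an eigenstate of one disorder realization is tested against a slightly perturbed Hamiltonian. The model, size, and disorder strengths are illustrative only.

```python
# sigma_P^2 = <H_P^2> - <H_P>^2 for an eigenstate psi of H_0, evaluated with
# a perturbed candidate parent Hamiltonian H_P. L = 8 spin-1/2 chain.
import numpy as np

rng = np.random.default_rng(5)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
L = 8

def op(o, i):
    mats = [np.eye(2)] * L
    mats[i] = o
    out = mats[0]
    for mm in mats[1:]:
        out = np.kron(out, mm)
    return out

def heisenberg(h):
    H = np.zeros((2**L, 2**L), complex)
    for i in range(L - 1):
        for s in (sx, sy, sz):
            H += op(s, i) @ op(s, i + 1)
    for i in range(L):
        H += h[i] * op(sz, i)
    return H

h0 = rng.uniform(-5, 5, L)                  # strong disorder, MBL-like
E, V = np.linalg.eigh(heisenberg(h0))
psi = V[:, len(E) // 2]                     # mid-spectrum (high-energy) state

Hp = heisenberg(h0 + rng.normal(0, 0.1, L)) # distinct candidate parent H
e1 = np.vdot(psi, Hp @ psi).real
e2 = np.vdot(psi, Hp @ (Hp @ psi)).real
print(f"sigma_P^2 = {e2 - e1**2:.3e}")
```

At strong disorder the variance comes out small, since localized eigenstates have nearly definite local magnetizations; repeating this at weak disorder gives markedly larger values, mirroring the ergodic-MBL contrast drawn above.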
The nonperturbative renormalization group has been considered a solid framework to investigate fixed points and critical exponents for matrix and tensor models, expected to correspond to the so-called double scaling limit. In this paper, we focus on matrix models and address the question of the compatibility between the approximations used to solve the exact renormalization group equation and the modified Ward identities coming from the regulator. We show in particular that the standard local potential approximation strongly violates the Ward identities, especially in the vicinity of the interacting fixed point. Extending the theory space to include derivative couplings, we recover an interacting fixed point with a critical exponent not so far from the exact result, but with a nonzero value for the derivative couplings, suggesting a strong dependence on the regulator. Finally, we consider a modified regulator, allowing one to keep the flow close to the ultralocal region, and recover the results of the literature up to a slight improvement.
high energy physics theory
This article applies the methods of analytic philosophy to the question of how the concept of time can be represented as a symbol reflecting the foundations of the modern natural sciences, social sciences and humanities. The main methods used are speculative analysis and modeling. The symbolic meaning of the concept of time demonstrates preconditions for the organization of the foundations of the natural sciences, as well as of social and humanitarian knowledge. Judgments about the meaning of time reveal the essence of the problem in two respects, concerning the dissociation of foundations in the modern philosophy of physics and the philosophical analysis of the humanities. 1) The formation of the image of human nature in contemporary philosophy reveals the special role of the concept of time in epistemology and the philosophy of science. 2) This research reveals a perspective for understanding natural and cultural processes that is based on the unification of branches of science. As a result, the research shows a basis for communication between the natural sciences and social and humanitarian knowledge. In this way, the problem of whether time is a natural process or merely a human invention can be addressed. In general, time is not exhausted by either understanding, and acts as one of the artificial measures people apply. In the same way, a solution to another problem is reached: the modern discrepancy between the separate foundations of the sciences decreases considerably, if it does not disappear completely.
physics
The blossoming field of joint gravitational wave and electromagnetic (GW-EM) astronomy is one of the most promising in astronomy. The first, and only, joint GW-EM event GW170817 provided remarkable science returns that continue to this day. Continued growth in this field requires increasing the sample size of joint GW-EM detections. In this white paper, we outline the case for using some percentage of LSST survey time for dedicated target-of-opportunity follow-up of GW triggers in order to efficiently and rapidly identify optical counterparts. We show that the timeline for the LSST science survey is well matched to the planned improvements to ground-based GW detectors in the next decade. LSST will become particularly crucial in the latter half of the 2020s as more and more distant GW sources are detected. Lastly, we highlight some of the key science goals that can be addressed by a large sample of joint GW-EM detections.
astrophysics
We investigate a family of quasiperiodic continuous elastic beams, the topological properties of their vibrational spectra, and their relation to the existence of localized modes. We specifically consider beams featuring arrays of ground springs at locations determined by projecting from a circle onto an underlying periodic system. A family of periodic and quasiperiodic structures is obtained by smoothly varying a parameter defining such projection. Numerical simulations show the existence of vibration modes that first localize at a boundary, and then migrate into the bulk as the projection parameter is varied. Explicit expressions predicting the change in the density of states of the bulk define topological invariants that quantify the number of modes spanning a gap of a finite structure. We further demonstrate how modulating the phase of the ground springs distribution causes the topological states to undergo an edge-to-edge transition. The considered configurations and topological studies provide a framework for inducing localized modes in continuous elastic structural components through globally spanning, deterministic perturbations of periodic patterns defined by the considered projection operations.
condensed matter
Localization can be achieved by different sensors and techniques such as a global positioning system (GPS), WiFi, ultrasonic sensors, and cameras. In this paper, we focus on laser-based localization methods for unmanned aerial vehicle (UAV) applications in GPS-denied environments such as deep tunnel systems. In addition to a low-cost 2D LiDAR for the planar axes, a single-axis LiDAR for the vertical axis as well as an inertial measurement unit (IMU) is used to increase the reliability and accuracy of the localization performance. We present a comparative analysis of three selected laser-based simultaneous localization and mapping (SLAM) approaches: (i) Hector SLAM; (ii) Gmapping; and (iii) Cartographer. These algorithms have been implemented and tested through real-world experiments. The results are compared with ground truth data, and the experiments are available at https://youtu.be/kQc3mJjw_mw.
computer science
Representations in the form of Symmetric Positive Definite (SPD) matrices have been popularized in a variety of visual learning applications due to their demonstrated ability to capture rich second-order statistics of visual data. There exist several similarity measures for comparing SPD matrices with documented benefits. However, selecting an appropriate measure for a given problem remains a challenge and, in most cases, is the result of a trial-and-error process. In this paper, we propose to learn similarity measures in a data-driven manner. To this end, we capitalize on the $\alpha\beta$-log-det divergence, which is a meta-divergence parametrized by scalars $\alpha$ and $\beta$, subsuming a wide family of popular information divergences on SPD matrices for distinct and discrete values of these parameters. Our key idea is to cast these parameters in a continuum and learn them from data. We systematically extend this idea to learn vector-valued parameters, thereby increasing the expressiveness of the underlying non-linear measure. We conjoin the divergence learning problem with several standard tasks in machine learning, including supervised discriminative dictionary learning and unsupervised SPD matrix clustering. We present Riemannian gradient descent schemes for optimizing our formulations efficiently, and show the usefulness of our method on eight standard computer vision tasks.
computer science
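For concreteness, here is one common closed form of the divergence named in the preceding abstract, evaluated through generalized eigenvalues. The parametrization follows the $\alpha\beta$-log-det divergence of Cichocki et al. as I understand it, so treat the exact formula as an assumption; the paper's actual contribution, learning $\alpha$ and $\beta$ (or vector-valued versions) from data, is not shown.

```python
# Alpha-beta log-det divergence between SPD matrices, eigenvalue form:
# D(X||Y) = (1/(a*b)) * sum_i log((a*l_i^b + b*l_i^(-a)) / (a + b)),
# with l_i the generalized eigenvalues of (X, Y) and a, b nonzero.
import numpy as np
from scipy.linalg import eigh

def ab_logdet(X, Y, a, b):
    lam = eigh(X, Y, eigvals_only=True)     # generalized eigenvalues, > 0
    return np.sum(np.log((a * lam**b + b * lam**(-a)) / (a + b))) / (a * b)

rng = np.random.default_rng(6)
A = rng.normal(size=(5, 5)); X = A @ A.T + 5 * np.eye(5)
B = rng.normal(size=(5, 5)); Y = B @ B.T + 5 * np.eye(5)

print(ab_logdet(X, Y, 0.5, 0.5))            # a generic member of the family
print(ab_logdet(X, X, 0.7, 0.3))            # identical inputs give 0
```

Casting $(\alpha,\beta)$ as continuous learnable parameters, as the abstract proposes, then amounts to differentiating this expression through the eigenvalues.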
The three-loop QED mass-dependent contributions to the $g-2$ of each of the charged leptons with two internal closed fermion loops, sometimes called $A^{(6)}_3\left(\frac{m_1}{m_2}, \frac{m_1}{m_3}\right)$ in the $g-2$ literature, are revisited using the Mellin-Barnes (MB) representation technique. Results for the muon and $\tau$ lepton anomalous magnetic moments $A^{(6)}_{3,\mu}$ and $A^{(6)}_{3,\tau}$, which were known as series expansions in the lepton mass ratios up to the first few terms only, are extended to their exact expressions. The contribution to the anomalous magnetic moment of the electron $A^{(6)}_{3,e}$ is also explicitly given in closed form. In addition to this, we show that the different series representations derived from the MB representation collectively converge for all possible values of the masses. Such unexpected behavior is related to the fact that these series bring into play double hypergeometric series that belong to a class of Kamp\'e de F\'eriet series which we prove to have the same simple convergence and analytic continuation properties as the Appell $F_1$ double hypergeometric series.
high energy physics phenomenology
Robustness to adversarial attacks is an important concern due to the fragility of deep neural networks to small perturbations and has received an abundance of attention in recent years. Distributionally Robust Optimization (DRO), a particularly promising way of addressing this challenge, studies robustness via divergence-based uncertainty sets and has provided valuable insights into robustification strategies such as regularization. In the context of machine learning, the majority of existing results have chosen $f$-divergences, Wasserstein distances and more recently, the Maximum Mean Discrepancy (MMD) to construct uncertainty sets. We extend this line of work for the purposes of understanding robustness via regularization by studying uncertainty sets constructed with Integral Probability Metrics (IPMs) - a large family of divergences including the MMD, Total Variation and Wasserstein distances. Our main result shows that DRO under \textit{any} choice of IPM corresponds to a family of regularization penalties, which recover and improve upon existing results in the setting of MMD and Wasserstein distances. Due to the generality of our result, we show that other choices of IPMs correspond to other commonly used penalties in machine learning. Furthermore, we extend our results to shed light on adversarial generative modelling via $f$-GANs, constituting the first study of distributional robustness for the $f$-GAN objective. Our results unveil the inductive properties of the discriminator set with regards to robustness, allowing us to give positive comments for several penalty-based GAN methods such as Wasserstein-, MMD- and Sobolev-GANs. In summary, our results intimately link GANs to distributional robustness, extend previous results on DRO and contribute to our understanding of the link between regularization and robustness at large.
statistics
The absolute/relative debate on the nature of space and time is ongoing for thousands of years. Here we attempt to investigate space and time from the information theoretic point of view to understand spatial and temporal correlations under the relative assumption. Correlations, as a measure of relationship between two quantities, do not distinguish space and time in classical probability theory; quantum correlations in space are well-studied but temporal correlations are not well understood. The thesis investigates quantum correlations in space-time, by treating temporal correlations equally in form as spatial correlations and unifying quantum correlations in space and time. In particular, we follow the pseudo-density matrix formalism in which quantum states in spacetime are properly defined by correlations from measurements. We first review classical correlations, quantum correlations in space and time, to motivate the pseudo-density matrix formalism in finite dimensions. Next we generalise the pseudo-density matrix formulation to the Gaussian case, general continuous variables via Wigner representations, and general measurement processes like weak measurements. Then we compare the pseudo-density matrix formalism with other spacetime formulations: indefinite causal structures, consistent histories, generalised non-local games, out-of-time-order correlation functions, and path integrals. We argue that in non-relativistic quantum mechanics, different spacetime formulations are closely related via quantum correlations, except path integrals. Finally, we apply the pseudo-density matrix formulation to time crystals. By defining time crystals as long-range order in time, we analyse continuous and discrete time translation symmetry as well as discuss the existence of time crystals from an algebraic point of view. Finally, we summarise our work and provide the outlook for future directions.
quantum physics
Tetrahex-carbon is a recently predicted two-dimensional (2D) carbon allotrope which is composed of tetragonal and hexagonal rings. Unlike flat graphene, this new 2D carbon structure is buckled, possesses a direct band gap ~ 2.6 eV, and has high carrier mobility with anisotropic features. In this work, we employ first-principles density-functional theory calculations to explore the mechanical properties of tetrahex-C under uniaxial tensile strain. We find that tetrahex-C demonstrates ultrahigh ideal strength, outperforming both graphene and penta-graphene. It shows superior ductility and sustains uniaxial tensile strain up to 20% (16%) until phonon instability occurs, and the corresponding maximal strength is 38.3 N/m (37.8 N/m) in the zigzag (armchair) direction. It shows an intrinsic negative Poisson's ratio. This exotic in-plane Poisson's ratio appears when the axial strain reaches a threshold value of 7% (5%) in the zigzag (armchair) direction. We also find that tetrahex-C holds a direct band gap of 2.64 eV at the center of the Brillouin zone. This direct-band-gap feature remains intact under strain, with no direct-indirect gap transition. The ultrahigh ideal strength, negative Poisson's ratio and integrity of the direct gap under strain in tetrahex-C suggest it may have potential applications in nanomechanics and nanoelectronics.
condensed matter
The recent discovery of spin-current transmission through antiferromagnetic (AFM) insulating materials opens up unprecedented opportunities for fundamental physics and spintronics applications. The great mystery currently surrounding this topic is how THz AFM magnons could mediate a GHz spin current. This mismatch of frequencies becomes particularly critical for the case of a coherent ac spin current, raising the fundamental question of whether a GHz ac spin current can ever keep its coherence inside an AFM insulator and so coherently drive the spin precession of another FM layer. Utilizing element- and time-resolved x-ray pump-probe measurements on Py/Ag/CoO/Ag/Fe75Co25/MgO(001) heterostructures, we demonstrate that a coherent GHz ac spin current pumped by the permalloy (Py) ferromagnetic resonance (FMR) can transmit coherently across an antiferromagnetic CoO insulating layer to drive a coherent spin precession of the FM Fe75Co25 layer. Further measurement results favor thermal magnons rather than evanescent spin waves as the mediator of the coherent ac spin current in CoO.
physics
We propose a new dynamical method to connect equilibrium quantum phase transitions and quantum coherence using out-of-time-order correlations (OTOCs). Adopting the iconic Lipkin-Meshkov-Glick and transverse-field Ising models as illustrative examples, we show that an abrupt change in coherence and entanglement of the ground state across a quantum phase transition is observable in the spectrum of multiple quantum coherence (MQC) intensities, which are a special type of OTOC. We also develop a robust protocol to obtain the relevant OTOCs using quasi-adiabatic quenches through the ground state phase diagram. Our scheme allows for the detection of OTOCs without time-reversal of coherent dynamics, making it applicable and important for a broad range of current experiments where time-reversal cannot be achieved by inverting the sign of the underlying Hamiltonian.
quantum physics
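Since the preceding abstract builds on multiple quantum coherence (MQC) intensities, a small numerical illustration may help: decompose a density matrix into coherence orders $q$ with respect to collective $S_z$ and compute $I_q = \mathrm{Tr}(\rho_q^\dagger \rho_q)$. The three-qubit GHZ state is chosen only because its MQC spectrum is easy to read off; this is not the protocol of the paper.

```python
# MQC intensities I_q = Tr(rho_q^dag rho_q) of a 3-qubit GHZ state, with the
# coherence order q defined via differences of collective S_z eigenvalues.
import numpy as np
from itertools import product

N = 3
mags = np.array([sum(1 - 2 * b for b in bits) / 2
                 for bits in product([0, 1], repeat=N)])

ghz = np.zeros(2**N)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)

dq = mags[:, None] - mags[None, :]        # coherence order of each element
for q in range(-N, N + 1):
    rho_q = np.where(dq == q, rho, 0.0)
    Iq = np.trace(rho_q.T.conj() @ rho_q).real
    if Iq > 1e-12:
        print(f"I_{q:+d} = {Iq:.3f}")
```

The nonzero $I_{\pm 3}$ components flag the genuinely 3-body coherence of the GHZ state; it is this kind of signature that changes abruptly across the quantum phase transitions discussed above.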
We present IMS-Speech, a web-based tool for German and English speech transcription aiming to facilitate research in various disciplines which require access to lexical information in spoken language materials. This tool is based on a modern open-source software stack, advanced speech recognition methods and public data resources, and is freely available to academic researchers. The utilized models are built to be generic in order to provide transcriptions of competitive accuracy on a diverse set of tasks and conditions.
computer science
Since the outbreak of Coronavirus Disease 2019 (COVID-19), most of the impacted patients have been diagnosed with high fever, dry cough, and sore throat leading to severe pneumonia. Hence, to date, lung imaging has proved to be major evidence for early diagnosis of the disease. Although nucleic acid detection using real-time reverse-transcriptase polymerase chain reaction (rRT-PCR) remains the gold standard for the detection of COVID-19, the proposed approach focuses on the automated diagnosis and prognosis of the disease from a non-contrast chest computed tomography (CT) scan for timely diagnosis and triage of the patient. The prognosis covers the quantification and assessment of the disease to help hospitals with the management and planning of crucial resources, such as medical staff, ventilators and intensive care unit (ICU) capacity. The approach utilises deep learning techniques for automated quantification of the severity of COVID-19 disease by measuring the area of multiple rounded ground-glass opacities (GGO) and consolidations in the periphery (CP) of the lungs and accumulating them to form a severity score. The severity of the disease can be correlated with the medicines prescribed during triage to assess the effectiveness of the treatment. The proposed approach shows promising results, with the classification model achieving 93% accuracy on hold-out data.
electrical engineering and systems science
We extend a previously proposed rotation and truncation scheme to optimize quantum Anderson impurity calculations with exact diagonalization [PRB 90, 085102 (2014)] to density-matrix renormalization group (DMRG) calculations. The method reduces the solution of a full impurity problem with virtually unlimited bath sites to that of a small subsystem based on a natural impurity orbital basis set. The latter is solved by DMRG in combination with a restricted-active-space truncation scheme. The method allows one to compute Green's functions directly on the real frequency or time axis. We critically test the convergence of the truncation scheme using a one-band Hubbard model solved in the dynamical mean-field theory. The projection is exact in the limit of both infinitely large and small Coulomb interactions. For all parameter ranges the accuracy of the projected solution converges exponentially to the exact solution with increasing subsystem size.
condensed matter
In this note we first introduce the theoretical formalism of the unitarization of the two-body scattering both in the infinite and finite volumes. Then we apply this formalism to the study of the $Z_c(3900)$ resonance in the $J/\psi\,\pi$ and $D\bar{D}^{*}$ coupled-channel scattering. The recent lattice finite-volume spectra are confronted with our predictions.
high energy physics phenomenology