The morphological attributes of retinal vessels, such as length, width, tortuosity, and branching patterns and angles, play an important role in the diagnosis, screening, treatment, and evaluation of various cardiovascular and ophthalmologic diseases such as diabetes, hypertension and arteriosclerosis. The crucial step before extracting these morphological characteristics of retinal vessels from retinal fundus images is vessel segmentation. In this work, we propose a method for retinal vessel segmentation based on fully convolutional networks. Thousands of patches are extracted from each retinal image and fed into the network, and data augmentation is applied by rotating the extracted patches. Two architectures of fully convolutional networks, U-Net and LadderNet, are used for vessel segmentation. The performance of our method is evaluated on three public datasets: DRIVE, STARE, and CHASE\_DB1. Experimental results show that our method achieves superior performance compared to recent state-of-the-art methods.
electrical engineering and systems science
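As a concrete illustration of the patch pipeline described in the abstract above, here is a minimal sketch of patch extraction and rotation-based augmentation. The patch size, stride, and thresholds are illustrative assumptions, not the paper's exact settings; only the image dimensions (DRIVE fundus images are 584x565 pixels) are taken from the dataset.

```python
import numpy as np

def extract_patches(image, patch_size=48, stride=16):
    """Slide a window over a 2D fundus image (H, W) and collect patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def augment_by_rotation(patches):
    """Quadruple the training set with 0/90/180/270 degree rotations."""
    return np.concatenate([np.rot90(patches, k=k, axes=(1, 2)) for k in range(4)])

image = np.random.rand(584, 565)       # stand-in for a DRIVE fundus image
patches = extract_patches(image)
augmented = augment_by_rotation(patches)
print(patches.shape, augmented.shape)  # over a thousand patches; 4x after rotation
```

Each augmented patch (with its vessel-mask counterpart processed identically) would then be fed to the U-Net or LadderNet.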
Stephen Fienberg's affinity for contingency table problems and for reinterpreting models with a fresh look gave rise to a new approach to hypothesis testing for network models that are linear exponential families. We outline his vision and influence on this fundamental problem, as well as generalizations to multigraphs and hypergraphs.
statistics
A description in terms of transition rates among cells is used to analyze self-diffusion of hard spheres in the fluid phase. The cell size is assumed to be much larger than the mean free path. Transition state theory is used to obtain an equation that matches numerical results previously obtained by other authors. Two regimes are identified. For small packing fraction $\xi$, diffusion is limited by free volume; and, for large $\xi$, diffusion is limited by velocity autocorrelation. The expressions obtained in each regime do not require adjustable parameters.
condensed matter
We classify bosonic $\mathcal{N}=(2,2)$ supersymmetric Wilson loops on arbitrary backgrounds with vector-like R-symmetry. These can be defined on any smooth contour and come in two forms which are universal across all backgrounds. We show that these Wilson loops, thanks to their cohomological properties, are all invariant under smooth deformations of their contour. At genus zero they can always be mapped to local operators and computed exactly with supersymmetric localisation. Finally, we find the precise map, under two-dimensional Seiberg-like dualities, of correlators of supersymmetric Wilson loops.
high energy physics theory
The $M_x \text{Bi}_2 \text{Se}_3$ family, where $M$ can be Cu, Sr, or Nb, comprises candidate topological superconductors. Two-fold anisotropy has been observed in various experiments, prompting the interpretation that the superconducting state is nematic. However, it has since been recognized in the literature that a two-fold anisotropy in the upper critical field $H_{c2}$ is incompatible with the na\"{i}ve nematic hypothesis. In this paper we study the Ginzburg-Landau theory of a nematic order parameter coupled to an applied stress, and classify the possible phase diagrams. Assuming that the $H_{c2}$ puzzle is explained by a pre-existing "pinning field", we indicate how a stress can be applied to probe an extended region of the phase diagram and to verify whether the superconducting order parameter is indeed nematic. We also explore the Josephson tunneling between the proposed nematic superconducting state and an s-wave superconductor. The externally applied stress is predicted to serve as an on/off switch for the tunneling current, and in a certain regime the temperature dependence of the critical current can be markedly different from that between two conventional s-wave superconductors.
condensed matter
We study three different measures of quantum correlations -- entanglement spectrum, entanglement entropy, and logarithmic negativity -- for a (1+1)-dimensional massive scalar field in flat spacetime. The entanglement spectrum for the discretized scalar field in the ground state indicates a cross-over in the zero-mode regime, which is further substantiated by an analytical treatment of both entanglement entropy and logarithmic negativity. The exact nature of this cross-over depends on the boundary conditions used -- the leading order term switches from a $\log$ to $\log-\log$ behavior for the Periodic and Neumann boundary conditions. In contrast, for Dirichlet, it is the parameters within the leading $\log-\log$ term that are switched. We show that this cross-over manifests as a change in the behavior of the leading order divergent term for entanglement entropy and logarithmic negativity close to the zero-mode limit. We thus show that the two regimes have fundamentally different information content. For the reduced state of a single oscillator, we show that this cross-over occurs in the region $Nam_f\sim \mathscr{O}(1)$.
high energy physics theory
The mixing of neutral mesons is sensitive to some of the highest scales probed in laboratory experiments. In light of the planned LHCb Upgrade II, a possible upgrade of Belle II, and the broad interest in flavor physics in the tera-$Z$ phase of the proposed FCC-ee program, we study constraints on new physics contributions to $B_d$ and $B_s$ mixings which can be obtained in these benchmark scenarios. We explore the limitations of this program, and identify the measurement of $|V_{cb}|$ as one of the key ingredients in which progress beyond current expectations is necessary to maximize future sensitivity. We speculate on possible solutions to this bottleneck. Given the current tension with the standard model (SM) in semileptonic $B$ decays, we explore how its resolution may impact the search for new physics in mixing. Even if new physics has the same CKM and loop suppressions of flavor changing processes as the SM, the sensitivity will reach 2 TeV, and it can be much higher if any SM suppressions are lifted. We illustrate the discovery potential of this program.
high energy physics phenomenology
In this paper we introduce an outlier-robust Wilcoxon change-point testing procedure for distinguishing between short-range dependent time series with a change in mean at an unknown time and stationary long-range dependent time series. We establish the asymptotic distribution of the test statistic under the null hypothesis for $L_1$ near epoch dependent processes and show its consistency under the alternative. The Wilcoxon-type testing procedure, like the CUSUM-type testing procedure of Berkes, Horv\'ath, Kokoszka and Shao (2006), requires estimating the location of a possible change-point and then using the pre- and post-break subsamples to discriminate between short- and long-range dependence. A simulation study examines the empirical size and power of the Wilcoxon-type testing procedure in standard cases and with disturbances by outliers. It shows that in standard cases the Wilcoxon-type procedure performs as well as the CUSUM-type procedure but outperforms it in the presence of outliers. We also apply both testing procedures to hydrologic data.
statistics
Vector autoregression (VAR) models are widely used to analyze the interrelationship between multiple variables over time. Estimation and inference for the transition matrices of VAR models are crucial for practitioners to make decisions in fields such as economics and finance. However, when the number of variables is larger than the sample size, it remains a challenge to perform statistical inference on the model parameters. In this article, we propose the de-biased Lasso and two bootstrap de-biased Lasso methods to construct confidence intervals for the elements of the transition matrices of high-dimensional VAR models. We show that the proposed methods are asymptotically valid under appropriate sparsity and other regularity conditions. To implement our methods, we develop feasible and parallelizable algorithms, which save a large amount of the computation required by the nodewise Lasso and bootstrap. A simulation study illustrates that our methods perform well in finite samples. Finally, we apply our methods to analyze the price data of stocks in the S&P 500 index in 2019. We find that some stocks, such as the largest producer of gold in the world, Newmont Corporation, have significant predictive power over most other stocks.
statistics
Steganography is a method that can improve network security and make communications safer. A secret message is hidden in cover content, such as an audio signal, in such a way that it is not perceptible by listening to the audio or inspecting the signal waveform. The method should also be robust against common attacks such as noise and compression. In this paper, we propose a new speech steganography method based on a combination of the Discrete Wavelet Transform, the Graph-based Transform, and Singular Value Decomposition (SVD). In this method, we first find voiced frames based on the energy and zero-crossing counts of the frames, and then embed a binary message into the voiced frames. Experimental results on the NOIZEUS database show that the proposed method is imperceptible and robust against Gaussian noise, re-sampling, re-quantization, and high-pass and low-pass filtering. It is also robust against MP3 compression and scaling for watermarking applications.
computer science
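The voiced-frame selection step mentioned above is easy to make concrete. Below is a minimal sketch, with illustrative (not the paper's) frame length and thresholds, that flags frames with high short-time energy and a low zero-crossing rate as voiced; the binary message would be embedded only into these frames. The 8 kHz sampling rate matches the NOIZEUS recordings.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=200):
    """Split a 1D signal into overlapping frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def voiced_mask(frames, energy_thresh=0.01, zcr_thresh=0.15):
    """Voiced frames: high energy and few zero crossings per sample."""
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return (energy > energy_thresh) & (zcr < zcr_thresh)

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 150 * t) * (t < 0.5)  # toy voiced-then-silent signal
print(voiced_mask(frame_signal(speech)))          # True for the voiced first half
```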
After showing that the neutrino mass matrix in all Majorana models can be described by a general master formula, we will present a master parametrization for the Yukawa matrices, also valid for all Majorana models, that automatically ensures agreement with neutrino oscillation data. The application of the master parametrization will be illustrated in an example model.
high energy physics phenomenology
We present two fully probabilistic Euler schemes, one explicit and one implicit, for the simulation of McKean-Vlasov Stochastic Differential Equations (MV-SDEs) with drifts of super-linear growth and random initial condition. We provide a pathwise propagation of chaos result and show strong convergence for both schemes on the consequent particle system. The explicit scheme attains the standard $1/2$ rate in stepsize. From a technical point of view, we successfully use stopping times to prove the convergence of the implicit method, although we avoid them altogether for the explicit one. The combination of particle interactions and random initial condition makes the proofs technically more involved. Numerical tests recover the theoretical convergence rates and illustrate a computational complexity advantage of the explicit over the implicit scheme. A comparative analysis is carried out on a stylized non-Lipschitz MV-SDE and a mean-field model for FitzHugh-Nagumo neurons. We provide numerical tests illustrating the "particle corruption" effect, where one single diverging particle can "corrupt" the whole particle system. Moreover, the more particles there are in the system, the more likely this divergence is to occur.
mathematics
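To make the particle discretization concrete, here is a minimal sketch of an explicit Euler scheme for an interacting particle system approximating an MV-SDE, with the law replaced by the empirical measure of $N$ particles. The drift below is a toy Lipschitz mean-field example, not the super-linear drifts analyzed in the paper, and all parameter values are illustrative.

```python
import numpy as np

def euler_particle_system(N=1000, T=1.0, steps=200, sigma=0.5, seed=0):
    """Explicit Euler for dX = b(X, mean(X)) dt + sigma dW on N particles."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = rng.normal(size=N)                 # random initial condition
    for _ in range(steps):
        mean_field = x.mean()              # empirical-measure interaction
        drift = -x + mean_field            # toy drift b(x, mu) = -x + <mu>
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
    return x

particles = euler_particle_system()
print(particles.mean(), particles.std())
```

The "particle corruption" effect mentioned above corresponds to a single entry of `x` diverging and, through the mean-field term, dragging the whole system with it.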
The giant $\{ \mathrm{Mn}_{70} \}$ and $\{ \mathrm{Mn}_{84} \}$ wheels are the largest nuclearity single-molecule magnets synthesized to date and understanding their magnetic properties poses a challenge to theory. Starting from first principles calculations, we explore the magnetic properties and excitations in these wheels using effective spin Hamiltonians. We find that the unusual geometry of the superexchange pathways leads to weakly coupled $\{ \mathrm{Mn}_{7} \}$ subunits carrying an effective $S=2$ spin. The spectrum exhibits a hierarchy of energy scales and massive degeneracies, with the lowest energy excitations arising from Heisenberg-ring-like excitations of the $\{ \mathrm{Mn}_{7} \}$ subunits around the wheel, at energies consistent with the observed temperature dependence of the magnetic susceptibility. We further suggest an important role for weak longer-range couplings in selecting the precise spin ground-state of the $\mathrm{Mn}$ wheels out of the nearly degenerate ground-state band.
physics
A quickest change detection problem is considered in a sensor network with observations whose statistical dependency structure across the sensors before and after the change is described by a decomposable graphical model (DGM). Distributed computation methods for this problem are proposed that are capable of producing the optimum centralized test statistic. The DGM leads to the proper way to collect nodes into local groups equivalent to cliques in the graph, such that a clique statistic which summarizes all the clique sensor data can be computed within each clique. The clique statistics are transmitted to a decision maker to produce the optimum centralized test statistic. In order to further improve communication efficiency, an ordered transmission approach is proposed where transmissions of the clique statistics to the fusion center are ordered and then adaptively halted when sufficient information is accumulated. This procedure is always guaranteed to provide the optimal change detection performance, despite not transmitting all the statistics from all the cliques. A lower bound on the average number of transmissions saved by ordered transmissions is provided. For the case where the change seldom occurs, the lower bound approaches approximately half the number of cliques, provided a well-behaved distance measure between the distributions of the sensor observations before and after the change is sufficiently large. We also extend the approach to the case when the graph structure is different under each hypothesis. Numerical results show significant savings using the ordered transmission approach and validate the theoretical findings.
electrical engineering and systems science
We present next-to-next-to-next-to-leading order (N$^3$LO) QCD predictions for Higgs boson pair production via gluon fusion at hadron colliders in the infinite top-quark mass limit. Besides the inclusive total cross sections at various collision energies, we also provide the invariant mass distribution of the Higgs boson pair. Our results show that the N$^3$LO QCD corrections enhance the next-to-next-to-leading order cross section by $3.0\%$ ($2.7\%$) at $\sqrt{s}=13~(100)$ TeV, while the scale uncertainty is reduced substantially, to below $3\%$ ($2\%$). We also find that a judicious scale choice can significantly improve the perturbative convergence. For the invariant mass distribution, our calculation demonstrates that the N$^3$LO corrections improve the scale dependence but leave the shape almost unchanged.
high energy physics phenomenology
Computer simulations are invaluable tools for scientific discovery. However, accurate simulations are often slow to execute, which limits their applicability to extensive parameter exploration, large-scale data analysis, and uncertainty quantification. A promising route to accelerate simulations by building fast emulators with machine learning requires large training datasets, which can be prohibitively expensive to obtain with slow simulations. Here we present a method based on neural architecture search to build accurate emulators even with a limited number of training samples. The method successfully accelerates simulations by up to 2 billion times in 10 scientific cases including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology, using the same super-architecture, algorithm, and hyperparameters. Our approach also inherently provides emulator uncertainty estimation, adding further confidence in their use. We anticipate this work will accelerate research involving expensive simulations, allow more extensive parameter exploration, and enable new, previously infeasible computational discovery.
statistics
An essential input of annuity pricing is the future retiree mortality. From observed age-specific mortality data, modeling and forecasting can proceed along two routes. On the one hand, we can first truncate the available data to retiree ages and then produce mortality forecasts based on a partial age-range model. On the other hand, with all available data, we can first apply a full age-range model to produce forecasts and then truncate the mortality forecasts to retiree ages. We investigate the difference in modeling the logarithmic transformation of the central mortality rates between a partial age-range and a full age-range model, using data from mainly developed countries in the Human Mortality Database (2020). By evaluating and comparing the short-term point and interval forecast accuracies, we recommend the first strategy: truncating all available data to retiree ages and then producing mortality forecasts. However, when considering long-term forecasts, it is unclear which strategy is better, since it is more difficult to find a model and parameters that are optimal. This is a disadvantage of using methods based on time series extrapolation for long-term forecasting. Instead, an expectation approach, in which experts set a future target, could be considered, noting that this method has also had limited success in the past.
statistics
Among the peculiarities of ODEs -- especially those of mechanics -- besides the problem of reducing them to quadratures and solving them either in series or in closed form, one is faced with the problem of inversion, e.g. when one wishes to pass from time as a function of the lagrangian coordinates to these last as functions of time. This paper solves in almost closed form the system of nonlinear ODEs of the 2D-motion (say, co-ordinates $\theta$ and $\psi$) of a gravity-free double pendulum (GFDP) not subjected to any force, so that its movement is ruled by the initial conditions only. The relevant strongly nonlinear ODEs have been reduced to hyper-elliptic quadratures which, through the Integral Representation Theorem (hereinafter IRT), have been expressed in terms of the Lauricella hypergeometric functions $F_D^{(j)}, j=3, 4, 5, 6$. The IRT has been applied after a change of variable which improves their use and accelerates the series convergence. The coordinate $\psi$ is given in terms of $F_D^{(4)}$, which is inverted by means of Fourier series and inserted as an argument into $F_D^{(5)}$, thereby allowing the computation of $\theta$. We thus gain insight into the time laws and trajectories of both bobs forming the GFDP, which -- after the inversion -- is therefore completely solved in explicit closed form. Suitable sample problems of the three possible cases of motion are carried out, and their analysis closes the work. The Lauricella functions employed here to solve the differential equations -- in the absence of specific software packages -- have been implemented thanks to some reduction theorems which will form the object of a next paper. To the best of our knowledge, this work adds a new contribution concerning the detection and inversion of solutions of nonlinear hamiltonian systems.
physics
In the Gel'fand inverse problem, one aims to determine the topology, differential structure and Riemannian metric of a compact manifold $M$ with boundary from the knowledge of the boundary $\partial M$, the Neumann eigenvalues $\lambda_j$ and the boundary values of the eigenfunctions $\varphi_j|_{\partial M}$. We show that this problem has a stable solution with quantitative stability estimates in a class of manifolds with bounded geometry. More precisely, we show that finitely many eigenvalues and the boundary values of corresponding eigenfunctions, known up to small errors, determine a metric space that is close to the manifold in the Gromov-Hausdorff sense. We provide an algorithm to construct this metric space. This result is based on an explicit estimate on the stability of the unique continuation for the wave operator. We hope our results may have applications, in particular, to medicine.
mathematics
In this self-contained paper we prove that Voevodsky's smooth blowup triangle of motives generalises to a smooth blowup triangle of motives with modulus.
mathematics
We study the interaction between a single two-level atom and a single-photon probe pulse in a guided mode of a nanofiber. We examine the situation of chiral interaction, where the atom has a dipole rotating in the meridional plane of the nanofiber, and the probe pulse is quasilinearly polarized along the radial direction of the atom position in the fiber transverse plane. We show that the atomic excitation probability, the photon transmission flux, and the photon transmission probability depend on the propagation direction of the probe pulse along the fiber axis. In contrast, the reflection flux and the reflection probability do not depend on the propagation direction of the probe pulse. We find that the asymmetry parameter for the atomic excitation probability does not vary in time and does not depend on the probe pulse shape.
quantum physics
Some 50~years ago, physicists, and after them the entire world, started to base their time reference on atomic properties instead of the motions of the Earth, which had been in use since the origin of timekeeping. Far from being an end point, this decision marked the beginning of an adventure characterized by a six-orders-of-magnitude improvement in the uncertainty of the realization of atomic frequency and time references. Ever-progressing atomic frequency standards, and the time references derived from them, are key resources for science and for society. We describe how the unit of time is realized with a fractional accuracy approaching $10^{-16}$ and how it is delivered to users via the elaboration of International Atomic Time. We describe the tremendous progress of optical frequency metrology over the last 20~years, which has led to a novel generation of optical frequency standards with fractional uncertainties of $10^{-18}$. We describe work toward a possible redefinition of the SI second based on such standards, as well as existing and emerging applications of atomic frequency standards in science.
physics
Semiconductor Rashba nanowires are quasi-one dimensional systems with a large spin-orbit (SO) coupling arising from an inversion symmetry broken by an external electric field. There exist parametrized multiband models that can accurately describe this effect. However, simplified single-band models are highly desirable for studying geometries of recent experimental interest, since they may allow one to incorporate the effects of the low dimensionality and of the nanowire electrostatic environment at a reduced computational cost. Commonly used conduction band approximations, valid for bulk materials, greatly underestimate the SO coupling in zinc-blende crystal structures and overestimate it for wurtzite ones when applied to finite cross-section wires, where confinement effects turn out to play an important role. We demonstrate here that an effective equation for the linear Rashba SO coupling of the semiconductor conduction band can reproduce the behavior of more sophisticated eight-band k$\cdot$p model calculations. This is achieved by adjusting a single effective parameter that depends on the nanowire crystal structure and its chemical composition. We further compare our results to the Rashba coupling extracted from magnetoconductance measurements in several experiments on InAs and InSb nanowires, finding excellent agreement. This approach may be relevant in systems where Rashba coupling is known to play a major role, such as spintronic devices or Majorana nanowires.
condensed matter
Probabilistic optimal power flow (POPF) is an important analytical tool to ensure the secure and economic operation of power systems. POPF requires solving a large number of nonlinear and nonconvex optimization problems, and this huge computational burden has become the major bottleneck for its practical application. This paper presents a deep learning approach to solve the POPF problem efficiently and accurately. Taking advantage of the deep structure and reconstructive strategy of stacked denoising autoencoders (SDAE), an SDAE-based optimal power flow (OPF) is developed to extract the high-level nonlinear correlations between the system operating condition and the OPF solution. A training process is designed to learn the features of POPF. The trained SDAE network can then be used to conveniently calculate the OPF solution of random samples generated by Monte-Carlo simulation (MCS) without the need for optimization. A modified IEEE 118-bus power system is simulated to demonstrate the effectiveness of the proposed method.
electrical engineering and systems science
The quantum gravity path integral involves a sum over topologies that invites comparisons to worldsheet string theory and to Feynman diagrams of quantum field theory. However, the latter are naturally associated with the non-abelian algebra of quantum fields, while the former has been argued to define an abelian algebra of superselected observables associated with partition-function-like quantities at an asymptotic boundary. We resolve this apparent tension by pointing out a variety of discrete choices that must be made in constructing a Hilbert space from such path integrals, and arguing that the natural choices for quantum gravity differ from those used to construct QFTs. We focus on one-dimensional models of quantum gravity in order to make direct comparisons with worldline QFT.
high energy physics theory
A central theme in the field of survey statistics is estimating population-level quantities through data coming from potentially non-representative samples of the population. Multilevel Regression and Poststratification (MRP), a model-based approach, is gaining traction against the traditional weighted approach for survey estimates. MRP estimates are susceptible to bias if there is an underlying structure that the methodology does not capture. This work aims to provide a new framework for specifying structured prior distributions that lead to bias reduction in MRP estimates. We use simulation studies to explore the benefit of these prior distributions and demonstrate their efficacy on non-representative US survey data. We show that structured prior distributions offer absolute bias reduction and variance reduction for posterior MRP estimates in a large variety of data regimes.
statistics
When monitoring the dynamics of stochastic systems, such as interacting particles agitated by thermal noise, disentangling deterministic forces from Brownian motion is challenging. Indeed, we show that there is an information-theoretic bound, the capacity of the system when viewed as a communication channel, that limits the rate at which information about the force field can be extracted from a Brownian trajectory. This capacity provides an upper bound to the system's entropy production rate, and quantifies the rate at which the trajectory becomes distinguishable from pure Brownian motion. We propose a practical and principled method, Stochastic Force Inference, that uses this information to approximate force fields and spatially variable diffusion coefficients. It is data efficient, including in high dimensions, robust to experimental noise, and provides a self-consistent estimate of the inference error. In addition to forces, this technique readily permits the evaluation of out-of-equilibrium currents and the corresponding entropy production with a limited amount of data.
condensed matter
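As a toy version of the inference problem described above, the drift (force) field can be estimated from a recorded trajectory by projecting the increments onto a function basis with least squares. This is a deliberately simplified stand-in for Stochastic Force Inference; the dynamics, the polynomial basis, and the constant diffusion coefficient are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps, D = 1e-3, 200_000, 1.0
x = np.empty(steps)
x[0] = 0.0
for i in range(steps - 1):               # overdamped Langevin with force F(x) = -x
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(2 * D * dt) * rng.normal()

# Least-squares projection of dX/dt onto the basis {1, x, x^2}.
basis = np.stack([np.ones(steps - 1), x[:-1], x[:-1] ** 2], axis=1)
increments = np.diff(x) / dt             # noisy pointwise drift estimate
coeffs, *_ = np.linalg.lstsq(basis, increments, rcond=None)
print(coeffs)                            # close to [0, -1, 0], recovering F(x) = -x
```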
Frustrated quantum spin systems, such as the Heisenberg and Kitaev models on various lattices, have been known to exhibit various exotic properties not only at zero temperature but also at finite temperatures. Inspired by the remarkable development of quantum frustrated spin systems in recent years, we investigate the finite-temperature properties of the $S=1/2$ Kitaev-Heisenberg models on kagome and triangular lattices by means of finite-temperature Lanczos methods with improved accuracy. In both lattices, multiple peaks are confirmed in the specific heat. To find the origin of the multiple peaks, we calculate the static spin structure factor. The origin of the high-temperature peak of the specific heat is attributed to a crossover from the paramagnetic state to a short-range ordered state whose static spin structure factor has zigzag or linear intensity distributions in momentum space. In the triangular Kitaev model, the "order by disorder" due to quantum fluctuation occurs. On the other hand, in the kagome Kitaev model it does not occur even with both quantum and thermal fluctuations.
condensed matter
Working in the basis where the charged-lepton Yukawa matrix is diagonal and making the $\tau$-dominance approximation, we analytically derive integral solutions to the one-loop renormalization-group equations (RGEs) for neutrino masses, flavor mixing angles, CP-violating phases and the Jarlskog invariant under the standard parametrization of the PMNS matrix in the standard model or its minimal supersymmetric extension, for both Majorana and Dirac neutrinos. With these integral solutions, we carry out numerical calculations to investigate the RGE running of the lepton flavor mixing parameters and the Jarlskog invariant, and also compare the integral solutions with the exact results obtained by numerically solving the one-loop RGEs. It is shown that the integral solutions coincide with the exact results and can describe the evolution of the lepton flavor mixing parameters and the Jarlskog invariant well in most cases. Some important features of our integral solutions and the evolution behaviors of the relevant flavor parameters are also discussed in detail, both analytically and numerically.
high energy physics phenomenology
Given a standard graded polynomial ring over a commutative Noetherian ring $A$, we prove that the cohomological dimension and the height of the ideals defining any of its Veronese subrings are equal. This result is due to Ogus when $A$ is a field of characteristic zero, and follows from a result of Peskine and Szpiro when $A$ is a field of positive characteristic; our result applies, for example, when $A$ is the ring of integers.
mathematics
The Symbolic Regression (SR) problem, where the goal is to find a regression function that does not have a pre-specified form but is any function that can be composed of a list of operators, is a hard problem in machine learning, both theoretically and computationally. Genetic programming based methods, that heuristically search over a very large space of functions, are the most commonly used methods to tackle SR problems. An alternative mathematical programming approach, proposed in the last decade, is to express the optimal symbolic expression as the solution of a system of nonlinear equations over continuous and discrete variables that minimizes a certain objective, and to solve this system via a global solver for mixed-integer nonlinear programming problems. Algorithms based on the latter approach are often very slow. We propose a hybrid algorithm that combines mixed-integer nonlinear optimization with explicit enumeration and incorporates constraints from dimensional analysis. We show that our algorithm is competitive, for some synthetic data sets, with a state-of-the-art SR software and a recent physics-inspired method called AI Feynman.
computer science
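The explicit-enumeration ingredient mentioned above can be illustrated in miniature: exhaustively search a tiny expression grammar and score each candidate on data. This toy uses a fixed two-term form and a three-operator list, far simpler than the hybrid MINLP algorithm of the paper; the target function and grammar are illustrative assumptions.

```python
import itertools
import numpy as np

x = np.linspace(0.1, 2.0, 50)
y = x ** 2 + np.sin(x)                   # hidden target expression

unary = {"sin": np.sin, "sq": np.square, "id": lambda v: v}
best = (np.inf, None)
for f, g in itertools.product(unary, repeat=2):
    pred = unary[f](x) + unary[g](x)     # candidate expression f(x) + g(x)
    err = np.mean((y - pred) ** 2)
    if err < best[0]:
        best = (err, f"{f}(x) + {g}(x)")
print(best)                              # (0.0, 'sin(x) + sq(x)')
```

Dimensional-analysis constraints, as used in the paper, would prune candidates with inconsistent units before they are ever scored.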
In this note we consider inhomogeneous solutions of the two-dimensional linear sigma model in the large $N$ limit. These solutions are similar to the ones found recently in the two-dimensional $CP^N$ sigma model. The solution exists only for some range of the coupling constant. We calculate the energy of the solutions as a function of the parameters of the model and show that at some value of the coupling constant it changes sign, signaling a possible phase transition. The case of the nonlinear model at finite temperature is also discussed. The free energy of the inhomogeneous solution is shown to change sign at some critical temperature.
high energy physics theory
In this paper we start a systematic study of quantum field theory on random trees. Using precise probability estimates on their Galton-Watson branches and a multiscale analysis, we establish the general power counting of averaged Feynman amplitudes and check that they behave indeed as living on an effective space of dimension 4/3, the spectral dimension of random trees. In the `just renormalizable' case we prove convergence of the averaged amplitude of any completely convergent graph, and establish the basic localization and subtraction estimates required for perturbative renormalization. Possible consequences for an SYK-like model on random trees are briefly discussed.
high energy physics theory
Local Fourier analysis is a useful tool for predicting and analyzing the performance of many efficient algorithms for the solution of discretized PDEs, such as multigrid and domain decomposition methods. The crucial aspect of local Fourier analysis is that it can be used to minimize an estimate of the spectral radius of a stationary iteration, or the condition number of a preconditioned system, in terms of a symbol representation of the algorithm. In practice, this is a "minimax" problem, minimizing with respect to solver parameters the appropriate measure of work, which involves maximizing over the Fourier frequency. Often, several algorithmic parameters may be determined by local Fourier analysis in order to obtain efficient algorithms. Analytical solutions to minimax problems are rarely possible beyond simple problems; the status quo in local Fourier analysis involves grid sampling, which is prohibitively expensive in high dimensions. In this paper, we propose and explore optimization algorithms to solve these problems efficiently. Several examples, with known and unknown analytical solutions, are presented to show the effectiveness of these approaches.
mathematics
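A classical instance of the minimax problem described above is choosing the weighted-Jacobi relaxation parameter for the 1D Poisson problem: minimize over $\omega$ the maximum of the smoother's symbol over the high frequencies, whose known optimum is $\omega = 2/3$ with smoothing factor $1/3$. The sketch below solves it with a bounded scalar optimizer; note that the inner maximization is done by grid sampling in frequency, which is exactly the status quo the paper seeks to improve upon in higher dimensions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

theta = np.linspace(np.pi / 2, np.pi, 1001)  # oscillatory (high) frequencies

def smoothing_factor(omega):
    """Max over high frequencies of the weighted-Jacobi symbol |1 - 2 w sin^2(t/2)|."""
    return np.abs(1.0 - 2.0 * omega * np.sin(theta / 2) ** 2).max()

result = minimize_scalar(smoothing_factor, bounds=(0.0, 1.0), method="bounded")
print(result.x, result.fun)                  # approx 2/3 and 1/3, the known optimum
```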
Single flux quantum (SFQ) logic is a promising candidate to replace CMOS logic for high-speed and low-power applications due to its superiority in providing high-performance and energy-efficient circuits. However, developing effective Electronic Design Automation (EDA) tools that cater to the special characteristics and requirements of SFQ circuits, such as depth minimization and path balancing, is essential to automate the whole process of designing large SFQ circuits. In this paper, a novel technology mapping tool, called SFQmap, is presented, which provides optimization methods for minimizing first the circuit depth and path balancing overhead and then the worst-case stage delay of mapped SFQ circuits. Compared with state-of-the-art technology mappers, SFQmap reduces the depth and path balancing overhead by an average of 14% and 31%, respectively.
computer science
We describe the design and operation principles of a new tunable-filter photometer developed for the 1-m telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences and the 2.5-m telescope of the Sternberg Astronomical Institute of Moscow State University. The instrument is based on a scanning Fabry-Perot interferometer operating in the tunable-filter mode in the spectral range of 460-800 nm with a typical spectral resolution of about 1.3 nm. It allows one to image galactic and extragalactic nebulae in emission lines with different excitation conditions and to carry out diagnostics of the gas ionization state. The main steps of the observations, data calibration, and reduction are illustrated with examples of different emission-line objects: galactic HII regions, planetary nebulae, active galaxies with extended filaments, starburst galaxies, and the Perseus galaxy cluster.
astrophysics
We consider small perturbations of a conformal iterated function system (CIFS) produced by either adding or removing some generators with small derivative from the original. We establish a formula, utilizing transfer operators arising from the thermodynamic formalism \`a la Sinai--Ruelle--Bowen, which may be solved to express the Hausdorff dimension of the perturbed limit set in series form: either exactly, or as an asymptotic expansion. Significant applications include strengthening Hensley's asymptotic formula from 1992, which improved on earlier bounds due to Jarn\'ik and Kurzweil, for the Hausdorff dimension of the set of real numbers whose continued fraction expansion partial quotients are all $\leq N$; as well as its counterpart for reals whose partial quotients are all $\geq N$ due to Good from 1941.
mathematics
We study the probability distribution $P(X_N=X,N)$ of the total displacement $X_N$ of an $N$-step run and tumble particle on a line, in the presence of a constant nonzero drive $E$. While the central limit theorem predicts a standard Gaussian form for $P(X,N)$ near its peak, we show that for large positive and negative $X$ the distribution exhibits anomalous large deviation forms. For large positive $X$, the associated rate function is nonanalytic at a critical value of the scaled distance from the peak, where its first derivative is discontinuous. This signals a first-order dynamical phase transition from a homogeneous `fluid' phase to a `condensed' phase that is dominated by a single large run. A similar first-order transition occurs for large negative fluctuations as well. Numerical simulations are in excellent agreement with our analytical predictions.
condensed matter
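A minimal simulation of the setup described above, under the simplifying assumption that each of the $N$ steps draws a direction biased by the drive $E$ and an exponential run time, exhibits the Gaussian bulk directly; resolving the anomalous tails would require the paper's large-deviation analysis (or importance sampling). All parameter values are illustrative.

```python
import numpy as np

def rtp_displacement(N=100, E=0.5, gamma=1.0, samples=20_000, seed=0):
    """Total displacement X_N of an N-step run-and-tumble particle with drive E."""
    rng = np.random.default_rng(seed)
    p_plus = (1 + E) / 2                 # drive biases the run direction
    sigma = rng.choice([1.0, -1.0], size=(samples, N), p=[p_plus, 1 - p_plus])
    run_times = rng.exponential(1 / gamma, size=(samples, N))
    return (sigma * run_times).sum(axis=1)

X = rtp_displacement()
print(X.mean(), X.std())                 # Gaussian near the peak, per the CLT
```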
Magnetic and spintronic media have offered fundamental scientific subjects and technological applications. Magneto-optic Kerr effect (MOKE) microscopy provides the most accessible platform to study the dynamics of spins, magnetic quasi-particles, and domain walls. However, in the research of nanoscale spin textures and state-of-the-art spintronic devices, optical techniques are generally restricted by the extremely weak magneto-optical activity and the diffraction limit. Highly sophisticated, expensive electron microscopy and scanning probe methods have thus come to the forefront. Here, we show that perfect optical absorption (POA) dramatically improves the performance and functionality of MOKE microscopy. For a 1-nm-thin Co film, we demonstrate a Kerr amplitude as large as 20 degrees and a magnetic domain imaging visibility of 0.47. In particular, POA-enhanced MOKE microscopy enables real-time detection and statistical analysis of sub-wavelength magnetic domain reversals. Furthermore, we exploit enhanced magneto-optic birefringence and demonstrate analyser-free MOKE microscopy. The POA technique is promising for optical investigations and applications of nanomagnetic systems.
physics
In this note, we apply Stein's method to analyze the steady-state distribution of queueing systems in the traditional heavy-traffic regime. Compared to previous methods (e.g., the drift method and the transform method), Stein's method allows us to establish stronger results with simple, template proofs. We consider discrete-time systems. We first introduce the key ideas of Stein's method for heavy-traffic analysis through a single-server system, and then apply the developed template to analyze both load balancing and scheduling problems. All three examples demonstrate the power and flexibility of Stein's method in heavy-traffic analysis; in particular, one appealing property of Stein's method is that it combines the advantages of both the drift method and the transform method.
mathematics
As cancer patient survival improves, late effects from treatment are becoming the next clinical challenge. Chemotherapy and radiotherapy, for example, potentially increase the risk of both morbidity and mortality from second malignancies and cardiovascular disease. To provide clinically relevant population-level measures of late effects, it is of importance to (1) simultaneously estimate the risks of both morbidity and mortality, (2) partition these risks into the component expected in the absence of cancer and the component due to the cancer and its treatment, and (3) incorporate the multiple time scales of attained age, calendar time, and time since diagnosis. Multi-state models provide a framework for simultaneously studying morbidity and mortality, but do not solve the problem of partitioning the risks. This partitioning can, however, be achieved by applying a relative survival framework, which allows us to directly quantify the excess risk. This paper proposes a combination of these two frameworks, providing one approach to address (1)-(3). Using recently developed methods in multi-state modeling, we incorporate estimation of excess hazards into a multi-state model. Both intermediate and absorbing state risks can be partitioned, and different transitions are allowed to have different and/or multiple time scales. We illustrate our approach using data on Hodgkin lymphoma patients and their excess risk of diseases of the circulatory system, and provide user-friendly Stata software with accompanying example code.
statistics
We propose a method for the realization of the two-qubit quantum Fourier transform (QFT) using a Hamiltonian which possesses the circulant symmetry. Importantly, the eigenvectors of the circulant matrices are the Fourier modes and do not depend on the magnitude of the Hamiltonian elements as long as the circulant symmetry is preserved. The QFT implementation relies on the adiabatic transition from each of the spin product states to the respective quantum Fourier superposition states. We show that in ion traps one can obtain a Hamiltonian with the circulant symmetry by tuning the spin-spin interaction between the trapped ions. We present numerical results which demonstrate that very high fidelity can be obtained with realistic experimental resources. We also describe how the gate can be accelerated by using a "shortcut-to-adiabaticity" field.
quantum physics
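The key linear-algebra fact behind the proposal above, that the eigenvectors of a circulant matrix are the discrete Fourier modes regardless of the magnitudes of its entries, is easy to verify numerically. The $4 \times 4$ case below corresponds to the two-qubit register; the matrix entries are arbitrary stand-ins, not a physical Hamiltonian.

```python
import numpy as np
from scipy.linalg import circulant

H = circulant([1.0, 0.3, 0.7, 0.3])      # symmetric circulant "Hamiltonian"
# Normalized discrete Fourier modes as columns: f_k[j] = exp(2*pi*i*j*k/4) / 2.
modes = np.exp(2j * np.pi * np.outer(np.arange(4), np.arange(4)) / 4) / 2.0
for k in range(4):
    v = modes[:, k]
    hv = H @ v
    print(np.allclose(hv, (v.conj() @ hv) * v))  # True: each mode is an eigenvector
```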
An accurate modeling of a Josephson junction that is embedded in an arbitrary environment is of crucial importance for qubit design. We present a formalism to obtain a Lindblad master equation that describes the evolution of the system. As the qubit degrees of freedom oscillate with a well-defined frequency $\omega_q$, the environment only has to be modeled close to this frequency. Different from alternative approaches, we show that this goal can be achieved by modeling the environment with only few degrees of freedom. We treat the example of a transmon qubit coupled to a stripline resonator. We derive the parameters of a dissipative single-mode Jaynes-Cummings model starting from first principles. We show that the leading contribution of the off-resonant modes is a correlated decay process involving both the qubit and the resonator mode. In particular, our results show that the effect of the off-resonant modes in the multi-mode Jaynes-Cummings model is perturbative in $1/ \omega_q$.
condensed matter
In this survey, we describe the fundamental differential-geometric structures of information manifolds, state the fundamental theorem of information geometry, and illustrate some use cases of these information manifolds in information sciences. The exposition is self-contained by concisely introducing the necessary concepts of differential geometry, but proofs are omitted for brevity.
computer science
A sterile neutrino in the $3+1$ scheme, where the sterile state accounts for neutrino anomalies not explained solely by the weakly interacting active neutrinos, arises as a natural source for the breaking of the $\mu-\tau$ symmetry suggested by neutrino oscillation data. We explore the predictions for the Dirac CP phases in this scenario and show that current limits on $\delta_{CP}$ suggest a normal hierarchy and a lightest neutrino scale below 0.1 eV as the most plausible explanation. The other Dirac phases turn out to be nonzero as well.
high energy physics phenomenology
The recent application of deep learning technologies to medical image registration has dramatically decreased the registration time and gradually increased registration accuracy compared to traditional approaches. Most learning-based registration approaches consider this task as a one-directional problem; as a result, only the correspondence from the moving image to the target image is considered. However, some medical procedures require bidirectional registration. Unlike other learning-based registration methods, we propose a registration framework with inverse consistency. The proposed method simultaneously learns the forward transformation and the backward transformation in an unsupervised manner. We train and test the method on the publicly available LPBA40 MRI dataset and demonstrate stronger performance than baseline registration methods.
electrical engineering and systems science
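The inverse-consistency idea described above can be sketched with a toy penalty: composing the forward and backward displacement fields should return (approximately) the identity map, and the deviation serves as an extra training loss. The 1D grid and nearest-neighbour composition below are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def compose(disp_ab, disp_ba):
    """Approximate disp_ab(x) + disp_ba(x + disp_ab(x)) on a 1D grid."""
    n = len(disp_ab)
    idx = np.clip(np.round(np.arange(n) + disp_ab).astype(int), 0, n - 1)
    return disp_ab + disp_ba[idx]

n = 64
forward = 3.0 * np.sin(np.linspace(0, np.pi, n))   # toy forward displacement
backward = -forward                                # near-inverse for slow fields
residual = compose(forward, backward)
print(np.abs(residual).mean())          # inverse-consistency loss term (small)
```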
We analyze which forces explain inflation, and how, in a large panel of 124 countries from 1997 to 2015. Models motivated by economic theory are compared to an approach based on model-based boosting, and non-linearities are explicitly considered. We provide compelling evidence that the interaction of energy price and energy rents stands out among 40 explanatory variables. The output gap and globalization are also relevant drivers of inflation. Credit and money growth, a country's inflation history and demographic changes are comparably less important, while central-bank-related variables as well as political variables turn out to have the least empirical relevance. In a subset of countries, public debt denomination and exchange rate arrangements also play a noteworthy role in the inflation process. By contrast, other public-debt variables and an inflation targeting regime have weaker explanatory power. Finally, there is clear evidence of structural breaks in the effects since the financial crisis.
statistics
This study aims to develop a wearable device that collects health data from maintenance personnel and environmental conditions data, in order to ensure the safety of staff in industrial work areas with different risk categories. The device can stop a machine immediately, according to the health parameters collected from the maintenance personnel, to prevent work accidents; by also measuring the personnel's stress level, the system predicts their suitability for a maintenance operation according to the machine's risk level. The system consists of four parts: a wearable unit, a smartphone, a server computer and a machine-mounted unit. The wearable device measures the personnel's heart rate, oxygen saturation, body temperature and skin resistance, and also collects environmental data: heat, light, humidity and CO2. The results obtained by the decision support algorithm can stop the machine immediately or predict the suitability of the corresponding operator for the maintenance operation. This application is an important personal safety system and can prevent work accidents that may occur during maintenance or repair operations.
electrical engineering and systems science
Our opinions, which things we like or dislike, depend on the opinions of those around us. Nowadays, we are influenced by the opinions of online strangers, expressed in comments and ratings on online platforms. Here, we perform novel "academic A/B testing" experiments with over 2,500 participants to measure the extent of that influence. In our experiments, the participants watch and evaluate videos on mirror proxies of YouTube and Vimeo. We control the comments and ratings that are shown underneath each of these videos. Our study shows that from 5$\%$ up to 40$\%$ of subjects adopt the majority opinion of strangers expressed in the comments. Using Bayes' theorem, we derive a flexible and interpretable family of models of social influence, in which each individual forms posterior opinions stochastically following a logit model. The variants of our mixture model that maximize the Akaike information criterion represent two sub-populations, i.e., non-influenceable and influenceable individuals. The prior opinions of the non-influenceable individuals are strongly correlated with the external opinions and have low standard error, whereas the prior opinions of influenceable individuals have high standard error and become correlated with the external opinions due to social influence. Our findings suggest that opinions are random variables updated via Bayes' rule whose standard deviation is correlated with opinion influenceability. Based on these findings, we discuss how to hinder opinion manipulation and misinformation diffusion in the online realm.
physics
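A minimal sketch of the model family described above: a posterior opinion is the precision-weighted combination of the individual's prior and the external (strangers') opinion, and the expressed rating follows a logit (softmax) model over the posterior. All weights, scales, and the 1-5 rating levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_opinion(prior, external, prior_se, external_se):
    """Bayes' rule for two Gaussian signals: precision-weighted average."""
    w = (1 / prior_se ** 2) / (1 / prior_se ** 2 + 1 / external_se ** 2)
    return w * prior + (1 - w) * external

def logit_rating(posterior, levels=np.arange(1, 6), temperature=1.0):
    """Stochastic rating on a 1-5 scale via a logit (softmax) model."""
    logits = -np.abs(levels - posterior) / temperature
    p = np.exp(logits - logits.max())
    return rng.choice(levels, p=p / p.sum())

# Influenceable individual: diffuse prior, so the external opinion dominates.
post = posterior_opinion(prior=2.0, external=4.5, prior_se=2.0, external_se=0.5)
print(post, logit_rating(post))          # posterior pulled toward 4.5
```

A non-influenceable individual corresponds to a small `prior_se`, for which the weight `w` stays close to 1 and the posterior barely moves.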
The valley Hall effect is the appearance of a valley current in the direction transverse to the electric current. We develop the microscopic theory of the valley Hall effect in two-dimensional semiconductors where the electrons are dragged by phonons or photons. We derive and analyze all relevant contributions to the valley current, including the skew-scattering effects together with the anomalous contributions caused by the side-jumps and the anomalous velocity. The partial compensation of the anomalous contributions is studied in detail. The role of two-phonon and two-impurity scattering processes is analyzed. We also compare the valley Hall effect under these drag conditions with the valley Hall effect caused by an external static electric field.
condensed matter
Segmentation of anatomical structures and pathologies is inherently ambiguous. For instance, structure borders may not be clearly visible, or different experts may have different annotation styles. The majority of current state-of-the-art methods do not account for such ambiguities but rather learn a single mapping from image to segmentation. In this work, we propose a novel method to model the conditional probability distribution of the segmentations given an input image. We derive a hierarchical probabilistic model, in which separate latent variables are responsible for modelling the segmentation at different resolutions. Inference in this model can be efficiently performed using the variational autoencoder framework. We show that our proposed method can be used to generate significantly more realistic and diverse segmentation samples compared to recent related work, both when trained with annotations from a single annotator and from multiple annotators.
electrical engineering and systems science
We propose a controllable non-reciprocal transmission model. The model consists of a M\"obius ring connected to two one-dimensional semi-infinite chains, with a two-level atom located inside one of the cavities of the M\"obius ring. We use the Green function method to study the transmittance of a single photon through the model. The results show that non-reciprocal transmission can be achieved in this model and that the two-level atom can act as a quantum switch for the non-reciprocal transport of the single photon. This controllable non-reciprocal transmission model may inspire new quantum non-reciprocal devices.
quantum physics
The visible and dark sectors of particle physics can be connected via kinetic mixing between ordinary and hidden photons. If the latter is light, its production in high-energy collisions of ordinary particles proceeds via oscillations with ordinary photons, similarly to neutrino processes. Generically, the experiments are insensitive to the mass of the hidden vector if it is lighter than 1\,MeV and does not decay into an $e^+e^-$ pair. Still, one can use missing energy and scattering off the detector material as signatures to search for the light vectors. The presence of a medium suppresses the production of the light vectors, making the experiments insensitive to the entire model. We present analytic formulas for light hidden photon production, propagation and detection, valid for searches at colliders and beam-target experiments, and apply them to estimate the impact on the sensitivities of a set of experiments --- NA64, FASER, MATHUSLA, SHiP, T2K, DUNE, NA62 --- in a zero-background case.
high energy physics phenomenology
Federated Learning (FL) has recently been presented as a new technique for training shared machine learning models in a distributed manner while respecting data privacy. However, implementing FL in wireless networks may significantly reduce the lifetime of energy-constrained mobile devices due to their involvement in the construction of the shared learning models. To handle this issue, we propose a novel approach at the physical layer, based on the application of lightwave power transfer in the FL-based wireless network, together with a resource allocation scheme to manage the network's power efficiency. We formulate the corresponding optimization problem and then propose a method to obtain the optimal solution. Numerical results reveal that the proposed scheme can provide sufficient energy to a mobile device for performing FL tasks without using any power from its own battery. Hence, the proposed approach can help the FL-based wireless network overcome the issue of limited energy in mobile devices.
electrical engineering and systems science
We explore the deep ultraviolet (that is, short-distance) limit of the power spectrum (PS) and of the correlation function of a cold dark matter dominated Universe. While for large scales the PS can be written as a double series expansion, in powers of the linear PS and of the wavenumber $k$, we show that, in the opposite limit, it can be expressed via an expansion in powers of the form $1/k^{d+2n}$, where $d$ is the number of spatial dimensions, and $n$ is a non-negative integer. The coefficients of the terms of the expansion are nonperturbative in the linear PS, and can be interpreted in terms of the probability density function for the displacement field, evaluated around specific configurations of the latter, which we identify. In the case of the Zel'dovich dynamics, these coefficients can be determined analytically, whereas for the exact dynamics they can be treated as fit, or nuisance, parameters. We confirm our findings with numerical simulations and discuss the necessary steps to match our results to those obtained for larger scales and to actual measurements.
astrophysics
We present a comprehensive analysis of the potential sensitivity of the Electron-Ion Collider (EIC) to charged lepton flavor violation (CLFV) in the channel $ep\to \tau X$, within the model-independent framework of the Standard Model Effective Field Theory (SMEFT). We compute the relevant cross sections to leading order in QCD and electroweak corrections and perform simulations of signal and SM background events in various $\tau$ decay channels, suggesting simple cuts to enhance the associated estimated efficiencies. To assess the discovery potential of the EIC in $\tau$-$e$ transitions, we study the sensitivity of other probes of this physics across a broad range of energy scales, from $pp \to e \tau X$ at the Large Hadron Collider to decays of $B$ mesons and $\tau$ leptons, such as $\tau \to e \gamma$, $\tau \to e \ell^+ \ell^-$, and crucially the hadronic modes $\tau \to e Y$ with $Y \in \{ \pi, K, \pi \pi, K \pi, ...\}$. We find that electroweak dipole and four-fermion semi-leptonic operators involving light quarks are already strongly constrained by $\tau$ decays, while operators involving the $c$ and $b$ quarks present more promising discovery potential for the EIC. An analysis of three models of leptoquarks confirms the expectations based on the SMEFT results. We also identify future directions needed to maximize the reach of the EIC in CLFV searches: these include an optimization of the $\tau$ tagger in hadronic channels, an exploration of background suppression through tagging $b$ and $c$ jets in the final state, and a global fit by turning on all SMEFT couplings, which will likely reveal new discovery windows for the EIC.
high energy physics phenomenology
This work proposes a methodology to find performance and energy trade-offs for parallel applications running on Heterogeneous Multi-Processing systems with a single instruction-set architecture. These offer flexibility in the form of different core types and voltage and frequency pairings, defining a vast design space to explore. Therefore, for a given application, choosing a configuration that optimizes the performance and energy consumption is not straightforward. Our method proposes novel analytical models for performance and power consumption whose parameters can be fitted using only a few strategically sampled offline measurements. These models are then used to estimate an application's performance and energy consumption for the whole configuration space. In turn, these offline predictions define the choice of estimated Pareto-optimal configurations of the model, which are used to inform the selection of the configuration that the application should be executed on. The methodology was validated on an ODROID-XU3 board for eight programs from the PARSEC Benchmark, Phoronix Test Suite and Rodinia applications. The generated Pareto-optimal configuration space represented a 99% reduction of the universe of all available configurations. Energy savings of up to 59.77%, 61.38% and 17.7% were observed when compared to the performance, ondemand and powersave Linux governors, respectively, with higher or similar performance.
electrical engineering and systems science
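A minimal sketch of the configuration-selection step described above, with hypothetical analytical models standing in for the paper's fitted ones (model forms, constants, and the configuration grid are all illustrative):

import itertools

def exec_time(freq_ghz, cores, a=2.0, b=0.5):
    # hypothetical performance model: serial part + parallel part
    return a / freq_ghz + b / (freq_ghz * cores)

def power(freq_ghz, cores, p_static=0.5, c_dyn=0.8):
    # hypothetical power model: static power + dynamic term ~ cores * f^3
    return p_static + c_dyn * cores * freq_ghz ** 3

configs = list(itertools.product([0.6, 1.0, 1.4, 1.8], [1, 2, 4, 8]))
points = []
for f, c in configs:
    t = exec_time(f, c)
    points.append((t, power(f, c) * t, (f, c)))   # (time, energy, config)

# keep configurations not dominated in both time and energy
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and
                     (q[0] < p[0] or q[1] < p[1]) for q in points)]
for t, e, cfg in sorted(pareto):
    print(f"{cfg}: time={t:.2f} s, energy={e:.2f} J")

The printed front is the drastically reduced set from which a runtime governor would pick, which is the source of the 99% design-space reduction reported above.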
The structural, optical, morphological and nonlinear optical properties of Cu:ZnO spray-coated films are studied. The surface morphology of the Cu:ZnO thin films turned out to be homogeneous, crack-free and well covered with pea-shaped grains. The peak shift observed in the X-ray photoelectron spectroscopy spectra of the Cu:ZnO thin films indicates the defect states present in the films. The satellite peak observed at 939.9 eV in the Cu 2p core-level spectra confirms the +2 oxidation state of Cu in the films. The formation of additional defect levels in the nanostructures upon Cu doping was investigated using photoluminescence (PL) and Raman spectroscopy. Luminescent centers in the violet, blue and green spectral regions were observed. The most prominent emission was centered at the blue color center for the 5% Cu:ZnO thin films. The enhancement in the PL emission intensity confirms the increase in the defect-state density upon Cu doping. The shift of the UV emission peak toward the visible region validates the increase in the non-radiative recombination processes in the films upon doping. The phonon modes observed in the Raman analysis around 439, 333 and 558 cm$^{-1}$ confirm the improvement in crystallinity and the formation of defect states in the films. X-ray diffraction reveals that the deposited films are of single-phase wurtzite ZnO structure with preferential (0 0 2) growth orientation, parallel to the c-axis. The third-order optical susceptibility $\chi^{(3)}$ increased from $3.5 \times 10^{-4}$ to $2.77 \times 10^{-3}$ esu due to the enhancement of electronic transitions to the different defect levels formed in the films and through local heating effects arising from continuous-wave laser illumination. The enhanced third-harmonic-generation signal, investigated using an Nd:YAG laser at 1064 nm with 8 ns pulse width, shows the credibility of Cu:ZnO films for frequency-tripling applications.
condensed matter
The combination of optical and mid-infrared (MIR) photometry has been extensively used to select red active galactic nuclei (AGNs). Our aim is to explore the obscuration properties of these red AGNs with both X-ray spectroscopy and spectral energy distributions (SEDs). In this study, we revisit the relation between optical/MIR extinction and X-ray absorption. We use IR selection criteria, specifically the $W1$ and $W2$ WISE bands, to identify 4798 AGNs in the $\it{XMM-XXL}$ area ($\sim 25$ deg$^2$). Application of optical/MIR colours ($r- W2 > 6$) reveals 561 red AGNs (14$\%$). Of these, 47 have available X-ray spectra with at least 50 net (background-subtracted) counts per detector. For these sources, we construct SEDs from the optical to the MIR using the CIGALE code. The SED fitting shows that 44 of these 47 sources present clear signs of obscuration, based on the AGN emission and the estimated inclination angle. The SED fitting also reveals ten systems ($\sim 20\%$) that are dominated by the galaxy. In these cases, the red colours are attributed to the host galaxy rather than to AGN absorption. Excluding these ten systems from our sample and applying X-ray spectral fitting shows that up to $76\%$ (28/37) of the IR-red AGNs present signs of X-ray absorption. Thus, there are nine sources ($\sim 20\%$ of the sample) that, although optically red, are not substantially X-ray absorbed. Approximately $50\%$ of these sources present broad emission lines in their optical spectra. We suggest that the reason for this apparent discrepancy is that the $r - W2$ criterion is sensitive to smaller amounts of obscuration than X-ray spectroscopy. In conclusion, the majority of red AGNs present considerable obscuration levels, as shown by their SEDs. Their X-ray absorption is moderate, with a mean of $\rm N_H \sim 10^{22}\, \rm{cm^{-2}}$.
astrophysics
The reliability of fully convolutional networks (FCNs) has been successfully demonstrated by recent studies in many speech applications. One of the most popular variants of these FCNs is the `U-Net', an encoder-decoder network with skip connections. In this study, we propose `SkipConvNet', in which we replace each skip connection with multiple convolutional modules to provide the decoder with intuitive feature maps, rather than the encoder's raw output, to improve the learning capacity of the network. We also propose the use of optimal smoothing of the power spectral density (PSD) as a pre-processing step, which helps to further enhance the efficiency of the network. To evaluate our proposed system, we use the REVERB challenge corpus to assess the performance of various enhancement approaches under the same conditions. We focus solely on improvements in speech quality and their contribution to the efficiency of back-end speech systems, such as speech recognition and speaker verification, trained only on clean speech. Experimental findings show that the proposed system consistently outperforms other approaches.
electrical engineering and systems science
The surfaces of intrinsic magnetic topological insulators (TIs) host magnetic moments exchange-coupled to Dirac electrons. We study the magnetic phases arising from tuning the electron density using variational and exact diagonalization approaches. In the dilute limit, we find that magnetic skyrmions are formed which bind to electrons, leading to a skyrmion crystal / Wigner crystal phase, while at higher densities spin spirals accompanied by chiral 1d channels of electrons are formed. The binding of electrons to textures raises the possibility of manipulating textures with electrostatic gating. We determine the phase diagram capturing the competition between intrinsic spin-spin interactions and carrier density, and comment on possible applications to experiments in magnetic TIs and spintronic devices such as skyrmion-based memory.
condensed matter
Recent observations of the high-mass X-ray binary Cygnus X-1 have shown that both the companion star (41 solar masses) and the black hole (21 solar masses) are more massive than previously estimated. Furthermore, the black hole appears to be nearly maximally spinning. Here we present a possible formation channel for the Cygnus X-1 system that matches the observed system properties. In this formation channel, we find that the orbital parameters of Cygnus X-1, combined with the observed metallicity of the companion, imply a significant reduction in mass loss through winds relative to commonly used prescriptions for stripped stars.
astrophysics
We present a computational design methodology for topology optimization of multi-material-based flexoelectric composites. The methodology extends our recently proposed design methodology for a single flexoelectric material. We adopt the multi-phase vector level set (LS) model, which easily copes with various numbers of phases, efficiently satisfies multiple constraints, and intrinsically avoids overlap or vacuum among different phases. We extend the pointwise density mapping technique to multi-material design and use B-spline elements to discretize the partial differential equations (PDEs) of flexoelectricity. The dependence of the objective function on the design variables is incorporated using the adjoint technique. The obtained design sensitivities are used in the Hamilton-Jacobi equation to update the LS function. We provide numerical examples for two-, three- and four-phase flexoelectric composites to demonstrate the flexibility of the model, as well as the significant enhancement in electromechanical coupling coefficient that can be obtained using multi-material topology optimization for flexoelectric composites.
physics
We show that hyperrigidity for a C*-correspondence $(A,X)$ is equivalent to non-degeneracy of the left action of the Katsura ideal $\mathcal{J}_X$ on $X$. Due to the work of Katsoulis and Ramsey, our result shows that if $G$ is a locally compact group acting on $(A,X)$ and the Katsura ideal $\mathcal{J}_X$ acts on $X$ non-degenerately then the Hao-Ng isomorphism problem for reduced crossed products has a positive solution and the Hao-Ng isomorphism problem for full crossed products has a partial solution.
mathematics
By means of C-OTDR (Correlation Optical Time Domain Reflectometry), we measured the latency of a 100 km fiber with an accuracy of a few picoseconds. From 49 repeated measurements, we calculated a standard deviation of 12 ps between the round-trip latency values. To verify the reflection measurements, we used a single-pass setup without a reflector, which showed a maximum difference of only 11 ps.
electrical engineering and systems science
We explore the possibility of CP violation in baryonic $\Lambda_b\to(\Lambda_c^+, p^+)\pi^+\mu^-\mu^-$ decays, which are mediated by two Majorana sterile neutrinos and are $|\Delta L|=2$ lepton-number-violating processes. Appreciable CP asymmetry can be obtained if there are two on-shell Majorana neutrinos that are quasi-degenerate in mass, with a mass difference of the order of the average decay width. We find that, given the present constraints on the heavy-to-light mixing element $|V_{\mu N}|$, the $\Lambda_b\to p^+\pi^+\mu^-\mu^-$ and $\Lambda_b\to \Lambda_c^+\pi^+\mu^-\mu^-$ decay rates are suppressed but could be within the experimental reach at the LHC. If searches for these modes are performed, then the experimental limits on the rates can be translated into constraints on the Majorana neutrino mass $m_N$ and the heavy-to-light mixing element squared $|V_{\mu N}|^2$. We show that the constraints on the $(m_N, |V_{\mu N}|^2)$ parameter space coming from the $|\Delta L| = 2$ baryonic decays are complementary to the bounds coming from other processes.
high energy physics phenomenology
Multistate stimulated Raman adiabatic passage (STIRAP) is a process which allows for adiabatic population transfer between the two ends of a chainwise-connected quantum system. The process requires large temporal areas of the driving pulsed fields (pump and Stokes) in order to suppress the nonadiabatic couplings and thereby make adiabatic evolution possible. To this end, a variation of multistate STIRAP which accelerates and improves the population transfer is presented here. In addition to the usual pump and Stokes fields, it uses shortcut fields applied between the states that form the dark state of the system. The shortcuts cancel the couplings between the dark state and the other adiabatic states, thereby resulting (in the ideal case) in a unit transition probability between the two end states of the chain. Specific examples of five-state systems formed of the magnetic sublevels of the transitions between two degenerate levels with angular momenta $J_g=2$ and $J_e=1$ or $J_e=2$ are considered in detail, for which the shortcut fields are derived analytically. The proposed method is simpler than the usual "shortcuts to adiabaticity" recipe, which prescribes shortcut fields between all states of the system, whereas the present proposal uses shortcut fields only between the sublevels forming the dark state. The results are of potential interest in applications where high-fidelity quantum control is essential, e.g. quantum information, atom optics, formation of ultracold molecules, cavity QED, etc.
quantum physics
The deepening penetration of renewable resources into power systems entails great difficulties that have not yet been satisfactorily surmounted. An issue that merits special attention is the short-term planning of power systems under net-load uncertainty. To this end, we work out a distributionally robust unit commitment methodology that expressly assesses the uncertainty associated with net load. The principal strength of the proposed methodology lies in its ability to represent the probabilistic nature of net load without having to set forth its probability distribution. This strength is brought about by the notion of an ambiguity set, for the construction of which the Kullback-Leibler divergence is employed in this paper. We demonstrate the effectiveness of the proposed methodology on real-world data using representative studies. The sensitivity analyses performed provide quantitative answers to a broad array of what-if questions on the influence of the divergence tolerance and dataset size on optimal solutions.
mathematics
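Schematically, the ambiguity set and the resulting distributionally robust commitment problem take the form (our notation, not the paper's):

$$ \mathcal{P}_\eta = \Big\{ P : D_{\mathrm{KL}}(P\,\|\,\hat P) \le \eta \Big\}, \qquad \min_{x\in\mathcal{X}} \Big( c^\top x + \max_{P\in\mathcal{P}_\eta} \mathbb{E}_P\big[Q(x,\xi)\big] \Big), $$

where $\hat P$ is the empirical net-load distribution estimated from data, $\eta$ the divergence tolerance, $x$ the commitment decisions, and $Q(x,\xi)$ the dispatch cost under the net-load realization $\xi$. Increasing $\eta$ enlarges the set of distributions hedged against, which is exactly the trade-off probed by the sensitivity analyses mentioned above.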
In this work, we study the driven-dissipative dynamics of a coherently driven spin ensemble with a squeezed, superradiant decay. This decay consists of a sum of both raising and lowering collective spin operators with a tunable weight. The model presents different critical non-equilibrium phases with a gapless Liouvillian that are associated with particular symmetries and that give rise to distinct kinds of non-ergodic dynamics. In Ref. [1] we focus on the case of a strong symmetry and use this model to introduce and discuss the effect of dissipative freezing, where, regardless of the system size, stochastic quantum trajectories initialized in a superposition of different symmetry sectors always select a single one of them and remain there for the rest of the evolution. Here, we deepen this analysis and study in more detail the other type of non-ergodic physics present in the model, namely the emergence of non-stationary dynamics in the thermodynamic limit. We complete our description of squeezed superradiance by analysing its metrological properties in terms of spin squeezing and the features that each of these critical phases imprints on the light emitted by the system.
quantum physics
We explore the idea to bootstrap Feynman integrals using integrability. In particular, we put the recently discovered Yangian symmetry of conformal Feynman integrals to work. As a prototypical example we demonstrate that the D-dimensional box integral with generic propagator powers is completely fixed by its symmetries to be a particular linear combination of Appell hypergeometric functions. In this context the Bloch-Wigner function arises as a special Yangian invariant in 4D. The bootstrap procedure for the box integral is naturally structured in algorithmic form. We then discuss the Yangian constraints for the six-point double box integral as well as for the related hexagon. For the latter we argue that the constraints are solved by a set of generalized Lauricella functions and we comment on complications in identifying the integral as a certain linear combination of these. Finally, we elaborate on the close relation to the Mellin-Barnes technique and argue that it generates Yangian invariants as sums of residues.
high energy physics theory
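In a common convention (schematic; the paper's normalization may differ), the integral in question reads

$$ I_4(a_1,\dots,a_4) = \int \frac{d^D x_0}{(x_{01}^2)^{a_1} (x_{02}^2)^{a_2} (x_{03}^2)^{a_3} (x_{04}^2)^{a_4}}, \qquad x_{0j} \equiv x_0 - x_j, $$

which is conformal when $\sum_j a_j = D$ and then depends on the external points, up to an overall conformal weight, only through the cross-ratios $u = x_{12}^2 x_{34}^2 / (x_{13}^2 x_{24}^2)$ and $v = x_{14}^2 x_{23}^2 / (x_{13}^2 x_{24}^2)$. It is this cross-ratio dependence that the Yangian constraints fix in terms of Appell hypergeometric functions.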
We construct a class of interacting $(d-2)$-form theories in $d$ dimensions that are `third way' consistent. This refers to the fact that the interaction terms in the $p$-form field equations of motion neither come from the variation of an action nor are they off-shell conserved on their own. Nevertheless the full equation is still on-shell consistent. Various generalizations, e.g. coupling them to $(d-3)$-forms, where 3-algebras play a prominent role, are also discussed. The method to construct these models also easily recovers the modified 3$d$ Yang-Mills theory obtained earlier and straightforwardly allows for higher derivative extensions.
high energy physics theory
We introduce an extensible statistical framework, based on penalized-likelihood distributional regression, for detecting anomalous time series, including those with heavy-tailed distributions and non-stationarity in higher-order moments. Specifically, generalized additive models for location, scale, and shape are used to infer sample-path representations defined by a parametric distribution whose parameters are comprised of basis functions. Akaike weights are then applied to each model and time series, yielding a probability measure that can be effectively used to classify and rank anomalous time series. A mathematical exposition is also given to justify the proposed Akaike-weight scoring, under a suitable model embedding, as a way to asymptotically identify anomalous time series. Studies evaluating the methodology on both simulations and real-world datasets confirm that high accuracy can be obtained in detecting many different and complex types of shape anomalies. Both code implementing GAWS for running on a local machine and the datasets referenced in this paper are available online.
statistics
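The Akaike-weight scoring step is easy to state concretely. A minimal sketch, with hypothetical model names and AIC values:

import numpy as np

def akaike_weights(aic_values):
    # convert AIC differences into normalized model probabilities
    delta = np.asarray(aic_values, dtype=float)
    delta -= delta.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# AICs of, say, a Gaussian, a heavy-tailed, and a skewed fit to one series:
print(akaike_weights([612.4, 598.1, 603.7]))  # approx [0.001, 0.942, 0.057]

A series whose weights concentrate on a heavy-tailed or otherwise unusual distributional model can then be ranked as anomalous relative to the rest of the collection.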
We present an architecture to investigate wave-particle duality in $N$-path interferometers on a universal quantum computer involving as few as $2\log N$ qubits, and develop a measurement scheme which allows the efficient extraction of quantifiers of interference visibility and which-path information. We implement our algorithms for interferometers with up to $N=16$ paths in proof-of-principle experiments on a noisy intermediate-scale quantum (NISQ) device using down to $\mathcal{O}(\log N)$ gates and, despite increasing noise, consistently observe a complementary behavior between interference visibility and which-path information. Our results are in accordance with our current understanding of wave-particle duality and allow its investigation for interferometers with an exponentially growing number of paths on future quantum devices beyond the NISQ era.
quantum physics
We consider the estimation of two-sample integral functionals, of the type that occur naturally, for example, when the object of interest is a divergence between unknown probability densities. Our first main result is that, in wide generality, a weighted nearest neighbour estimator is efficient, in the sense of achieving the local asymptotic minimax lower bound. Moreover, we also prove a corresponding central limit theorem, which facilitates the construction of asymptotically valid confidence intervals for the functional, having asymptotically minimal width. One interesting consequence of our results is the discovery that, for certain functionals, the worst-case performance of our estimator may improve on that of the natural `oracle' estimator, which is given access to the values of the unknown densities at the observations.
mathematics
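A simple unweighted k-nearest-neighbour estimator of the Kullback-Leibler divergence, one member of the family studied above (the paper's efficient estimator uses a weighted combination of such statistics, which this sketch omits):

import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(x, y, k=1):
    # D_hat = (d/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1))
    n, d = x.shape
    m = y.shape[0]
    rho = cKDTree(x).query(x, k + 1)[0][:, -1]   # k-NN distance within x (skip self)
    nu = cKDTree(y).query(x, k)[0]
    nu = nu[:, -1] if nu.ndim > 1 else nu        # k-NN distance from x into y
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(2000, 1))
y = rng.normal(0.5, 1.0, size=(2000, 1))
print(knn_kl_divergence(x, y))   # true KL(N(0,1) || N(0.5,1)) = 0.125

The weighting in the paper's estimator removes higher-order bias terms, which is what yields efficiency in the local asymptotic minimax sense.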
Increasingly, scholars seek to integrate legal and technological insights to combat bias in AI systems. In recent years, many different definitions for ensuring non-discrimination in algorithmic decision systems have been put forward. In this paper, we first briefly describe the EU law framework covering cases of algorithmic discrimination. Second, we present an algorithm that harnesses optimal transport to provide a flexible framework to interpolate between different fairness definitions. Third, we show that important normative and legal challenges remain for the implementation of algorithmic fairness interventions in real-world scenarios. Overall, the paper seeks to contribute to the quest for flexible technical frameworks that can be adapted to varying legal and normative fairness constraints.
computer science
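To make the interpolation idea concrete, here is a generic one-dimensional "geometric repair" via quantile functions, where $t \in [0,1]$ interpolates between the raw scores and full statistical parity; this is a standard construction in the fairness literature, not necessarily the paper's exact algorithm:

import numpy as np

def partial_repair(scores, groups, t):
    """Move each group's score distribution a fraction t of the way
    (along 1-D optimal transport paths) to the common barycenter."""
    repaired = scores.astype(float).copy()
    qs = np.linspace(0.0, 1.0, 101)
    # barycenter quantiles: average of the per-group quantile functions
    bary = np.mean([np.quantile(scores[groups == g], qs)
                    for g in np.unique(groups)], axis=0)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        ranks = (np.argsort(np.argsort(scores[idx])) + 0.5) / len(idx)
        target = np.interp(ranks, qs, bary)   # transport map to the barycenter
        repaired[idx] = (1 - t) * scores[idx] + t * target
    return repaired

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.4, 0.1, 500), rng.normal(0.6, 0.1, 500)])
groups = np.repeat([0, 1], 500)
half = partial_repair(scores, groups, t=0.5)
print(half[groups == 0].mean(), half[groups == 1].mean())  # group gap roughly halved

The single knob $t$ is what lets such a framework interpolate between fairness definitions of different strictness, which can then be matched to the applicable legal constraint.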
After extensive exploration, broad agreement between observations and theory has been reached that satellites are preferentially aligned with the major axes of their host centrals. Still, some issues on this topic remain unsolved. In this paper, we present studies of the satellite spatial distribution. To compare fairly with observations, we develop a novel galaxy finder and reconstruction algorithm for hydrodynamical simulations, based on projected mock images and taking full account of the point spread function, pixel size, surface brightness limit, resolution and redshift dimming effects. With galaxy samples constructed using this algorithm, the satellite alignment is examined by comparison with observational results. It is found that the observed alignment can be reproduced for red galaxies, which dominate the sample in this study, but not for blue galaxies. The satellites' radial distribution is also investigated. Outer satellites within host halos show a stronger alignment signal than satellites in the inner regions, especially for red satellites, in contrast with previous studies. The disagreement is mainly due to extra galaxies identified by our new galaxy finder, which are mostly located in the inner regions of host halos. Our study shows that at lower redshift the alignment strength becomes stronger, while the radial distribution curve becomes flatter. This suggests differences in the evolution of the angular distribution between satellites residing in the inner and outer halo regions, and implies that post-infall evolution reduces the original alignment signal, with the impact decreasing for satellites with later infall times.
astrophysics
We estimate the sensitivity to the top-quark Yukawa coupling $y_t$ at future $e^+e^-$ colliders. We go beyond the standard approach that focuses on $t\bar t h$ production and consider final states with a Higgs boson but not top quarks. The sensitivity to $y_t$ in such processes comes from the coupling of the Higgs boson to top quarks in loops. Such final states can be produced in significant numbers at center-of-mass energies that will be accessible by all proposed $e^+e^-$ colliders. In a simplified theoretical framework to parametrise deviations from the Standard Model, we find that at FCC-$ee$ and CEPC operating at $\sqrt{s}=240$ GeV, $y_t$ could potentially be measured with precision better than $1\%$. For CLIC and ILC the extraction of $y_t$ could be improved by a factor of about 2 and 7 respectively, compared to its extraction from just $t\bar t h$ final states.
high energy physics phenomenology
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations. One example is budget-constrained global optimization of f, for which Bayesian optimization is a popular method. Other properties of interest include local optima, level sets, integrals, or graph-structured information induced by f. Often, we can find an algorithm A to compute the desired property, but it may require far more than T queries to execute. Given such an A, and a prior distribution over f, we refer to the problem of inferring the output of A using T evaluations as Bayesian Algorithm Execution (BAX). To tackle this problem, we present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output. Applying this to Dijkstra's algorithm, for instance, we infer shortest paths in synthetic and real-world graphs with black-box edge costs. Using evolution strategies, we yield variants of Bayesian optimization that target local, rather than global, optima. On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm. Our method is closely connected to other Bayesian optimal experimental design procedures such as entropy search methods and optimal sensor placement using Gaussian processes.
statistics
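A heavily simplified sketch of the acquisition idea, for a toy algorithm A = "return the argmax of f on a grid": estimate the information gain about A's output by Gaussian moment matching of GP posterior samples within each output class. The grid, kernel, and all constants are illustrative, and this crude estimator is a stand-in for the paper's more careful one:

import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 60)

def gp_posterior(X, y, ell=0.1, noise=1e-4):
    # squared-exponential GP posterior on the grid xs
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    X, y = np.asarray(X), np.asarray(y)
    Kxx = k(X, X) + noise * np.eye(len(X))
    Ksx = k(xs, X)
    sol = np.linalg.solve(Kxx, Ksx.T)
    return sol.T @ y, k(xs, xs) - Ksx @ sol + 1e-9 * np.eye(len(xs))

mu, cov = gp_posterior([0.1, 0.5, 0.9], [0.2, 0.0, -0.1])
F = rng.multivariate_normal(mu, cov, size=300)  # posterior function draws
out = F.argmax(axis=1)                          # algorithm output per draw

# EIG(x) ~ H[f(x)] - E_outputs H[f(x) | output], via moment-matched Gaussians
eig = 0.5 * np.log(F.var(axis=0))
for o in np.unique(out):
    grp = F[out == o]
    eig -= (len(grp) / len(F)) * 0.5 * np.log(grp.var(axis=0) + 1e-12)
print("next query at x =", xs[np.argmax(eig)])

The chosen query is the point whose observation would most shrink the uncertainty about the algorithm's output, rather than about f itself, which is the key distinction from plain Bayesian optimization.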
We performed integrated modelling of the chemical pathways of formation of boron nitride nanotube (BNNT) precursors during high-temperature synthesis in a B/N$_2$ mixture. The modelling includes quantum chemistry, quantum-classical molecular dynamics, thermodynamic, and kinetic approaches. It is shown that BN compounds are formed in the interaction of N$_2$ molecules with small boron clusters (N$_2$ molecule fixation) rather than with the less reactive liquid boron. We demonstrate that the transformation and consumption of liquid boron proceeds through the evaporation of clusters B$_m$ with $m \leq 5$ and their subsequent conversion into B$_m$N$_n$ chains. The production of such chains is crucial to the growth of BNNTs because these chains form the building blocks of bigger and longer BN chains and rings, which are themselves the building blocks of fullborenes and BNNTs. Moreover, kinetic modelling revealed that B$_4$N$_4$ and B$_5$N$_4$ species play a major role in the N$_2$ molecule fixation process. The formation of these species via reactions with B$_4$ and B$_5$ clusters is not adequately described under the assumption of thermodynamic equilibrium, because the accumulation of both B$_4$N$_4$ and B$_5$N$_4$ depends on the background gas pressure and the gas cooling rate. Long BN chains and rings, which are the precursors of fullborene and BNNT growth, form via self-assembly of the component B$_4$N$_4$ and B$_5$N$_4$ species. Our modelling results (particularly the increased densities of B$_4$N$_4$ and B$_5$N$_4$ species at higher gas pressures) explain the experimentally observed effect of gas pressure on the yield of high-quality BNNTs. The catalytic role of hydrogen was also studied; it is shown that HBNH molecules can be the main precursor of BNNT synthesis in the presence of hydrogen.
physics
Existing electricity market designs assume risk neutrality and lack risk-hedging instruments, which leads to suboptimal market outcomes and reduces overall market efficiency. This paper enables risk trading in the chance-constrained stochastic electricity market by introducing Arrow-Debreu Securities (ADS) and derives a risk-averse market-clearing model with risk trading. To enable risk trading, the probability space of the underlying uncertainty is discretized into a finite number of outcomes, which makes it possible to design practical risk contracts and to produce energy, balancing reserve, and risk prices. Notably, although the risk contracts are discrete, the model preserves the continuity of the chance constraints. The case study illustrates the usefulness of the proposed risk-averse chance-constrained electricity market with risk trading.
electrical engineering and systems science
We propose a degenerated hierarchical look-up table (DH-LUT) scheme to compensate component nonlinearities. For probabilistically shaped 64-QAM signals, it achieves up to 2-dB SNR improvement, while the table size is only 8.59% of that of the conventional LUT method.
electrical engineering and systems science
We establish order-preserving versions of the basic principles of functional analysis, such as the Hahn-Banach, Banach-Steinhaus, open mapping and Banach-Alaoglu theorems.
mathematics
We report two microlensing planet candidates discovered by the KMTNet survey in 2017. However, both events suffer from the 2L1S/1L2S degeneracy, which is an obstacle to claiming the discovery of the planets with certainty unless the degeneracy can be resolved. For KMT-2017-BLG-0962, the degeneracy cannot be resolved. If the 2L1S solution is correct, KMT-2017-BLG-0962 might be produced by a super-Jupiter-mass planet orbiting a mid-M-dwarf host star. For KMT-2017-BLG-1119, the light-curve modeling favors the 2L1S solution, but higher-resolution observations of the baseline object tend to support the 1L2S interpretation rather than the planetary interpretation. This degeneracy might be resolved by a future measurement of the lens-source relative proper motion. This study shows that the problem of resolving the 2L1S/1L2S degeneracy exists over a much wider range of conditions than those considered by the theoretical study of Gaudi (1998).
astrophysics
Neutrinos are the second most abundant particles in the universe. Over the last 50 years, neutrino physics has been at the forefront of modern physics because of the remarkable characteristics of this elusive particle. According to the basic postulates of the Standard Model of particle physics, neutrinos were assumed to be massless, but the results of experiments conducted in recent years have established that neutrinos have a finite mass. In this paper we discuss the various past experiments that have set experimental limits on the absolute neutrino masses. Further, future experiments expected to push these limits into the sub-eV regime are also discussed.
high energy physics phenomenology
Superheated perfluorocarbon nanodroplets are emerging ultrasound imaging contrast agents boasting biocompatible components, unique phase-change dynamics, and therapeutic loading capabilities. Upon exposure to a sufficiently high-intensity pulse of acoustic energy, the nanodroplet's perfluorocarbon core undergoes a liquid-to-gas phase change and becomes an echogenic microbubble, providing ultrasound contrast. The controllable activation leads to high-contrast images, while the small size of the nanodroplets promotes longer circulation times and better in-vivo stability. One drawback, however, is that the nanodroplets can only be vaporized a single time, limiting their versatility. Recently, we and others have addressed this issue by using a perfluorohexane core, which has a boiling point above body temperature; thus, after vaporization, the microbubbles recondense back into their stable nanodroplet form. Previous work with perfluorohexane nanodroplets relied on optical activation via pulsed-laser absorption of an encapsulated dye. This strategy limits the imaging depth and temporal resolution of the method. In this study we overcome these limitations by demonstrating acoustic droplet vaporization with 1.1-MHz high-intensity focused ultrasound. A short-duration, high-amplitude pulse of focused ultrasound provides a sufficiently strong peak negative pressure to initiate vaporization. When using a custom imaging sequence with a high-frequency transducer, the repeated acoustic activation of perfluorohexane nanodroplets can be visualized in polyacrylamide tissue-mimicking phantoms. We demonstrate detection of hundreds of vaporization events from individual nanodroplets, with activation thresholds well below the tissue cavitation limit. Overall, this approach has the potential to enable reliable contrast-enhanced ultrasound imaging at clinically relevant depths.
physics
In this paper we consider a Boltzmann-type kinetic description of Follow-the-Leader traffic dynamics and we study the resulting asymptotic distributions, namely the counterpart of the Maxwellian distribution of the classical kinetic theory. In the Boltzmann-type equation we include a non-constant collision kernel, in the form of a cutoff, in order to exclude from the statistical model possibly unphysical interactions. In spite of the increased analytical difficulty caused by this further non-linearity, we show that a careful application of the quasi-invariant limit (an asymptotic procedure reminiscent of the grazing collision limit) successfully leads to a Fokker-Planck approximation of the original Boltzmann-type equation, whence stationary distributions can be explicitly computed. Our analytical results justify, from a genuinely model-based point of view, some empirical results found in the literature by interpolation of experimental data.
physics
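The generic structure of such a limit (schematic, with our notation $B$ for the drift and $D$ for the diffusion coefficient; the paper derives the specific coefficients from the microscopic interaction rules) is

$$ \partial_t f(v,t) = \partial_v\Big[ B(v)\, f + \tfrac{1}{2}\, \partial_v\big( D(v)\, f \big) \Big], \qquad f_\infty(v) \propto \frac{1}{D(v)} \exp\!\Big( -2 \int^{v} \frac{B(w)}{D(w)}\, dw \Big), $$

where the stationary distribution $f_\infty$ follows from setting the probability flux to zero. It is this closed form that permits the comparison with empirically fitted traffic distributions mentioned above.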
We prove that, consistently, there exists a weakly but not strongly inaccessible cardinal $\lambda$ for which the sequence $\langle 2^\theta:\theta<\lambda\rangle$ is not eventually constant and the weak diamond fails at $\lambda$. We also prove that consistently diamond fails but a parametrized version of weak diamond holds at some strongly inaccessible $\lambda$.
mathematics
We have experimentally demonstrated the first non-intrusive 1-GeV proton beam extraction for the generation of muons with a temporal structure optimized for Muon Spin Relaxation/Rotation/Resonance (MuSR) applications. The proton pulses are extracted based on the laser neutralization of 1 GeV hydrogen ion (H-) beam in the high energy beam transport of the Spallation Neutron Source (SNS) accelerator. The maximum flux of the extracted proton beam accounts for only 0.2% of the total proton beam used for neutron production, a marked difference from the 20% reduction at other co-located muon and neutron facilities, and thus the proposed method will result in negligible impact on the SNS operation. This paper describes the development of a fiber/solid-state hybrid laser system that has high flexibility of pulse structure and output power, initial experiments on laser neutralization of H- beam and separation of H0 beam from the existing SNS accelerator beam line, conversion of H0 to proton at the SNS linac dump, and measurement results of 30 ns/50 kHz proton pulses. This system conclusively demonstrates the feasibility of laser-based proton beam extraction to power a world-leading MuSR facility at the SNS.
physics
Using the most recent experimental data on parameters of the standard electroweak theory, as well as renormalisation group equations with a boundary matching condition, we derive a refined and more accurate value for the mass of the doubly-charged bilepton ($Y^{\pm\pm}$) occurring in the spontaneous breaking of the gauge group $SU(3)_L \times U(1)_X$ to the standard electroweak gauge group $SU(2)_L \times U(1)_Y$. Our result is $M(Y^{\pm\pm}) = (1.29 \pm 0.06)$ TeV.
high energy physics phenomenology
We analyse the phenomenological implications of the two-families scenario for the merger of compact stars. That scenario is based on the coexistence of both hadronic stars and strange quark stars. After discussing the classification of the possible mergers, we turn to detailed numerical simulations of the merger of two hadronic stars, i.e., "first family" stars in which delta resonances and hyperons are present, and we show results for the threshold mass of such binaries, for the mass dynamically ejected, and for the mass of the disk surrounding the post-merger object. We compare these results with those obtained within the one-family scenario and conclude that relevant signatures of the two-families scenario can be suggested, in particular: the possibility of a rapid collapse to a black hole for masses even smaller than the ones associated with GW170817; during the first milliseconds, oscillations of the post-merger remnant at frequencies higher than the ones obtained in the one-family scenario; and a large dynamically ejected mass together with a small disk mass for binaries of low total mass. Finally, based on a population synthesis analysis, we present estimates of the number of mergers for: two hadronic stars; a hadronic star and a strange quark star; and two strange quark stars. We show that for unequal-mass systems and intermediate values of the total mass, the merger of a hadronic star and a strange quark star is very likely (GW170817 has a possible interpretation within this category of mergers). On the other hand, mergers of two strange quark stars are strongly suppressed.
astrophysics
The eigenvector-eigenvalue identity relates the eigenvectors of a Hermitian matrix to its eigenvalues and the eigenvalues of its principal submatrices in which the jth row and column have been removed. We show that one-dimensional arrays of coupled resonators, described by square matrices with real eigenvalues, provide simple physical systems where this formula can be applied in practice. The subsystems consist of arrays with the jth resonator removed, and thus can be realized physically. From their spectra alone, the oscillation modes of the full system can be obtained. This principle of successive single resonator deletions is demonstrated in two experiments of coupled radiofrequency resonator arrays with greater-than-nearest neighbor couplings, in which the spectra are measured with a network analyzer. Both the Hermitian as well as a non-Hermitian case are covered in the experiments. In both cases the experimental eigenvector estimates agree well with numerical simulations if certain consistency conditions imposed by system symmetries are taken into account. In the Hermitian case, these estimates are obtained from resonance spectra alone without knowledge of the system parameters. It remains an interesting problem of physical relevance to find conditions under which the full non-Hermitian eigenvector set can be obtained from the spectra alone.
quantum physics
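The identity itself is simple to verify numerically. A minimal check with a random Hermitian matrix and arbitrary indices $i$, $j$:

import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (B + B.conj().T) / 2          # random Hermitian matrix

lam, V = np.linalg.eigh(A)        # eigenvalues lam, eigenvectors in columns of V
i, j = 2, 3                       # eigenvalue index i, deleted row/column j
minor = np.delete(np.delete(A, j, axis=0), j, axis=1)
mu = np.linalg.eigvalsh(minor)    # spectrum of the system with site j removed

# |v_{i,j}|^2 * prod_{k != i}(lam_i - lam_k) = prod_k (lam_i - mu_k)
lhs = abs(V[j, i]) ** 2 * np.prod(lam[i] - np.delete(lam, i))
rhs = np.prod(lam[i] - mu)
print(lhs, rhs)                   # agree to machine precision

In the resonator experiment, the $\mu$ play the role of the measured spectra of the arrays with one resonator removed, from which the eigenvector magnitudes $|V_{j,i}|$ are recovered without knowledge of the system parameters.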
In this article, we introduce the notion of periodic de Rham bundles over smooth complex curves. We prove that motivic de Rham bundles over smooth complex curves are periodic. We conjecture the converse, that is, that periodic de Rham bundles over smooth complex curves are motivic. The conjecture holds for rank one objects and certain rigid objects.
mathematics
The growing interest in active nematics and the emerging evidence of the relevance of topological defects in biology call for reliable data-analysis tools to identify, classify and track such defects in simulation and microscopy data. We here provide such tools and demonstrate, on two examples (an active turbulent state in an active nematodynamic model, and emerging nematic order in a multi-phase-field model), the possibility of comparing statistical data on defect velocities with experimental results. The considered tools, which are physics-based and data-driven, are compared with each other.
condensed matter
Cognitive radio is an intelligent and adaptive radio that improves the utilization of the spectrum through opportunistic sharing. However, it is inherently vulnerable to primary user emulation and jamming attacks, which degrade spectrum utilization. In this paper, an algorithm for the detection of primary user emulation and jamming attacks in cognitive radio is proposed. The proposed algorithm is based on the sparse coding of the compressed received signal over a channel-dependent dictionary. More specifically, the convergence patterns of the sparse coding with respect to such a dictionary are used to distinguish between a spectrum hole, a legitimate primary user, and an emulator or a jammer. The decision-making is carried out as a machine learning-based classification operation. Extensive numerical experiments show the effectiveness of the proposed algorithm in detecting the aforementioned attacks with high success rates, as validated in terms of the confusion-matrix quality metric. Moreover, the proposed algorithm is shown to be superior to energy detection-based machine learning techniques in terms of receiver operating characteristic curves and the areas under these curves.
electrical engineering and systems science
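A toy version of the decision statistic, using orthogonal matching pursuit over a known dictionary; the dictionary, signals, and the "classify by residual decay" rule below are illustrative stand-ins for the paper's channel-dependent dictionary and learned classifier:

import numpy as np

def omp_residuals(D, y, n_iter=10):
    """Orthogonal matching pursuit; returns the residual norm per iteration."""
    r, idx, out = y.copy(), [], []
    for _ in range(n_iter):
        idx.append(int(np.argmax(np.abs(D.T @ r))))   # most correlated atom
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef                            # re-fit and update residual
        out.append(np.linalg.norm(r))
    return np.array(out)

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
y_pu = D[:, 5] + 0.05 * rng.normal(size=64)   # "legitimate" signal: in the span
y_jam = rng.normal(size=64)                   # "jammer": incoherent with D
for name, y in [("primary user", y_pu), ("jammer", y_jam)]:
    res = omp_residuals(D, y / np.linalg.norm(y))
    print(name, "residual after 3 iterations:", res[2].round(3))

A fast-decaying residual indicates a signal consistent with the channel-dependent dictionary, while slow convergence flags an emulator or jammer; in the paper this decision is learned by a classifier rather than thresholded by hand.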
Solid state physics deals with systems composed of atoms with strongly bound electrons. The tunneling probability of each electron is determined by interactions that typically extend to neighboring sites, as the corresponding wave amplitudes decay rapidly away from an isolated atomic core. This kind of description is essential to materials science, and it governs the electronic transport properties of metals, insulators and other condensed matter systems. The corresponding phenomenology is well captured by tight-binding models, where the electronic band structure emerges from the atomic orbitals of isolated atoms plus their coupling to neighboring sites in a crystal. In this work, a mechanical system that dynamically emulates a tightly bound electron is built. This is done by connecting mechanical resonators via locally periodic aluminum bars acting as couplers. When the frequency of a particular resonator lies within the frequency gap of a coupler, the vibrational wave amplitude imitates a bound electron orbital. The localization of the wave at the resonator site and its exponential decay along the coupler are experimentally verified. The quantum dynamical tight-binding model and frequency measurements in mechanical structures show excellent agreement.
condensed matter
Discovering other worlds the size of our own has been a long-held dream of astronomers. The transiting planets Kepler-20e and Kepler-20f, which belong to a multi-planet system, hold a very special place among the many groundbreaking discoveries of the Kepler mission because they finally realized that dream. The radius of Kepler-20f is essentially identical to that of the Earth, while Kepler-20e is even smaller (0.87 $R_\oplus$), and was the first exoplanet to earn that distinction. Their masses, however, are too light to measure with current instrumentation, and this has prevented their confirmation by the usual Doppler technique that has been used so successfully to confirm many other larger planets. To persuade themselves of the planetary nature of these tiny objects, astronomers employed instead a statistical technique to "validate" them, showing that the likelihood they are planets is orders of magnitude larger than a false positive. Kepler-20e and 20f orbit their Sun-like star every 6.1 and 19.6 days, respectively, and are most likely of rocky composition. Here we review the history of how they were found, and present an overview of the methodology that was used to validate them.
astrophysics
We classify the large $N$ limits of four-dimensional supersymmetric gauge theories with simple gauge groups that flow to superconformal fixed points. We restrict ourselves to the ones without a superpotential and with a fixed flavor symmetry. We find 35 classes in total, with 8 having a dense spectrum of chiral gauge-invariant operators. The central charges $a$ and $c$ for the dense theories grow linearly in $N$ in contrast to the $N^2$ growth for the theories with a sparse spectrum. The difference between the central charges $a-c$ can have both signs, and it does not vanish in the large $N$ limit for the dense theories. We find that there can be multiple bands separated by a gap, or a discrete spectrum above the band. We also find a criterion on the matter content for the fixed point theory to possess either a dense or sparse spectrum. We discover a few curious aspects regarding supersymmetric RG flows and $a$-maximization along the way. For all the theories with the dense spectrum, the AdS version of the Weak Gravity Conjecture (including the convex hull condition for the cases with multiple $U(1)$'s) holds for large enough $N$ even though they do not have weakly-coupled gravity duals.
high energy physics theory
In the present paper we study $\mathbf{S}^2\!\times\!\mathbf{R}$ and $\mathbf{H}^2\!\times\!\mathbf{R}$ geometries, which are homogeneous Thurston 3-geometries. We define and determine the generalized Apollonius surfaces and, with them, define the "surface of a geodesic triangle". Using the above Apollonius surfaces, we develop a procedure to determine the centre and the radius of the circumscribed geodesic sphere of an arbitrary $\mathbf{S}^2\!\times\!\mathbf{R}$ and $\mathbf{H}^2\!\times\!\mathbf{R}$ tetrahedron. Moreover, we generalize the famous Menelaus' and Ceva's theorems for geodesic triangles in both spaces. In our work we use the projective model of $\mathbf{S}^2\!\times\!\mathbf{R}$ and $\mathbf{H}^2\!\times\!\mathbf{R}$ geometries described by E. Molnár in \cite{M97}.
mathematics
Recently, a full-scale data processing workflow of the Square Kilometre Array (SKA) Phase 1 was successfully executed on the world's fastest supercomputer, Summit, proving that scientists have the expertise, software tools and computing resources to process SKA data. The SKA-Summit experiment shows the importance of multidisciplinary cooperation between astronomy, computer science and other communities. The SKA science cannot be achieved without the joint efforts of talent from multiple fields.
astrophysics
Numerical resolution of exterior Helmholtz problems requires some approach to domain truncation. As an alternative to approximate nonreflecting boundary conditions and invocation of the Dirichlet-to-Neumann map, we introduce a new, nonlocal boundary condition. This condition is exact and requires the evaluation of layer potentials involving the free-space Green's function. However, it seems to work in general unstructured geometry, and Galerkin finite element discretization leads to convergence under the usual mesh constraints imposed by Gårding-type inequalities. The nonlocal boundary conditions are readily approximated by fast multipole methods, and the resulting linear system can be preconditioned by the purely local operator involving transmission boundary conditions.
mathematics
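For the two-dimensional case, the ingredients are the free-space Green's function and the layer potentials on the artificial boundary $\Gamma$ (standard definitions, stated here for orientation):

$$ G_k(x,y) = \tfrac{i}{4} H_0^{(1)}\big(k|x-y|\big), \qquad (\mathcal{S}\varphi)(x) = \int_\Gamma G_k(x,y)\,\varphi(y)\, ds_y, \qquad (\mathcal{D}\varphi)(x) = \int_\Gamma \frac{\partial G_k(x,y)}{\partial n_y}\,\varphi(y)\, ds_y, $$

where $H_0^{(1)}$ is the Hankel function of the first kind. Green's representation formula links the Cauchy data $(u, \partial_n u)$ on $\Gamma$ exactly through $\mathcal{S}$ and $\mathcal{D}$, which is what the nonlocal boundary condition enforces, and it is these potentials that are amenable to fast multipole acceleration as noted above.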