text | label |
---|---|
Let $E$ be a sublattice of a vector lattice $F$. A net $\left( x_\alpha \right)\subseteq E$ is said to be $F$-order convergent to a vector $x$ (in symbols $x_\alpha \xrightarrow{Fo} x$) whenever there exists another net $\left(y_\alpha\right)$ in $F$ with the same index set satisfying $y_\alpha\downarrow 0$ in $F$ and $| x_\alpha - x | \leq y_\alpha$ for all indices $\alpha$. If $F=E^{\sim\sim}$, this convergence is called $b$-order convergence and we write $x_\alpha \xrightarrow{bo} x$. In this manuscript, we first study some properties of $Fo$-convergent nets and extend some results to the general case. In the second part, we introduce $b$-order continuous operators and investigate some properties of this new concept. An operator $T$ between two vector lattices $E$ and $F$ is said to be $b$-order continuous if $x_\alpha \xrightarrow{bo} 0$ in $E$ implies $Tx_\alpha \xrightarrow{bo} 0$ in $F$.
|
mathematics
|
While much of the focus around Ultra-Diffuse Galaxies (UDGs) has been given to those in galaxy groups and clusters, relatively little is known about them in less-dense environments. These isolated UDGs provide fundamental insights into UDG formation because environmentally driven evolution and survivability play less of a role in determining their physical and observable properties. We have recently conducted a statistical analysis of UDGs in the field using a new catalogue of sources detected in the deep Kilo-Degree Survey (KiDS) and Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) optical imaging surveys. Using an empirical model to assess our contamination from interloping sources, we show that a scenario in which cluster-like quiescent UDGs occupy a large fraction of the field UDG population is unlikely, with most being significantly bluer and some showing signs of localised star formation. We estimate an upper limit on the total field abundance of UDGs of $(8\pm3)\times10^{-3}$ cMpc$^{-3}$ within our selection range. The mass formation efficiency of UDGs implied by this upper limit is similar to what is measured in groups and clusters, meaning that secular formation channels may significantly contribute to the overall UDG population.
|
astrophysics
|
Folded linear molecular chains are ubiquitous in biology. Folding is mediated by intra-chain interactions that "glue" two or more regions of a chain. The resulting fold topology is widely believed to be a determinant of biomolecular properties and function. Recently, knot theory has been extended to describe the topology of folded linear chains such as proteins and nucleic acids. To classify and distinguish chain topologies, the algebraic structure of quandles has been adapted and applied. However, the approach is limited, as apparently distinct topologies may end up having the same number of colorings. Here, we enhance the resolving power of the quandle coloring approach by introducing Boltzmann weights. We demonstrate that the enhanced coloring invariants can distinguish fold topologies with an improved resolution.
|
mathematics
|
This study aims at a systematic reciprocity of the tunable synthesis parameters - partial pressure of N$_2$ gas, ion energy (\Ei) and Ti interface - in TiN thin film samples deposited using ion beam sputtering at ambient temperature (300\,K). At the optimum partial pressure of N$_2$ gas, samples were prepared with or without a Ti interface at \Ei~=~1.0 or 0.5\,keV. They were characterized using x-ray reflectivity (XRR) to deduce thickness, roughness and density. The roughness of the TiN thin films was found to be below 1\,nm when deposited at the lower \Ei~of 0.5\,keV and when interfaced with a layer of Ti. Under these conditions, the density of the TiN sample reaches 5.80($\pm$0.03)\,g~cm$^{-3}$, the highest value reported hitherto for any TiN sample. X-ray diffraction and electrical resistivity measurements were also performed. It was found that the cumulative effect of the reduction in \Ei~from 1.0 to 0.5\,keV and the addition of the Ti interface favors (111) oriented growth, leading to dense and smooth TiN films and a substantial reduction in the electrical resistivity. The effect of the reduction in \Ei~has been attributed to the surface kinetics mechanism (simulated using SRIM), where the available energy of the sputtered species (\Esp) leaving the target at \Ei~= 0.5\,keV is the optimum value favoring the growth of defect-free, homogeneously distributed films. The electronic structure of the samples was probed using N K-edge absorption spectroscopy, and the information about the crystal field and spin-orbit splitting confirmed TiN phase formation. In essence, through this work we demonstrate the role of \Esp~and the Ti interface in achieving highly dense and smooth TiN thin films with low resistivity, without the need for high temperature or substrate biasing during the thin film deposition process.
|
condensed matter
|
We discuss the hadronic contributions to the muon anomalous magnetic moment. They are dominated by light quark contributions which are constrained by the mechanism of chiral symmetry breaking. Using the leading order result based on $e^+ e^-$ scattering data, we show that the next-to-leading order contributions in the fine structure constant $\alpha$ can be reliably calculated. Extending this idea to the hadronic four-point function we give a prediction for the light-by-light contribution.
|
high energy physics phenomenology
|
The Nadaraya-Watson kernel estimator is among the most popular nonparametric regression techniques thanks to its simplicity. Its asymptotic bias was studied by Rosenblatt in 1969 and has been reported in a number of related works. However, Rosenblatt's analysis is only valid for infinitesimal bandwidth. In contrast, we propose in this paper an upper bound on the bias which holds for finite bandwidths. Moreover, contrary to the classical analysis, we allow for a discontinuous first-order derivative of the regression function, we extend our bounds to multidimensional domains, and we incorporate a bound on the regression function, when it exists and is known, to obtain a tighter result. We believe that this work has potential applications in those fields where hard guarantees on the error are needed.
|
statistics
|
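As a concrete companion to the abstract above, here is a minimal sketch of the Nadaraya-Watson estimator with a Gaussian kernel; the bandwidth, sample size, and the kinked test function (whose first derivative is discontinuous, matching the setting the paper allows) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """m_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h)."""
    u = (x_query[:, None] - x_train[None, :]) / h   # shape (n_query, n_train)
    w = np.exp(-0.5 * u**2)                         # Gaussian kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.abs(x) + 0.1 * rng.standard_normal(200)      # m(x) = |x|: discontinuous m'
xq = np.linspace(-1, 1, 5)
print(nadaraya_watson(x, y, xq, h=0.2))
```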
We consider an interacting particle system modeled as a system of $N$ stochastic differential equations driven by Brownian motions. We prove that the (mollified) empirical process converges, uniformly in time and space variables, to the solution of the two-dimensional Navier-Stokes equation written in vorticity form. The proofs follow a semigroup approach.
|
mathematics
|
The recognition that the eigenvalues of a non-Hermitian Hamiltonian could all be real if the Hamiltonian had an antilinear symmetry such as $PT$ stimulated new insight into the underlying structure of quantum mechanics. Specifically, it led to the realization that Hilbert space could be richer than the established Dirac approach of constructing inner products out of ket vectors and their Hermitian conjugate bra vectors. With antilinear symmetry one must instead build inner products out of ket vectors and their antilinear conjugates, and it is these inner products that would be time independent in the non-Hermitian but antilinearly symmetric case even as the standard Dirac inner products would not be. Moreover, and in a sense quite remarkably, antilinear symmetry could address not only the temporal behavior of the inner product but also the issue of its overall sign, with antilinear symmetry being capable of yielding a positive inner product in situations where the standard Dirac inner product is found to have ghostlike negative signature. Antilinear symmetry thus solves the ghost problem in quantum field theory by showing that when a theory has ghost states it is being formulated in the wrong Hilbert space, with antilinear symmetry providing a Hilbert space that is ghost free. Antilinear symmetry does not actually get rid of the ghost states. Rather, it shows that the reasoning that led one to think that ghosts were present in the first place is faulty.
|
high energy physics theory
|
A common problem in cosmology is to integrate the product of two or more spherical Bessel functions (sBFs) with different configuration-space arguments against the power spectrum or its square, weighted by powers of wavenumber. Naively computing them scales as $N_{\rm g}^{p+1}$ with $p$ the number of configuration space arguments and $N_{\rm g}$ the grid size, and they cannot be done with Fast Fourier Transforms (FFTs). Here we show that by rewriting the sBFs as sums of products of sine and cosine and then using the product-to-sum identities, these integrals can be performed using 1-D FFTs with $N_{\rm g} \log N_{\rm g}$ scaling. This "rotation" method has the potential to significantly accelerate a number of calculations in cosmology, such as perturbation theory predictions of loop integrals, higher order correlation functions, and analytic templates for correlation function covariance matrices. We implement this approach numerically both in a free-standing, publicly-available \textsc{Python} code and within the larger, publicly-available package \texttt{mcfit}. The rotation method evaluated with direct integrations already offers a factor of 6-10$\times$ speed-up over the naive approach in our test cases. Using FFTs, which the rotation method enables, then further improves this to a speed-up of $\sim$$1000-3000\times$ over the naive approach. The rotation method should be useful in light of upcoming large datasets such as DESI or LSST. In analysing these datasets, these integrals will need to be recomputed a substantial number of times, for instance to update perturbation theory predictions or covariance matrices as the input linear power spectrum is changed within a Markov Chain Monte Carlo cosmological parameter search: thus the overall savings from our method should be significant.
|
astrophysics
|
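The rewriting step described in the abstract above can be checked on the simplest case of two zeroth-order sBFs: since $j_0(x)=\sin(x)/x$, the product-to-sum identity gives $j_0(kr_1)\,j_0(kr_2)=[\cos(k(r_1-r_2))-\cos(k(r_1+r_2))]/(2k^2r_1r_2)$, reducing the double-sBF integrand to two single cosine transforms (which is what makes 1-D FFT evaluation possible). The sketch below verifies the identity by direct quadrature; the exponential stand-in "power spectrum" is an assumption for illustration only.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import spherical_jn

k = np.linspace(1e-4, 20.0, 200001)
P = np.exp(-k)                      # stand-in spectrum, illustration only
r1, r2 = 1.3, 2.7

# Naive form: product of two spherical Bessel functions.
direct = trapezoid(P * spherical_jn(0, k * r1) * spherical_jn(0, k * r2), k)
# "Rotated" form: two cosine transforms of P(k)/k^2.
rotated = trapezoid(P / k**2 * (np.cos(k * (r1 - r2)) - np.cos(k * (r1 + r2))), k) \
          / (2.0 * r1 * r2)
print(direct, rotated)              # agree to quadrature accuracy
```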
An integral function of fully autonomous robots and humans is the ability to focus attention on a few relevant percepts to reach a certain goal while disregarding irrelevant percepts. Humans and animals rely on the interactions between the Pre-Frontal Cortex (PFC) and the Basal Ganglia (BG) to achieve this focus, called Working Memory (WM). The Working Memory Toolkit (WMtk) was developed based on a computational neuroscience model of this phenomenon with Temporal Difference (TD) Learning for autonomous systems. Recent adaptations of the toolkit either utilize Abstract Task Representations (ATRs) to solve Non-Observable (NO) tasks or store past input features to solve Partially-Observable (PO) tasks, but not both. We propose a new model, PONOWMtk, which combines both approaches, ATRs and input storage, with a static or dynamic number of ATRs. The results of our experiments show that PONOWMtk performs effectively for tasks that exhibit PO, NO, or both properties.
|
computer science
|
For finite samples with binary outcomes, penalized logistic regression such as ridge logistic regression (RR) has the potential of achieving smaller mean squared errors (MSE) of coefficients and predictions than maximum likelihood estimation. There is evidence, however, that RR is sensitive to small or sparse data situations, yielding poor performance in individual datasets. In this paper, we elaborate on this issue further by performing a comprehensive simulation study, investigating the performance of RR in comparison to Firth's correction, which has been shown to perform well in low-dimensional settings. The performance of RR strongly depends on the choice of the complexity parameter, which is usually tuned by minimizing some measure of the out-of-sample prediction error or an information criterion. Alternatively, it may be determined according to prior assumptions about the true effects. As shown in our simulation and illustrated by a data example, complexity parameter values optimized in small or sparse datasets are negatively correlated with the optimal values and suffer from substantial variability, which translates into large MSE of coefficients and large variability of calibration slopes. In contrast, if the degree of shrinkage is pre-specified, accurate coefficients and predictions can be obtained even in non-ideal settings such as those encountered in the context of rare outcomes or sparse predictors.
|
statistics
|
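A hedged sketch of the contrast described above, using scikit-learn: the complexity parameter is either tuned by cross-validation or pre-specified. The data-generating process and penalty values are illustrative assumptions, not those of the paper's simulation study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

rng = np.random.default_rng(1)
n, p = 50, 10                        # a small-sample, low-dimensional setting
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[0] = 1.0    # a single true effect
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))

# Complexity parameter tuned by cross-validated log-loss (can be unstable here):
tuned = LogisticRegressionCV(Cs=20, cv=5, penalty="l2",
                             scoring="neg_log_loss").fit(X, y)
# Degree of shrinkage pre-specified from prior assumptions about effect sizes:
fixed = LogisticRegression(penalty="l2", C=0.1).fit(X, y)

print("tuned C :", tuned.C_[0], " coef[0]:", tuned.coef_[0, 0])
print("fixed C : 0.1  coef[0]:", fixed.coef_[0, 0])
```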
Fermi's golden rule describes the decay dynamics of unstable quantum systems coupled to a reservoir, and predicts a linear decay in time. Although it arises at relatively short times, the Fermi regime does not take hold in the earliest stages of the quantum dynamics. The standard criterion in the literature for the onset time of the Fermi regime is $t_F\sim1/\Delta\omega$, with $\Delta\omega$ the frequency interval around the resonant transition frequency $\omega_0$ of the system, over which the coupling to the reservoir does not vary appreciably. In this work, this criterion is shown to be inappropriate in general for broadband reservoirs, where the reservoir coupling spectrum takes the form $R\left(\omega\right)\propto\omega^\eta$, and for which it is found that for $\eta>1$, the onset time of the Fermi regime is given by $t_F\propto\left(\omega_{\mathrm{X}}/\omega_0\right)^{\eta-1}\times1/\omega_0$ where $\omega_{\mathrm{X}}$ is the high-frequency cutoff of the reservoir. Therefore, the onset of the Fermi regime can take place at times orders of magnitude larger than those predicted by the standard criterion. This phenomenon is shown to be related to the excitation of the off-resonant frequencies of the reservoir at short times. For broadband reservoirs with $\eta\leq1$, and for narrowband reservoirs, it is shown that the standard criterion is correct. Our findings revisit the conditions of applicability of Fermi's golden rule and improve our understanding of the dynamics of unstable quantum systems.
|
quantum physics
|
Query by Humming (QBH) is a system that provides a user with the song(s) which the user hums to the system. Current QBH methods require the extraction of onset and pitch information in order to track similarity with various versions of different songs. However, we here focus on detecting precise onsets only and use them to build a QBH system which is better than existing methods in terms of speed and memory, and empirically in terms of accuracy. We also provide a statistical analogy for onset detection functions and provide a measure of error in our algorithm.
|
statistics
|
The multi-class classification problem is among the most popular and well-studied statistical frameworks. Modern multi-class datasets can be extremely ambiguous, and single-output predictions fail to deliver satisfactory performance. By allowing predictors to output a set of label candidates, set-valued classification offers a natural way to deal with this ambiguity. Several formulations of set-valued classification are available in the literature, and each of them leads to different prediction strategies. The present survey aims to review popular formulations using a unified statistical framework. The proposed framework encompasses previously considered formulations, leads to new ones, and allows one to understand the underlying trade-offs of each formulation. We provide infinite-sample optimal set-valued classification strategies and review a general plug-in principle to construct data-driven algorithms. The exposition is supported by examples and pointers to both theoretical and practical contributions. Finally, we provide experiments on real-world datasets comparing these approaches in practice and provide general practical guidelines.
|
statistics
|
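A minimal sketch of the plug-in principle mentioned above: estimate conditional class probabilities, then threshold them to output a set of candidate labels. The threshold rule and the argmax fallback are one illustrative choice among the formulations such a survey reviews, not a specific strategy from the paper.

```python
import numpy as np

def plugin_set_valued(proba, threshold=0.25):
    """Return, per example, all labels whose estimated probability exceeds
    the threshold (falling back to the argmax if none do)."""
    sets = []
    for p in proba:
        s = np.flatnonzero(p >= threshold)
        sets.append(s if s.size else np.array([p.argmax()]))
    return sets

proba = np.array([[0.5, 0.3, 0.2],     # ambiguous: two plausible labels
                  [0.9, 0.05, 0.05]])  # unambiguous: singleton prediction
print(plugin_set_valued(proba))        # -> [array([0, 1]), array([0])]
```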
In regression analysis under artificial neural networks, the prediction performance depends on determining the appropriate weights between layers. As randomly initialized weights are updated during back-propagation using the gradient descent procedure under a given loss function, the structure of the loss function can affect the performance significantly. In this study, we considered the distribution error, i.e., the inconsistency of two distributions (those of the predicted values and the labels), as the prediction error, and proposed weighted empirical stretching (WES) as a novel loss function to increase the overlap area of the two distributions. The function depends on the distribution of a given label; thus, it is applicable to any distribution shape. Moreover, it contains a scaling hyperparameter such that the appropriate parameter value maximizes the common section of the two distributions. To test the function's capability, we generated ideal distributed curves (unimodal, skewed unimodal, bimodal, and skewed bimodal) as the labels, and used the Fourier-extracted input data from the curves under a feedforward neural network. In general, WES outperformed widely used loss functions, and its performance was robust to various noise levels. The improved RMSE results in the extreme domain (i.e., both tail regions of the distribution) are expected to be useful for predicting abnormal events in non-linear complex systems such as natural disasters and financial crises.
|
computer science
|
We propose inference procedures for general nonparametric factorial survival designs with possibly right-censored data. Similar to additive Aalen models, null hypotheses are formulated in terms of cumulative hazards. Thereby, deviations are measured in terms of quadratic forms in Nelson-Aalen-type integrals. In contrast to existing approaches, this allows one to work without restrictive model assumptions such as proportional hazards. In particular, crossing survival or hazard curves can be detected without a significant loss of power. For a distribution-free application of the method, a permutation strategy is suggested. The asymptotic validity of the resulting procedures as well as their consistency are proven, and their small-sample performance is analyzed in extensive simulations. Their applicability is finally illustrated by the analysis of an oncology data set.
|
statistics
|
We, team AImsterdam, summarize our submission to the fastMRI challenge (Zbontar et al., 2018). Our approach builds on recent advances in invertible learning to infer models, as presented in Putzky and Welling (2019). Both our single-coil and our multi-coil models share the same basic architecture.
|
electrical engineering and systems science
|
We discuss recent experimental results concerning the cross section ratio of positron over electron elastic scattering on protons, and compare them with the predictions of a pre-existing calculation. The deviation from unity of this ratio, $i.e.$, a charge asymmetry different from zero, is the signature of contributions beyond the Born approximation. After reviewing the published results, we compare the elastic data to a calculation which includes the diagram corresponding to two-photon exchange. It turns out that all the data on the cross section ratio, within the limits of their precision, do not show evidence of an enhanced two-photon contribution beyond the expected percent level. Our results confirm that experimental evidence for a large contribution of two-photon exchange has not yet been found.
|
high energy physics phenomenology
|
Previous work has suggested that disordered swarms of flying insects can be well modeled as self-gravitating systems, as long as the "gravitational" interaction is adaptive. Motivated by this work we compare the predictions of the classic, mean-field King model for isothermal globular clusters to observations of insect swarms. Detailed numerical simulations of regular and adaptive gravity allow us to expose the features of the swarms' density profiles that are captured by the King model phenomenology, and those that are due to adaptivity and short-range repulsion. Our results provide further support for adaptive gravity as a model for swarms.
|
physics
|
In this paper, we investigate the encoding circuit size of Hamming codes and Hadamard codes. To begin with, we prove exact lower bounds on the circuit size required to encode (punctured)~Hadamard codes and (extended)~Hamming codes. Then encoding algorithms for (punctured)~Hadamard codes are presented that achieve the derived lower bounds. For (extended)~Hamming codes, we also propose encoding algorithms that achieve the lower bounds.
|
computer science
|
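For scale, the smallest member of the family discussed above is the (7,4) Hamming code; a sketch of its encoder with an explicit XOR-gate count is shown below. The paper's bounds concern general (extended) Hamming and (punctured) Hadamard codes, so this instance is only illustrative.

```python
def hamming_7_4_encode(d):
    """d = (d1, d2, d3, d4) data bits -> 7-bit codeword with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7 (2 XORs)
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7 (2 XORs)
    p3 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7 (2 XORs)
    return (p1, p2, d1, p3, d2, d3, d4)   # 6 XOR gates in total

print(hamming_7_4_encode((1, 0, 1, 1)))
```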
We theoretically investigate gate-defined graphene superlattices with broken inversion symmetry as a platform for realizing tunable valley dependent transport. Our analysis is motivated by recent experiments [C. Forsythe et al., Nat. Nanotechnol. 13, 566 (2018)] wherein gate-tunable superlattice potentials have been induced on graphene by nanostructuring a dielectric in the graphene/patterned-dielectric/gate structure. We demonstrate how the electronic tight-binding structure of the superlattice system resembles a gapped Dirac model with associated valley dependent transport using an unfolding procedure. In this manner we obtain the valley Hall conductivities from the Berry curvature distribution in the superlattice Brillouin zone, and demonstrate the tunability of this conductivity by the superlattice potential. Finally, we calculate the valley Hall angle relating the transverse valley current and longitudinal charge current and demonstrate the robustness of the valley currents against irregularities in the patterned dielectric.
|
condensed matter
|
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 $km^2$, sampled from three Swiss cities with different characteristics. The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras. In contrast to datasets acquired with ground LiDAR sensors, the resulting point clouds are uniformly dense and complete, and are useful to disparate applications, including autonomous driving, gaming and smart city planning. As a benchmark, we report quantitative results of PointNet++, an established point-based deep 3D semantic segmentation model; on this model, we additionally study the impact of using different cities for model generalization.
|
computer science
|
Random-effects meta-analysis requires an estimate of the between-study variance, $\tau^2$. We study methods of estimation of $\tau^2$ and its confidence interval in meta-analysis of odds ratio, and also the performance of related estimators of the overall effect. We provide results of extensive simulations on five point estimators of $\tau^2$ (the popular methods of DerSimonian-Laird, restricted maximum likelihood, and Mandel and Paule; the less-familiar method of Jackson; and the new method (KD) based on the improved approximation to the distribution of the Q statistic by Kulinskaya and Dollinger (2015)); five interval estimators for $\tau^2$ (profile likelihood, Q-profile, Biggerstaff and Jackson, Jackson, and KD); six point estimators of the overall effect (the five inverse-variance estimators related to the point estimators of $\tau^2$ and an estimator (SSW) whose weights use only study-level sample sizes); and eight interval estimators for the overall effect (five based on the point estimators for $\tau^2$; the Hartung-Knapp-Sidik-Jonkman (HKSJ) interval; a KD-based modification of HKSJ; and an interval based on the sample-size-weighted estimator). Results of our simulations show that none of the point estimators of $\tau^2$ can be recommended; however, the new KD estimator provides reliable coverage of $\tau^2$. Inverse-variance estimators of the overall effect are substantially biased. The SSW estimator of the overall effect and the related confidence interval provide reliable point and interval estimation of the log-odds-ratio.
|
statistics
|
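Of the five point estimators of $\tau^2$ compared above, the classical DerSimonian-Laird moment estimator is the simplest to state; a sketch is given below (the newer KD estimator is not reproduced here). The input effects and within-study variances are illustrative values, not data from the paper.

```python
import numpy as np

def dersimonian_laird(y, v):
    """y: study effects (e.g. log-odds-ratios); v: within-study variances."""
    w = 1.0 / v                        # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * y) / np.sum(w)  # fixed-effect pooled estimate
    Q = np.sum(w * (y - mu_fe) ** 2)   # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / denom)   # truncated moment estimator

y = np.array([0.2, 0.5, -0.1, 0.8, 0.3])     # illustrative log-odds-ratios
v = np.array([0.05, 0.10, 0.08, 0.20, 0.06])
print(dersimonian_laird(y, v))
```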
The space of time-like geodesics on Minkowski spacetime is constructed as a coset space of the Poincar\'e group in (3+1) dimensions with respect to the stabilizer of a worldline. When this homogeneous space is endowed with a Poisson homogeneous structure compatible with a given Poisson-Lie Poincar\'e group, the quantization of this Poisson bracket gives rise to a noncommutative space of worldlines with quantum group invariance. As an outstanding example, the Poisson homogeneous space of worldlines coming from the $\kappa$-Poincar\'e deformation is explicitly constructed, and shown to define a symplectic structure on the space of worldlines. Therefore, the quantum space of $\kappa$-Poincar\'e worldlines is just the direct product of three Heisenberg-Weyl algebras in which the parameter $\kappa^{-1}$ plays the very same role as the Planck constant $\hbar$ in quantum mechanics. In this way, noncommutative spaces of worldlines are shown to provide a new suitable and fully explicit arena for the description of quantum observers with quantum group symmetry.
|
high energy physics theory
|
We establish a generalization of Bourgain's double recurrence theorem by proving that for any measure-preserving map $T$ acting on a probability space $(X,\mathcal{A},\mu)$, for any non-constant polynomials $P, Q$ mapping natural numbers to themselves, for any $f,g \in L^2(X)$, and for almost all $x \in X$, we have $$\lim_{N \rightarrow +\infty} \frac{1}{N} \sum_{n=1}^{N}\boldsymbol{\nu}(n) f(T^{P(n)}x)g(T^{Q(n)}x)=0$$ where $\boldsymbol{\nu}$ is the Liouville function or the M\"{o}bius function.
|
mathematics
|
The $g$-factor anisotropy of the heavy quasiparticles in the hidden order state of URu$_2$Si$_2$ has been determined from the superconducting upper critical field and microscopically from Shubnikov-de Haas (SdH) oscillations. We present a detailed analysis of the $g$-factor for the $\alpha$, $\beta$ and $\gamma$ Fermi-surface pockets. Our results suggest a strong $g$-factor anisotropy between the $c$ axis and the basal plane for all observed Fermi surface pockets. The observed anisotropy of the $g$-factor from the quantum oscillations is in good agreement with the anisotropy of the superconducting upper critical field at low temperatures, which is strongly limited by the paramagnetic pair breaking along the easy magnetization axis $c$. However, the anisotropy of the initial slope of the upper critical field near $T_c$ cannot be explained by the anisotropy of the effective masses and Fermi velocities derived from quantum oscillations.
|
condensed matter
|
We report the observation of a quasi-coherent density fluctuation (QCF) by the Doppler backscattering system in the scrape-off layer (SOL) region of the DIII-D tokamak. This QCF is observed in high-power, high-performance hybrid plasmas with near double-null divertor (DND) shape during the electron cyclotron heating period. This mode is correlated with a steepened SOL density profile and leads to significantly elevated particle and heat fluxes between ELMs. The SOL QCF is a long-wavelength ion-scale fluctuation and propagates in the ion diamagnetic direction in the plasma frame. Its radial expanse is about 1.5-2 cm, well beyond the typical width of heat flux on DIII-D. Also, the SOL QCF does not show any clear dependence on the effective SOL collisionality and thus may raise issues on the control of plasma-material interactions in low collisionality plasmas in which the blob-induced transport is reduced. A linear simulation using BOUT++ with a 5-field reduced model is performed and compared with experimental observations. In simulation results, an interchange-like density perturbation can be driven by the SOL density gradient, and its peak location and the radial width of the density perturbation are in agreement with the experimental observations.
|
physics
|
Quantum back action imposes fundamental sensitivity limits on the majority of quantum measurements. The effect results from the unavoidable contamination of the measured parameter with the quantum noise of a meter. Back-action-evading measurements take advantage of the quantum correlations introduced by the system under study to the meter and allow overcoming the fundamental limitations. The measurements are frequently restricted in their bandwidth due to a finite response time of the system components. Here we show that probing a mechanical oscillator with a dichromatic field with frequencies separated by the oscillator frequency enables independent detection and complete subtraction of the measurement noise associated with the quantum back action.
|
quantum physics
|
Gamma Cas (B0.5IVe) is the noted prototype of a subgroup of classical Be stars exhibiting hard thermal X-ray emission. This paper reports results from a 23-year optical campaign with an Automated Photometric Telescope (APT) on this star. A series of unstable long cycles of length 56--91 days has nearly ceased over the last decade. Herein, we revise the frequency of the dominant coherent signal to 0.82238 cy/d. This signal's amplitude had nearly disappeared over the last 15 years but has since somewhat recovered its former strength. We confirm the presence of secondary nonradial pulsation signals found by other authors at frequencies 1.25, 2.48, and 5.03 cy/d. The APT data from intensively monitored nights reveal rapidly variable amplitudes among these frequencies. We show that peculiarities in the 0.82 cy/d waveform exist that can vary even over several days. The 0.82 cy/d frequency is near the star's presumed rotation frequency. However, because of its phase slippage with respect to a dip pattern in the star's far-UV light curve, it is preferable to consider the latter pattern, not the 0.82 cy/d signal, as carrying a rotation signature. We also find hints of the UV dip pattern in periodograms of early-season APT data.
|
astrophysics
|
This paper seeks to test whether the large-scale galaxy distribution can be characterized as a fractal system. Tools appropriate for describing galaxy fractal structures with a single fractal dimension $D$ in relativistic settings are developed and applied to the UltraVISTA galaxy survey. A graph of volume-limited samples corresponding to the redshift limits in each redshift bin for the absolute magnitude is presented. Fractal analysis using the standard $\Lambda$CDM cosmological model is applied to a reduced subsample in the range $0.1\le z \le 4$, and the entire sample within $0.1\le z\le 6$. Three relativistic distances are used, the luminosity distance $d_L$, redshift distance $d_z$ and galaxy area distance $d_G$, because for data at $z\gtrsim 0.3$ relativistic effects are such that for the same $z$ these distance definitions yield different values. The results show two consecutive and distinct redshift ranges in both the reduced and complete samples where the data behave as a single fractal galaxy structure. For the reduced subsample we found that the fractal dimension is $D=\left(1.58\pm0.20\right)$ for $z<1$, and $D=\left(0.59\pm0.28\right)$ for $1\le z\le 4$. The complete sample yielded $D=\left(1.63\pm0.20\right)$ for $z<1$ and $D=\left(0.52\pm0.29\right)$ for $1\le z\le6$. These results are consistent with those found by Conde-Saavedra et al. (2015; arXiv:1409.5409v1), where a similar analysis was applied to a much more limited survey at equivalent redshift depths, and suggest that either there are yet unclear observational biases causing such a decrease in the fractal dimension, or the galaxy clustering was possibly more sparse and the universe void dominated in a not too distant past.
|
astrophysics
|
A 40-year-old puzzle in the transition metal pentatellurides ZrTe$_5$ and HfTe$_5$ is the anomalous peak in the temperature dependence of the longitudinal resistivity, which is accompanied by sign reversals of the Hall and Seebeck coefficients. We give a plausible explanation for these phenomena without assuming any phase transition or strong interaction effect. We show that due to intrinsic thermodynamics and the diluteness of the conducting electrons in these materials, the chemical potential displays a strong dependence on the temperature and magnetic field. With that, we compute the resistivity, Hall and Seebeck coefficients in zero field, and the magnetoresistivity and Hall resistivity in finite magnetic fields, in all of which we reproduce the main features that are observed in experiments.
|
condensed matter
|
Analyzing large-scale networks requires high-performance streaming updates of graph representations of these data. Associative arrays are mathematical objects combining properties of spreadsheets, databases, matrices, and graphs, and are well-suited for representing and analyzing streaming network data. The Dynamic Distributed Dimensional Data Model (D4M) library implements associative arrays in a variety of languages (Python, Julia, and Matlab/Octave) and provides a lightweight in-memory database. Associative arrays are designed for block updates. Streaming updates to a large associative array require a hierarchical implementation to optimize the performance of the memory hierarchy. Running 34,000 instances of hierarchical D4M associative arrays on 1,100 server nodes on the MIT SuperCloud achieved a sustained update rate of 1,900,000,000 updates per second. This capability allows the MIT SuperCloud to analyze extremely large streaming network data sets.
|
computer science
|
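A toy sketch of the hierarchical-update idea described above: a small, fast in-memory buffer absorbs streaming updates and is periodically merged as a block into the larger store. D4M itself is far richer; the class below only illustrates the principle, and its names and cutoff are assumptions for the example.

```python
class HierarchicalAssocArray:
    """Toy associative array with a fast buffer merged blockwise into a store."""
    def __init__(self, cutoff=1000):
        self.fast = {}           # small buffer: (row, col) -> value
        self.store = {}          # large accumulated associative array
        self.cutoff = cutoff

    def update(self, row, col, val):
        key = (row, col)
        self.fast[key] = self.fast.get(key, 0) + val   # collisions sum
        if len(self.fast) >= self.cutoff:
            self._merge()

    def _merge(self):            # one block update into the big store
        for key, val in self.fast.items():
            self.store[key] = self.store.get(key, 0) + val
        self.fast.clear()

    def get(self, row, col):
        key = (row, col)
        return self.fast.get(key, 0) + self.store.get(key, 0)

A = HierarchicalAssocArray(cutoff=4)
for edge in [("a", "b"), ("a", "c"), ("a", "b"), ("b", "c"), ("c", "a")]:
    A.update(*edge, 1)
print(A.get("a", "b"))   # -> 2
```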
We present a new determination of the galaxy stellar mass function (GSMF) over the redshift interval $0.25 \leq z \leq 3.75$, derived from a combination of ground-based and Hubble Space Telescope (HST) imaging surveys. Based on a near-IR selected galaxy sample selected over a raw survey area of 3 deg$^{2}$ and spanning $\geq 4$ dex in stellar mass, we fit the GSMF with both single and double Schechter functions, carefully accounting for Eddington bias to derive both observed and intrinsic parameter values. We find that a double Schechter function is a better fit to the GSMF at all redshifts, although the single and double Schechter function fits are statistically indistinguishable by $z=3.25$. We find no evidence for significant evolution in $M^{\star}$, with the intrinsic value consistent with $\log_{10}(M^{\star} / M_{\odot})=10.55\pm{0.1}$ over the full redshift range. Overall, our determination of the GSMF is in good agreement with recent simulation results, although differences persist at the highest stellar masses. Splitting our sample according to location on the UVJ plane, we find that the star-forming GSMF can be adequately described by a single Schechter function over the full redshift range, and has not evolved significantly since $z\simeq 2.5$. In contrast, both the normalization and functional form of the passive GSMF evolves dramatically with redshift, switching from a single to a double Schechter function at $z \leq 1.5$. As a result, we find that while passive galaxies dominate the integrated stellar-mass density at $z \leq 0.75$, they only contribute $\lesssim 10$ per cent by $z\simeq 3$. Finally, we provide a simple parameterization that provides an accurate estimate of the GSMF, both observed and intrinsic, at any redshift within the range $0 \leq z \leq 4$.
|
astrophysics
|
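For reference, the double Schechter form used in GSMF fits like the one above can be written per dex in stellar mass as sketched below. The parameter values are placeholders for illustration (only $\log_{10}(M^{\star}/M_{\odot})=10.55$ is taken from the abstract), not the paper's best-fit values.

```python
import numpy as np

def double_schechter(logM, logMstar, phi1, alpha1, phi2, alpha2):
    """Number density per dex:
    ln(10) * exp(-mu) * [phi1 * mu^(alpha1+1) + phi2 * mu^(alpha2+1)],
    with mu = 10^(logM - logMstar)."""
    mu = 10.0 ** (logM - logMstar)
    return np.log(10.0) * np.exp(-mu) * (phi1 * mu ** (alpha1 + 1)
                                         + phi2 * mu ** (alpha2 + 1))

logM = np.linspace(8.0, 12.0, 5)
print(double_schechter(logM, logMstar=10.55, phi1=1e-3, alpha1=-0.5,
                       phi2=5e-4, alpha2=-1.5))
```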
The conjugate gradient method (CG) is typically used with a preconditioner, which improves the efficiency and robustness of the method. Many preconditioners include parameters, and a proper choice of a preconditioner and its parameters is often not a trivial task. Although many convergence estimates exist which can be used for optimizing preconditioners, these estimates typically hold for all initial guess vectors; in other words, they reflect the worst-case convergence rate. To account for the mean convergence rate instead, in this paper we follow a stochastic approach. It is based on trial runs with random initial guess vectors and leads to a functional which can be used to monitor convergence and to optimize preconditioner parameters in CG. The presented numerical experiments show that optimization of this new functional with respect to the preconditioner parameters usually yields a better parameter value than optimization of the functional based on the spectral condition number.
|
mathematics
|
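A sketch of the stochastic approach described above: estimate the mean CG behaviour from trial runs with random initial guesses and use it to compare preconditioner parameters. Here the parameter is the relaxation factor omega of an SSOR preconditioner on a 1-D Laplacian, which is an illustrative choice, not the paper's setup or its exact functional.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, spsolve_triangular

n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1-D Laplacian
d = A.diagonal()
L = sp.tril(A, k=-1, format="csr")
U = sp.triu(A, k=1, format="csr")
b = np.ones(n)

def ssor_preconditioner(omega):
    """Apply M^{-1} r for SSOR: M = (D + wL) D^{-1} (D + wU) / (w (2 - w))."""
    lower = (sp.diags(d) + omega * L).tocsr()
    upper = (sp.diags(d) + omega * U).tocsr()
    def apply(r):
        y = spsolve_triangular(lower, r, lower=True)
        return omega * (2.0 - omega) * spsolve_triangular(upper, d * y, lower=False)
    return LinearOperator((n, n), matvec=apply)

def mean_cg_iterations(omega, trials=10):
    """The stochastic functional: average CG iteration count over random x0."""
    rng = np.random.default_rng(2)
    M = ssor_preconditioner(omega)
    counts = []
    for _ in range(trials):
        it = 0
        def count(xk):
            nonlocal it
            it += 1
        cg(A, b, x0=rng.standard_normal(n), M=M, callback=count)
        counts.append(it)
    return np.mean(counts)

for omega in (0.5, 1.0, 1.5, 1.9):
    print(omega, mean_cg_iterations(omega))
```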
In this work, we investigate vortex solutions in the O(3)-sigma model with the gauge field governed by the Chern-Simons term and subject to a hyperbolic self-dual potential. We show that this model admits both topological and nontopological soliton solutions. By means of numerical analysis, we realize that the topological solutions of the model can be transformed into compacton-like solutions. On the other hand, after modifying the model by the introduction of a dielectric constant, an interesting feature appears; namely, the nontopological solutions can be transformed into kink-like solutions through the numerical variation of the dielectric constant. Finally, we discuss the degeneracy of the topological solitons in a given sector and present the numerical solutions of the first model.
|
high energy physics theory
|
The self-exciting Hawkes process is widely used to model events which occur in bursts. However, many real world data sets contain missing events and/or noisily observed event times, which we refer to as data distortion. The presence of such distortion can severely bias the learning of the Hawkes process parameters. To circumvent this, we propose modeling the distortion function explicitly. This leads to a model with an intractable likelihood function which makes it difficult to deploy standard parameter estimation techniques. As such, we develop the ABC-Hawkes algorithm which is a novel approach to estimation based on Approximate Bayesian Computation (ABC) and Markov Chain Monte Carlo. This allows the parameters of the Hawkes process to be learned in settings where conventional methods induce substantial bias or are inapplicable. The proposed approach is shown to perform well on both real and simulated data.
|
statistics
|
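A compact sketch of ABC rejection for a Hawkes process, in the spirit of the method above; the paper combines ABC with MCMC and models the distortion function explicitly, both of which are omitted here. The summary statistics, priors, and tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_hawkes(mu, alpha, beta, T):
    """Ogata thinning for lam(t) = mu + alpha * sum_i exp(-beta * (t - t_i))."""
    t, events = 0.0, []
    while True:
        lam_bar = mu + alpha * np.exp(-beta * (t - np.asarray(events))).sum() if events else mu
        t += rng.exponential(1.0 / lam_bar)        # candidate from bounding rate
        if t > T:
            return np.asarray(events)
        lam_t = mu + alpha * np.exp(-beta * (t - np.asarray(events))).sum() if events else mu
        if rng.random() < lam_t / lam_bar:         # thinning (accept/reject)
            events.append(t)

def summaries(ev):
    """Crude summary statistics; the paper's choice may differ."""
    return np.array([len(ev), np.median(np.diff(ev)) if len(ev) > 1 else 0.0])

T, beta = 100.0, 1.5
s_obs = summaries(simulate_hawkes(0.5, 0.8, beta, T))         # pseudo-observed data

accepted = []
for _ in range(2000):
    mu, alpha = rng.uniform(0.1, 1.0), rng.uniform(0.1, 1.3)  # uniform priors
    s = summaries(simulate_hawkes(mu, alpha, beta, T))
    if np.linalg.norm((s - s_obs) / (s_obs + 1e-9)) < 0.1:    # ABC acceptance
        accepted.append((mu, alpha))

print(len(accepted), "accepted; posterior mean:",
      np.mean(accepted, axis=0) if accepted else "n/a")
```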
We construct the $\mathcal{N}=8$ supersymmetric mechanics with potential term whose configuration space is the special K\"ahler manifold of rigid type and show that it can be viewed as the K\"ahler counterpart of $\mathcal{N}=4$ mechanics related to "curved WDVV equations". Then, we consider the special case of the supersymmetric mechanics with the non-zero potential term defined on the family of $U(1)$-invariant one-(complex)dimensional special K\"ahler metrics. The bosonic parts of these systems include superintegrable deformations of perturbed two-dimensional oscillator and Coulomb systems.
|
high energy physics theory
|
Quantum teleportation provides a "disembodied" way to transfer an unknown quantum state from one quantum system to another. However, all teleportation experiments to date are limited to cases where the target quantum system contains no prior quantum information. Here we propose a scheme for teleporting a quantum state to a quantum system with prior quantum information. By using an optical qubit-ququart entangling gate, we have experimentally demonstrated the new teleportation protocol -- teleporting a qubit to a photon preloaded with one qubit of quantum information. After the teleportation, the target photon contains two qubits of quantum information, one from the teleported qubit and the other from the pre-existing qubit. The teleportation fidelities range from $0.70$ to $0.92$, all above the classical limit of $2/3$. Our work sheds light on a new direction for quantum teleportation and demonstrates our ability to implement entangling operations beyond two-level quantum systems.
|
quantum physics
|
Dark matter (DM) scattering with nuclei in solid-state systems may produce elastic nuclear recoil at high energies and single-phonon excitation at low energies. When the dark matter momentum is comparable to the momentum spread of nuclei bound in a lattice, $q_0 = \sqrt{2 m_N \omega_0}$ where $m_N$ is the mass of the nucleus and $\omega_0$ is the optical phonon energy, an intermediate scattering regime characterized by multi-phonon excitations emerges. We study a greatly simplified model of a single nucleus in a harmonic potential and show that, while the mean energy deposited for a given momentum transfer $q$ is equal to the elastic value $q^2/(2m_N)$, the phonon occupation number follows a Poisson distribution and thus the energy spread is $\Delta E = q\sqrt{\omega_0/(2m_N)}$. This observation suggests that low-threshold calorimetric detectors may have significantly increased sensitivity to sub-GeV DM compared to the expectation from elastic scattering, even when the energy threshold is above the single-phonon energy, by exploiting the tail of the Poisson distribution for phonons above the elastic energy. We use a simple model of electronic excitations to argue that this multi-phonon signal will also accompany ionization signals induced from DM-electron scattering or the Migdal effect. In well-motivated models where DM couples to a heavy, kinetically-mixed dark photon, we show that these signals can probe experimental milestones for cosmological DM production via thermal freeze-out, including the thermal target for Majorana fermion DM.
|
high energy physics phenomenology
|
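The numbers behind the claim above follow directly from the Poisson statistics; a short check in natural units ($\hbar = 1$) is given below. The nucleus mass (roughly silicon), phonon energy, and momentum transfer are illustrative values, not taken from the paper.

```python
import numpy as np

m_N = 26e9        # ~Si nucleus mass in eV (28 amu * 0.93 GeV), illustrative
omega0 = 0.05     # optical phonon energy in eV, illustrative
q0 = np.sqrt(2 * m_N * omega0)   # momentum spread of the bound nucleus
q = 3 * q0                       # intermediate regime: q a few times q0

nbar = q**2 / (2 * m_N * omega0)         # mean phonon number (Poisson)
print("mean energy  :", nbar * omega0,
      " vs elastic q^2/(2 m_N):", q**2 / (2 * m_N))
print("energy spread:", np.sqrt(nbar) * omega0,
      " vs q*sqrt(omega0/(2 m_N)):", q * np.sqrt(omega0 / (2 * m_N)))
```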
Bayesian optimization (BO) methods often rely on the assumption that the objective function is well-behaved, but in practice, this is seldom true for real-world objectives even if noise-free observations can be collected. Common approaches, which try to model the objective as precisely as possible, often fail to make progress by spending too many evaluations modeling irrelevant details. We address this issue by proposing surrogate models that focus on the well-behaved structure in the objective function, which is informative for search, while ignoring detrimental structure that is challenging to model from few observations. First, we demonstrate that surrogate models with appropriate noise distributions can absorb challenging structures in the objective function by treating them as irreducible uncertainty. Secondly, we show that a latent Gaussian process is an excellent surrogate for this purpose, comparing with Gaussian processes with standard noise distributions. We perform numerous experiments on a range of BO benchmarks and find that our approach improves reliability and performance when faced with challenging objective functions.
|
statistics
|
Using N-body simulations of the Large Magellanic Cloud's (LMC's) passage through the Milky Way (MW), tailored to reproduce the observed kinematic properties of both galaxies, we show that the high-speed tail of the Solar Neighborhood dark matter distribution is overwhelmingly of LMC origin. Two populations contribute at high speeds: 1) Particles that were once bound to the LMC, and 2) MW halo particles that have been accelerated owing to the response of the halo to the recent passage of the LMC. These particles reach speeds of 700-900 km/s with respect to the Earth, above the local escape speed of the MW. The high-speed particles follow trajectories similar to the Solar reflex motion, with peak velocities reached in June. For low-mass dark matter, these high-speed particles can dominate the signal in direct-detection experiments, extending the reach of the experiments to lower masses and elastic scattering cross sections even with existing data sets. Our study shows that even non-disrupted MW satellite galaxies can leave a significant dark-matter footprint in the Solar Neighborhood.
|
astrophysics
|
We explore a self-learning Markov chain Monte Carlo method based on the Adversarial Non-linear Independent Components Estimation Monte Carlo, which utilizes generative models and artificial neural networks. We apply this method to the scalar $\varphi^4$ lattice field theory in the weak-coupling regime and, in doing so, greatly increase the system sizes explored to date with this self-learning technique. Our approach does not rely on a pre-existing training set of samples, as the agent systematically improves its performance by bootstrapping samples collected by the model itself. We evaluate the performance of the trained model by examining its mixing time and study the ergodicity of generated samples. When compared to methods such as Hamiltonian Monte Carlo, this approach provides unique advantages such as the speed of inference and a compressed representation of Monte Carlo proposals for potential use in downstream tasks.
|
condensed matter
|
Recent modeling of NICER observations of thermal X-ray pulsations from the surface of the isolated millisecond pulsar PSR J0030+0451 suggests that the hot emitting regions on the pulsar's surface are far from antipodal, which is at odds with the classical assumption that the magnetic field in the pulsar magnetosphere is predominantly that of a centered dipole. Here, we review these results and examine previous attempts to constrain the magnetospheric configuration of PSR J0030+0451. To the best of our knowledge, there is in fact no direct observational evidence that PSR J0030+0451's magnetic field is a centered dipole. Developing models of physically motivated, non-canonical magnetic field configurations and the currents that they can support poses a challenging task. However, such models may have profound implications for many aspects of pulsar research, including pulsar braking, estimates of birth velocities, and interpretations of multi-wavelength magnetospheric emission.
|
astrophysics
|
Let g be a basic classical Lie superalgebra over C. In the case of a typical weight whose every nonnegative integer multiple is also typical, we compute a closed form for the Hilbert series whose coefficients encode the dimensions of finite-dimensional irreducible typical g-representations. We give a formula for this Hilbert series in terms of elementary symmetric polynomials and Eulerian polynomials. Additionally, we show a simple closed form in terms of differential operators.
|
mathematics
|
Light field microscopy (LFM) uses a microlens array (MLA) near the sensor plane of a microscope to achieve single-shot 3D imaging of a sample without any moving parts. Unfortunately, the 3D capability of LFM comes with a significant loss of lateral resolution at the focal plane. Placing the MLA near the pupil plane of the microscope, instead of the image plane, can mitigate the artifacts and provide an efficient forward model, at the expense of field-of-view (FOV). Here, we demonstrate improved resolution across a large volume with Fourier DiffuserScope, which uses a diffuser in the pupil plane to encode 3D information, then computationally reconstructs the volume by solving a sparsity-constrained inverse problem. Our diffuser consists of randomly placed microlenses with varying focal lengths; the random positions provide a larger FOV compared to a conventional MLA, and the diverse focal lengths improve the axial depth range. To predict system performance based on diffuser parameters, we for the first time establish a theoretical framework and design guidelines, which are verified by numerical simulations, then build an experimental system that achieves $< 3$ um lateral and $4$ um axial resolution over a $1000 \times 1000 \times 280$ um$^3$ volume. Our diffuser design outperforms the MLA used in LFM, providing more uniform resolution over a larger volume, both laterally and axially.
|
electrical engineering and systems science
|
Given an undirected measurement graph $G = ([n], E)$, the classical angular synchronization problem consists of recovering unknown angles $\theta_1,\dots,\theta_n$ from a collection of noisy pairwise measurements of the form $(\theta_i - \theta_j) \mod 2\pi$, for each $\{i,j\} \in E$. This problem arises in a variety of applications, including computer vision, time synchronization of distributed networks, and ranking from preference relationships. In this paper, we consider a generalization to the setting where there exist $k$ unknown groups of angles $\theta_{\ell,1}, \dots,\theta_{\ell,n}$, for $\ell=1,\dots,k$. For each $\{i,j\} \in E$, we are given noisy pairwise measurements of the form $\theta_{\ell,i} - \theta_{\ell,j}$ for an unknown $\ell \in \{1,2,\ldots,k\}$. This can be thought of as a natural extension of the angular synchronization problem to the heterogeneous setting of multiple groups of angles, where the measurement graph has an unknown edge-disjoint decomposition $G = G_1 \cup G_2 \cup \ldots \cup G_k$, where the $G_i$'s denote the subgraphs of edges corresponding to each group. We propose a probabilistic generative model for this problem, along with a spectral algorithm for which we provide a detailed theoretical analysis in terms of robustness against both sampling sparsity and noise. The theoretical findings are complemented by a comprehensive set of numerical experiments, showcasing the efficacy of our algorithm under various parameter regimes. Finally, we consider an application of bi-synchronization to the graph realization problem, and provide along the way an iterative graph disentangling procedure that uncovers the subgraphs $G_i$, $i=1,\ldots,k$, which is of independent interest, as it is shown to improve the final recovery accuracy across all the experiments considered.
|
statistics
|
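A sketch of the classical ($k=1$) spectral synchronization step that the generalized algorithm above builds on: form the Hermitian matrix $H_{ij}=e^{i(\theta_i-\theta_j)}$ on observed edges and read the angles off its top eigenvector. The graph density and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
theta = rng.uniform(0, 2 * np.pi, n)             # ground-truth angles

H = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.5:                   # observe roughly half the pairs
            noisy = theta[i] - theta[j] + 0.1 * rng.standard_normal()
            H[i, j] = np.exp(1j * noisy)
            H[j, i] = np.conj(H[i, j])

w, V = np.linalg.eigh(H)
est = np.angle(V[:, -1])                         # angles, up to a global rotation
shift = np.angle(np.sum(np.exp(1j * (theta - est))))   # best global alignment
resid = np.angle(np.exp(1j * (est + shift - theta)))   # wrap-aware residual
print("residual std after alignment:", resid.std())
```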
The novel PQ mechanism replaces the strong CP problem with some challenges in model building. In particular, the challenges concern i) the origin of an anomalous global symmetry called the PQ symmetry, ii) the scale of the PQ symmetry breaking, and iii) the quality of the PQ symmetry. In this letter, we provide a natural and simple UV-completed model that addresses these challenges. Extra quarks and anti-quarks are separated by two branes in the Randall-Sundrum ${\bf R}^4 \times S^1 / {\bf Z}_2$ spacetime, while a hidden SU($N_H$) gauge field condenses in the bulk. The brane separation is the origin of the PQ symmetry, and its breaking scale is given by the dynamical scale of the SU($N_H$) gauge interaction. The (generalized) Casimir force of the SU($N_H$) condensation stabilizes the 5th dimension, which guarantees the quality of the PQ symmetry.
|
high energy physics phenomenology
|
The isotope effect in the superconducting transition temperature is anomalous if the isotope coefficient $\alpha<0$ or $\alpha>1/2$. In this work, we show that such anomalous behaviors can naturally arise within the Bardeen-Cooper-Schrieffer framework if both phonon and non-phonon modes coexist. Different from the case of the standard Eliashberg theory (with only phonon) in which $\alpha\le1/2$, the isotope coefficient can now take arbitrary values in the simultaneous presence of phonon and the other non-phonon mode. In particular, most strikingly, a pair-breaking phonon can give rise to large isotope coefficient $\alpha>1/2$ if the unconventional superconductivity is mediated by the lower frequency non-phonon boson mode. Based on our studies, implications on several families of superconductors are discussed.
|
condensed matter
|
We show that for a K-unstable Fano variety, any divisorial valuation computing its stability threshold induces a non-trivial special test configuration preserving the stability threshold. When such a divisorial valuation exists, we show that the Fano variety degenerates to a uniquely determined twisted K-polystable Fano variety. We also show that the stability threshold can be approximated by divisorial valuations induced by special test configurations. As an application of the above results and the analytic work of Datar, Sz\'ekelyhidi, and Ross, we deduce that greatest Ricci lower bounds of Fano manifolds of fixed dimension form a finite set of rational numbers. As a key step in the proofs, we adapt the process of Li and Xu producing special test configurations to twisted K-stability in the sense of Dervan.
|
mathematics
|
Synthetic opals based on the self-assembly of polymeric nanoparticles generally produce faint/pale structural colors due to the many lattice flaws in the structures. Here we produce high-quality 3D photonic opals incorporating carbon nanotubes (PC-CNT) by evaporative self-assembly. Although the CNTs make up only 0.01% of the fabricated photonic opal, their controlled incorporation has a dramatic effect, changing the color of the photonic crystals from milky white to intense red. Microscopic study suggests that CNT incorporation did not affect the lattice ordering of the photonic crystals. The tunability of the structural colors as a function of incident angle was tested and verified against the Bragg-Snell law. Furthermore, we tested mechanochromic sensing of the photonic opals, demonstrating their potential as visual indicators. This tunable PC-CNT opens many possibilities, including strain sensing and structural health monitoring, as well as being of fundamental interest.
|
physics
|
Combinations of Monte-Carlo tree search and Deep Neural Networks, trained through self-play, have produced state-of-the-art results for automated game-playing in many board games. The training and search algorithms are not game-specific, but every individual game that these approaches are applied to still requires domain knowledge for the implementation of the game's rules, and constructing the neural network's architecture -- in particular the shapes of its input and output tensors. Ludii is a general game system that already contains over 500 different games, which can rapidly grow thanks to its powerful and user-friendly game description language. Polygames is a framework with training and search algorithms, which has already produced superhuman players for several board games. This paper describes the implementation of a bridge between Ludii and Polygames, which enables Polygames to train and evaluate models for games that are implemented and run through Ludii. We do not require any game-specific domain knowledge anymore, and instead leverage our domain knowledge of the Ludii system and its abstract state and move representations to write functions that can automatically determine the appropriate shapes for input and output tensors for any game implemented in Ludii. We describe experimental results for short training runs in a wide variety of different board games, and discuss several open problems and avenues for future research.
|
computer science
|
Deep reinforcement learning (deep RL) holds the promise of automating the acquisition of complex controllers that can map sensory inputs directly to low-level actions. In the domain of robotic locomotion, deep RL could enable learning locomotion skills with minimal engineering and without an explicit model of the robot dynamics. Unfortunately, applying deep RL to real-world robotic tasks is exceptionally difficult, primarily due to poor sample complexity and sensitivity to hyperparameters. While hyperparameters can be easily tuned in simulated domains, tuning may be prohibitively expensive on physical systems, such as legged robots, that can be damaged through extensive trial-and-error learning. In this paper, we propose a sample-efficient deep RL algorithm based on maximum entropy RL that requires minimal per-task tuning and only a modest number of trials to learn neural network policies. We apply this method to learning walking gaits on a real-world Minitaur robot. Our method can acquire a stable gait from scratch directly in the real world in about two hours, without relying on any model or simulation, and the resulting policy is robust to moderate variations in the environment. We further show that our algorithm achieves state-of-the-art performance on simulated benchmarks with a single set of hyperparameters. Videos of training and the learned policy can be found on the project website.
|
computer science
|
The Hodgkin and Huxley (H-H) model is a nonlinear system of four equations that describes how action potentials in neurons are initiated and propagated, and represents a major advance in the understanding of nerve cells. However, some of the parameters are obtained through a tedious combination of experiments and data tuning. In this paper, we propose the use of an iterative method (Landweber iteration) to estimate some of the parameters in the H-H model, given the membrane electric potential. We provide numerical results showing that the method is able to capture the correct parameters using the measured voltage as data, even in the presence of noise.
|
mathematics
|
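A generic sketch of the Landweber update $p_{k+1}=p_k-\lambda\,J(p_k)^{\top}(F(p_k)-y)$ used above; the full four-equation H-H model is not reproduced here, so a simple two-parameter exponential forward map stands in as an assumption for illustration, with noisy synthetic data playing the role of the measured potential.

```python
import numpy as np

def F(p, t):
    """Toy forward model with parameters p = (a, b): a * exp(-b * t)."""
    a, b = p
    return a * np.exp(-b * t)

def jacobian(p, t):
    """Columns: dF/da and dF/db, evaluated on the time grid t."""
    a, b = p
    return np.stack([np.exp(-b * t), -a * t * np.exp(-b * t)], axis=1)

t = np.linspace(0.0, 5.0, 100)
p_true = np.array([2.0, 0.7])
y = F(p_true, t) + 0.01 * np.random.default_rng(4).standard_normal(t.size)

p = np.array([1.0, 1.0])      # initial guess
lam = 0.01                    # step size, small enough relative to ||J^T J||
for _ in range(5000):         # Landweber: p <- p - lam * J^T (F(p) - y)
    r = F(p, t) - y
    p = p - lam * jacobian(p, t).T @ r

print("estimated:", p, " true:", p_true)
```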
Advances in data-driven methods have sparked renewed interest for applications in power systems. Creating datasets for successful application of these methods has proven to be very challenging, especially when considering power system security. This paper proposes a computationally efficient method to create datasets of secure and insecure operating points. We propose an infeasibility certificate based on separating hyperplanes that can a-priori characterize large parts of the input space as insecure, thus significantly reducing both computation time and problem size. Our method can handle an order of magnitude more control variables and creates balanced datasets of secure and insecure operating points, which is essential for data-driven applications. While we focus on N-1 security and uncertainty, our method can extend to dynamic security. For PGLib-OPF networks up to 500 buses and up to 125 control variables, we demonstrate drastic reductions in unclassified input space volumes and computation time, create balanced datasets, and evaluate an illustrative data-driven application.
|
electrical engineering and systems science
|
Several statistics-based detectors for determining the number of sources in a field, based on unimodal matrix models, are designed. A new variance ratio statistic is proposed, and its asymptotic distribution is analyzed. The variance ratio detector is shown to outperform the alternatives. It is shown that further improvements are achievable via optimally selected rotations. Numerical experiments demonstrate the performance gains of our detection methods over the baseline approach.
|
statistics
|
The BPS D3 brane has a non-supersymmetric cousin, called the non-susy D3 brane, which is also a solution of type IIB string theory. The corresponding counterpart of the black D3 brane is the `black' non-susy D3 brane and, like the BPS D3 brane, it also has a decoupling limit, where the decoupled geometry (in the case of interest here, asymptotically AdS$_5$ $\times$ S$^5$) is the holographic dual of a non-conformal, non-supersymmetric QFT in $(3+1)$-dimensions. In this QFT we compute the entanglement entropy (EE), the complexity and the Fisher information metric holographically using the above mentioned geometry for spherical subsystems. The fidelity and the Fisher information metric have been calculated from the regularized extremal volume of the codimension one time slice of the bulk geometry using two different proposals in the literature. Although for the AdS black hole both proposals give identical results, the results differ for the non-supersymmetric background.
|
high energy physics theory
|
When computed to next-to-leading order in perturbative QCD, the non-linear Balitsky-Kovchegov (BK) equation for the high-energy evolution of the dipole-hadron scattering appears to be unstable. We show that this instability can be avoided by using the rapidity of the dense hadronic target (instead of that of the dilute dipole projectile) as the evolution time. Using this variable, we construct a collinearly-improved version of the BK equation, where the dominant radiative corrections to the kernel -- those enhanced by double collinear logarithms -- are resummed to all orders.
|
high energy physics phenomenology
|
We study the super-resolution problem of recovering a periodic continuous-domain function from its low-frequency information. This means that we only have access to possibly corrupted versions of its Fourier samples up to a maximum cut-off frequency. The reconstruction task is specified as an optimization problem with generalized total-variation regularization involving a pseudo-differential operator. Our special emphasis is on the uniqueness of solutions. We show that, for elliptic regularization operators (e.g., the derivatives of any order), uniqueness is always guaranteed. To achieve this goal, we provide a new analysis of constrained optimization problems over Radon measures. We demonstrate that either the solutions are always made of Radon measures of constant sign, or the solution is unique. Doing so, we identify a general sufficient condition for the uniqueness of the solution of a constrained optimization problem with TV-regularization, expressed in terms of the Fourier samples.
|
mathematics
|
MAGIC is a system of two Cherenkov telescopes located on the Canary island of La Palma. A key part of the MAGIC Fundamental Physics program is the search for indirect signals of Dark Matter (DM) from different sources. In the Milky Way, DM forms an almost spherically symmetric halo, with a density peaked towards the center of the Galaxy and decreasing towards the outer region. We search for DM decay signals from the Galactic Halo, with a special methodology developed for this work. Our strategy is to compare pairs of observations performed at different angular distances from the Galactic Center, selected in such a way that all the diffuse components cancel out, except for those coming from the DM. In order to keep the systematic uncertainty of this novel background estimation method down to a minimum, the observation pairs have been acquired during the same nights and follow exactly the same azimuth and zenith paths. We collected 20 hours of data during 2018. Using half of them to determine the systematic uncertainty in the background estimation of our analysis, we obtain a value of 4.8\% with no dependence on energy. Accounting for this systematic uncertainty in the likelihood analysis based on the 10 remaining hours of data collected so far, we present the limit for a TeV DM particle with a lifetime of $10^{26}$ s in the $\mathrm{b\bar{b}}$ decay channel.
|
astrophysics
|
Photonic lanterns (PLs) rely on a close-packed arrangement of single-mode fibers (SMFs), which are tapered and fused into one multi-mode core. Topologically optimal circle-packing arrangements have been well studied. Using this, we fabricate PLs with 19 and 37 SMFs showing tightly packed, ordered arrangements with packing densities of 95% and 99% of the theoretically achievable values, with mean adjacent core separations of 1.03 and 1.08 fiber diameters, respectively. We demonstrate that topological circle-packing data is a good predictor for optimal PL parameters.
|
physics
|
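The fiber counts 19 and 37 in the abstract above are consistent with centered hexagonal packings, in which one central fiber is surrounded by $k$ complete rings of $6k$ fibers each. A quick sketch of that counting (an illustration of the packing arithmetic only, not the authors' code):

```python
# Centered hexagonal numbers: one central fiber plus k rings of 6*i fibers.
def centered_hex(k):
    return 1 + sum(6 * i for i in range(1, k + 1))  # equals 3*k*(k+1) + 1

print([centered_hex(k) for k in range(4)])  # [1, 7, 19, 37]
```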
We study diagonal representatives of boundary condition matrices on the orbifolds $S^1/Z_2$ and $T^2/Z_m$ ($m=2, 3, 4, 6$). We give an alternative proof of the existence of diagonal representatives in each equivalent class of boundary condition matrices on $S^1/Z_2$, using a matrix exponential representation, and show that they do not necessarily exist on $T^2/Z_2$, $T^2/Z_3$, and $T^2/Z_4$. Each equivalence class on $T^2/Z_6$ has a diagonal representative, because its boundary conditions are determined by a single unitary matrix.
|
high energy physics theory
|
We are concerned with testing replicability hypotheses for many endpoints simultaneously. This constitutes a multiple test problem with composite null hypotheses. Traditional $p$-values, which are computed under least favourable parameter configurations, are over-conservative in the case of composite null hypotheses. As demonstrated in prior work, this poses severe challenges in the multiple testing context, especially when one goal of the statistical analysis is to estimate the proportion $\pi_0$ of true null hypotheses. Randomized $p$-values have been proposed to remedy this issue. In the present work, we discuss the application of randomized $p$-values in replicability analysis. In particular, we introduce a general class of statistical models for which valid, randomized $p$-values can be calculated easily. By means of computer simulations, we demonstrate that their usage typically leads to a much more accurate estimation of $\pi_0$. Finally, we apply our proposed methodology to a real data example from genomics.
|
statistics
|
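For readers unfamiliar with the device, the classical randomized $p$-value for a discrete test statistic conveys the basic idea: $p = P_0(T > t_{\mathrm{obs}}) + U \cdot P_0(T = t_{\mathrm{obs}})$ with $U \sim \mathrm{Uniform}(0,1)$ is exactly uniform under the null, removing the conservativeness of the standard $p$-value. The paper's construction for composite null hypotheses differs in detail, so the binomial sketch below is only a conceptual illustration.

```python
import numpy as np
from scipy.stats import binom

def randomized_p_value(t_obs, n, p0, rng):
    """Randomized p-value for an upper-tailed binomial test."""
    u = rng.uniform()
    return binom.sf(t_obs, n, p0) + u * binom.pmf(t_obs, n, p0)

# Under the null, the randomized p-value is exactly Uniform(0, 1):
rng = np.random.default_rng(1)
ps = [randomized_p_value(rng.binomial(20, 0.5), 20, 0.5, rng)
      for _ in range(20000)]
print(np.mean(np.array(ps) <= 0.05))  # ~0.05, i.e. an exact level
```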
The engines that produce extragalactic fast radio bursts (FRBs), and the mechanism by which the emission is generated, remain unknown. Many FRB models predict prompt multi-wavelength counterparts, which can be used to refine our knowledge of these fundamentals of the FRB phenomenon. However, several previous targeted searches for prompt FRB counterparts have yielded no detections, and have additionally not reached sufficient sensitivity with respect to the predictions. In this work, we demonstrate a technique to estimate the ratio, $\eta$, between the energy outputs of FRB counterparts at various wavelengths and the radio-wavelength emission. Our technique combines the fluence distribution of the FRB population with results from several wide-field blind surveys for fast transients from the optical to the TeV bands. We present constraints on $\eta$ that improve upon previous observations even in the case that all unclassified transient events in existing surveys are FRB counterparts. In some scenarios for the FRB engine and emission mechanism, we find that FRB counterparts should have already been detected, thus demonstrating that our technique can successfully test predictions for $\eta$. However, it is possible that FRB counterparts are lurking amongst catalogs of unclassified transient events. Although our technique is robust to the present uncertainty in the FRB fluence distribution, its ultimate application to accurately estimate or bound $\eta$ will require the careful analysis of all candidate fast-transient events in multi-wavelength survey data sets.
|
astrophysics
|
We study the fine distribution of lattice points lying on expanding circles in the hyperbolic plane $\mathbb{H}$. The angles of lattice points arising from the orbit of the modular group $PSL_{2}(\mathbb{Z})$, and lying on hyperbolic circles, are shown to be equidistributed for generic radii. However, the angles fail to equidistribute on a thin set of exceptional radii, even in the presence of growing multiplicity. Surprisingly, the distribution of angles on hyperbolic circles turns out to be related to the angular distribution of $\mathbb{Z}^2$-lattice points (with certain parity conditions) lying on circles in $\mathbb{R}^2$, along a thin subsequence of radii. A notable difference is that measures in the hyperbolic setting can break symmetry: on very thin subsequences they are not invariant under rotation by $\frac{\pi}{2}$, unlike the Euclidean setting where all measures have this invariance property.
|
mathematics
|
The quaternions, an extension of the complex numbers, form the first non-commutative division algebra, discovered by William Rowan Hamilton in 1843. In this article, we review the recent progress on building up the connection between the mathematical concept of quaternionic analyticity and the physics of high-dimensional topological states. Three- and four-dimensional harmonic oscillator wavefunctions are reorganized by the SU(2) Aharonov-Casher gauge potential to yield high-dimensional Landau levels possessing the full rotational symmetries and flat energy dispersions. The lowest Landau level wavefunctions exhibit quaternionic analyticity, satisfying the {\it Cauchy-Riemann-Fueter} condition, which generalizes the two-dimensional complex analyticity to three and four dimensions. It is also the Euclidean version of the helical Dirac and the chiral Weyl equations. After dimensional reductions, these states become two- and three-dimensional topological states maintaining time-reversal symmetry but exhibiting broken parity. We speculate that quaternionic analyticity can provide a guiding principle for future research on high-dimensional interacting topological states. Other progress, including high-dimensional Landau levels of Dirac fermions, their connections to high energy physics, and high-dimensional Landau levels in the Landau-type gauges, is also reviewed. This research is also an important application of the mathematical subject of quaternionic analysis in theoretical physics, and provides useful guidance for the experimental exploration of novel topological states of matter.
|
condensed matter
|
The mixed fractional Vasicek model, an extension of the traditional Vasicek model, has been widely used in modelling volatility, interest rates and exchange rates. If some phenomena are modeled by the mixed fractional Vasicek model, statistical inference for this process is naturally of great interest. Based on continuous-time observations, this paper considers the problem of estimating the drift parameters in the mixed fractional Vasicek model. We propose maximum likelihood estimators of the drift parameters, using the Radon-Nikodym derivative for a mixed fractional Brownian motion. Using the fundamental martingale and the Laplace transform, both the strong consistency and the asymptotic normality of the maximum likelihood estimators are established for all $H\in(0,1)$, $H\neq 1/2$.
|
mathematics
|
This paper considers the control design for a low-cost ventilator that is based on a manual resuscitator bag (also known as AmbuBag) to pump air into the lungs of a patient who is physically unable to breathe. First, it experimentally shows that for accurately tracking tidal volumes, the controller needs to be adapted to the individual patient and the different configurations, e.g., hardware or operation modes. Second, it proposes a set-point adaptation algorithm that uses sensor measurements of a flow meter to automatically adapt the controller to the setup at hand. Third, it experimentally shows that such an adaptive solution improves the performance of the ventilator for various setups. One objective of this paper is to increase awareness of the need for feedback control using sensor measurements in low-cost ventilator solutions in order to automatically adapt to the specific scenario.
|
electrical engineering and systems science
|
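The abstract above does not spell out the adaptation law, so the following is only a plausible minimal sketch of an integral-style set-point update in which the commanded bag compression is nudged breath by breath toward the target tidal volume; all names, gains and readings are hypothetical.

```python
def adapt_setpoint(setpoint, v_target, v_measured, gain=0.3,
                   sp_min=0.0, sp_max=1.0):
    """One integral-style set-point update from a flow-meter reading.

    The commanded compression is corrected in proportion to the
    relative tidal-volume error; the gain trades speed for stability.
    """
    setpoint += gain * (v_target - v_measured) / v_target
    return min(max(setpoint, sp_min), sp_max)

# Breath-by-breath loop with hypothetical tidal-volume readings (ml):
sp = 0.5
for v_meas in [320.0, 360.0, 390.0, 402.0]:
    sp = adapt_setpoint(sp, v_target=400.0, v_measured=v_meas)
    print(round(sp, 3))
```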
Low-light image sequences generally suffer from spatio-temporally incoherent noise, flicker and blurring of moving objects. These artefacts significantly reduce visual quality and, in most cases, post-processing is needed in order to generate acceptable quality. Most state-of-the-art enhancement methods based on machine learning require ground-truth data, but this is not usually available for naturally captured low-light sequences. We tackle these problems with an unpaired-learning method that offers simultaneous colorization and denoising. Our approach is an adaptation of the CycleGAN structure. To overcome the excessive memory limitations associated with ultra-high-resolution content, we propose a multiscale patch-based framework, capturing both local and contextual features. Additionally, an adaptive temporal smoothing technique is employed to remove flickering artefacts. Experimental results show that our method outperforms existing approaches in terms of subjective quality and that it is robust to variations in brightness levels and noise.
|
electrical engineering and systems science
|
Multi-degree Tchebycheffian splines are splines with pieces drawn from extended (complete) Tchebycheff spaces, which may differ from interval to interval, and possibly be of different dimensions. These are a natural extension of multi-degree polynomial splines. Under quite mild assumptions, they can be represented in terms of a so-called MDTB-spline basis; such a basis possesses all the characterizing properties of the classical polynomial B-spline basis. We present a practical framework to compute MDTB-splines, and provide an object-oriented implementation in Matlab. The implementation supports the construction, differentiation, and visualization of MDTB-splines whose pieces belong to Tchebycheff spaces that are null-spaces of constant-coefficient linear differential operators. The construction relies on an extraction operator that maps local Tchebycheffian Bernstein functions to the MDTB-spline basis of interest.
|
mathematics
|
Shanghai Synchrotron Radiation Facility (SSRF) is a 3.5 GeV storage ring with a bunch rate of 499.654 MHz, harmonic number of 720, and circumference of 432 meters. SSRF injection works at 3.5 GeV, where the multi-bunch instabilities limit the maximum stored current. In order to suppress multi-bunch instabilities caused by transverse impedance, a bunch-by-bunch transverse feedback system is indispensable for SSRF. The key component of that system is the bunch-by-bunch transverse feedback electronics. An important task in the electronics is precise time synchronization. In this paper, a novel clock synchronization and precise delay adjustment method based on the PLLs and delay lines are proposed. Test results indicate that the ENOB (Effective Number Of Bits) of the analog-to-digital conversion circuit is better than 9 bits in the input signal frequency range from 100 kHz to 700 MHz, and the closed loop attenuation at the critical frequency points is better than 40 dB. The initial commissioning tests with the beam in SSRF are also conducted, and the results are consistent with the expectations.
|
physics
|
In this paper, we prove a lower bound for $\underset{\chi \neq \chi_0}{\max}\bigg|\sum_{n\leq x} \chi(n)\bigg|$, when $x= \frac{q}{(\log q)^B}$. This improves on a result of Granville and Soundararajan for large character sums when the range of summation is wide. When $B$ goes to zero, our lower bound recovers the expected maximal value of character sums for most characters.
|
mathematics
|
We determine a supercharacter theory for Sylow $p$-subgroups ${^2{G}_2^{syl}(3^{2m+1})}$ of the Ree groups $^2{G}_2(3^{2m+1})$, calculate the conjugacy classes of ${^2{G}_2^{syl}(3^{2m+1})}$, and establish the character table of ${^2{G}_2^{syl}(3)}$.
|
mathematics
|
We study the local limit of the fixed-point forest, a tree structure associated to a simple sorting algorithm on permutations. This local limit can be viewed as an infinite random tree that can be constructed from a Poisson point process configuration on $[0,1]^\mathbb{N}$. We generalize this random tree, and compute the expected size and expected number of leaves of a random rooted subtree in the generalized version. We also obtain bounds on the variance of the size.
|
mathematics
|
We consider generalized Melvin-like solutions corresponding to Lie algebras of rank $5$ ($A_5$, $B_5$, $C_5$, $D_5$). The solutions take place in $D$-dimensional gravitational model with five Abelian 2-forms and five scalar fields. They are governed by five moduli functions $H_s(z)$ ($s = 1,...,5$) of squared radial coordinate $z=\rho^2$ obeying five differential master equations. The moduli functions are polynomials of powers $(n_1, n_2, n_3, n_4, n_5) = (5,8,9,8,5), (10,18,24,28,15), (9,16,21,24,25), (8,14,18,10,10)$ for Lie algebras $A_5$, $B_5$, $C_5$, $D_5$ respectively. The asymptotic behaviour for the polynomials at large distances is governed by some integer-valued $5 \times 5$ matrix $\nu$ connected in a certain way with the inverse Cartan matrix of the Lie algebra and (in $A_5$ and $D_5$ cases) with the matrix representing a generator of the $\mathbb{Z}_2$-group of symmetry of the Dynkin diagram. The symmetry and duality identities for polynomials are obtained, as well as asymptotic relations for solutions at large distances.
|
high energy physics theory
|
Using a combinatorial argument, we prove the well-known result that the Wirtinger and Dehn presentations of a link in 3-space describe isomorphic groups. The result is not true for links $\ell$ in a thickened surface $S \times [0,1]$. Their precise relationship, as given in the 2012 thesis of R.E. Byrd, is established here by an elementary argument. When a diagram in $S$ for $\ell$ can be checkerboard shaded, the Dehn presentation leads naturally to an abelian "Dehn coloring group," an isotopy invariant of $\ell$. Introducing homological information from $S$ produces a stronger invariant, $\cal C$, a module over the group ring of $H_1(S; {\mathbb Z})$. The authors previously defined the Laplacian modules ${\cal L}_G,{ \cal L}_{G^*}$ and polynomials $\Delta_G, \Delta_{G^*}$ associated to a Tait graph $G$ and its dual $G^*$, and showed that the pairs $\{{\cal L}_G, {\cal L}_{G^*}\}$, $\{\Delta_G, \Delta_{G^*}\}$ are isotopy invariants of $\ell$. The relationship between $\cal C$ and the Laplacian modules is described and used to prove that $\Delta_G$ and $\Delta_{G^*}$ are equal when $S$ is a torus.
|
mathematics
|
Sampling high-dimensional images is challenging due to the limited availability of sensors; scanning is usually necessary in these cases. To mitigate this challenge, snapshot compressive imaging (SCI) was proposed to capture the high-dimensional (usually 3D) images using a 2D sensor (detector). Via novel optical design, the {\em measurement} captured by the sensor is an encoded image of multiple frames of the 3D desired signal. Following this, reconstruction algorithms are employed to retrieve the high-dimensional data. Though various algorithms have been proposed, the total variation (TV) based method is still the most efficient one due to a good trade-off between computational time and performance. This paper aims to answer the question of which TV penalty (anisotropic TV, isotropic TV or vectorized TV) works best for video SCI reconstruction. Various TV denoising and projection algorithms are developed and tested for video SCI reconstruction on both simulation and real datasets.
|
electrical engineering and systems science
|
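For reference, the anisotropic and isotropic TV penalties compared in the abstract above differ only in how the per-pixel finite-difference gradient components are aggregated; the vectorized variant additionally couples the frame (temporal) dimension and is omitted from this minimal sketch.

```python
import numpy as np

def tv_anisotropic(x):
    """Sum of absolute horizontal and vertical finite differences."""
    return np.abs(np.diff(x, axis=1)).sum() + np.abs(np.diff(x, axis=0)).sum()

def tv_isotropic(x):
    """Sum of Euclidean norms of the per-pixel gradient vectors."""
    dh = np.diff(x, axis=1)[:-1, :]  # crop both to a common shape
    dv = np.diff(x, axis=0)[:, :-1]
    return np.sqrt(dh ** 2 + dv ** 2).sum()

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # a single vertical edge
print(tv_anisotropic(img), tv_isotropic(img))  # both penalize the jump
```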
We propose an extension of the standard model with Majorana-type fermionic dark matter based on the flatland scenario, where all scalar coupling constants, including scalar mass terms, vanish at the Planck scale, i.e. the scalar potential is flat above the Planck scale. This scenario could be compatible with the asymptotic safety paradigm for quantum gravity. We search the parameter space so that the model reproduces the observed values such as the Higgs mass, the electroweak vacuum and the relic abundance of dark matter. We also investigate the spin-independent elastic cross section for the Majorana fermions and a nucleon. It is shown that the Majorana fermions as dark matter candidates could be tested by dark matter direct detection experiments such as XENON, LUX and PandaX-II. We demonstrate that within the minimal setup compatible with the flatland scenario at the Planck scale or asymptotically safe quantum gravity, the extended model could have strong predictability.
|
high energy physics phenomenology
|
This paper discusses the analysis performed on a supersonic ramjet engine inlet for flight in the atmosphere of Jupiter. Since the Jovian atmosphere lacks oxygen, the thrust will be generated by nuclear fission heating in the heat chamber. The first task in the design of a ramjet engine is the supersonic inlet. The developed design methodology utilizes theoretical calculations and Computational Fluid Dynamics (CFD) simulations. The analytical model, used to calculate the gas parameters in front of the heat chamber, and the CFD analysis, used to define the inlet geometry, are discussed. The results from the analytical model and CFD are compared and used for validation of the design approach. The calculated pressure losses and the mass flow allow the determination of important parameters required for the design of the aircraft, such as the reactor power, the thrust, the maximum mass, and the overall external dimensions.
|
physics
|
In this paper, the dilepton electromagnetic decays $\chi_{cJ}(1P) \to J/\psi e^+e^-$ and $\chi_{cJ}(1P) \to J/\psi \mu^+\mu^-$, where $\chi_{cJ}$ denotes $\chi_{c0}$, $\chi_{c1}$ and $\chi_{c2}$, are calculated systematically in the improved Bethe-Salpeter method. The numerical results for the decay widths and the invariant mass distributions of the final lepton pairs are given. A comparison is made with the recently measured experimental data of BESIII. It is shown that for the cases including $e^+e^-$, gauge invariance is decisive and should be treated carefully. For the processes $\chi_{cJ}(1P) \to J/\psi e^+e^-$, the branching fractions are: $\mathcal{B}[\chi_{c0}(1P) \to J/\psi e^+e^-]=1.06^{+0.16}_{-0.18} \times 10^{-4}$, $\mathcal{B}[\chi_{c1}(1P) \to J/\psi e^+e^-]=2.88^{+0.50}_{-0.53} \times 10^{-3}$, and $\mathcal{B}[\chi_{c2}(1P) \to J/\psi e^+e^-]=1.74^{+0.22}_{-0.21} \times 10^{-3}$. The calculated branching fractions of the $\chi_{cJ}(1P)\to J/\psi \mu^+\mu^-$ channels are: $\mathcal{B}[\chi_{c0}(1P) \to J/\psi \mu^+\mu^-]=3.80^{+0.59}_{-0.64} \times 10^{-6}$, $\mathcal{B}[\chi_{c1}(1P) \to J/\psi \mu^+\mu^-]=2.04^{+0.36}_{-0.38} \times 10^{-4}$, and $\mathcal{B}[\chi_{c2}(1P) \to J/\psi \mu^+\mu^-]=1.66^{+0.19}_{-0.19} \times 10^{-4}$.
|
high energy physics phenomenology
|
Computational photography is becoming increasingly essential in the medical field. Today, imaging techniques for dermatology range from two-dimensional (2D) color imagery with a mobile device to professional clinical imaging systems measuring additional detailed three-dimensional (3D) data. The latter are commonly expensive and not accessible to a broad audience. In this work, we propose a novel system and software framework that relies only on low-cost (and even mobile) commodity devices present in every household to measure detailed 3D information of the human skin with a 3D-gradient-illumination-based method. We believe that our system has great potential for early-stage diagnosis and monitoring of skin diseases, especially in vastly populated or underdeveloped areas.
|
electrical engineering and systems science
|
In this paper, we propose a new kind of numerical scheme for high-dimensional backward stochastic differential equations based on a modified multi-level Picard iteration. The proposed scheme is very similar to the original multi-level Picard iteration but differs in the underlying Monte-Carlo sample generation, which enables an improvement in the sense of complexity. We prove explicit error estimates for the case where the generator does not depend on the control variate.
|
mathematics
|
A new model that maps a quantum random walk described by a Hadamard operator to a particular case of a classical random walk is presented. The model is represented by a Markov chain with a stochastic matrix, i.e., all the transition rates are positive, although the Hadamard operator contains negative entries. Using a proper transformation that is applied to the random walk distribution after n steps, the probability distributions in space of the two quantum states |1>, |0> are revealed. These show that a quantum walk can be entirely mapped to a particular case of a higher-dimensional random walk model. The random walk model and its equivalence to a Hadamard walk can be extended to other cases, such as a finite chain with two reflecting points.
|
quantum physics
|
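The mapping to a stochastic Markov chain is the paper's contribution and is not reproduced here; the sketch below merely simulates the standard Hadamard walk that the construction starts from, confirming that although the coin operator has negative entries, the evolution is unitary and yields a normalized probability distribution.

```python
import numpy as np

n_steps, size = 50, 101                        # sites -50 .. 50
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

psi = np.zeros((size, 2), dtype=complex)       # psi[site, coin]
psi[size // 2, 0] = 1.0                        # start at the origin in |0>

for _ in range(n_steps):
    psi = psi @ H.T                            # toss the coin at every site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]               # coin |0> steps right
    shifted[:-1, 1] = psi[1:, 1]               # coin |1> steps left
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)
print(prob.sum())  # 1.0: the distribution stays normalized
```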
Launching in 2028, ESA's Atmospheric Remote-sensing Exoplanet Large-survey (ARIEL) survey of $\sim$1000 transiting exoplanets will build on the legacies of Kepler and TESS and complement JWST by placing its high precision exoplanet observations into a large, statistically-significant planetary population context. With continuous 0.5--7.8~$\mu$m coverage from both FGS (0.50--0.55, 0.8--1.0, and 1.0--1.2~$\mu$m photometry; 1.25--1.95~$\mu$m spectroscopy) and AIRS (1.95--7.80~$\mu$m spectroscopy), ARIEL will determine atmospheric compositions and probe planetary formation histories during its 3.5-year mission. NASA's proposed Contribution to ARIEL Spectroscopy of Exoplanets (CASE) would be a subsystem of ARIEL's FGS instrument consisting of two visible-to-infrared detectors, associated readout electronics, and thermal control hardware. FGS, to be built by the Polish Academy of Sciences' Space Research Centre, will provide both fine guiding and visible to near-infrared photometry and spectroscopy, providing powerful diagnostics of atmospheric aerosol contribution and planetary albedo, which play a crucial role in establishing planetary energy balance. The CASE team presents here an independent study of the capabilities of ARIEL to measure exoplanetary metallicities, which probe the conditions of planet formation, and FGS to measure scattering spectral slopes, which indicate if an exoplanet has atmospheric aerosols (clouds and hazes), and geometric albedos, which help establish planetary climate. Our design reference mission simulations show that ARIEL could measure the mass-metallicity relationship of its 1000-planet single-visit sample to $>7.5\sigma$ and that FGS could distinguish between clear, cloudy, and hazy skies and constrain an exoplanet's atmospheric aerosol composition to $>5\sigma$ for hundreds of targets, providing statistically-transformative science for exoplanet atmospheres.
|
astrophysics
|
Heavy goods vehicles (HGVs) are involved in 4.5% of police-reported road crashes in Europe and 14.2% of fatal road crashes. Active and passive safety systems can help to prevent crashes or mitigate the consequences but need detailed scenarios to be designed effectively. The aim of this paper is to give a comprehensive and up-to-date analysis of HGV crashes in Europe. The analysis is based on general statistics from CARE, results about trucks weighing 16 tons or more from national crash databases and a detailed study of in-depth crash data from GIDAS. Three scenarios are identified that should be addressed by future safety systems: (1) rear-end crashes with other vehicles in which the truck is the striking partner, (2) conflicts during right turn maneuvers of the truck and a cyclist and (3) pedestrians crossing the road perpendicular to the direction of travel of the truck.
|
statistics
|
We propose a flexible model for count time series which has potential uses for both underdispersed and overdispersed data. The model is based on the Conway-Maxwell-Poisson (COM-Poisson) distribution with parameters varying along time to take serial correlation into account. Model estimation is challenging, however, and requires recently proposed methods to deal with the intractable normalising constant as well as to sample efficiently from the COM-Poisson distribution.
|
statistics
|
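A minimal sketch of the computational obstacle named in the abstract above: the COM-Poisson pmf $P(Y=y)\propto \lambda^y/(y!)^\nu$ has a normalising constant $Z(\lambda,\nu)=\sum_k \lambda^k/(k!)^\nu$ with no closed form, which the toy code below simply truncates ($\nu=1$ recovers the Poisson, $\nu<1$ gives overdispersion, $\nu>1$ underdispersion). The time-varying, serially correlated model of the paper requires the more sophisticated estimation and sampling methods it cites.

```python
import numpy as np

def com_poisson_pmf(y, lam, nu, k_max=200):
    """COM-Poisson pmf with the normalising constant truncated at k_max."""
    log_fact = np.cumsum(np.concatenate(([0.0], np.log(np.arange(1, k_max)))))
    log_terms = np.arange(k_max) * np.log(lam) - nu * log_fact  # log lam^k/(k!)^nu
    log_z = np.logaddexp.reduce(log_terms)                      # log Z(lam, nu)
    return np.exp(y * np.log(lam) - nu * log_fact[y] - log_z)

# Sanity check: nu = 1 reduces to Poisson, so the pmf sums to ~1.
print(sum(com_poisson_pmf(y, lam=3.0, nu=1.0) for y in range(60)))
```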
It follows by Bixby's Lemma that if $e$ is an element of a $3$-connected matroid $M$, then either $\mathrm{co}(M\backslash e)$, the cosimplification of $M\backslash e$, or $\mathrm{si}(M/e)$, the simplification of $M/e$, is $3$-connected. A natural question to ask is whether $M$ has an element $e$ such that both $\mathrm{co}(M\backslash e)$ and $\mathrm{si}(M/e)$ are $3$-connected. Calling such an element "elastic", in this paper we show that if $|E(M)|\ge 4$, then $M$ has at least four elastic elements provided $M$ has no $4$-element fans.
|
mathematics
|
This work is devoted to the study of integral $p$-adic Hodge theory in the context of Artin stacks. For a Hodge-proper stack, using the formalism of prismatic cohomology, we establish a version of $p$-adic Hodge theory with the \'etale cohomology of the Raynaud generic fiber as an input. In particular, we show that the corresponding Galois representation is crystalline and that the associated Breuil-Kisin module is given by the prismatic cohomology. An interesting new feature of the stacky setting is that the natural map between \'etale cohomology of the algebraic and the Raynaud generic fibers is often an equivalence even outside of the proper case. In particular, we show that this holds for global quotients $[X/G]$ where $X$ is a smooth proper scheme and $G$ is a reductive group. As applications we deduce Totaro's conjectural inequality and also set up a theory of $A_{\mathrm{inf}}$-characteristic classes.
|
mathematics
|
Nonlinear ICA is a fundamental problem for unsupervised representation learning, emphasizing the capacity to recover the underlying latent variables generating the data (i.e., identifiability). Recently, the very first identifiability proofs for nonlinear ICA have been proposed, leveraging the temporal structure of the independent components. Here, we propose a general framework for nonlinear ICA, which, as a special case, can make use of temporal structure. It is based on augmenting the data by an auxiliary variable, such as the time index, the history of the time series, or any other available information. We propose to learn nonlinear ICA by discriminating between true augmented data and data in which the auxiliary variable has been randomized. This enables the framework to be implemented algorithmically through logistic regression, possibly in a neural network. We provide a comprehensive proof of the identifiability of the model as well as the consistency of our estimation method. The approach not only provides a general theoretical framework combining and generalizing previously proposed nonlinear ICA models and algorithms, but also brings practical advantages.
|
statistics
|
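A toy sketch of the discrimination objective described above, under the assumption that the auxiliary variable is a segment index modulating the variance of the sources (a nonstationarity of the kind the framework can exploit). It demonstrates only the contrast between true and auxiliary-randomized pairs; the identifiability results in the paper additionally require a constrained network in which a feature extractor acting on the observations alone feeds the final regression layer. All sizes and the mixing below are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Toy nonstationary sources: variance grows with the segment index u.
n, n_seg = 4000, 10
u = rng.integers(0, n_seg, size=n)
s = (0.3 + u[:, None] / n_seg) * rng.standard_normal((n, 2))
x = np.tanh(s @ np.array([[1.0, 0.6], [0.4, 1.0]]).T)  # nonlinear mixture

# Positive class: true (x, u) pairs; negative class: u randomized.
X = np.vstack([np.column_stack([x, u]),
               np.column_stack([x, rng.permutation(u)])])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = MLPClassifier(hidden_layer_sizes=(32, 8), max_iter=1000,
                    random_state=0).fit(X, y)
print(clf.score(X, y))  # > 0.5: the net detects the (x, u) dependence
```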
SNR G0.9+0.1 is a well-known source in the direction of the Galactic Center, composed of a Supernova Remnant (SNR) and a Pulsar Wind Nebula (PWN) in the core. We investigate the potential of the future Cherenkov Telescope Array (CTA) by simulating observations of SNR G0.9+0.1. We studied the spatial and spectral properties of this source and estimated the systematic errors of these measurements. The source will be resolved if the VHE emission region is bigger than $\sim0.65'$. It will also be possible to distinguish between different spectral models and calculate the cut-off energy. The systematic errors are dominated by the IRF instrumental uncertainties, especially at low energies. We computed the evolution of a young PWN inside a SNR using a one-zone time-dependent leptonic model. We applied the model to the simulated CTA data and found that it will be possible to accurately measure the cut-off energy of the $\gamma$-ray spectrum. Fitting of the multiwavelength spectrum will also allow us to constrain the magnetization of the PWN. Conversely, a pure power-law spectrum would rule out this model. Finally, we checked the impact of the spectral shape and the energy density of the Inter-Stellar Radiation Fields (ISRFs) on the estimate of the parameters of the PWN, finding that they are not significantly affected.
|
astrophysics
|
We propose a simple and computationally efficient approach for designing a robust Model Predictive Controller (MPC) for constrained uncertain linear systems. The uncertainty is modeled as an additive disturbance and an additive error on the system dynamics matrices. Set based bounds for each component of the model uncertainty are assumed to be known. We separate the constraint tightening strategy into two parts, depending on the length of the MPC horizon. For a horizon length of one, the robust MPC problem is solved exactly, whereas for other horizon lengths, the model uncertainty is over-approximated with a net-additive component. The resulting MPC controller guarantees robust satisfaction of state and input constraints in closed-loop with the uncertain system. With appropriately designed terminal components and an adaptive horizon strategy, we prove the controller's recursive feasibility and stability of the origin. With numerical simulations, we demonstrate that our proposed approach gains up to 15x online computation speedup over a tube MPC strategy, while stabilizing about 98$\%$ of the latter's region of attraction.
|
electrical engineering and systems science
|
Model goodness-of-fit, model comparison, and model parameter estimation are the main categories of statistical problems in science. Bayesian and frequentist methods that address these questions often rely on a likelihood function, which is the key ingredient in order to assess the plausibility of model parameters given observed data. In some complex systems or experimental setups, predicting the outcome of a model cannot be done analytically, and Monte Carlo techniques are used. In this paper, we present a new analytic likelihood that takes into account Monte Carlo uncertainties, appropriate for use in the large and small sample size limits. Our formulation performs better than semi-analytic methods, prevents strong claims on biased statements, and provides improved coverage properties compared to available methods.
|
physics
|
High-precision CCD observations of six totally eclipsing contact binaries are presented and analyzed. It is found that only one target is an A-type contact binary (V429 Cam), while the others are W-type contact binaries. By analyzing the times of light minima, we discovered that two of them exhibit secular period increase while three manifest long-term period decrease. For V1033 Her, a cyclic variation superimposed on the long-term increase was discovered. By comparing the Gaia distances with those calculated from the absolute parameters of 173 contact binaries, we found that the Gaia distance can be applied to estimate absolute parameters for most contact binaries. The absolute parameters of our six targets were estimated by using their Gaia distances. The evolutionary status of contact binaries was studied; we found that the A- and W-subtype contact binaries may have different formation channels. The relationship between the spectroscopic and photometric mass ratios for 101 contact binaries is presented. It is discovered that the photometric mass ratios are in good agreement with the spectroscopic ones for almost all the totally eclipsing systems, which corresponds to the results derived by Pribulla et al. (2003a) and Terrell & Wilson (2005).
|
astrophysics
|
We extract interstellar scintillation parameters for pulsars observed by the NANOGrav radio pulsar timing program. Dynamic spectra for the observing epochs of each pulsar were used to obtain estimates of scintillation timescales, scintillation bandwidths, and the corresponding scattering delays using a stretching algorithm to account for frequency-dependent scaling. We were able to measure scintillation bandwidths for 28 pulsars at 1500 MHz and 15 pulsars at 820 MHz. We examine scaling behavior for 17 pulsars and find power-law indices ranging from $-0.7$ to $-3.6$, though these may be biased shallow due to insufficient frequency resolution at lower frequencies. We were also able to measure scintillation timescales for six pulsars at 1500 MHz and seven pulsars at 820 MHz. There is fair agreement between our scattering delay measurements and electron-density model predictions for most pulsars. We derive interstellar scattering-based transverse velocities assuming isotropic scattering and a scattering screen halfway between the pulsar and earth. We also estimate the location of the scattering screens assuming proper motion and interstellar scattering-derived transverse velocities are equal. We find no correlations between variations in scattering delay and either variations in dispersion measure or flux density. For most pulsars for which scattering delays were measurable, we find that time of arrival uncertainties for a given epoch are larger than our scattering delay measurements, indicating that variable scattering delays are currently subdominant in our overall noise budget but are important for achieving precisions of tens of ns or less.
|
astrophysics
|
One-parameter functionals of the R\'{e}nyi $R_{\rho,\gamma}(\alpha)$ and Tsallis $T_{\rho,\gamma}(\alpha)$ types are calculated both in the position (subscript $\rho$) and momentum ($\gamma$) spaces for the azimuthally symmetric 2D nanoring that is placed into the combination of the transverse uniform magnetic field $\bf B$ and the Aharonov-Bohm (AB) flux $\phi_{AB}$ and whose potential profile is modelled by the superposition of the quadratic and inverse quadratic dependencies on the radius $r$. The position (momentum) R\'{e}nyi entropy depends on the field $B$ as a negative (positive) logarithm of $\omega_{eff}\equiv\left(\omega_0^2+\omega_c^2/4\right)^{1/2}$, where $\omega_0$ determines the quadratic steepness of the confining potential and $\omega_c$ is the cyclotron frequency. This makes the sum ${R_\rho}_{nm}(\alpha)+{R_\gamma}_{nm}(\frac{\alpha}{2\alpha-1})$ a field-independent quantity that increases with the principal $n$ and azimuthal $m$ quantum numbers and does satisfy the corresponding uncertainty relation. An analytic expression for the lower boundary of the semi-infinite range of the dimensionless coefficient $\alpha$ where the momentum entropies exist reveals that it depends on the ring geometry, the AB intensity and the quantum number $m$. It is proved that there is only one orbital for which both the R\'{e}nyi and Tsallis uncertainty relations turn into identities at $\alpha=1/2$, and it is not necessarily the lowest-energy level. At any coefficient $\alpha$, the dependence of the position R\'{e}nyi entropy on the AB flux mimics the energy variation with $\phi_{AB}$, which, under appropriate scaling, can be used for the unique determination of the associated persistent current. Similarities and differences between the two entropies and their uncertainty relations are discussed too.
|
quantum physics
|
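As a self-contained illustration of the functionals and of the conjugate index $\beta=\alpha/(2\alpha-1)$ appearing above (not of the nanoring wavefunctions themselves), the sketch below evaluates the R\'{e}nyi entropy for a Gaussian ground state, whose position and momentum densities coincide; Gaussians saturate the corresponding uncertainty relation, and the closed-form value $\tfrac{1}{2}\ln\pi+\ln\alpha/(2(\alpha-1))$ serves as a numerical check.

```python
import numpy as np

def renyi(p, alpha, dx):
    """Differential Renyi entropy: log(sum p^alpha dx) / (1 - alpha)."""
    return np.log(np.sum(p ** alpha) * dx) / (1.0 - alpha)

x = np.linspace(-15, 15, 30001)
dx = x[1] - x[0]
p = np.exp(-x ** 2) / np.sqrt(np.pi)   # Gaussian ground-state density

alpha = 2.0
beta = alpha / (2 * alpha - 1)         # conjugate index
numeric = renyi(p, alpha, dx) + renyi(p, beta, dx)
closed = (np.log(np.pi) + np.log(alpha) / (2 * (alpha - 1))
          + np.log(beta) / (2 * (beta - 1)))
print(numeric, closed)                 # agree to high precision
```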
We theoretically investigate how the initial state influences the entanglement dynamics between two and three two-level atoms with dipole-dipole interaction (DDI) coupled to a whispering-gallery-mode (WGM) microtoroidal cavity. Two different cases, where the two atoms are coupled symmetrically or asymmetrically to the two WGMs through evanescent fields, are discussed in detail. Considering two types of initial states between the atoms and the symmetric regime, we show that for an initial entangled state, the sudden death and birth, as well as the freezing of the entanglement, can be obtained by adjusting both the scattering strength between the modes and the DDI, differently from an initial product state. Moreover, we note that the atomic entanglement generation is more susceptible to the scattering strength variation between the modes than to the DDI. In addition, for the asymmetric regime, the entanglement generation is strongly dependent on the atomic location and the scattering strength. Similar results are obtained for the case of three atoms coupled to a microtoroidal cavity, even in the presence of losses.
|
quantum physics
|
An unbiased one-dimensional weak link between two terminals, subjected to the Rashba spin-orbit interaction caused by an AC electric field which rotates periodically in the plane perpendicular to the link, is shown to inject spin-polarized electrons into the terminals. The injected spin-polarization has a DC component along the link and a rotating transverse component in the perpendicular plane. In the adiabatic, low rotation-frequency regime, these polarization components are proportional to the frequency. The DC component of the polarization vanishes for a linearly-polarized electric field.
|
condensed matter
|
We study the conditions under which a non-standard Wigner class concerning discrete symmetries may arise for massive spin one-half states. The mass dimension one fermionic states are shown to constitute explicit examples. We also show how to reconcile these states with the current criticism due to the formulations of Lee and Wick, and of Weinberg.
|
high energy physics theory
|
Intelligent reflecting surface (IRS) is envisioned to be a new and revolutionary technology for achieving spectrum- and energy-efficient wireless communication networks cost-effectively in the future. Specifically, an IRS consists of a large number of low-cost passive elements, each reflecting the incident signal with a certain phase shift to collaboratively achieve beamforming and/or interference suppression at designated receivers. In this paper, we study an IRS-aided multiuser multiple-input single-output (MISO) wireless system where one IRS is deployed to assist in the communication from a multi-antenna access point (AP) to multiple single-antenna users. As such, each user receives the superposed signals from the AP as well as from the IRS via its reflection. We aim to minimize the total transmit power at the AP by jointly optimizing the transmit beamforming by the active antenna array at the AP and the reflect beamforming by the passive phase shifters at the IRS, subject to users' individual signal-to-interference-plus-noise ratio (SINR) constraints. However, the formulated problem is non-convex and difficult to solve optimally.
|
computer science
|
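The joint active/passive beamforming optimization in the abstract above is non-convex, and its solution method is not sketched here; the toy code below only illustrates the underlying signal model, a composite channel $h_d^H + h_r^H \Theta G$ with $\Theta=\mathrm{diag}(e^{j\theta_1},\dots,e^{j\theta_N})$, together with the classical per-element co-phasing heuristic for a single user. All channels are drawn at random purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 4, 64  # AP antennas, IRS elements

h_d = rng.standard_normal(M) + 1j * rng.standard_normal(M)          # AP -> user
G = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # AP -> IRS
h_r = rng.standard_normal(N) + 1j * rng.standard_normal(N)          # IRS -> user

def effective_channel(theta):
    """Composite channel h_d^H + h_r^H diag(exp(j*theta)) G."""
    return h_d.conj() + (h_r.conj() * np.exp(1j * theta)) @ G

w = h_d / np.linalg.norm(h_d)  # transmit beamformer (MRT toward h_d)

gain_random = np.abs(effective_channel(rng.uniform(0, 2 * np.pi, N)) @ w) ** 2
# Co-phase every reflected path with the direct one:
theta = np.angle(h_d.conj() @ w) - np.angle((h_r.conj()[:, None] * G) @ w)
gain_cophased = np.abs(effective_channel(theta) @ w) ** 2
print(gain_random, "<", gain_cophased)  # aligned phases boost the gain
```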
The mass function of clumps observed in molecular clouds raises interesting theoretical issues, especially in its relation to the stellar initial mass function. We propose a statistical model of the mass function of prestellar cores (CMF), formed in self-gravitating isothermal clouds at a given stage of their evolution. The latter is characterized by the mass-density probability distribution function ($\rho$-PDF), which is a power law with slope $q$. The variety of molecular clouds is divided into ensembles according to the PDF slope, and each ensemble is represented by a single spherical cloud. The cores are considered as elements of the self-similar structure typical of fractal clouds and are modeled by spherical objects populating each cloud shell. Our model assumes relations between the size, mass and density of the statistical cores. From these, a core mass-density relationship $\rho\propto m^x$ is derived, where $x=1/(1+q)$. We find that $q$ determines the existence or non-existence of a threshold density for core collapse. The derived general CMF is a power law of slope $-1$, while the CMF of gravitationally unstable cores has a slope $(-1 + x/2)$, comparable with the slopes of the high-mass part of the stellar initial mass function and of observational CMFs.
|
astrophysics
|
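The slope arithmetic in the abstract above is easy to tabulate; for a few illustrative (negative) PDF slopes $q$, chosen here purely as examples, the exponent $x=1/(1+q)$ and the resulting slope $-1+x/2$ of the unstable-core CMF are:

```python
# rho ~ m^x with x = 1/(1+q); the unstable-core CMF slope is -1 + x/2.
for q in (-1.5, -2.0, -2.5, -3.0):
    x = 1.0 / (1.0 + q)
    print(f"q = {q:+.1f} -> x = {x:+.2f}, unstable-CMF slope = {-1 + x / 2:+.2f}")
```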
We present strong evidence that the tree level slow roll bounds of arXiv:1807.05193 and arXiv:1810.05506 are valid, even when the tachyon has overlap with the volume of the cycle wrapped by the orientifold. This extends our previous results in the volume-dilaton subspace to a semi-universal modulus. Emboldened by this and other observations, we investigate what it means to have a bound on (generalized) slow roll in a multi-field landscape. We argue that for $any$ point $\phi_0$ in an $N$-dimensional field space with $V(\phi_0) > 0$, there exists a path of monotonically decreasing potential energy to a point $\phi_1$ within a path length $\lesssim {\cal O}(1)$, such that $\sqrt{N}\ln \frac{V(\phi_1)}{V(\phi_0)} \lesssim - {\cal O} (1)$. The previous de Sitter swampland bounds are specific ways to realize this stringent non-local constraint on field space, but we show that it also incorporates (for example) the scenario where both slow roll parameters are intermediate-valued and the Universe undergoes a small number of e-folds, as in the Type IIA set up of arXiv:1310.8300. Our observations are in the context of tree level constructions, so we take the conservative viewpoint that it is a characterization of the classical "boundary" of the string landscape. To emphasize this, we argue that these bounds can be viewed as a type of Dine-Seiberg statement.
|
high energy physics theory
|