Ensembles of bismuth donor spins in silicon are promising storage elements for microwave quantum memories due to their long coherence times, which exceed seconds. Operating an efficient quantum memory requires achieving critical coupling between the spin ensemble and a suitable high-quality-factor resonator -- this in turn requires a thorough understanding of the lineshapes for the relevant spin resonance transitions, particularly considering the influence of the resonator itself on line broadening. Here, we present pulsed electron spin resonance measurements of ensembles of bismuth donors in natural silicon, above which niobium superconducting resonators have been patterned. By studying spin transitions across a range of frequencies and fields we identify distinct line broadening mechanisms, and in particular those which can be suppressed by operating at magnetic-field-insensitive `clock transitions'. Given the donor concentrations and resonator used here, we measure a cooperativity $C\sim 0.2$ and, based on our findings, we discuss a route to achieve unit cooperativity, as required for a quantum memory.
condensed matter
Consumption of REST services has become a popular means of invoking code provided by third parties, particularly in web applications. Nowadays, programmers of web applications can choose TypeScript over JavaScript to benefit from static type checking, which enables validating calls to local functions or to those provided by libraries. Errors in calls to REST services, however, can only be found at run-time. In this paper, we present SafeRESTScript (SRS, for short), a language that extends the support of static analysis to calls to REST services, with the ability to statically find common errors such as missing or invalid data in REST calls and misuse of the results from such calls. SafeRESTScript features a syntax similar to JavaScript and is equipped with (i) a rich collection of types (including objects, arrays and refinement types) and (ii) primitives to natively support REST calls that are statically validated against specifications of the corresponding APIs. Specifications are written in HeadREST, a language that also features refinement types and supports the description of semantic aspects of REST APIs in a style reminiscent of Hoare triples. We present SafeRESTScript and its validation system, based on a general-purpose verification tool (Boogie). The evaluation of SafeRESTScript and of the prototype implementation of its validator, available in the form of an Eclipse plugin, is also discussed.
computer science
Black holes in a class of string compactifications, known as STU models, carry four electric and four magnetic charges. Furthermore, a duality group, given by the product of three congruence subgroups of $SL(2,\mathbb{Z})$, acts on these integer-valued charges. By placing these eight charges at the eight corners of a Bhargava cube, we provide a classification of the duality orbits in these theories.
high energy physics theory
Recently, we have seen rapid development of Deep Neural Network (DNN) based visual tracking solutions. Some trackers combine DNN-based solutions with Discriminative Correlation Filters (DCF) to extract semantic features and successfully deliver state-of-the-art tracking accuracy. However, these solutions are highly compute-intensive, requiring long processing times and thus failing to guarantee real-time performance. To deliver both high accuracy and reliable real-time performance, we propose a novel tracker called SiamVGG. It combines a Convolutional Neural Network (CNN) backbone and a cross-correlation operator, and takes advantage of the features from exemplar images for more accurate object tracking. The architecture of SiamVGG is customized from VGG-16, with the parameters shared by both exemplar images and desired input video frames. We demonstrate the proposed SiamVGG on the OTB-2013/50/100 and VOT 2015/2016/2017 datasets with state-of-the-art accuracy while maintaining a decent real-time performance of 50 FPS running on a GTX 1080Ti. Our design achieves 2% higher Expected Average Overlap (EAO) compared to ECO and C-COT in the VOT2017 Challenge.
computer science
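The cross-correlation operator mentioned in the SiamVGG abstract above is, in Siamese trackers of this family, typically implemented by sliding the exemplar embedding over the search-region embedding as a convolution kernel. The PyTorch sketch below is an assumption based on standard Siamese-tracker practice, not code from the paper; `search_feat` and `exemplar_feat` are hypothetical names for the shared-backbone outputs.

```python
import torch
import torch.nn.functional as F

def xcorr(search_feat: torch.Tensor, exemplar_feat: torch.Tensor) -> torch.Tensor:
    """Batched cross-correlation: correlate each exemplar embedding with its
    own search-region embedding. search_feat: (B, C, Hs, Ws); exemplar_feat:
    (B, C, He, We); returns a response map of shape (B, 1, Hs-He+1, Ws-We+1)."""
    b, c, hs, ws = search_feat.shape
    # Fold the batch into channels and use grouped convolution so that
    # sample i in the batch is correlated only with exemplar i.
    out = F.conv2d(search_feat.reshape(1, b * c, hs, ws), exemplar_feat, groups=b)
    return out.reshape(b, 1, out.shape[-2], out.shape[-1])
```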
We introduce an axiomatic approach for channel divergences and channel relative entropies that is based on three information-theoretic axioms of monotonicity under superchannels (i.e. generalized data processing inequality), additivity under tensor products, and normalization, similar to the approach given recently for the state domain. We show that these axioms are sufficient to give enough structure also in the channel domain, leading to numerous properties that are applicable to all channel divergences. These include faithfulness, continuity, a type of triangle inequality, and boundedness between the min and max channel relative entropies. In addition, we prove a uniqueness theorem showing that the Kullback-Leibler divergence has only one extension to classical channels. For quantum channels, with the exception of the max relative entropy, this uniqueness does not hold. Instead we prove the optimality of the amortized channel extension of the Umegaki relative entropy, by showing that it provides a lower bound on all channel relative entropies that reduce to the Kullback-Leibler divergence on classical states. We also introduce the maximal channel extension of a given classical state divergence and study its properties.
quantum physics
Broadband dual-comb spectroscopy (DCS) based on portable mode-locked fiber frequency combs is a powerful tool for in situ, calibration-free, multi-species spectroscopy. While the acquisition of a single spectrum with mode-locked DCS typically takes microseconds to milliseconds, the applications of these spectrometers have generally been limited to systems and processes with time changes on the order of seconds or minutes due to the need to average many spectra to reach a high signal-to-noise ratio (SNR). Here, we demonstrate high-speed, continuous, fiber mode-locked laser DCS with down to 11 $\mu$s time resolution. We achieve this by filtering the comb spectra using portable Fabry-Perot cavities to generate filtered combs with 1 GHz tooth spacing. The 1 GHz spacing increases the DCS acquisition speed and SNR for a given optical bandwidth while retaining a sufficient spacing to resolve absorption features over a wide range of conditions. We measure spectra of methane inside a rapid compression machine throughout the 16 ms compression cycle with 133 cm$^{-1}$ bandwidth (4000 comb teeth) and 1.4 ms time resolution by spectrally filtering one of the combs. By filtering both combs, we measured a single-shot, 25 cm$^{-1}$ (750 comb teeth) spectrum of CO around 6330 cm$^{-1}$ in 11 $\mu$s. The technique enables simultaneously high-speed and high-resolution DCS measurements, and can be applied anywhere within the octave-spanning spectrum of robust and portable fiber mode-locked frequency combs.
physics
The relaxation dynamics and thermodynamic properties of supercooled and glassy gambogic acid are investigated using both theory and experiment. We measure the temperature dependence of the relaxation times in three polymorphs (alpha-, beta-, and gamma-form). To gain insight into the relaxation processes, we propose a theoretical approach to quantitatively understand the nature of these three relaxations. The alpha-relaxation captures cooperative motions of molecules, while the beta-process is mainly governed by local dynamics of a single molecule within the cage formed by its nearest neighbors. Based on quantitative agreement between theory and experimental data, our calculations clearly indicate that the beta-process is a precursor of the structural relaxation and that intramolecular motions are responsible for the gamma-relaxation. Moreover, the approach is exploited to study the effects of the heating process on the alpha relaxation. We find that the heating rate varies logarithmically with Tg and 1000/Tg. These variations are qualitatively consistent with many prior studies.
condensed matter
In this paper, we study a class of stochastic optimization problems, referred to as the \emph{Conditional Stochastic Optimization} (CSO), in the form of $\min_{x \in \mathcal{X}} \mathbb{E}_{\xi}f_\xi\big(\mathbb{E}_{\eta|\xi}[g_\eta(x,\xi)]\big)$, which finds a wide spectrum of applications including portfolio selection, reinforcement learning, robust learning, causal inference and so on. Assuming availability of samples from the distribution $\mathbb{P}(\xi)$ and samples from the conditional distribution $\mathbb{P}(\eta|\xi)$, we establish the sample complexity of the sample average approximation (SAA) for CSO, under a variety of structural assumptions, such as Lipschitz continuity, smoothness, and error bound conditions. We show that the total sample complexity improves from $\mathcal{O}(d/\epsilon^4)$ to $\mathcal{O}(d/\epsilon^3)$ when assuming smoothness of the outer function, and further to $\mathcal{O}(1/\epsilon^2)$ when the empirical function satisfies the quadratic growth condition. We also establish the sample complexity of a modified SAA, when $\xi$ and $\eta$ are independent. Several numerical experiments further support our theoretical findings. Keywords: stochastic optimization, sample average approximation, large deviations theory
mathematics
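As a concrete illustration of the SAA estimator analyzed in the CSO abstract above, the sketch below builds the nested empirical objective (inner conditional expectation replaced by an inner sample mean, outer expectation by an outer sample mean) and minimizes it. The toy instance at the end (Gaussian $\xi$ and $\eta|\xi$, squared outer loss) is an invented example for demonstration, not one of the paper's applications.

```python
import numpy as np
from scipy.optimize import minimize

def cso_saa_objective(x, f, g, xis, etas):
    """SAA objective (1/n) * sum_i f(mean_j g(x, xi_i, eta_ij), xi_i),
    where etas[i] holds the inner samples drawn from P(eta | xi_i)."""
    total = 0.0
    for xi, eta_block in zip(xis, etas):
        inner = np.mean([g(x, xi, eta) for eta in eta_block], axis=0)
        total += f(inner, xi)
    return total / len(xis)

# Toy instance: eta | xi ~ N(xi, 1), outer loss f(u) = u^2, g = x - eta.
rng = np.random.default_rng(0)
xis = rng.normal(size=50)
etas = [xi + rng.normal(size=20) for xi in xis]
f = lambda u, xi: float(u) ** 2
g = lambda x, xi, eta: x[0] - eta
res = minimize(cso_saa_objective, x0=np.zeros(1), args=(f, g, xis, etas))
print(res.x)  # close to the mean of the outer samples xis
```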
We revisit the degeneracy between the Hubble constant, $H_0$, and matter density, $\Omega_m$, for current cosmic microwave background (CMB) observations within the standard $\Lambda$CDM model. We show that Planck, Wilkinson Microwave Anisotropy Probe (WMAP), South Pole Telescope (SPT), and Atacama Cosmology Telescope Polarimeter (ACTPol) temperature power spectra produce different values of the exponent $x$ from minimizing the variance of the product $\Omega_m H_0^x$. The distribution of $x$ from the different data sets does not follow the Markov Chain Monte Carlo (MCMC) best-fit values for $H_0$ or $\Omega_m$. Particularly striking is the difference between Planck multipoles $\ell\leq800$ ($x=2.81$), and WMAP ($x = 2.94$), despite very similar best-fit cosmologies. We use a Fisher matrix analysis to show that, in fact, this range in exponent values is exactly as expected in $\Lambda$CDM given the multipole coverage and power spectrum uncertainties for each experiment. We show that the difference in $x$ from the Planck $\ell \leq 800$ and WMAP data is explained by a turning point in the relationship between $x$ and the maximum effective multipole, at around $\ell=700$. The value of $x$ is determined by several physical effects, and we highlight the significant impact of gravitational lensing for the high-multipole measurements. Despite the spread of $H_0$ values from different CMB experiments, the experiments are consistent with their sampling of the $\Omega_m-H_0$ degeneracy and do not show evidence for the need for new physics or for the presence of significant underestimated systematics according to these tests. The Fisher calculations can be used to predict the $\Omega_m-H_0$ degeneracy of future experiments.
astrophysics
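For readers who want to reproduce the kind of exponent fit described above, the sketch below finds $x$ by minimizing the fractional variance of $\Omega_m H_0^x$ over posterior samples. The fractional (rather than raw) variance is used so the result does not depend on the product's overall scale, which we assume matches the paper's intent; the samples here are a made-up stand-in for a real MCMC chain.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def best_exponent(omega_m, h0):
    """Exponent x minimizing the fractional variance of Omega_m * H0^x
    over MCMC samples of (Omega_m, H0)."""
    def frac_var(x):
        p = omega_m * h0 ** x
        return np.var(p) / np.mean(p) ** 2
    return minimize_scalar(frac_var, bounds=(1.0, 5.0), method="bounded").x

# Toy correlated samples standing in for a real chain:
rng = np.random.default_rng(1)
h0 = rng.normal(67.0, 1.0, 100_000)
omega_m = 0.32 * (67.0 / h0) ** 3.0 * rng.normal(1.0, 0.002, h0.size)
print(best_exponent(omega_m, h0))  # recovers x close to 3 by construction
```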
Young, low-mass stars in the solar neighborhood are vital for completing the mass function for nearby, young coeval groups, establishing a more complete census for evolutionary studies, and providing targets for direct-imaging exoplanet and/or disk studies. We present properties derived from high-resolution optical spectra for 336 candidate young, nearby, low-mass stars. These include measurements of radial velocities and age diagnostics such as H$\alpha$ and Li $\lambda$6707 equivalent widths. Combining our radial velocities with astrometry from Gaia DR2, we provide full 3D kinematics for the entire sample. We combine the measured spectroscopic youth information with additional age diagnostics (e.g., X-ray and UV fluxes, CMD positions) and kinematics to evaluate potential membership in nearby, young moving groups and associations. We identify 78 objects in our sample as bona fide members of 10 different moving groups, 15 of which are completely new members or have had their group membership reassigned. We also reject 44 previously proposed candidate moving group members. Furthermore, we have newly identified or confirmed the youth of numerous additional stars that do not belong to any currently known group, and find 69 co-moving systems using Gaia DR2 astrometry. We also find evidence that the Carina association is younger than previously thought, with an age similar to the $\beta$ Pictoris moving group ($\sim$22 Myr).
astrophysics
We design and implement an atomic frequency comb quantum memory for 793 nm wavelength photons using a monolithic cavity based on a thulium-doped Y$_3$Al$_5$O$_{12}$ (Tm:YAG) crystal. Approximate impedance matching results in the absorption of approximately $90\%$ of input photons and a memory efficiency of (27.5$\pm$ 2.7)% over a 500 MHz bandwidth. The cavity enhancement leads to a significant improvement over the previous efficiency in Tm-doped crystals using a quantum memory protocol. In turn, this allows us for the first time to store and recall quantum states of light in such a memory. Our results demonstrate progress toward efficient and faithful storage of single photon qubits with large time-bandwidth product and multi-mode capacity for quantum networking.
quantum physics
The spectra of repeating fast radio bursts (FRBs) are complex and time-variable, sometimes peaking within the observing band and showing a fractional emission bandwidth of about 10-30%. These spectral features may provide insight into the emission mechanism of repeating fast radio bursts, or they could possibly be explained by extrinsic propagation effects in the local environment. Broadband observations can better quantify this behavior and help to distinguish between intrinsic and extrinsic effects. We present results from a simultaneous 2.25 and 8.36 GHz observation of the repeating FRB 121102 using the 70 m Deep Space Network (DSN) radio telescope, DSS-43. During the 5.7 hr continuous observing session, we detected 6 bursts from FRB 121102, which were visible in the 2.25 GHz frequency band. However, none of these bursts were detected in the 8.36 GHz band, despite the larger bandwidth and greater sensitivity in the higher-frequency band. This effect is not explainable by Galactic scintillation and, along with previous multi-band experiments, clearly demonstrates that apparent burst activity depends strongly on the radio frequency band that is being observed.
astrophysics
Manganese (Mn) abundances are sensitive probes of the progenitors of Type Ia supernovae (SNe). In this work, we present a catalog of manganese abundances in dwarf spheroidal satellites of the Milky Way, measured using medium-resolution spectroscopy. Using a simple chemical evolution model, we infer the manganese yield of Type Ia SNe in the Sculptor dwarf spheroidal galaxy (dSph) and compare to theoretical yields. The sub-solar yield from Type Ia SNe ($\mathrm{[Mn/Fe]}_{\mathrm{Ia}}=-0.30_{-0.03}^{+0.03}$ at $\mathrm{[Fe/H]}=-1.5$ dex, with negligible dependence on metallicity) implies that sub-Chandrasekhar-mass (sub-$M_{\mathrm{Ch}}$) white dwarf progenitors are the dominant channel of Type Ia SNe at early times in this galaxy, although some fraction ($\gtrsim20\%$) of $M_{\mathrm{Ch}}$ Type Ia or Type Iax SNe are still needed to produce the observed yield. However, this result does not hold in all environments. In particular, we find that dSph galaxies with extended star formation histories (Leo I, Fornax dSphs) appear to have higher [Mn/Fe] at a given metallicity than galaxies with early bursts of star formation (Sculptor dSph), suggesting that $M_{\mathrm{Ch}}$ progenitors may become the dominant channel of Type Ia SNe at later times in a galaxy's chemical evolution.
astrophysics
In this paper, we compute uncertainty relations for non-commutative space and obtain a better lower bound than the standard one obtained from Heisenberg's uncertainty relation. We also derive the reverse uncertainty relation for the product and sum of uncertainties of two incompatible variables for one linear and one non-linear model of the harmonic oscillator. The non-linear model in non-commutative space yields two different expressions for the Schr\"odinger and Heisenberg uncertainty relations. This distinction does not arise in commutative space, nor even in the linear model of non-commutative space.
quantum physics
This paper describes two novel complementary techniques that improve the detection of lexical stress errors in non-native (L2) English speech: attention-based feature extraction and data augmentation based on Neural Text-To-Speech (TTS). In a classical approach, audio features are usually extracted from fixed regions of speech such as the syllable nucleus. We propose an attention-based deep learning model that automatically derives the optimal syllable-level representation from frame-level and phoneme-level audio features. Training this model is challenging because of the limited number of incorrect stress patterns. To solve this problem, we propose to augment the training set with incorrectly stressed words generated with Neural TTS. Combining both techniques achieves 94.8\% precision and 49.2\% recall for the detection of incorrectly stressed words in L2 English speech of Slavic speakers.
electrical engineering and systems science
Galaxies host a wide array of internal stellar components, which need to be decomposed accurately in order to understand their formation and evolution. While significant progress has been made with recent integral-field spectroscopic surveys of nearby galaxies, much can be learned from analyzing the large sets of realistic galaxies now available through state-of-the-art hydrodynamical cosmological simulations. We present an unsupervised machine learning algorithm, named auto-GMM, based on Gaussian mixture models, to isolate intrinsic structures in simulated galaxies based on their kinematic phase space. For each galaxy, the number of Gaussian components allowed by the data is determined through a modified Bayesian information criterion. We test our method by applying it to prototype galaxies selected from the cosmological simulation IllustrisTNG. Our method can effectively decompose most galactic structures. The intrinsic structures of simulated galaxies can thus be inferred statistically, without human supervision. We successfully identify four kinds of intrinsic structures: cold disks, warm disks, bulges, and halos. Our method fails for barred galaxies because of the complex kinematics of particles moving on bar orbits.
astrophysics
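The core of the auto-GMM procedure described above, fitting Gaussian mixture models to kinematic phase-space data and letting an information criterion pick the number of components, can be sketched with scikit-learn as below. Note the paper uses a modified Bayesian information criterion; the standard BIC here is a stand-in, and the phase-space columns named in the docstring are assumed for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def auto_gmm(phase_space, max_components=12, random_state=0):
    """Fit GMMs with 1..max_components components to per-star phase-space
    coordinates (e.g., columns of circularity, binding energy, and
    non-azimuthal angular momentum) and keep the model with the lowest BIC."""
    best, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=random_state).fit(phase_space)
        bic = gmm.bic(phase_space)
        if bic < best_bic:
            best, best_bic = gmm, bic
    return best, best.predict(phase_space)  # model and per-star component labels
```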
We present next-to-leading order (NLO) electroweak corrections to the dominant five angular coefficients parametrizing the Drell-Yan process in the $Z$-boson mass peak range for finite-$p_T$ vector boson production. The results are presented differentially in the vector boson transverse momentum. The Lam-Tung violating difference $A_0-A_2$ is examined alongside the coefficients. A single lepton transverse momentum cut is needed in the case of electroweak corrections to avoid a double singularity in the photon-induced diagrams, and the dependence on the value of this cut is examined. We compare the electroweak corrections to the angular coefficients to the NLO QCD corrections, including the single lepton cut. The size of the single lepton cut is found to affect the two coefficients $A_0$ and $A_2$ to the largest extent. The relative size of the electroweak corrections to the coefficients is moderate for all single lepton cut values, and by extrapolation to the inclusive results, is moderate also for the full dilepton phase space case. However, for the Lam-Tung violation, there is a significant contribution from the electroweak corrections at low $p_T$ of the lepton pair.
high energy physics phenomenology
In chemical process engineering, surrogate models of complex systems are often necessary for tasks of domain exploration, sensitivity analysis of the design parameters, and optimization. A suite of computational fluid dynamics (CFD) simulations geared toward chemical process equipment modeling has been developed and validated with experimental results from the literature. Various regression-based active learning strategies are explored with these CFD simulators in-the-loop under the constraints of a limited function evaluation budget. Specifically, five different sampling strategies and five regression techniques are compared, considering a set of four test cases of industrial significance and varying complexity. Gaussian process regression was observed to have a consistently good performance for these applications. The present quantitative study outlines the pros and cons of the different available techniques and highlights the best practices for their adoption. The test cases and tools are available with an open-source license to ensure reproducibility and engage the wider research community in contributing to both the CFD models and developing and benchmarking new improved algorithms tailored to this field.
computer science
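One of the simplest sampling strategies of the kind compared in the abstract above is greedy maximum-variance active learning with a Gaussian process surrogate. The sketch below is a generic illustration, not the paper's code; `simulator` stands for a single expensive CFD run, and the pool-based setting is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def active_learn(simulator, X_pool, n_init=10, budget=50, rng=None):
    """Greedy max-variance active learning under a fixed evaluation budget:
    query the simulator where the GP surrogate is most uncertain."""
    rng = np.random.default_rng(rng)
    idx = list(rng.choice(len(X_pool), n_init, replace=False))
    X = X_pool[idx]
    y = np.array([simulator(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(budget - n_init):
        gp.fit(X, y)
        _, std = gp.predict(X_pool, return_std=True)
        std[idx] = -np.inf                  # never resample known points
        j = int(np.argmax(std))             # most uncertain candidate
        idx.append(j)
        X = np.vstack([X, X_pool[j]])
        y = np.append(y, simulator(X_pool[j]))
    return gp.fit(X, y)                     # final surrogate model
```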
We consider an analog of particle production in a quartic $O(N)$ quantum oscillator with time-dependent frequency, which is a toy model of particle production in de Sitter space and the dynamical Casimir effect. We calculate exact quantum averages, the Keldysh propagator, and the particle number using two different methods. First, we employ a kind of rotating wave approximation to estimate these quantities for small deviations from stationarity. Second, we extend these results to arbitrarily strong deviations using the Schwinger--Keldysh diagrammatic technique. We show that in strongly nonstationary situations, including the case of resonant oscillations, loop corrections to the tree-level expressions result in an additional degree of freedom, $N \to N + \frac{3}{2}$, which modifies the average number and energy of created particles.
high energy physics theory
Resilience against major disasters is the most essential characteristic of future electrical distribution systems (EDS). A multi-agent-based rolling optimization method for EDS restoration scheduling is proposed in this paper. When a blackout occurs, considering the risk of losing the centralized authority due to the failure of the common core communication network, the agents available after disasters or cyber-attacks identify the communication-connected parts (CCPs) in the EDS with distributed communication. A multi-time interval optimization model is formulated and solved by the agents for the restoration scheduling of a CCP. A rolling optimization process for the entire EDS restoration is proposed. During the scheduling/rescheduling in the rolling process, the CCPs in the EDS are reidentified and the restoration schedules for the CCPs are updated. Through decentralized decision-making and rolling optimization, EDS restoration scheduling can automatically start and periodically update itself, providing effective solutions for EDS restoration scheduling in a blackout event. A modified IEEE 123-bus EDS is utilized to demonstrate the effectiveness of the proposed method.
electrical engineering and systems science
We present a discrete Morse-theoretic method for proving that a regular CW complex is homeomorphic to a sphere. We use this method to define bisimplices, the cells of a class of regular CW complexes we call bisimplicial complexes. The 1-skeleta of bisimplices are complete bipartite graphs, making them suitable for constructing higher-dimensional skeleta for bipartite graphs. We show that the flag bisimplicial completion of a finite bipartite bi-dismantlable graph is collapsible. We use this to show that the flag bisimplicial completion of a quadric complex is contractible and to construct a compact K(G,1) for G a torsion-free quadric group.
mathematics
In this paper, we confirm some congruences recently conjectured by V.J.W. Guo and M.J. Schlosser. For example, we show that for primes $p>3$, $$ \sum_{k=0}^{p-1}(2pk-2k-1)\frac{\left(\frac{-1}{p-1}\right)_k^{2p-2}}{(k!)^{2p-2}}\equiv0\pmod{p^5}. $$
mathematics
We present a beamforming algorithm for multiuser wideband millimeter wave (mmWave) communication systems where one access point uses hybrid analog/digital beamforming while multiple user stations have phased-arrays with a single RF chain. The algorithm operates in a more general mode than others available in the literature and has lower computational complexity and training overhead. Throughout the paper, we describe: i) the construction of novel beamformer sets (codebooks) with wide sector beams and narrow beams based on the orthogonality property of beamformer vectors, ii) a beamforming algorithm that uses training transmissions over the codebooks to select the beamformers that maximize the received sum-power along the bandwidth, and iii) a numerical validation of the algorithm in standard indoor scenarios for mmWave WLANs using channels obtained with both statistical and ray-tracing models. Our algorithm is designed to serve multiple users in a wideband OFDM system and does not require channel matrix knowledge or a particular channel structure. Moreover, we incorporate antenna-specific aspects in the analysis, such as antenna coupling, element radiation pattern, and beam squint. Although there are no other solutions for the general system studied in this paper, we characterize the algorithm's achievable rate and show that it attains more than 70 percent of the spectral efficiency (between 1.5 and 3 dB SNR loss) with respect to ideal fully-digital beamforming in the analyzed scenarios. We also show that our algorithm has similar sum-rate performance to other solutions in the literature for some special cases, while providing significantly lower computational complexity (with a linear dependence on the number of antennas) and shorter training overhead.
electrical engineering and systems science
Boosting is a general method to convert a weak learner (which generates hypotheses that are just slightly better than random) into a strong learner (which generates hypotheses that are much better than random). Recently, Arunachalam and Maity gave the first quantum improvement for boosting, by combining Freund and Schapire's AdaBoost algorithm with a quantum algorithm for approximate counting. Their booster is faster than classical boosting as a function of the VC-dimension of the weak learner's hypothesis class, but worse as a function of the quality of the weak learner. In this paper we give a substantially faster and simpler quantum boosting algorithm, based on Servedio's SmoothBoost algorithm.
quantum physics
We develop a framework to evaluate the time-dependent resonant inelastic X-ray scattering (RIXS) signal with the use of non-equilibrium dynamical mean field theory simulations. The approach is based on the solution of a time-dependent impurity model which explicitly incorporates the probe pulse. It avoids the need to compute four-point correlation functions, and can in principle be combined with different impurity solvers. This opens a path to study time-resolved RIXS processes in multi-orbital systems. The approach is exemplified with a study of the RIXS signal of a melting Mott antiferromagnet.
condensed matter
We report on the first frequency ratio measurement of an $^{115}$In$^+$ single-ion clock and a $^{87}$Sr optical lattice clock. A hydrogen maser serves as a reference oscillator to measure the ratio by independent optical combs. Over more than 90 000 seconds of measurement time, the frequency ratio $f_{\rm{In^+}}/f_{\rm{Sr}}$ is determined to be 2.952 748 749 874 863 4(21) with a relative uncertainty of $7.0 \times 10^{-16}$. The measurement creates a new connection in the network of frequency ratios of optical clocks.
physics
This paper presents a method for the optimal siting and sizing of energy storage systems (ESSs) in active distribution networks (ADNs) to achieve their dispatchability. The problem formulation accounts for the uncertainty inherent to the stochastic nature of distributed energy sources and loads. Thanks to the operation of ESSs, the main optimization objective is to minimize the dispatch error, which accounts for the mismatch between the realization and prediction of the power profile at the ADN connecting point to the upper layer grid, while respecting the grid voltages and ampacity constraints. The proposed formulation relies on the so-called Augmented Relaxed Optimal Power Flow (AR-OPF) method: it expresses a convex full AC optimal power flow, which is proven to provide a global optimal and exact solution in the case of radial power grids. The AR-OPF is coupled with the proposed dispatching control resulting in a two-level optimization problem. In the first block, the site and size of the ESSs are decided along with the level of dispatchability that the ADN can achieve. Then, in the second block, the adequacy of the ESS allocations and the feasibility of the grid operating points are verified over operating scenarios using the Benders decomposition technique. Consequently, the optimal size and site of the ESSs are adjusted. To validate the proposed method, simulations are conducted on a real Swiss ADN hosting a large amount of stochastic Photovoltaic (PV) generation.
electrical engineering and systems science
The next generation of electron-hadron facilities has the potential for significantly improving our understanding of exotic hadrons. The XYZ states have not been seen in photon-induced reactions so far. Their observation in such processes would provide an independent confirmation of their existence and offer new insights into their internal structure. Based on the known experimental data and the well-established quarkonium and Regge phenomenology, we give estimates for the exclusive cross sections of several XYZ states. For energies near threshold we expect cross sections of a few nanobarns for the $Z_c(3900)^+$ and upwards of tens of nanobarns for the $X(3872)$, which are well within reach of new facilities.
high energy physics phenomenology
The R package "sensobol" provides several functions to conduct variance-based uncertainty and sensitivity analysis, from the estimation of sensitivity indices to the visual representation of the results. It implements several state-of-the-art first and total-order estimators and allows the computation of up to third-order effects, as well as of the approximation error, in a swift and user-friendly way. Its flexibility makes it also appropriate for models with either a scalar or a multivariate output. We illustrate its functionality by conducting a variance-based sensitivity analysis of three classic models: the Sobol' (1998) G function, the logistic population growth model of Verhulst (1845), and the spruce budworm and forest model of Ludwig, Jones and Holling (1976).
statistics
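To make the variance-based indices concrete, here is a minimal Python (rather than R) sketch of the Sobol'-Saltelli design with the Jansen first- and total-order estimators, applied to the Sobol' (1998) G function mentioned above. The coefficient vector `a` is the classic eight-dimensional choice; this is a bare-bones stand-in for what sensobol provides, which includes many more estimators, higher-order effects, and bootstrap errors.

```python
import numpy as np

def g_function(X, a):
    # Sobol' G function: prod_i (|4 x_i - 2| + a_i) / (1 + a_i), x_i ~ U(0,1)
    return np.prod((np.abs(4 * X - 2) + a) / (1 + a), axis=1)

def sobol_indices(f, d, n=2**14, seed=0):
    """Jansen estimators for first-order (S) and total-order (T) indices."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S, T = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # A with column i taken from B
        fABi = f(ABi)
        S[i] = (var - 0.5 * np.mean((fB - fABi) ** 2)) / var  # first order
        T[i] = 0.5 * np.mean((fA - fABi) ** 2) / var          # total order
    return S, T

a = np.array([0, 1, 4.5, 9, 99, 99, 99, 99])  # classic G-function coefficients
S, T = sobol_indices(lambda X: g_function(X, a), d=8)
print(S.round(3), T.round(3))
```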
In this paper, we study a deep learning based method to solve the multicell power control problem for sum rate maximization subject to per-user rate constraints and per-base station (BS) power constraints. The core difficulty of this problem is how to ensure that the power control results learned by the deep neural network (DNN) satisfy the per-user rate constraints. To tackle the difficulty, we propose to cascade a projection block after a traditional DNN, which projects the infeasible power control results onto the constraint set. The projection block is designed based on a geometrical interpretation of the constraints, which is of low complexity, meeting the real-time requirement of online applications. An explicit-form expression of the backpropagated gradient is derived for the proposed projection block, with which the DNN can be trained to directly maximize the sum rate via unsupervised learning. We also develop a heuristic implementation of the projection block to reduce the size of the DNN. Simulation results demonstrate the advantages of the proposed method over existing deep learning and numerical optimization methods, and show the robustness of the proposed method under model mismatch between the training and testing datasets.
electrical engineering and systems science
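The paper's projection block depends on the specific geometry of the per-user rate and per-BS power constraints; as a generic building block, the sketch below shows a differentiable Euclidean projection onto a single halfspace $\{x : a^T x \geq b\}$ in PyTorch, so that gradients flow through the projection during unsupervised training. This is an illustrative assumption of the general idea, not the paper's actual projection.

```python
import torch

def project_halfspace(x: torch.Tensor, a: torch.Tensor, b: float) -> torch.Tensor:
    """Project each row of x onto {x : a @ x >= b}. Differentiable, so it can
    be cascaded after a DNN and trained end-to-end; feasible rows pass through
    unchanged (zero correction term)."""
    gap = torch.clamp(b - x @ a, min=0.0)        # constraint violation, >= 0
    return x + gap.unsqueeze(-1) * a / a.dot(a)  # shift along a to the boundary
```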
Sparse-driven radar imaging can obtain high-resolution images of a target scene from down-sampled data. However, the huge computational complexity of classical sparse recovery methods in this setting seriously affects the practicality of sparse imaging technology. In this paper, quantum algorithms are applied for the first time to image recovery for radar sparse imaging. Firstly, the radar sparse imaging problem is analyzed and the computational problem to be solved by quantum algorithms is determined. Then, the corresponding quantum circuit and its parameters are designed to ensure extremely low computational complexity, and a quantum-enhanced reconstruction algorithm for sparse imaging is proposed. Finally, the computational complexity of the proposed method is analyzed, and simulation experiments with raw radar data are presented to verify the validity of the proposed method.
quantum physics
The Simons Observatory (SO) will be a cosmic microwave background (CMB) survey experiment with three small-aperture telescopes and one large-aperture telescope, which will observe from the Atacama Desert in Chile. In total, SO will field $\sim$70,000 transition-edge sensor (TES) bolometers in six spectral bands centered between 27 and 280 GHz in order to achieve the sensitivity necessary to measure or constrain numerous cosmological quantities. The SO Universal Focal Plane Modules (UFMs) each contain a 150 mm diameter TES detector array, horn or lenslet optical coupling, cold readout components, and magnetic shielding. SO will use a microwave SQUID multiplexing ($\mu$MUX) readout at an initial multiplexing factor of $\sim$1000; the cold (100 mK) readout components are packaged in a $\mu$MUX readout module, which is part of the UFM, and can also be characterized independently. The 100 mK stage TES bolometer arrays and microwave SQUIDs are sensitive to magnetic fields, and their measured response will vary with the degree to which they are magnetically shielded. We present measurements of the magnetic pickup of test microwave SQUID multiplexers as a study of various shielding configurations for the Simons Observatory. We discuss how these measurements motivated the material choice and design of the UFM magnetic shielding.
astrophysics
Inferring model parameters from experimental data is a grand challenge in many sciences, including cosmology. This often relies critically on high fidelity numerical simulations, which are prohibitively computationally expensive. The application of deep learning techniques to generative modeling is renewing interest in using high dimensional density estimators as computationally inexpensive emulators of fully-fledged simulations. These generative models have the potential to make a dramatic shift in the field of scientific simulations, but for that shift to happen we need to study the performance of such generators in the precision regime needed for science applications. To this end, in this work we apply Generative Adversarial Networks to the problem of generating weak lensing convergence maps. We show that our generator network produces maps that are described by, with high statistical confidence, the same summary statistics as the fully simulated maps.
astrophysics
Forecasting the outstanding claim liabilities to set adequate reserves is critical for a nonlife insurer's solvency. Chain-Ladder and Bornhuetter-Ferguson are two prominent actuarial approaches used for this task. The selection between the two approaches is often ad hoc due to different underlying assumptions. We introduce a Dirichlet model that provides a common statistical framework for the two approaches, with some appealing properties. Depending on the type of information available, the model inference naturally leads to either Chain-Ladder or Bornhuetter-Ferguson prediction. Using claims data on workers' compensation insurance from several US insurers, we discuss both frequentist and Bayesian inference.
statistics
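For readers unfamiliar with the first of the two approaches above, the sketch below implements the classical Chain-Ladder completion of a cumulative run-off triangle with volume-weighted development factors; the Dirichlet model in the paper generalizes this, and the triangle here is a made-up example.

```python
import numpy as np

def chain_ladder(triangle):
    """Complete a cumulative run-off triangle (np.nan for future cells) with
    development factors f_j = sum_i C[i, j+1] / sum_i C[i, j], summing over
    accident years i where both development columns are observed."""
    C = np.asarray(triangle, dtype=float).copy()
    for j in range(C.shape[1] - 1):
        obs = ~np.isnan(C[:, j]) & ~np.isnan(C[:, j + 1])
        f = C[obs, j + 1].sum() / C[obs, j].sum()
        fill = np.isnan(C[:, j + 1]) & ~np.isnan(C[:, j])
        C[fill, j + 1] = f * C[fill, j]
    return C  # last column = ultimates; reserve = ultimate - latest diagonal

tri = [[100, 160, 180], [110, 170, np.nan], [120, np.nan, np.nan]]
print(chain_ladder(tri))
```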
We consider the irrelevant flow of classical Liouville field theory driven by the $T\bar T$ operator. After discussing properties of its exact action and equation of motion we construct an infinite set of conserved currents. We also find its vacuum solutions.
high energy physics theory
We analyze Higgs condensate bubble expansion during a first-order electroweak phase transition in the early Universe. The interaction of particles with the bubble wall can be accompanied by the emission of multiple soft gauge bosons. When computed at fixed order in perturbation theory, this process exhibits large logarithmic enhancements which must be resummed to all orders when the wall velocity is large. We perform this resummation both analytically and numerically at leading logarithmic accuracy. The numerical simulation is achieved by means of a particle shower in the broken phase of the electroweak theory. The two approaches agree to the 10\% level. For fast-moving walls, we find the scaling of the thermal pressure exerted against the wall to be $P\sim \gamma^2T^4$, independent of the particle masses, implying a significantly slower terminal velocity than previously suggested.
high energy physics phenomenology
The addition of filler particles to polymer electrolytes is known to increase their ionic conductivity (IC). A detailed understanding of how the interactions between the constituent materials are responsible for the enhancement remains to be developed. A significant contribution is ascribed to an increase of the polymer amorphous fraction, induced by the fillers, resulting in the formation of higher-ionic-conductivity channels in the polymer matrix. However, the dependence of the IC on the particle weight load and composition, and on the polymer morphology, is not fully understood. This work investigates Li-ion transport in composite polymer electrolytes (CPE) comprising Bi-doped LLZO particles embedded in PEO:LiTFSI matrixes. We find that the IC is optimized for very low particle weight loads (5 to 10%) and that both its magnitude and the load required strongly depend on the garnet particle composition. Based on structural characterization results and electrochemical impedance spectroscopy, a mechanism is proposed to explain these findings. It is suggested that the Li molar content in the garnet particle controls its interactions with the polymer matrix, resulting, at the optimum loads reported, in the formation of high-ionic-conductivity channels. We propose that manipulating the polymer morphology through the filler particle chemistry is a promising avenue for the further development of composite polymer electrolytes.
condensed matter
Group testing is a screening strategy that involves dividing a population into several disjoint groups of subjects. In its simplest implementation, each group is tested with a single test in the first phase, while in the second phase only subjects in positive groups, if any, need to be tested again individually. In this paper, we address the problem of group testing design, which aims to determine a partition into groups of a finite population in such a way that cardinality constraints on the size of each group and a constraint on the expected total number of tests are satisfied while minimizing a linear combination of the expected number of false negative and false positive classifications. First, we show that the properties and model introduced by Aprahamian et al. can be extended to the group testing design problem, which is then modeled as a constrained shortest path problem on a specific graph. We design and implement an ad hoc algorithm to solve this problem. On instances based on Santé publique France data on Covid-19 screening tests, the results of the computational experiments are very promising.
statistics
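As a baseline for the design problem above, the expected number of tests in the simplest two-stage (Dorfman) scheme with perfect tests has a closed form; the sketch below uses it to pick the group size minimizing tests per subject at a given prevalence. The paper's model is richer (imperfect tests, misclassification costs, cardinality constraints), so this is only the textbook special case.

```python
def dorfman_tests_per_subject(p: float, n: int) -> float:
    """Expected tests per subject for two-stage group testing with group size n
    and prevalence p: one group test, plus n individual retests if the group
    is positive, which happens with probability 1 - (1 - p)**n."""
    return (1 + n * (1 - (1 - p) ** n)) / n

p = 0.02
best_n = min(range(2, 41), key=lambda n: dorfman_tests_per_subject(p, n))
print(best_n, dorfman_tests_per_subject(p, best_n))  # group size near 8 at 2% prevalence
```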
We study the problem of selecting the most informative subset of a large observation set to enable accurate estimation of unknown parameters. This problem arises in a variety of settings in machine learning and signal processing including feature selection, phase retrieval, and target localization. Since for quadratic measurement models the moment matrix of the optimal estimator is generally unknown, the majority of prior work resorts to approximation techniques such as linearization of the observation model to optimize the alphabetical optimality criteria of an approximate moment matrix. Conversely, by exploiting a connection to the classical Van Trees' inequality, we derive new alphabetical optimality criteria without distorting the relational structure of the observation model. We further show that under certain conditions on the parameters of the problem these optimality criteria are monotone and (weakly) submodular set functions. These results enable us to develop an efficient greedy observation selection algorithm uniquely tailored to quadratic models, and provide theoretical bounds on its achievable utility.
electrical engineering and systems science
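The greedy observation selection the abstract above refers to has the standard form sketched below: starting from the empty set, repeatedly add the observation with the largest marginal gain of the (weakly) submodular utility. The sketch is generic; `utility` stands for one of the paper's Van Trees-based alphabetical criteria, which is an assumption for illustration.

```python
def greedy_select(utility, candidates, k):
    """Pick k observations by greedy marginal-gain maximization of a monotone
    (weakly) submodular set function utility: list -> float."""
    selected = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in selected]
        # Marginal gain of adding each remaining candidate to the current set
        gains = {c: utility(selected + [c]) - utility(selected) for c in remaining}
        selected.append(max(gains, key=gains.get))
    return selected
```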
A detailed temperature and pressure investigation of BiGdO$_{3}$ is carried out by means of dielectric constant, piezoelectric current, polarization-electric field loop, Raman scattering and x-ray diffraction measurements. The temperature-dependent dielectric constant and dielectric loss show two anomalies at about 290 K (T$_r$) and 720 K (T$_C$). The latter anomaly is most likely due to an antiferroelectric to paraelectric transition, as hinted by piezoelectric current and polarization-electric field loop measurements at room temperature, while the former anomaly suggests a reorientation of polarization. A cubic to orthorhombic structural transition is observed at about 10 GPa in high-pressure x-ray diffraction studies, accompanied by anisotropic lattice parameter changes. An expansion of about 30 % along the $a$-axis and a 15 % contraction along the $b$-axis during the structural transition result in a 9.5 % expansion of the unit cell volume. This structural transition is corroborated by anomalous softening and a large increase in the full width at half maximum (FWHM) of the 640 cm$^{-1}$ Raman mode above 10 GPa. The enhancement of the large structural distortion and the significant volume expansion during the structural transition point towards an antiferroelectric to ferroelectric transition in the system.
condensed matter
Histopathological diagnostics are an inherent part of everyday clinical work but are particularly laborious and associated with time-consuming manual analysis of image data. In order to cope with increasing diagnostic case numbers, due to the current growth and demographic change of the global population and the progress in personalized medicine, pathologists ask for assistance. Profiting from digital pathology and the use of artificial intelligence, individual solutions can be offered (e.g., detecting labeled cancer tissue sections). The testing of the human epidermal growth factor receptor 2 (HER2) oncogene amplification status via fluorescence in situ hybridization (FISH) is recommended for breast and gastric cancer diagnostics and is regularly performed at clinics. Here, we develop an interpretable, deep learning (DL)-based pipeline which automates the evaluation of FISH images with respect to HER2 gene amplification testing. It mimics the pathological assessment and relies on the detection and localization of interphase nuclei based on instance segmentation networks. Furthermore, it localizes and classifies fluorescence signals within each nucleus with the help of image classification and object detection convolutional neural networks (CNNs). Finally, the pipeline classifies the whole image regarding its HER2 amplification status. The visualization of the pixels on which the networks' decisions are based is an essential component in enabling interpretability for pathologists.
electrical engineering and systems science
Mistakes/uncertainties in object detection could lead to catastrophes when deploying robots in the real world. In this paper, we measure the uncertainties of object localization to minimize this kind of risk. Uncertainties emerge upon challenging cases like occlusion. The bounding box borders of an occluded object can have multiple plausible configurations. We propose a deep multivariate mixture of Gaussians model for probabilistic object detection. The covariances help to learn the relationship between the borders, and the mixture components potentially learn different configurations of an occluded part. Quantitatively, our model improves the AP of the baselines by 3.9% and 1.4% on CrowdHuman and MS-COCO, respectively, with almost no computational or memory overhead. Qualitatively, our model enjoys explainability since the resulting covariance matrices and the mixture components help measure uncertainties.
computer science
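Training the probabilistic detector described above amounts to minimizing the negative log-likelihood of the ground-truth box under the predicted multivariate Gaussian mixture. A minimal PyTorch sketch of such a loss is given below; the head that produces the mixture parameters (logits, means, and lower-triangular scale factors) is assumed, and all names are illustrative rather than the paper's.

```python
import torch
from torch.distributions import Categorical, MultivariateNormal, MixtureSameFamily

def mixture_nll(logits, means, scale_tril, target_boxes):
    """NLL of target boxes under a K-component multivariate Gaussian mixture.
    logits: (B, K); means: (B, K, 4); scale_tril: (B, K, 4, 4) lower-triangular
    with positive diagonal; target_boxes: (B, 4) box coordinates."""
    gmm = MixtureSameFamily(Categorical(logits=logits),
                            MultivariateNormal(loc=means, scale_tril=scale_tril))
    return -gmm.log_prob(target_boxes).mean()
```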
We study the torus partition functions of free bosonic CFTs in two dimensions. Integrating over Narain moduli defines an ensemble-averaged free CFT. We calculate the averaged partition function and show that it can be reinterpreted as a sum over topologies in three dimensions. This result leads us to conjecture that an averaged free CFT in two dimensions is holographically dual to an exotic theory of three-dimensional gravity with $U(1)^c \times U(1)^c$ symmetry and a composite boundary graviton. Additionally, for small central charge $c$, we obtain general constraints on the spectral gap of free CFTs using the spinning modular bootstrap, construct examples of Narain compactifications with a large gap, and find an analytic bootstrap functional corresponding to a single self-dual boson.
high energy physics theory
A quantum process encodes the causal structure that relates quantum operations performed in local laboratories. The process matrix formalism includes as special cases quantum mechanics on a fixed background space-time, but also allows for more general causal structures. Motivated by the interpretation of processes as a resource for quantum information processing shared by two (or more) parties, with advantages recently demonstrated both for computation and communication tasks, we investigate the notion of composition of processes. We show that under very basic assumptions such a composition rule does not exist. While the availability of multiple independent copies of a resource, e.g. quantum states or channels, is the starting point for defining information-theoretic notions such as entropy (both in classical and quantum Shannon theory), our no-go result means that a Shannon theory of general quantum processes will not possess a natural rule for the composition of resources.
quantum physics
We experimentally investigate a mechanical squeezed state realized in a parametrically-modulated membrane resonator embedded in an optical cavity. We demonstrate that a quantum characteristic of the squeezed dynamics can be revealed and quantified even in a moderately warm oscillator, through the analysis of motional sidebands. We provide a theoretical framework for quantitatively interpreting the observations and present an extended comparison with the experiment. A notable result is that the spectral shape of each motional sideband provides a clear signature of a quantum mechanical squeezed state without the necessity of absolute calibrations, in particular in the regime where residual fluctuations in the squeezed quadrature are reduced below the zero-point level.
quantum physics
Silicon photomultipliers are photon-number-resolving detectors endowed with hundreds of cells, enabling them to reveal highly populated quantum optical states. In this paper, we address such a goal by showing the possible acquisition strategies that can be adopted and discussing their advantages and limitations. In particular, we determine the best acquisition solution in order to properly reveal the nature, either classical or nonclassical, of mesoscopic quantum optical states.
quantum physics
The large $N$ limit of the four-dimensional superconformal index was computed and successfully compared to the entropy of a class of AdS$_5$ black holes only in the particular case of equal angular momenta. Using the Bethe Ansatz formulation, we compute the index at large $N$ with arbitrary chemical potentials for all charges and angular momenta, for general $\mathcal{N}=1$ four-dimensional conformal theories with a holographic dual. We conjecture and bring some evidence that a particular universal contribution to the sum over Bethe vacua dominates the index at large $N$. For $\mathcal{N}=4$ SYM, this contribution correctly leads to the entropy of BPS Kerr-Newman black holes in AdS$_5 \times S^5$ for arbitrary values of the conserved charges, thus completing the microscopic derivation of their microstates. We also consider theories dual to AdS$_5 \times \mathrm{SE}_5$, where SE$_5$ is a Sasaki-Einstein manifold. We first check our results against the so-called universal black hole. We then explicitly construct the near-horizon geometry of BPS Kerr-Newman black holes in AdS$_5 \times T^{1,1}$, charged under the baryonic symmetry of the conifold theory and with equal angular momenta. We compute the entropy of these black holes using the attractor mechanism and find complete agreement with the field theory predictions.
high energy physics theory
Objective: People with Subjective Cognitive Impairment (SCI), Mild Cognitive Impairment (MCI) and dementia often undergo Electroencephalography (EEG) in order to evaluate, through biological indexes, the functional connectivity between brain regions and activation areas during cognitive performance. EEG recordings are frequently contaminated by muscle artifacts, which obscure and complicate their interpretation. These muscle artifacts are particularly difficult to remove from the EEG so that the latter can be used for further clinical evaluation. In this paper, we propose a new approach to removing muscle artifacts from EEG data using a method that combines second- and higher-order statistical information. Subjects and Methods: In the proposed system the muscle artifacts of the EEG signal are removed by using Independent Vector Analysis (IVA). The latter was formulated as a general joint Blind Source Separation (BSS) method that uses both second-order and higher-order statistical information and thus takes advantage of both Independent Component Analysis (ICA) and Canonical Correlation Analysis (CCA). Diagonalization methods for IVA in the proposed system were reworked based on the Schur decomposition, offering a faster second-order blind identification algorithm that can be used in time-demanding applications. Results: The proposed method is evaluated on both simulated and real EEG data. To quantitatively examine the performance of the new method, two objective measures were adopted: the Root Mean Square Error (RMSE) and the Signal-to-Noise Ratio (SNR). Conclusion: The proposed method addresses the need to remove muscle artifacts from both realistically simulated EEG data and recordings of brain activity from people with cognitive impairment.
electrical engineering and systems science
We develop a multiple scattering theory for the absorption of waves in disordered media. Based on a general expression of the average absorbed power, we discuss the possibility to maximize absorption by using structural correlations of disorder as a degree of freedom. In a model system made of absorbing scatterers in a transparent background, we show that a stealth hyperuniform distribution of the scatterers allows the average absorbed power to reach its maximum value. This study provides a theoretical framework for the design of efficient non-resonant absorbers made of dilute disordered materials, for broadband and omnidirectional light, and other kinds of waves.
physics
The availability and use of egocentric data are rapidly increasing due to the growing use of wearable cameras. Our aim is to study the effect (positive, neutral or negative) of egocentric images or events on an observer. Given egocentric photostreams capturing the wearer's days, we propose a method that aims to assign sentiment to events extracted from egocentric photostreams. Such moments are candidates for retrieval according to how likely they are to represent a positive experience for the camera's wearer. The proposed approach obtained a classification accuracy of 75% on the test set, with a deviation of 8%. Our model makes a step forward, opening the door to sentiment recognition in egocentric photostreams.
computer science
In this paper, we develop the constraint energy minimization generalized multiscale finite element method (CEM-GMsFEM) in mixed formulation applied to parabolic equations with heterogeneous diffusion coefficients. The construction of the method is based on two multiscale spaces: a pressure multiscale space and a velocity multiscale space. The pressure space is constructed via a set of well-designed local spectral problems, which can be solved independently. Based on the computed pressure multiscale space, we construct the velocity multiscale space by applying constrained energy minimization. The convergence of the proposed method is proved. In particular, we prove that the convergence of the method depends only on the coarse grid size, and is independent of the heterogeneities and contrast of the diffusion coefficient. Four typical types of permeability fields are exploited in the numerical simulations, and the results indicate that our proposed method works well and gives efficient and accurate numerical solutions.
mathematics
Photonic pseudospin-1/2 systems, which exhibit Dirac cone dispersion at Brillouin zone corners in analogy to graphene, have been extensively studied in recent years. However, it is known that a linear band crossing of two bands cannot emerge at the center of the Brillouin zone in a two-dimensional photonic system respecting time reversal symmetry. Using a square lattice of elliptical magneto-optical cylinders, we construct an unpaired Dirac point at the Brillouin zone center as the intersection of the second and third bands, corresponding to the monopole and dipole excitations. Effective medium theory can be applied to the two linearly crossed bands, with the effective constitutive parameters numerically calculated using the boundary effective medium approach. It is shown that only the effective permittivity approaches zero while the determinant of the nonzero effective permeability vanishes at the Dirac point frequency, showing a different behavior from the double-zero index metamaterials obtained from the pseudospin-1 triply degenerate points for time reversal symmetric systems. Exotic phenomena, such as Klein tunneling and Zitterbewegung, in the pseudospin-1/2 system can be well understood from the effective medium description. When the Dirac point is lifted, the edge state dispersion near the $\Gamma$ point can be accurately predicted by the effective constitutive parameters. We further realize magneto-optical complex conjugate metamaterials over a wide frequency range by introducing a particular type of non-Hermitian perturbation which makes the two linear bands coalesce to form exceptional points at real frequencies.
physics
We have recently proposed a framework for inflation driven by supersymmetry breaking with the inflaton being a superpartner of the goldstino, which avoids the main problems of supergravity inflation, allowing for: naturally small slow-roll parameters, small field initial conditions, absence of a (pseudo)scalar companion of the inflaton, and a nearby minimum with tuneable cosmological constant. It contains a chiral multiplet charged under a gauged R-symmetry which is restored at the maximum of the scalar potential, with a plateau where inflation takes place. The effective field theory relies on two phenomenological parameters corresponding to corrections to the K\"ahler potential up to second order around the origin. The first guarantees the maximum at the origin and the second allows the tuning of the vacuum energy between the F- and D-term contributions. Here, we provide a microscopic model leading to the required effective theory. It is a Fayet-Iliopoulos model with two chiral multiplets charged under a second U(1) R-symmetry coupled to supergravity. In the Brout-Englert-Higgs phase of this U(1), the gauge field becomes massive and can be integrated out in the limit of a small supersymmetry breaking scale. In this work, we perform this integration and show that there is a region of parameter space where the effective supergravity realises our proposal of small field inflation from supersymmetry breaking, consistently with observations and with a minimum of tuneable energy that can describe the present phase of our Universe.
high energy physics theory
It has been proposed that Type Ia supernovae (SNe Ia) that are normal in their spectra and brightness can be explained by a double detonation that ignites first in a helium shell on the surface of the white dwarf (WD). This proposition is supported by the satisfactory match between simulated explosions of sub-Chandrasekhar-mass WDs with no surface He layer and observations of normal SNe Ia. However, previous calculations of He-ignited double detonations have required either artificial removal of the He shell ashes or extreme enrichment of the surface He layer in order to obtain normal SNe Ia. Here we demonstrate, for the first time in multi-dimensional full-star simulations, that a thin, modestly enriched He layer will lead to a SN Ia that is normal in its brightness and spectra. This strengthens the case for double detonations as a major contributing channel to the population of normal SNe Ia.
astrophysics
Recently, GM Sofi & SA Shabir [arXiv: 1903.01850v2 [math.GM] 6 Mar 2019] made an attempt to prove Sendov's conjecture. Unfortunately, the proof is not correct. In this note, we discuss the fallacy in the proof.
mathematics
We present the polynomiality sum rules for all leading-twist quark and gluon generalized parton distributions (GPDs) of spin-1 targets such as the deuteron nucleus. The sum rules connect the Mellin moments of these GPDs to polynomials in skewness parameter $\xi$, which contain generalized form factors (GFFs) as their coefficients. The decompositions of local currents in terms of generalized form factors for spin-1 targets are obtained as a byproduct of this derivation.
high energy physics phenomenology
The properties of primordial curvature perturbations on small scales are still unknown, while those on large scales have been well probed by observations of the cosmic microwave background anisotropies and the large-scale structure. In this paper, we propose a method for reconstructing primordial curvature perturbations on small scales through the merger rate of binary primordial black holes, which could form from large primordial curvature perturbations on small scales.
astrophysics
Our knowledge and understanding of the Universe is mainly based on observations of electromagnetic radiation in a wide range of wavelengths. Only during the past two decades have new kinds of detectors been developed, exploiting other forms of cosmic probes: individual photons with energy above the GeV, charged particles and antiparticles, neutrinos and, finally, gravitational waves. These new ``telescopes'' have led to unexpected breakthroughs. Years 2016 and 2017 have seen the dawn of astrophysics and cosmology with gravitational waves, awarded the 2017 Nobel Prize. The events GW150914 (the first black hole-black hole merger) and GW170817 (the coalescence of two neutron stars, producing a short gamma-ray burst and followed up by more than 70 observatories on all continents and in space) represent real milestones in science that every physicist (senior or in training) should appreciate. In this document, after an accessible discussion on the generation and propagation of GWs, the key features of the observable quantities (the strain, the GW frequency $\nu_{gw}$, and $\dot \nu_{gw}$) of GW150914 and GW170817 are discussed using Newtonian physics, dimensional analysis and analogies with electromagnetic waves. The objective is to show how astrophysical quantities (the initial and final masses of the merging objects, the energy loss, the distance, their spin) are derived from observables. The results from the fully general-relativistic analysis published in the two discovery papers are compared with the output of our simple treatment. Then, some of the outcomes of GW observations are discussed in terms of multimessenger astrophysics.
astrophysics
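As a minimal sketch of the Newtonian treatment described in the abstract above: the chirp mass follows from the observables $\nu_{gw}$ and $\dot\nu_{gw}$ by inverting the quadrupole-formula frequency evolution. The constants are standard; the input numbers below are only order-of-magnitude values for GW150914, not fitted results.

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
M_SUN = 1.989e30   # solar mass [kg]

def chirp_mass(f_gw, fdot_gw):
    """Chirp mass (solar masses) from the GW frequency and its derivative.

    Inverts the Newtonian quadrupole relation
        fdot = (96/5) * pi**(8/3) * (G*Mc/c**3)**(5/3) * f**(11/3).
    """
    mc = (C**3 / G) * ((5.0 / 96.0) * math.pi ** (-8.0 / 3.0)
                       * f_gw ** (-11.0 / 3.0) * fdot_gw) ** (3.0 / 5.0)
    return mc / M_SUN

# Illustrative values of the order seen late in the GW150914 inspiral
# (f ~ 75 Hz sweeping up at ~ 1.3e3 Hz/s); prints roughly 30.
print(f"Mc ~ {chirp_mass(75.0, 1.3e3):.1f} solar masses")
```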
This work considers the remarkable suggestion that the three families of quarks and leptons may be unified, together with the Higgs and gauge fields of the Standard Model (SM), into a single "particle", namely the $\textbf{248}$ vector superfield of a ten-dimensional $E_8$ super Yang Mills (SYM) theory. Towards a realistic model along these lines, a class of orbifoldings based on $T^6/(\mathbb{Z}_N\times \mathbb{Z}_M)$ are proposed and explored, that can in principle break $E_8$ SYM down to the minimal supersymmetric standard model (MSSM), embedded in a larger group such as $E_6$, $SO(10)$ or $SU(5)$, together with other gauge group factors which can be broken by Wilson lines. A realistic model based on $T^6/(\mathbb{Z}_6\times \mathbb{Z}_2)$ is presented. The orbifold breaks $E_8$ SYM down to a Pati-Salam gauge group in 4d, together with other gauge groups, which are further broken to the SM by Wilson lines in the right-handed sneutrino directions, yielding proto-realistic fermion mass matrices, and experimental signals associated with a low Pati-Salam gauge group breaking scale.
high energy physics phenomenology
We implement a full nonlinear optimization method to fit continuum states with complex Gaussians. The application to a set of regular scattering Coulomb functions allows us to validate the numerical feasibility, to explore the range of convergence of the approach, and to demonstrate the relative superiority of complex over real Gaussian expansions. We then consider the photoionization of atomic hydrogen, and ionization by electron impact in the first Born approximation, for which the closed form cross sections serve as a solid benchmark. Using the proposed complex Gaussian representation of the continuum combined with a real Gaussian expansion for the initial bound state, all necessary matrix elements within a partial wave approach become analytical. The successful numerical comparison illustrates that the proposed all-Gaussian approach works efficiently for ionization processes of one-center targets.
physics
The prolific rise in autonomous systems has led to questions regarding their safe instantiation in real-world scenarios. Failures in safety-critical contexts such as human-robot interactions or even autonomous driving can ultimately lead to loss of life. In this context, this paper aims to provide a method by which one can algorithmically test and evaluate an autonomous system. Given a black-box autonomous system with some operational specifications, we construct a minimax problem based on control barrier functions to generate a family of test parameters designed to optimally evaluate whether the system can satisfy the specifications. To illustrate our results, we utilize the Robotarium as a case study for an autonomous system that claims to satisfy waypoint navigation and obstacle avoidance simultaneously. We demonstrate that the proposed test synthesis framework systematically finds those sequences of events (tests) that identify points of system failure.
electrical engineering and systems science
In this paper we study the problem of constructing spanners in a local manner, specifically in the Local Computation Model proposed by Rubinfeld et al. (ICS 2011). We provide an LCA for constructing $(2r-1)$-spanners with $\widetilde{O}(n^{1+1/r})$ edges and probe complexity of $\widetilde{O}(n^{1-1/r})$ for $r \in \{2,3\}$, where $n$ denotes the number of vertices in the input graph. Up to polylogarithmic factors, in both cases the stretch factor is optimal (for the respective number of edges). In addition, our probe complexity for $r=2$, i.e., for constructing $3$-spanners, is optimal up to polylogarithmic factors. Our result improves over the probe complexity of Parter et al. (ITCS 2019), which is $\widetilde{O}(n^{1-1/2r})$ for $r \in \{2,3\}$. For general $k\geq 1$, we provide an LCA for constructing $O(k^2)$-spanners with $\tilde{O}(n^{1+1/k})$ edges in expectation whose probe complexity is $O(n^{2/3}\Delta^2)$. This improves over the probe complexity of Parter et al., which is $O(n^{2/3}\Delta^4)$.
computer science
M-dwarf stars are prime targets for exoplanet searches because of their close proximity and favorable properties for both planet detection and characterization. However, the potential habitability and atmospheric characterization of these exoplanetary systems depend critically on the history of high-energy stellar radiation from X-rays to NUV, which drives atmospheric mass loss and photochemistry in the planetary atmospheres. With the Far Ultraviolet M-dwarf Evolution Survey (FUMES) we have assessed the evolution of the FUV radiation, specifically 8 prominent emission lines, including Ly$\alpha$, of M-dwarf stars with stellar rotation period and age. We demonstrate tight power-law correlations between the spectroscopic FUV features, and measure the intrinsic scatter of the quiescent FUV emissions. The luminosity evolution with rotation of these spectroscopic features is well described by a broken power law, saturated for fast rotators, and decaying with increasing Rossby number, with a typical power-law slope of $-2$, although likely shallower for Ly$\alpha$. Our regression fits enable FUV emission line luminosity estimates relative to bolometric from known rotation periods to within $\sim$0.3 dex, across 8 distinct UV emission lines, with possible trends in the fit parameters as a function of source layer in the stellar atmosphere. Our detailed analysis of the UV luminosity evolution with age further shows that habitable zone planets orbiting lower-mass stars experience much greater high-energy radiative exposure relative to the same planets orbiting more massive hosts. Around early-to-mid M-dwarfs these exoplanets, at field ages, accumulate up to 10-20$\times$ more EUV energy relative to modern Earth. Moreover, the bulk of this UV exposure likely takes place within the first Gyr of the stellar lifetime.
astrophysics
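A minimal sketch of the broken power-law rotation-activity relation described in the abstract above. The saturation level and Rossby threshold used here are placeholder values, not the FUMES fit parameters; only the saturated-then-decaying shape with slope $-2$ comes from the text.

```python
import numpy as np

def fuv_to_bol(rossby, ro_sat=0.1, sat_level=1e-4, slope=-2.0):
    """Broken power law for L_FUV / L_bol versus Rossby number:
    saturated below ro_sat, decaying as rossby**slope above it."""
    rossby = np.asarray(rossby, dtype=float)
    return np.where(rossby < ro_sat, sat_level,
                    sat_level * (rossby / ro_sat) ** slope)

print(fuv_to_bol([0.05, 0.1, 0.5, 1.0]))
```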
In this work we used the SVD to reconstruct pure epileptic transitory activities for the definition of the accurate sources responsible for the excessive discharge implied by the transitory hallmark activities in pharmaco-resistant epilepsy. We first applied an automatic detection of the local peaks of the transitory activities by thresholding the energy distribution; we then performed the SVD on the detectable simulated transitory events (300 ms windows) to recover only the transitory activities not contaminated by gamma oscillations, and we calculated the precision of the SVD to evaluate the robustness of this technique in separating transitory activities from gamma oscillations. In a second phase we applied the SVD on small windows of transitory activities from real MEG signals and obtained dipolar topographic maps for the pure transitory activities. However, the execution time for recovering pure epileptic transitory activities among the gamma oscillations was very long, and we therefore propose to integrate the SVD on a dynamic partial reconfiguration. This integration proved very useful; in fact, we obtained an SVD execution about 27 times faster.
electrical engineering and systems science
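The core SVD step described above can be sketched as a truncated decomposition of a short multichannel window. The rank, window length, and toy data below are illustrative assumptions, not the exact pipeline of the study.

```python
import numpy as np

def svd_denoise(window, rank):
    """Keep the top `rank` SVD components of a channels x samples window.

    Truncating the SVD separates a strong, spatially coherent transient
    from weaker gamma-band activity spread across components.
    """
    u, s, vt = np.linalg.svd(window, full_matrices=False)
    s_trunc = np.zeros_like(s)
    s_trunc[:rank] = s[:rank]
    return (u * s_trunc) @ vt

# Toy usage: 30 channels, 300 samples, rank-1 transient plus noise
rng = np.random.default_rng(0)
topo = rng.normal(size=(30, 1))                 # spatial topography
wave = np.sin(2 * np.pi * 8 * np.linspace(0, 0.3, 300))[None, :]
x = topo @ wave + 0.3 * rng.normal(size=(30, 300))
clean = svd_denoise(x, rank=1)
```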
Wide-field radio-interferometer surveys at high resolution, relatively low frequency and with very high dynamic-range requirements play a prominent role in current and forthcoming radio astronomy programmes, e.g., to search for signatures of the reionisation of neutral hydrogen. One of the challenges in achieving the required high dynamic range in this regime is that diffraction of the wavefront between elements of the interferometer is significantly different in different observed directions (the 'w-effect'). We propose a new algorithm to account for this effect, which improves the widely-used W-stacking method, so that the achievable dynamic range exceeds $10^6$:1 at a practical computational cost. The proposed algorithm has been implemented in the WSCLEAN and NIFTY packages by their respective authors.
astrophysics
Out-of-time-order correlators (OTOCs), which are being explored as a measure of quantum chaos, are studied here in a coupled bipartite system. Each of the subsystems can be chaotic or regular, leading to very different OTOC growths both before and after the scrambling or Ehrenfest time. We present preliminary results on weakly coupled subsystems which have very different Lyapunov exponents. We also review the case when both subsystems are strongly chaotic, where a random matrix model can be pressed into service to derive an exponential relaxation to saturation.
quantum physics
We present the results from an Australian Square Kilometre Array Pathfinder search for radio variables on timescales of hours. We conducted an untargeted search over a 30 deg$^2$ field, with multiple 10-hour observations separated by days to months, at a central frequency of 945 MHz. We discovered six rapid scintillators from 15-minute model-subtracted images with sensitivity of $\sim 200\,\mu$Jy/beam; two of them are extreme intra-hour variables with modulation indices up to $\sim 40\%$ and timescales as short as tens of minutes. Five of the variables are in a linear arrangement on the sky with angular width $\sim 1$ arcmin and length $\sim 2$ degrees, revealing the existence of a huge plasma filament in front of them. We derived kinematic models of this plasma from the annual modulation of the scintillation rate of our sources, and we estimated its likely physical properties: a distance of $\sim 4$ pc and length of $\sim 0.1$ pc. The characteristics we observe for the scattering screen are incompatible with published suggestions for the origin of intra-hour variability leading us to propose a new picture in which the underlying phenomenon is a cold tidal stream. This is the first time that multiple scintillators have been detected behind the same plasma screen, giving direct insight into the geometry of the scattering medium responsible for enhanced scintillation.
astrophysics
We present a comprehensive extension of the latent position network model known as the random dot product graph to accommodate multiple graphs -- both undirected and directed -- which share a common subset of nodes, and propose a method for jointly embedding the associated adjacency matrices, or submatrices thereof, into a suitable latent space. Theoretical results concerning the asymptotic behaviour of the node representations thus obtained are established, showing that after the application of a linear transformation these converge uniformly in the Euclidean norm to the latent positions with Gaussian error. Within this framework, we present a generalisation of the stochastic block model to a number of different multiple graph settings, and demonstrate the effectiveness of our joint embedding method through several statistical inference tasks in which we achieve comparable or better results than rival spectral methods. Empirical improvements in link prediction over single graph embeddings are exhibited in a cyber-security example.
statistics
As an attempt to tackle the low-data-rate issue of conventional LoRa systems, we propose two novel frequency-bin-index (FBI) LoRa schemes. In scheme I, the indices of starting frequency bins (SFBs) are utilized to carry the information bits. To facilitate the actual implementation, the SFBs of each LoRa signal are divided into several groups prior to the modulation process in the proposed FBI-LoRa system. To further improve the system flexibility, we formulate a generalized modulation scheme and propose scheme II by treating the SFB groups as an additional type of transmission entity. In scheme II, the combination of SFB indices and that of SFB group indices are both exploited to carry the information bits. We derive theoretical expressions for the bit error rate (BER) and throughput of the proposed FBI-LoRa system with the two modulation schemes over additive white Gaussian noise (AWGN) and Rayleigh fading channels. Theoretical and simulation results show that the proposed FBI-LoRa schemes can significantly increase the transmission throughput compared with existing LoRa systems at the expense of a slight loss in BER performance. Thanks to these appealing advantages, the proposed FBI-LoRa system is a promising alternative for high-data-rate Internet of Things (IoT) applications.
electrical engineering and systems science
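A toy sketch of the scheme I mapping described above, in which information bits select one starting frequency bin per group. The group count, group size, and bit-to-index rule are assumptions chosen for illustration, not the paper's parameters.

```python
import numpy as np

def bits_to_sfb_indices(bits, n_groups=4, bins_per_group=32):
    """Map information bits to one starting-frequency-bin (SFB) per group.

    The available SFBs are split into n_groups groups of bins_per_group
    bins; each group's selected bin index carries log2(bins_per_group)
    bits of payload.
    """
    k = int(np.log2(bins_per_group))          # bits carried per group
    assert len(bits) == n_groups * k
    indices = []
    for g in range(n_groups):
        chunk = bits[g * k:(g + 1) * k]
        idx = int("".join(map(str, chunk)), 2)
        indices.append(g * bins_per_group + idx)  # absolute SFB index
    return indices

print(bits_to_sfb_indices([1, 0, 1, 1, 0] * 4))
```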
Multi-band superconductivity in topological semimetals is a paradigm of unconventional superconductivity. The exotic gap structures and topological properties of such systems have motivated searches for material realizations and applications. In this paper, we focus on triple point fermions, a new type of band crossing, and we claim that their superconductivity uniquely stabilizes spin-triplet pairing. Unlike in conventional superconductors and other multi-band superconductors, such triplet superconductivity is a novel phenomenon of triple point fermions, where spin-singlet pairing is strictly forbidden in the on-site interaction due to Fermi statistics. We find that two distinct triplet superconductors, characterized by the presence and absence of time-reversal symmetry, are allowed, which in principle can be controlled by tuning the chemical potential. For the triplet superconductor with time-reversal symmetry, we show that topologically protected nodal lines are realized. In contrast, for the time-reversal-broken case, topologically protected Bogoliubov Fermi surfaces emerge. Our theoretical study provides new guidance for the search for triplet superconductivity and its exotic implications.
condensed matter
If F is a type-definable family of commensurable subsets, subgroups or sub-vector spaces in a metric structure, then there is an invariant subset, subgroup or sub-vector space commensurable with F. This in particular applies to type-definable or hyper-definable objects in a classical first-order structure.
mathematics
The Ryu-Takayanagi (RT) formula has been a key ingredient in our understanding of holography. Recent work on $T\bar{T}$ deformations has also boosted our understanding of holography away from the conformal boundary of AdS. In this short note, we aim to refine some recent work demonstrating the success of the RT formula in $T\bar{T}$-deformed theories. We emphasize general arguments that justify the use of the RT formula in general holographic theories that obey a GKPW-like dictionary. In doing so, we clarify subtleties related to holographic counterterms and discuss the implications for holography in general spacetimes.
high energy physics theory
We derive and analyze the conformal Ward identities (CWI's) of a tensor 4-point function of a generic CFT in momentum space. The correlator involves the stress-energy tensor $T$ and three scalar operators $O$ ($TOOO$). We extend the reconstruction method for tensor correlators from 3- to 4-point functions, starting from the transverse traceless sector of the $TOOO$. We derive the structure of the corresponding CWI's in two different sets of variables, relevant for the analysis of the 1-to-3 (1 graviton $\to$ 3 scalars) and 2-to-2 (graviton + scalar $\to$ two scalars) scattering processes. The equations are all expressed in terms of a single form factor. In both cases, we discuss the structure of the equations and their possible behaviors in various asymptotic limits of the external invariants. A comparative analysis of the systems of equations for the $TOOO$ and those for the $OOOO$, both in the general (conformal) and dual-conformal/conformal (dcc) cases, is presented. We show that in all the cases the Lauricella functions are homogeneous solutions of such systems of equations, also described as parametric 4K integrals of modified Bessel functions.
high energy physics theory
We present two-loop results for the quark condensate in an external magnetic field within chiral perturbation theory using coordinate space techniques. At finite temperature, we explore the impact of the magnetic field on the pion-pion interaction in the quark condensate for arbitrary pion masses and derive the correct weak magnetic field expansion in the chiral limit. At zero temperature, we provide the complete two-loop representation for the vacuum energy density and the quark condensate.
high energy physics phenomenology
In the quest for dynamic multimodal probing of a material's structure and functionality, it is critical to be able to quantify the chemical state on the atomic scale and nanoscale using element-specific, electronically and structurally sensitive tools such as electron energy loss spectroscopy (EELS). Ultrafast EELS, with combined energy, time, and spatial resolution in a transmission electron microscope, has recently enabled transformative studies of photoexcited nanostructure evolution and mapping of evanescent electromagnetic fields. This article aims to describe the state-of-the-art experimental techniques in this emerging field and its major uses and future applications.
physics
Spatio-temporal data sets are rapidly growing in size. For example, environmental variables are measured with ever-higher resolution by increasing numbers of automated sensors mounted on satellites and aircraft. Using such data, which are typically noisy and incomplete, the goal is to obtain complete maps of the spatio-temporal process, together with proper uncertainty quantification. We focus here on real-time filtering inference in linear Gaussian state-space models. At each time point, the state is a spatial field evaluated on a very large spatial grid, making exact inference using the Kalman filter computationally infeasible. Instead, we propose a multi-resolution filter (MRF), a highly scalable and fully probabilistic filtering method that resolves spatial features at all scales. We prove that the MRF matrices exhibit a particular block-sparse multi-resolution structure that is preserved under filtering operations through time. We also discuss inference on time-varying parameters using an approximate Rao-Blackwellized particle filter, in which the integrated likelihood is computed using the MRF. We compare the MRF to existing approaches in a simulation study and a real satellite-data application.
statistics
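For orientation, the sketch below shows the exact Kalman filter step that becomes infeasible when the state is a large spatial grid, and that the multi-resolution filter approximates. All matrices are generic placeholders, not the paper's model.

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One forecast/update cycle of the exact Kalman filter.

    m, P: prior state mean and covariance; y: new observations;
    F, Q: state evolution operator and its noise covariance;
    H, R: observation operator and noise covariance. Storing and
    updating the dense n x n covariance P is what fails to scale
    on large spatial grids.
    """
    m_f = F @ m                        # forecast mean
    P_f = F @ P @ F.T + Q              # forecast covariance
    S = H @ P_f @ H.T + R              # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)   # Kalman gain
    m_u = m_f + K @ (y - H @ m_f)      # updated mean
    P_u = (np.eye(len(m)) - K @ H) @ P_f
    return m_u, P_u
```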
This paper is devoted to the resolution of singularities of holomorphic vector fields and of one-dimensional holomorphic foliations in dimension 3 and it has two main objectives. First, from the general perspective of one-dimensional foliations, we build upon the work of Cano-Roche-Spivakovsky and essentially complete it. As a consequence, we obtain a general resolution theorem comparable to the resolution theorem of McQuillan-Panazzolo but proved by means of rather different methods. The second objective of this paper consists of looking at a special class of singularities of foliations containing, in particular, all singularities of complete holomorphic vector fields on complex manifolds of dimension 3. We then prove that for this class of holomorphic foliations, there holds a much sharper resolution theorem. This second result was the initial motivation of this paper and it relies on the combination of the previous resolution theorems for (general) foliations with some classical material on asymptotic expansions for solutions of differential equations.
mathematics
We present a number of examples to illustrate the use of small quotient dessins as substitutes for their often much larger and more complicated Galois (minimal regular) covers. In doing so we employ several useful group-theoretic techniques, such as the Frobenius character formula for counting triples in a finite group, pointing out some common traps and misconceptions associated with them. Although our examples are all chosen from Hurwitz curves and groups, they are relevant to dessins of any type.
mathematics
The number of active and non-active satellites in Earth orbit has dramatically increased in recent decades, requiring the development of novel surveillance techniques to monitor and track them. In this paper, we build upon previous non-coherent passive radar space surveillance demonstrations undertaken with the Murchison Widefield Array (MWA). We develop the concept of the Dynamic Signal to Noise Ratio Spectrum (DSNRS) in order to isolate signals of interest (reflections of FM transmissions from objects in orbit) and efficiently differentiate them from direct-path reception events. We detect and track Alouette-2, ALOS, UKube-1, the International Space Station, and Duchifat-1 in this manner. We also identified out-of-band transmissions from Duchifat-1 and UKube-1 using these techniques, demonstrating the MWA's capability to look for spurious transmissions from satellites. We identify an offset from the locations predicted by the cataloged orbital parameters for some of the satellites, demonstrating the potential of using the MWA for satellite catalog maintenance. These results demonstrate the capability of the MWA for Space Situational Awareness, and we describe future work in this area.
astrophysics
Infrared absorption spectroscopy of the endohedral water molecule in a solid mixture of H$_2$O@C$_{60}$ and C$_{60}$ was carried out at liquid helium temperature. From the evolution of the spectra during the ortho-para conversion process, the spectral lines were identified as para- and ortho-water transitions. Eight vibrational transitions with rotational side peaks were observed in the mid-infrared: $\omega_1$, $\omega_2$, $\omega_3$, $2\omega_1$, $2\omega_2$, $\omega_1 +\omega_3$, $\omega_2 +\omega_3$, and $2\omega_2+\omega_3$. The vibrational frequencies $\omega_2$ and 2$\omega_2$ are lower by 1.6\% and the rest by 2.4\%, as compared to free H$_2$O. A model consisting of a rovibrational Hamiltonian with the dipole and quadrupole moments of water interacting with the crystal field was used to fit the infrared absorption spectra. The electric quadrupole interaction with the crystal field lifts the degeneracy of the rotational levels. The finite amplitudes of the pure $v_1$ and $v_2$ vibrational transitions are consistent with the interaction of the water molecule dipole moment with a lattice-induced electric field. The permanent dipole moment of the encapsulated H$_2$O is found to be $0.5\pm 0.1$ D, as determined from the far-infrared rotational line intensities. The translational mode of the quantized center-of-mass motion of H$_2$O in the molecular cage of C$_{60}$ was observed at 110 cm$^{-1}$ (13.6 meV).
physics
Running quantum programs is fraught with challenges on today's noisy intermediate-scale quantum (NISQ) devices. Many of these challenges originate from the error characteristics that stem from rapid decoherence and noise during measurement, qubit connections, crosstalk, the qubits themselves, and transformations of qubit state via gates. Not only are qubits not "created equal", but their noise level also changes over time. IBM is said to calibrate their quantum systems once per day and reports noise levels (errors) at the time of such calibration. This information is subsequently used to map circuits to higher-quality qubits and connections up to the next calibration point. This work provides evidence that there is room for improvement over this daily calibration cycle. It contributes a technique to measure noise levels (errors) related to qubits immediately before executing one or more sensitive circuits and shows that just-in-time noise measurements benefit late physical qubit mappings. With this just-in-time recalibrated transpilation, the fidelity of results is improved over IBM's default mappings, which only use the daily calibrations. The framework assesses two major sources of noise, namely readout errors (measurement errors) and two-qubit gate/connection errors. Experiments indicate that the accuracy of circuit results improves by 3-304% on average and up to 400% with on-the-fly circuit mappings based on error measurements taken just prior to application execution.
quantum physics
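A minimal sketch of the idea: use just-in-time error measurements to pick the qubit connection with the lowest combined readout and two-qubit gate error. The additive cost model is an assumption for illustration, not the paper's exact mapping heuristic.

```python
def pick_best_pair(readout_err, cnot_err):
    """Pick the two-qubit connection with the lowest combined error.

    readout_err maps qubit -> measured readout error; cnot_err maps
    (q1, q2) -> measured two-qubit gate error, both assumed to come
    from probe circuits run immediately before the workload.
    """
    def cost(pair):
        q1, q2 = pair
        return cnot_err[pair] + readout_err[q1] + readout_err[q2]
    return min(cnot_err, key=cost)

# Toy just-in-time calibration snapshot
readout = {0: 0.021, 1: 0.034, 2: 0.015, 3: 0.052}
cnot = {(0, 1): 0.011, (1, 2): 0.008, (2, 3): 0.019}
print(pick_best_pair(readout, cnot))   # -> (1, 2)
```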
Fluorescence-detected Fourier transform (FT) spectroscopy is a technique in which the relative paths of an optical interferometer are controlled to excite a material sample, and the ensuing fluorescence is detected as a function of the interferometer path delay and relative phase. A common approach to enhance the signal-to-noise ratio in these experiments is to apply a continuous phase sweep to the relative optical path, and to detect the resulting modulated fluorescence using a phase-sensitive lock-in amplifier. In many important situations, the fluorescence signal is too weak to be measured using a lock-in amplifier, so that photon counting techniques are preferred. Here we introduce an approach to low-signal fluorescence-detected FT spectroscopy, in which individual photon counts are assigned to a modulated interferometer phase ('phase-tagged photon counting,' or PTPC), and the resulting data are processed to construct optical spectra. We studied the fluorescence signals of a molecular sample excited resonantly by a pulsed coherent laser over a range of photon flux and visibility levels. We compare the performance of PTPC to standard lock-in detection methods and establish the range of signal parameters over which meaningful measurements can be carried out. We find that PTPC generally outperforms the lock-in detection method, with the dominant source of measurement uncertainty being associated with the statistics of the finite number of samples of the photon detection rate.
physics
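A small sketch of the phase-tagging idea described above, under the assumption of a linear phase sweep: each photon's arrival time is converted to an instantaneous interferometer phase, and the counts are demodulated photon by photon rather than through a lock-in.

```python
import numpy as np

def ptpc_demodulate(photon_times, mod_freq):
    """Demodulate phase-tagged photon counts at the sweep frequency.

    Each detected photon is tagged with phase phi = 2*pi*mod_freq*t;
    summing exp(-1j*phi) over arrival times plays the role of the
    lock-in, one photon at a time.
    """
    phases = 2 * np.pi * mod_freq * np.asarray(photon_times)
    return np.sum(np.exp(-1j * phases)) / len(photon_times)

# Toy check: thinned Poisson photons with rate ~ 1 + V*cos(phi), V = 0.4
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 200_000))      # candidate arrival times
phi = 2 * np.pi * 5.0 * t                      # 5 Hz phase sweep
keep = rng.uniform(0, 2, t.size) < 1 + 0.4 * np.cos(phi)
z = ptpc_demodulate(t[keep], 5.0)
print(abs(z) * 2)   # ~ 0.4, the modulated visibility
```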
In this study, we develop a noncontact measurement system for monitoring the respiration of multiple people using millimeter-wave ultrawideband array radar. To separate the radar echoes of multiple people, conventional techniques cluster the radar echoes in the time, frequency, or spatial domain. Focusing on the measurement of the respiratory signals of multiple people, we propose a method called respiratory-space clustering in which individual differences in the respiratory rate are effectively exploited to accurately resolve the echoes from human bodies. The proposed respiratory-space clustering can separate echoes, even when people are located close to each other. In addition, the proposed method can be applied when the number of targets is unknown and can accurately estimate the number of people and their positions. We perform measurements under two scenarios involving five and seven participants to verify the performance of the proposed method, and quantitatively evaluate the estimation accuracy of the number of people and the respiratory intervals. The experimental results show that the root-mean-square error in estimating the respiratory interval is 172 ms on an average. The proposed method improves the estimation accuracy of the number of people by 85.0% compared to the conventional method, demonstrating the high-precision measurement of the respiration of several people.
electrical engineering and systems science
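A hedged sketch of the underlying idea: append the estimated respiratory rate to the spatial features so that echoes from people standing close together separate. DBSCAN and the feature weighting are illustrative stand-ins, not the estimator used in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def respiratory_space_cluster(xy, resp_rate, eps=0.5, rate_weight=0.2):
    """Cluster radar echoes jointly in position and respiratory rate.

    xy: (n, 2) echo positions in meters; resp_rate: per-echo rate in
    breaths/min. rate_weight converts rate differences to a scale
    comparable with position differences (an assumed tuning choice).
    """
    features = np.column_stack([xy, rate_weight * np.asarray(resp_rate)])
    return DBSCAN(eps=eps, min_samples=3).fit_predict(features)

# Toy usage: two people ~0.4 m apart with distinct breathing rates
rng = np.random.default_rng(2)
xy = np.vstack([rng.normal([0.0, 1.0], 0.05, (20, 2)),
                rng.normal([0.4, 1.0], 0.05, (20, 2))])
rate = np.concatenate([rng.normal(15, 0.3, 20), rng.normal(19, 0.3, 20)])
print(respiratory_space_cluster(xy, rate))   # two clusters, 0 and 1
```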
Robins (1997) introduced marginal structural models (MSMs), a general class of counterfactual models for the joint effects of time-varying treatment regimes in complex longitudinal studies subject to time-varying confounding. In his work, identification of MSM parameters is established under a sequential randomization assumption (SRA), which rules out unmeasured confounding of treatment assignment over time. We consider sufficient conditions for identification of the parameters of a subclass, Marginal Structural Mean Models (MSMMs), when sequential randomization fails to hold due to unmeasured confounding, using instead a time-varying instrumental variable. Our identification conditions require that no unobserved confounder predicts compliance type for the time-varying treatment. We describe a simple weighted estimator and examine its finite-sample properties in a simulation study. We apply the proposed estimator to examine the effect of delivery hospital on neonatal survival probability.
statistics
This paper considers the use of fully homomorphic encryption for the realisation of distributed formation control of multi-agent systems via an edge computer. In our proposed framework, the distributed control computation in the edge computer uses only encrypted data, without the need for a reset mechanism that is commonly required to avoid error accumulation. Simulation results show that, despite the use of encrypted data in the controller and the errors introduced by the quantization process prior to encryption, the formation is able to converge to the desired shape. The proposed architecture offers insight into the mechanism for realising distributed control computation in an edge/cloud computer while preserving the privacy of the local information coming from each agent.
electrical engineering and systems science
In this work, we show in pedagogical detail that the most singular contributions to the slow part of the asymptotic density-density correlation function of Luttinger liquids with fermions interacting mutually with only short-range forward scattering, and also with localised scalar static impurities (where backward scattering takes place), have a compact analytical expression in terms of simple functions that have second-order poles and involve only the scale-independent bare transmission and reflection coefficients. This proof uses conventional fermionic perturbation theory resummed to all orders, together with the idea that for such systems, the (connected) moments of the density operator all vanish beyond the second order - the odd ones vanish identically and the higher-order even moments are less singular than the second-order moment, which is the only one included. This important result is the crucial input to the recently introduced "Non-Chiral Bosonization Technique" (NCBT) used to study such systems. The results of the NCBT cannot be easily compared with those obtained using conventional bosonization, as the former only extracts the most singular parts of the correlation functions, albeit for arbitrary impurity strengths and mutual interactions. The latter ambitiously attempts to study all the parts of the asymptotic correlation functions and is thereby unable to find simple analytical expressions, being forced to operate in the vicinity of the homogeneous system or the half line (the opposite extreme). For a fully homogeneous system or its antithesis, viz. the half line, all the higher-order connected moments of the density vanish identically, which means the results of chiral bosonization and the NCBT ought to be the same, and indeed they are.
condensed matter
Weak anti-localization offers an experimental tool to address spin--orbit coupling of two-dimensional oxide surfaces and interfaces via magneto-transport. To overcome the shortcomings of the formulation for single-band spin-1/2 electrons, we consider an effective three-band model that allows a decomposition into a pseudo-spin representation $1/2+3/2$. Whereas the well-established spin-1/2 transport signature results from the singlet and triplet sectors in the Cooperon equation, a new structure originates from the quintet and septet sectors generated by the spin $3/2\times 3/2$ representation.
condensed matter
Quasi parton distributions (quasi-PDFs) are currently under intense investigation. Quasi-PDFs are defined through spatial correlation functions and are thus accessible in lattice QCD. They gradually approach their corresponding standard (light-cone) PDFs as the hadron momentum increases. Recently, we investigated the concept of quasi-distributions in the case of generalized parton distributions (GPDs) by calculating the twist-2 vector GPDs in the scalar diquark spectator model. In the present work, we extend this study to the remaining six leading-twist GPDs. For large hadron momenta, all quasi-GPDs analytically reduce to the corresponding standard GPDs. We also study the numerical mismatch between quasi-GPDs and standard GPDs for finite hadron momenta. Furthermore, we present results for quasi-PDFs, and explore higher-twist effects associated with the parton momentum and the longitudinal momentum transfer to the target. We study the dependence of our results on the model parameters as well as the type of diquark. Finally, we discuss the lowest moments of quasi distributions, and elaborate on the relation between quasi-GPDs and the total angular momentum of quarks. The moment analysis suggests a preferred definition of several quasi-distributions.
high energy physics phenomenology
It is quite common that the structure of a time series changes abruptly. Identifying these change points and describing the model structure in the segments between these change points is of interest. In this paper, time series data is modelled assuming each segment is an autoregressive time series with possibly different autoregressive parameters. This is achieved using two main steps. The first step is to use a likelihood ratio scan based estimation technique to identify these potential change points to segment the time series. Once these potential change points are identified, modified parametric spectral discrimination tests are used to validate the proposed segments. A numerical study is conducted to demonstrate the performance of the proposed method across various scenarios and compared against other contemporary techniques.
statistics
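A simplified version of the likelihood-ratio scan step described above, for a single change in AR(1) dynamics with Gaussian errors; segment penalties, multiple change points, and the spectral validation step are omitted.

```python
import numpy as np

def ar1_rss(x):
    """Residual sum of squares of a least-squares AR(1) fit."""
    y, z = x[1:], x[:-1]
    phi = np.dot(z, y) / np.dot(z, z)
    r = y - phi * z
    return np.dot(r, r)

def lr_scan(x, min_seg=30):
    """Scan candidate split points for a change in AR(1) dynamics.

    For each split t, the Gaussian log-likelihood-ratio statistic
    reduces to n*log(RSS_full/n) minus the two segment terms; the
    argmax is the candidate change point.
    """
    n = len(x)
    full = n * np.log(ar1_rss(x) / n)
    stats = np.full(n, -np.inf)
    for t in range(min_seg, n - min_seg):
        left = t * np.log(ar1_rss(x[:t]) / t)
        right = (n - t) * np.log(ar1_rss(x[t:]) / (n - t))
        stats[t] = full - (left + right)
    return int(np.argmax(stats)), stats

# Toy series: AR coefficient jumps from 0.2 to 0.9 at t = 250
rng = np.random.default_rng(3)
x = np.zeros(500)
for t in range(1, 500):
    phi = 0.2 if t < 250 else 0.9
    x[t] = phi * x[t - 1] + rng.normal()
print(lr_scan(x)[0])   # close to 250
```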
Experimental results from the literature show equidistant energy levels in thin Bi films on surfaces, suggesting a harmonic oscillator description. Yet this conclusion is by no means imperative, especially considering that any measurement only yields energy levels in a finite range and with a nonzero uncertainty. Within this study we review isospectral potentials from the literature and investigate the applicability of the harmonic oscillator hypothesis to recent measurements. First, we describe experimental results from the literature by a harmonic oscillator model, obtaining a realistic size and depth of the resulting quantum well. Second, we use the shift-operator approach to calculate anharmonic non-polynomial potentials producing (partly) equidistant spectra. We discuss different potential types and interpret the possible modeling applications. Finally, by applying $n$th-order perturbation theory we show that \textbf{exactly} equidistant eigenenergies cannot be achieved by polynomial potentials, except by the harmonic oscillator potential. In summary, we aim to give an overview of which conclusions may be drawn from the experimental determination of energy levels and which may not.
quantum physics
In this paper, we introduce a general scientific CCD simulation and test system which can meet the test requirements of different types of CCD chips from the E2V company, such as measuring the different signals output from the modules of a CCD controller, including the power supply, fan, temperature control module, crystal oscillator, shutter and clock-bias generator. Furthermore, the video signal of the CCD detector can be simulated and superimposed with random noise to verify the performance of the video sampling circuit of the CCD controller. The simulation and test system was successfully used for a CCD controller designed for the E2V CCD47-20 detector.
physics
As an application in circuit quantum electrodynamics (cQED) coupled systems, superconducting resonators play an important role in high-sensitivity measurements in a superconducting-semiconductor hybrid architecture. Taking advantage of a high-impedance NbTiN resonator, we perform excited-state spectroscopy on a GaAs double quantum dot (DQD) by applying voltage pulses to one gate electrode. The pulse train modulates the DQD energy detuning and gives rise to charge state transitions at zero detuning. Benefiting from the outstanding sensitivity of the resonator, we distinguish different spin-state transitions in the energy spectrum according to the Pauli exclusion principle. Furthermore, we experimentally study how the interdot tunneling rate modifies the resonator response. The experimental results are consistent with simulated spectra based on our model.
condensed matter
In this work, the order parameter and the two-site correlation functions are expressed properly using the decimation transformation process in the presence of an external field, so that their applications lead to some significant physical results. Indeed, their applications produce or reproduce some relevant and important results which in previous studies were buried in cumbersome mathematics, if not presented in a form impossible to understand. The average magnetization or the order parameter $<\!\!\sigma\!\!>$ is expressed as $<\!\!\sigma_{0,i}\!\!>= <\!\!\tanh[ \kappa(\sigma_{1,i}+\sigma_{2,i}+\dots +\sigma_{z,i})+H]\!\!>$. Here, $\kappa$ is the coupling strength and $z$ is the number of nearest neighbors. $\sigma_{0,i}$ denotes the central spin at the $i^{th}$ site, while $\sigma_{l,i}$, $l=1,2,\dots,z$, are the nearest-neighbor spins around the central spin. $H$ is the normalized external magnetic field. We show that the application of this relation to the 1D Ising model readily reproduces the previously obtained exact results in the absence of an external field. Furthermore, the three-site correlation functions of square and honeycomb lattices of the form $<\!\!\sigma_{1}\sigma_{2}\sigma_{3}\!\!>$ are analytically obtained. One finds that the three-site correlation functions are equal to $f(\kappa)\!\!<\!\!\sigma\!\!>$. Here $f(\kappa)$ depends on the lattice type and is an analytic function of the coupling constant. This result indicates that the critical properties of the three-site correlation functions of those lattices are the same as those of the corresponding order parameters $<\!\!\sigma\!\!>$ of those lattices. This means that the uniqueness of the average magnetization as an order parameter is questionable. ...
condensed matter
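The displayed identity $<\!\sigma_{0}\!> = <\!\tanh[\kappa(\sigma_{1}+\sigma_{2})+H]\!>$ can be checked numerically for the 1D chain ($z=2$); the sketch below compares both sides on Metropolis samples. Parameters are arbitrary test values, not from the paper.

```python
import numpy as np

def metropolis_chain(n, kappa, h, sweeps=2000, rng=None):
    """Metropolis sampling of a periodic 1D Ising chain (z = 2)."""
    if rng is None:
        rng = np.random.default_rng(4)
    s = rng.choice([-1, 1], size=n)
    for _ in range(sweeps):
        for i in range(n):
            nb = s[(i - 1) % n] + s[(i + 1) % n]
            d_e = 2 * s[i] * (kappa * nb + h)   # flip cost in kT units
            if d_e <= 0 or rng.random() < np.exp(-d_e):
                s[i] = -s[i]
    return s

# Compare <sigma_0> with <tanh(kappa*(sigma_1 + sigma_2) + H)>
kappa, h = 0.7, 0.2
rng = np.random.default_rng(4)
lhs, rhs = [], []
for _ in range(10):
    s = metropolis_chain(64, kappa, h, rng=rng)
    lhs.append(s.mean())
    rhs.append(np.tanh(kappa * (np.roll(s, 1) + np.roll(s, -1)) + h).mean())
print(np.mean(lhs), np.mean(rhs))   # the two estimates agree
```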
We propose an augmented Lagrangian preconditioner for a three-field stress-velocity-pressure discretization of stationary non-Newtonian incompressible flow with an implicit constitutive relation of power-law type. The discretization employed makes use of the divergence-free Scott-Vogelius pair for the velocity and pressure. The preconditioner builds on the work [P. E. Farrell, L. Mitchell, and F. Wechsung, SIAM J. Sci. Comput., 41 (2019), pp. A3073-A3096], where a Reynolds-robust preconditioner for the three-dimensional Newtonian system was introduced. The preconditioner employs a specialized multigrid method for the stress-velocity block that involves a divergence-capturing space decomposition and a custom prolongation operator. The solver exhibits excellent robustness with respect to the parameters arising in the constitutive relation, allowing for the simulation of a wide range of materials.
mathematics
Quantum memories are an important building block for quantum information processing. Ideally, these memories preserve the quantum properties of the input. We present general criteria for measures to evaluate the quality of quantum memories. Then, we introduce a quality measure based on coherence satisfying these criteria, which we characterize in detail for the qubit case. The measure can be estimated from sparse experimental data and may be generalized to characterize other building blocks, such as quantum gates and teleportation schemes.
quantum physics
This paper considers the difficulties in the set-system approach to generalizing graph theory. These difficulties arise categorically, as the category of set-system hypergraphs is shown not to be cartesian closed and to lack enough projective objects, unlike the category of directed multigraphs (i.e. quivers). The category of incidence hypergraphs is introduced as a "graph-like" remedy for the set-system issues, so that hypergraphs may be studied by their locally graphic behavior via homomorphisms that allow an edge of the domain to be mapped into a subset of an edge in the codomain. Moreover, it is shown that the category of quivers embeds into the category of incidence hypergraphs via a logical functor that is the inverse image of an essential geometric morphism between the topoi. Consequently, the quiver exponential is shown to be simply represented using incidence hypergraph homomorphisms.
mathematics
Higgs inflation is known to be a minimal extension of the Standard Model allowing for the description of the early Universe inflation. This model is considered as an effective field theory since it has a relatively low cutoff scale, thus requiring further extensions to be a valid description of the reheating phase. We present a novel unified approach to the problem of unitarization and UV completion of the Higgs inflation model without introducing new massive degrees of freedom. This approach is based on an analytic infinite derivative modification of the Higgs field kinetic term. We construct a unitary non-local UV completion of the original Higgs inflation model while the inflationary stage is kept stable with respect to quantum corrections.
high energy physics theory
Existing prescriptive compression strategies used in hearing aid fitting are designed based on gain averages from a group of users, which are not necessarily optimal for a specific user. Nearly half of hearing aid users prefer settings that differ from the commonly prescribed settings. This paper presents a human-in-the-loop deep reinforcement learning approach that personalizes hearing aid compression to achieve improved hearing perception. The developed approach is designed to learn a specific user's hearing preferences in order to optimize compression based on the user's feedback. Both simulation and subject testing results are reported, which demonstrate the effectiveness of the developed personalized compression.
electrical engineering and systems science
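A toy stand-in for the human-in-the-loop learning described above, using an epsilon-greedy bandit over a few candidate compression settings. The paper's agent is a deep RL method, so this is only a structural sketch of the feedback loop; all rates and parameters are invented for illustration.

```python
import numpy as np

def personalize_setting(user_rates, n_rounds=200, eps=0.2, seed=5):
    """Epsilon-greedy personalization of a compression setting.

    Each arm is a candidate compression setting; user_rates[a] is the
    (unknown) probability the user approves setting a. The agent keeps
    running means of the binary feedback and mostly exploits the arm
    with the best estimate.
    """
    rng = np.random.default_rng(seed)
    n = len(user_rates)
    counts, values = np.zeros(n), np.zeros(n)
    for _ in range(n_rounds):
        a = rng.integers(n) if rng.random() < eps else int(np.argmax(values))
        reward = float(rng.random() < user_rates[a])   # user feedback
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # running mean
    return int(np.argmax(values))

# Three candidate settings; the user prefers the middle one
print(personalize_setting([0.2, 0.8, 0.4]))   # -> 1 (most of the time)
```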
We present the discovery and follow-up observations of two CCSNe that occurred in the luminous infrared galaxy (LIRG), NGC3256. The first, SN2018ec, was discovered using the ESO HAWK-I/GRAAL adaptive optics seeing enhancer, and was classified as a Type Ic with a host galaxy extinction of $A_V=2.1^{+0.3}_{-0.1}$ mag. The second, AT2018cux, was discovered during the course of follow-up observations of SN2018ec, and is consistent with a sub-luminous Type IIP classification with an $A_V=2.1 \pm 0.4$ mag of host extinction. A third CCSN, PSNJ10275082-4354034 in NGC3256, has previously been reported in 2014, and we recovered the source in late time archival HST imaging. Based on template light-curve fitting, we favour a Type IIn classification for it with modest host galaxy extinction of $A_V=0.3^{+0.4}_{-0.3}$ mag. We also extend our study with follow-up data of the recent Type IIb SN2019lqo and Type Ib SN2020fkb that occurred in the LIRG system Arp299 with host extinctions of $A_V=2.1^{+0.1}_{-0.3}$ and $A_V=0.4^{+0.1}_{-0.2}$ mag, respectively. Motivated by the above, we inspected, for the first time, a sample of 29 CCSNe located within a projected distance of 2.5 kpc from the host galaxy nuclei in a sample of 16 LIRGs. We find that, if star formation within these galaxies is modelled assuming a global starburst episode and normal IMF, there is evidence of a correlation between the starburst age and the CCSN subtype. We infer that the two subgroups of 14 H-poor (Type IIb/Ib/Ic/Ibn) and 15 H-rich (Type II/IIn) CCSNe have different underlying progenitor age distributions, with the H-poor progenitors being younger at 3$\sigma$ significance. However, we do note that the available sample sizes of CCSNe and host LIRGs are so far small, and the statistical comparisons between subgroups do not take into account possible systematic or model errors related to the estimated starburst ages. (abridged)
astrophysics
Semiconductor nanowires are promising material systems for coming-of-age nanotechnology. The use of the vapor-solid-solid (VSS) route, where the catalyst used for promoting axial growth of the nanowire is a solid, offers certain advantages compared to the common vapor-liquid-solid (VLS) route (using a liquid catalyst). The VSS growth of group-IV elemental nanowires has been investigated by other groups in situ during growth in a transmission electron microscope (TEM). Though it is known that compound nanowire growth has different dynamics compared to monoatomic semiconductors, the dynamics of VSS growth of compound nanowires has not been understood. Here we investigate VSS growth of compound nanowires by in situ microscopy, using Au-seeded GaAs as a model system. The growth kinetics and dynamics at the wire-catalyst interface by ledge flow are studied and compared for liquid and solid catalysts at similar growth conditions. Here the temperature and thermal history of the system are manipulated to control the catalyst phase. In the first experiment discussed here we reduce the growth temperature in steps to solidify the initially liquid catalyst, and compare the dynamics between VLS and VSS growth observed at slightly different temperatures. In the second experiment we exploit the thermal hysteresis of the system to obtain both VLS and VSS growth at the same temperature. The VSS growth rate is comparable to or slightly slower than the VLS growth rate. Unlike in the VLS case, during VSS growth we see several occasions where a new layer starts before the previous layer is completely grown, i.e. multilayer growth. Understanding the VSS growth mode enables better control of nanowire properties by widening the range of usable nanowire growth parameters.
condensed matter