text | label |
---|---|
This paper presents an innovative approach to micro-phasor measurement unit (micro-PMU) placement in unbalanced distribution networks. The methodology accounts for the presence of single- and two-phase laterals and acknowledges the fact that observing one phase in a distribution circuit does not translate to observing the other phases. Other practical constraints such as the presence of distributed loads, unknown regulator/transformer tap ratios, zero-injection phases (ZIPs), modern smart meters, and multiple switch configurations are also incorporated. The proposed micro-PMU placement problem is solved using integer linear programming (ILP), guaranteeing optimality of the results. The uniqueness of the developed algorithm is that it not only minimizes the number of micro-PMU installations, but also identifies the minimum number of phases that must be monitored by them. | electrical engineering and systems science |
The Extreme Universe Space Observatory-Telescope Array (EUSO-TA) is a ground-based experiment, part of the JEM-EUSO (Joint Experiment Missions -- Extreme Universe Space Observatory) program, dedicated to the observation of Ultra High Energy Cosmic Rays (UHECRs) in parallel with the Telescope Array (TA) experiment. The main goal of EUSO-TA operations is to test the hardware and calibrate the EUSO detector to obtain optimal performance for cosmic ray observations. Apart from artificial calibration sources such as the Central Laser Facility (CLF), mobile lasers and UV diodes, natural signals from stars can also be used as a calibration source. This work presents the results of the calibration of the EUSO-TA detector. The influence of the atmosphere and of the detector parameters on star observations is discussed. Considering stars as point-like sources with well-known UV emission parameters, the signal amplitudes from stars as well as the EUSO-TA detector point spread function were estimated. This unique calibration method could be used in future missions of the JEM-EUSO program such as EUSO-SPB2 (Super-Pressure Balloon). | astrophysics |
We show that the universal theory of the hyperfinite II$_1$ factor is not computable. The proof uses the recent result that MIP*=RE. Combined with an earlier observation of the authors, this yields a proof that the Connes Embedding Problem has a negative solution that avoids the equivalences with Kirchberg's QWEP Conjecture and Tsirelson's Problem. | mathematics |
The formulation of a fundamental theoretical description of disordered solids is a standing challenge in condensed matter physics. So far, the structure and dynamics of glasses have been described in the literature on the basis of phenomenological models. We develop a theoretical framework for amorphous materials, relying on the basic principle that the disorder in a glass is similar to the disorder in a classical fluid, while the latter is mathematically encoded by noncommutative coordinates in the Lagrange formulation of fluid mechanics. Under this principle, we construct a model of amorphous solids as fuzzy crystals and we establish an analytical theory of vibrations for glasses that naturally contains the boson peak as a manifestation of an extended van Hove singularity. | condensed matter |
The effect of training data size on machine learning methods has been well investigated over the past two decades. The predictive performance of tree-based machine learning methods, in general, improves at a decreasing rate as the size of the training data increases. We investigate this in the optimal trees ensemble (OTE), where the method fails to learn from some of the training observations due to internal validation. Modified tree selection methods are thus proposed for OTE to compensate for the loss of training observations in internal validation. In the first method, the corresponding out-of-bag (OOB) observations are used in both the individual and collective performance assessment for each tree. Trees are ranked based on their individual performance on the OOB observations. A certain number of top-ranked trees is selected and, starting from the most accurate tree, subsequent trees are added one by one; their impact is recorded using the OOB observations left out of the bootstrap sample taken for the tree being added. A tree is selected if it improves the predictive accuracy of the ensemble. In the second approach, trees are grown on random subsets of the training data taken without replacement, known as sub-bagging, instead of bootstrap samples (taken with replacement). The remaining observations from each sample are used in both individual and collective assessments for each corresponding tree, similar to the first method. Analyses of 21 benchmark datasets and simulation studies show improved performance of the modified methods in comparison to OTE and other state-of-the-art methods. | statistics |
The soft bootstrap program aims to construct consistent effective field theories (EFTs) by recursively imposing the desired soft limit on tree-level scattering amplitudes through on-shell recursion relations. A prime example is the leading two-derivative operator in the EFT of the $\text{SU}(N)\times \text{SU}(N)/\text{SU}(N)$ nonlinear sigma model (NLSM), where $\mathcal{O}(p^2)$ amplitudes with an arbitrary multiplicity of external particles can be soft-bootstrapped. We extend the program to $\mathcal{O}(p^4)$ operators and introduce the "soft blocks," which are the seeds for the soft bootstrap. The number of soft blocks coincides with the number of independent operators at a given order in the derivative expansion, and the incalculable Wilson coefficient emerges naturally. We also uncover a new soft-constructible EFT involving the "multi-trace" operator at the leading two-derivative order, which is matched to the $\text{SO}(N+1)/\text{SO}(N)$ NLSM. In addition, we consider Wess-Zumino-Witten (WZW) terms, the existence of which, or the lack thereof, depends on the number of flavors in the EFT, after a novel application of Bose symmetry. Remarkably, we find agreement with group-theoretic considerations on the existence of WZW terms in the $\text{SU}(N)$ NLSM for $N\ge 3$ and the absence of WZW terms in the $\text{SO}(N)$ NLSM for $N\neq 5$. | high energy physics theory |
This paper aims at accurately detecting the position of the main entrance of a building. The proposed approach relies on the fact that the GPS signal drops significantly when the user enters a building. Moreover, as most public buildings provide Wi-Fi services, the Wi-Fi received signal strength (RSS) can be utilized to detect the building entrance. The rationale behind this paper is that the GPS signal decreases while the Wi-Fi signal increases as the user approaches the main entrance. Several real experiments have been conducted in order to verify the feasibility of the proposed approach. The experimental results are promising, and the accuracy of the whole system was one meter. | computer science |
In this paper, we propose adaptive techniques for multi-user multiple-input multiple-output (MU-MIMO) cellular communication systems to solve the problem of energy-efficient communication with heterogeneous delay-aware traffic. In order to minimize the total transmission power of the MU-MIMO system, we investigate the relationship between the transmission power and the M-ary quadrature amplitude modulation (MQAM) constellation size, and obtain the energy-efficient modulation for each transmission stream based on the minimum mean square error (MMSE) receiver. Since the total power consumption differs between MU-MIMO and multi-user single-input multiple-output (MU-SIMO), by exploiting the intrinsic relationship between the total power consumption model and the heterogeneous delay-aware services, we propose an adaptive transmission strategy that switches between MU-MIMO and MU-SIMO. Simulations show that, in order to maximize energy efficiency while simultaneously accounting for the users' different delay Quality of Service (QoS) requirements, the users should adaptively choose the constellation size for each stream as well as the transmission mode. | electrical engineering and systems science |
We propose a new mechanism for symmetry breaking in which, apart from particle degrees of freedom, topological degrees of freedom also emerge. In this method, a decomposition of the fields of the Yang-Mills-Higgs theory is introduced and the Lagrangian is written in terms of the new variables. This new Lagrangian does not change the dynamics of the theory, at least at the classical level. We study spontaneous symmetry breaking for this new Lagrangian and show how it works in Abelian and non-Abelian gauge theories. In the case of an Abelian gauge theory, our method adds nothing new to the so-called Higgs mechanism. However, in the non-Abelian case, topological degrees of freedom arise as classical fields. Finally, we recover our results by considering a new definition of the vacuum. | high energy physics theory |
We present a scheme for generating and manipulating three-mode squeezed states with genuine tripartite entanglement by injecting single-mode squeezed light into an array of coupled optical waveguides. We explore the possibility of selectively generating single-mode squeezing or multimode squeezing at the output of an elliptical waveguide array, determined solely by the polarization of the input light. We study the effect of losses in the waveguide array and show that quantum correlations and squeezing are preserved for realistic parameters. Our results show that arrays of optical waveguides are suitable platforms for generating multimode quantum light, which could lead to novel applications in quantum metrology. | quantum physics |
In Bayesian applications, there is great interest in rapid and accurate estimation of the posterior distribution, particularly for high-dimensional or hierarchical models. In this article, we propose to use optimization to solve for a joint distribution (random transport plan) between two random variables, $\theta$ from the posterior distribution and $\beta$ from the simple multivariate uniform. Specifically, we obtain an approximate estimate of the conditional distribution $\Pi(\beta\mid \theta)$ as an infinite mixture of simple location-scale changes; applying Bayes' theorem, $\Pi(\theta\mid\beta)$ can be sampled as one of the reversed transforms from the uniform, with weight proportional to the posterior density/mass function. This produces independent random samples with high approximation accuracy, as well as nice theoretical guarantees. Our method shows compelling advantages in performance and accuracy compared to state-of-the-art Markov chain Monte Carlo and approximations such as variational Bayes and normalizing flows. We illustrate this approach via several challenging applications, such as sampling from a multi-modal distribution, estimating sparse signals in high dimensions, and soft-thresholding of a graph with a prior on the degrees. | statistics |
We study the effects of dimension-8 operators on Drell-Yan production of lepton pairs at the Large Hadron Collider (LHC). We identify a class of operators that leads to novel angular dependence not accounted for in current analyses. The observation of such effects would be a smoking-gun signature of new physics appearing at the dimension-8 level. We propose an extension of the currently used angular basis and show that these effects should be observable in future LHC analyses for realistic values of the associated dimension-8 Wilson coefficients. | high energy physics phenomenology |
We theoretically investigate a quasi-one-dimensional quantum wire, where the lowest two subbands are populated, in the presence of a helical magnetic field. We uncover a backscattering mechanism involving the helical magnetic field and Coulomb interaction between the electrons. The combination of these ingredients results in scattering resonances and partial gaps which give rise to non-standard plateaus and conductance dips at certain electron densities. The positions and values of these dips are independent of material parameters, serving as direct transport signatures of this mechanism. Our theory applies to generic quasi-one-dimensional systems, including a Kondo lattice and a quantum wire subject to intrinsic or extrinsic spin-orbit coupling. Observation of the universal conductance dips would identify a strongly correlated fermion system hosting fractional excitations, resembling the fractional quantum Hall states. | condensed matter |
Non-intrusive load monitoring (NILM) has been extensively researched over the last decade. The objective of NILM is to identify the power consumption of individual appliances and to detect when particular devices are on or off by measuring the power consumption of an entire house. This information allows households to receive customized advice on how to better manage their electrical consumption. In this paper, we present an alternative NILM method that breaks down the aggregated power signal into categories of appliances. The ultimate goal is to use this approach for demand-side management to estimate the potential flexibility within the electricity consumption of households. Our method is implemented as an algorithm combining NILM and load profile simulation. This algorithm, based on a Markov model, allocates an activity chain to each inhabitant of the household, deduces the appliance usage from the whole-house power measurement and statistical data, generates the power profile accordingly, and finally returns the share of energy consumed by each appliance category over time. To analyze its performance, the algorithm was benchmarked against several state-of-the-art NILM algorithms and tested on three public datasets. The proposed algorithm is unsupervised; hence it does not require any labeled data, which are expensive to acquire. Although better performance is shown for the supervised algorithms, our proposed unsupervised algorithm achieves a similar range of uncertainty while saving on the cost of acquiring labeled data. Additionally, our method requires lower computational power than most of the tested NILM algorithms. It was designed for low-sampling-rate power measurements (every 15 min), which corresponds to the frequency range of most common smart meters. | computer science |
We investigate supersymmetric models with left-right symmetry based on the group $SU(4)_{c} \times SU(2)_{L} \times SU(2)_{R}$ ($4$-$2$-$2$) with a negative sign of the bilinear Higgs potential parameter $\mu$ in the context of the latest experimental results. Against the backdrop of experimental results from the Large Hadron Collider, we investigate the possibility of Yukawa unification in $4$-$2$-$2$ and find that it is still not ruled out. Furthermore, this scenario also provides a satisfactory dark matter candidate. The current experimental bounds on sparticle masses, the mass bounds on the Higgs particle, updated phenomenological constraints from rare B meson decays and the anomalous magnetic moment of the muon, together with the requirement of a Yukawa-unified theory having $10\%$ or better third-family Yukawa unification, are utilized to bound the parameter space of these models. | high energy physics phenomenology |
We introduce twist left-veering mapping classes of punctured surfaces. We prove that a twist left-veering open book supports an overtwisted contact structure and determine when the closed braid coming from the punctures is loose or virtually loose. | mathematics |
The kink model is developed to analyze data where the regression function is two-stage linear but intersects at an unknown threshold. In quantile regression with longitudinal data, previous work assumed that the unknown threshold parameters or kink points are heterogeneous across different quantiles. However, the location where the kink effect happens tends to be the same across different quantiles, especially in a region of neighboring quantile levels. Ignoring such homogeneity information may lead to a loss of estimation efficiency. In view of this, we propose a composite estimator for the common kink point by absorbing information from multiple quantiles. In addition, we also develop a sup-likelihood-ratio test to check the kink effect at a given quantile level. A test-inversion confidence interval for the common kink point is also developed based on the quantile rank score test. The simulation study shows that the proposed composite kink estimator compares favorably with the least squares estimator and the single-quantile estimator. We illustrate the practical value of this work through the analysis of a body mass index and blood pressure data set. | statistics |
We propose a qubit-environment entanglement measure which is tailored for evolutions that lead to pure dephasing of the qubit, such as are abundant in solid-state scenarios. The measure can be calculated directly from the density matrix without minimization of any kind. In fact, it does not require knowledge of the full density matrix: it is enough to know the initial qubit state and the states of the environment conditional on the qubit pointer states. This yields a computational advantage over standard entanglement measures, which becomes large when there are no correlations between environmental components in the conditional environmental states. In contrast to all other measures of mixed-state entanglement, the measure has a straightforward physical interpretation directly linking the amount of information about the qubit state which is contained in the environment to the amount of qubit-environment entanglement. This allows for a direct extension of the pure-state interpretation of entanglement generated during pure dephasing to mixed states, even though pure-state conclusions about qubit decoherence are not transferable. | quantum physics |
Recent advances in social and mobile technology have enabled an abundance of digital traces (in the form of mobile check-ins, association of mobile devices to specific WiFi hotspots, etc.) revealing the physical presence history of diverse sets of entities (e.g., humans, devices, and vehicles). One challenging yet important task is to identify the k entities that are most closely associated with a given query entity based on their digital traces. We propose a suite of indexing techniques and algorithms to enable fast query processing for this problem at scale. We first define a generic family of functions measuring the association between entities, and then propose algorithms to transform digital traces into a lower-dimensional space for more efficient computation. We subsequently design a hierarchical indexing structure to organize entities in a way that closely associated entities tend to appear together. We then develop algorithms to process top-k queries utilizing the index. We theoretically analyze the pruning effectiveness of the proposed methods based on a mobility model which we propose and validate in real-life situations. Finally, we conduct extensive experiments on both synthetic and real datasets at scale, evaluating the performance of our techniques both analytically and experimentally, and confirming the effectiveness and superiority of our approach over other applicable approaches across a variety of parameter settings and datasets. | computer science |
The search for an ideal single-photon source has generated significant interest in discovering novel emitters in materials as well as developing new manipulation techniques to gain better control over the emitters' properties. Quantum emitters in atomically thin two-dimensional (2D) materials have proven very attractive with high brightness, operation under ambient conditions, and the ability to be integrated with a wide range of electronic and photonic platforms. This perspective highlights some of the recent advances in quantum light generation from 2D materials, focusing on hexagonal boron nitride and transition metal dichalcogenides (TMDs). Efforts in engineering and deterministically creating arrays of quantum emitters in 2D materials, their electrical excitation, and their integration with photonic devices are discussed. Lastly, we address some of the challenges the field is facing and the near-term efforts to tackle them. We provide an outlook towards efficient and scalable quantum light generation from 2D materials towards controllable and addressable on-chip quantum sources. | physics |
In three dimensions, we consider the Einstein-Maxwell Lagrangian dressed by a nonminimally coupled scalar field in New Massive Gravity. For this theory, we provide two families of electrically charged Lifshitz black holes whose metric functions depend only on an integration constant. We calculate their masses using the quasilocal approach, as well as their entropy and electric charge. These charged configurations are interpreted as extremal in the sense that the mass vanishes identically while the entropy and electric charge are nonzero thermodynamic quantities. Using these examples, we corroborate that the semiclassical entropy can be recovered through a charged Cardy-like formula involving the corresponding magnetically charged solitons obtained by a double Wick rotation. Finally, the first law of thermodynamics, as well as the Smarr formula, are also verified. | high energy physics theory |
The Micali-Vazirani (MV) algorithm for maximum cardinality matching in general graphs, which was published in 1980 \cite{MV}, remains to this day the most efficient known algorithm for the problem. This paper gives the first complete and correct proof of this algorithm. Central to our proof are some purely graph-theoretic facts, capturing properties of minimum length alternating paths; these may be of independent interest. An attempt is made to render the algorithm easier to comprehend. | computer science |
We study the notion of zero-knowledge secure against quantum polynomial-time verifiers (referred to as quantum zero-knowledge) in the concurrent composition setting. Despite being extensively studied in the classical setting, concurrent composition has hardly been studied in the quantum setting. We initiate a formal study of concurrent quantum zero-knowledge. Our results are as follows: -Bounded Concurrent QZK for NP and QMA: Assuming post-quantum one-way functions, there exists a quantum zero-knowledge proof system for NP in the bounded concurrent setting. In this setting, we fix a priori the number of verifiers that can simultaneously interact with the prover. Under the same assumption, we also show that there exists a quantum zero-knowledge proof system for QMA in the bounded concurrency setting. -Quantum Proofs of Knowledge: Assuming quantum hardness of learning with errors (QLWE), there exists a bounded concurrent zero-knowledge proof system for NP satisfying the quantum proof of knowledge property. Our extraction mechanism simultaneously allows the extraction probability to be negligibly close to the acceptance probability (extractability) and also ensures that the prover's state after extraction is statistically close to the prover's state after interacting with the verifier (simulatability). The seminal work of [Unruh EUROCRYPT'12], and all its follow-ups, satisfied a weaker version of the extractability property and, moreover, did not achieve simulatability. Our result yields a proof of quantum knowledge system for QMA with better parameters than prior works. | quantum physics |
Bayesian estimation is a powerful theoretical paradigm for the operation of quantum sensors. However, the Bayesian method for statistical inference generally suffers from demanding calibration requirements that have so far restricted its use to proof-of-principle experiments. In this theoretical study, we formulate parameter estimation as a classification task and use artificial neural networks to efficiently perform Bayesian estimation. We show that the network's posterior distribution is centered at the true (unknown) value of the parameter within an uncertainty given by the inverse Fisher information, representing the ultimate sensitivity limit for the given apparatus. When only a limited number of calibration measurements are available, our machine-learning based procedure outperforms standard calibration methods. Thus, our work paves the way for Bayesian quantum sensors which can benefit from efficient optimization methods, such as in adaptive schemes, and take advantage of complex non-classical states. These capabilities can significantly enhance the sensitivity of future devices. | quantum physics |
We aim to investigate the distribution function of $\langle Q_{Fe}\rangle$ at 1 AU to check if it corresponds to a bimodal wind. We use 20 years of data from the SWICS instrument on board the ACE spacecraft. We propose the bi-Gaussian function as the probability distribution function that fits the $\langle Q_{Fe}\rangle$ distribution. We study the evolution of the parameters of the bimodal distribution with the solar cycle. We compare the outliers of the sample with the existing catalogues of ICMEs and identify new ICMEs. The $\langle Q_{Fe}\rangle$ at 1 AU shows a bimodal distribution related to the solar cycle. Our results confirm that $\langle Q_{Fe}\rangle> 12$ is a trustworthy proxy for ICME identification and a reliable signature in the ICME boundary definition. | astrophysics |
The discovery of new structural and functional materials is driven by phase identification, often using X-ray diffraction (XRD). Automation has accelerated the rate of XRD measurements, greatly outpacing XRD analysis techniques that remain manual, time-consuming, error-prone, and impossible to scale. With the advent of autonomous robotic scientists or self-driving labs, contemporary techniques prohibit the integration of XRD. Here, we describe a computer program for the autonomous characterization of XRD data, driven by artificial intelligence (AI), for the discovery of new materials. Starting from structural databases, we train an ensemble model using a physically accurate synthetic dataset, which outputs probabilistic classifications -- rather than absolutes -- to overcome the overconfidence of traditional neural networks. This AI agent behaves as a companion to the researcher, improving accuracy and offering significant time savings. It was demonstrated on a diverse set of organic and inorganic materials characterization challenges. This innovation is directly applicable to inverse design approaches and robotic discovery systems, and can be immediately considered for other forms of characterization such as spectroscopy and the pair distribution function. | condensed matter |
We investigate the dichotomy between jetted and non-jetted Active Galactic Nuclei (AGNs), focusing on the fundamental differences between these two classes in the accretion physics onto the central supermassive black hole (SMBH). Our aim is to study and constrain the structure, kinematics and physical state of the nuclear environment in the Broad Line Radio Galaxy (BLRG) PKS 2251+11. The high X-ray luminosity and the relative proximity make this AGN an ideal candidate for a detailed analysis of the accretion regions in radio galaxies. We performed a spectral and timing analysis of a $\sim$64 ks observation of PKS 2251+11 in the X-ray band with XMM-Newton. We modeled the spectrum considering an absorbed power law superimposed on a reflection component. We performed a time-resolved spectral analysis to search for variability of the X-ray flux and of the individual spectral components. We found that the power law has a photon index $\Gamma=1.8\pm 0.1$, absorbed by an ionized partial covering medium with a column density $N_H=(10.1\pm 0.8) \times 10^{23}$ cm$^{-2}$, an ionization parameter $\log{\xi}=1.3\pm 0.1$ erg s$^{-1}$ cm and a covering factor $f\simeq90\%$. Considering a density of the absorber typical of the Broad Line Region (BLR), its distance from the central SMBH is of the order of $r\sim 0.1$ pc. An Fe K$\alpha$ emission line is found at 6.4 keV, whose intensity shows variability on time scales of hours. We derived that the reflecting material is located at a distance $r\gtrsim600r_s$, where $r_s$ is the Schwarzschild radius. Concerning the X-ray properties, we found that PKS 2251+11 does not differ significantly from non-jetted AGNs, confirming the validity of the unified model in describing the inner regions around the central SMBH, but the lack of information regarding the state of the innermost disk and the SMBH spin still leaves the origin of the jet unconstrained. | astrophysics |
Mildly relativistic shocks embedded in colliding magnetohydrodynamic flows are prime sites for relativistic particle acceleration and the production of strongly variable, polarized multi-wavelength emission from relativistic jet sources such as blazars and gamma-ray bursts. The principal energization mechanisms at these shocks are diffusive shock acceleration and shock drift acceleration. In recent work, we self-consistently coupled shock acceleration and radiation transfer simulations in blazar jets in a basic one-zone scenario. These one-zone models revealed that the observed spectral energy distributions (SEDs) of blazars strongly constrain the nature of the shock-layer hydromagnetic turbulence. In this paper, we expand our previous work by including full time dependence and treating two zones, one being the site of acceleration and the second being a larger emission zone. This construction is applied to multiwavelength flares of the flat spectrum radio quasar 3C 279, fitting snapshot SEDs and generating light curves that are consistent with observed variability timescales. We also present a generic study of the typical flaring behavior of the BL Lac object Mrk 501. The model predicts correlated variability across all wavebands, with cross-band time lags depending on the type of blazar (FSRQ vs. BL Lac), as well as distinctive spectral hysteresis patterns in all wavelength bands, from mm radio waves to gamma-rays. These evolutionary signatures serve to provide diagnostics on the competition between acceleration and radiative cooling. | astrophysics |
Continuum models for time-reversal (TR) invariant topological insulators (TIs) in $d \geq 3$ dimensions are provided by harmonic oscillators coupled to certain $SO(d)$ gauge fields. These models are equivalent to the presence of a spin-orbit (SO) interaction in the oscillator Hamiltonians at a critical coupling strength (equal to the harmonic oscillator frequency) and lead to flat Landau level (LL) spectra and therefore to infinite degeneracy of either the positive or the negative helicity states, depending on the sign of the SO coupling. Generalizing the results of Haaker et al. to $d \geq 4$, we construct vector operators commuting with these Hamiltonians and show that $SO(d,2)$ emerges as the non-compact extended dynamical symmetry. Focusing on the model in four dimensions, we demonstrate that the infinite degeneracy of the flat spectra can be fully explained in terms of the discrete unitary representations of $SO(4,2)$, i.e. the {\it doubletons}. The degeneracy in the opposite helicity branch is finite, but can still be explained by exploiting the complex conjugate {\it doubleton} representations. Subsequently, the analysis is generalized to $d$ dimensions, distinguishing the cases of odd and even $d$. We also determine the spectrum generating algebra in these models and briefly comment on the algebraic organization of the LL states with respect to an underlying "deformed" AdS geometry, as well as on the organization of the surface states under open boundary conditions in view of our results. | high energy physics theory |
We report on an experiment demonstrating entanglement swapping of time-frequency entangled photons. We perform a frequency-resolved Bell-state measurement on the idler photons from two independent entangled photon pairs, which projects the signal photons onto a two-color Bell state. We verify entanglement in this heralded state by performing two-photon interference and observing quantum beating without the use of filters, indicating the presence of two-color entanglement. Our method could lend itself to use as a highly tunable source of frequency-bin entangled single photons. | quantum physics |
We apply the CP to CAVs for the first time experimentally. First, we introduce the propagation properties of a CAV with the LOCP in the process of measuring TCs, and experimentally measure the TCs of CAVs in the self-focusing areas. Second, we shape the CAVs via the HOCP and discuss the manipulation of singularities. | physics |
Segmentation and (segment) labeling are generally treated separately in lexical semantics, raising issues due to their close inter-dependence and necessitating joint annotation. We therefore investigate the lexical semantic recognition task of multiword expression segmentation and supersense disambiguation, unifying several previously-disparate styles of lexical semantic annotation. We evaluate a neural CRF model along all annotation axes available in version 4.3 of the STREUSLE corpus: lexical unit segmentation (multiword expressions), word-level syntactic tags, and supersense classes for noun, verb, and preposition/possessive units. As the label set generalizes that of previous tasks (DiMSUM, PARSEME), we additionally evaluate how well the model generalizes to those test sets, with encouraging results. By establishing baseline models and evaluation metrics, we pave the way for comprehensive and accurate modeling of lexical semantics. | computer science |
The electronic structure of the unconventional superconductor UTe$_2$ was studied by resonant photoelectron spectroscopy (RPES) and angle-resolved photoelectron spectroscopy (ARPES) with soft X-ray synchrotron radiation. The partial $\mathrm{U}~5f$ density of states of UTe$_2$ was imaged by $\mathrm{U}~4d$--$5f$ RPES, and it was found that the $\mathrm{U}~5f$ state has an itinerant character, but there exists an incoherent peak due to strong electron correlation effects. Furthermore, an anomalous admixture of the $\mathrm{U}~5f$ states into the $\mathrm{Te}~5p$ bands was observed at higher binding energies, which cannot be explained by band structure calculations. On the other hand, the band structure of UTe$_2$ was obtained by ARPES and its overall features were mostly explained by band structure calculations. These results suggest that the $\mathrm{U}~5f$ states of UTe$_2$ have an itinerant but strongly correlated nature with enhanced hybridization with the $\mathrm{Te}~5p$ states. | condensed matter |
The prevalence of multivariate space-time data collected from monitoring networks and satellites or generated from numerical models has brought much attention to multivariate spatio-temporal statistical models, where the covariance function plays a key role in modeling, inference, and prediction. For multivariate space-time data, understanding the spatio-temporal variability, within and across variables, is essential in employing a realistic covariance model. Meanwhile, the complexity of generic covariances often makes model fitting very challenging, and simplified covariance structures, including symmetry and separability, can reduce the model complexity and facilitate the inference procedure. However, a careful examination of these properties is needed in real applications. In the work presented here, we formally define these properties for multivariate spatio-temporal random fields and use functional data analysis techniques to visualize them, hence providing intuitive interpretations. We then propose a rigorous rank-based testing procedure to conclude whether the simplified properties of covariance are suitable for the underlying multivariate space-time data. The good performance of our method is illustrated through synthetic data, for which we know the true structure. We also investigate the covariance of bivariate wind speed, a key variable in renewable energy, over a coastal and an inland area in Saudi Arabia. | statistics |
Depth images captured by off-the-shelf RGB-D cameras suffer from much stronger noise than color images. In this paper, we propose a method to denoise the depth images in RGB-D images by color-guided graph filtering. Our iterative method contains two components: color-guided similarity graph construction, and graph filtering on the depth signal. Implemented in the graph vertex domain, the filtering is accelerated as computation only occurs among neighboring vertices. Experimental results show that our method outperforms state-of-the-art depth image denoising methods significantly in both quality and efficiency. | electrical engineering and systems science |
We introduce different types of quenches to probe the non-equilibrium dynamics and multiple collective modes of bilayer fractional quantum Hall states. We show that applying an electric field in one layer induces oscillations of a spin-1 degree of freedom, whose frequency matches the long-wavelength limit of the dipole mode. On the other hand, oscillations of the long-wavelength limit of the quadrupole mode, i.e., the spin-2 graviton, as well as the combination of two spin-1 states, can be activated by a sudden change of band mass anisotropy. We construct an effective field theory to describe the quench dynamics of these collective modes. In particular, we derive the dynamics for both the spin-2 and the spin-1 states and demonstrate their excellent agreement with numerics. | condensed matter |
Quarkonia production in different high-energy processes has recently been proposed in order to probe gluon transverse-momentum-dependent parton distribution and fragmentation functions (TMDs in general). However, no proper factorization theorems have been derived for the discussed processes, but rather just ansätze, whose main assumption is the factorization of the two soft mechanisms present in the process: soft-gluon radiation and the formation of the bound state. In this paper it is pointed out that, at low transverse momentum, these mechanisms are entangled and thus encoded in a new kind of non-perturbative hadronic quantity beyond the TMDs: the TMD shape functions. This is illustrated by deriving the factorization theorem for the process $pp\to \eta_{c,b}$ at low transverse momentum. | high energy physics phenomenology |
The composition in terms of nuclear species of the primary cosmic ray flux is largely uncertain in the knee region and above, where only indirect measurements are available. The predicted fluxes of high-energy leptons from cosmic ray air showers are influenced by this uncertainty, and different models have been proposed. These uncertainties similarly affect the measurement of lepton fluxes in very large volume neutrino telescopes. The cosmic ray interaction processes, whose description mainly derives from a limited amount of experimental data covering the particle physics at play, are also affected by large uncertainties and could produce similar differences in the observable lepton fluxes. In this paper we analyse how considering different models for the primary cosmic ray composition affects the expected rates in the current generation of very large volume neutrino telescopes (ANTARES and IceCube). We observe that a certain degree of discrimination between composition fits can already be achieved with the current IceCube data sample, even though in a model-dependent way. The improvements in energy reconstruction achievable with the next generation of neutrino telescopes are expected to make these instruments more sensitive to the differences between models. | astrophysics |
The advantages of sequential Monte Carlo (SMC) are exploited to develop parameter estimation and model selection methods for GARCH (Generalized AutoRegressive Conditional Heteroskedasticity) style models. This provides an alternative method for quantifying estimation uncertainty relative to classical inference. Even with long time series, it is demonstrated that the posterior distribution of the model parameters is non-normal, highlighting the need for a Bayesian approach and an efficient posterior sampling method. Efficient approaches for both constructing the sequence of distributions in SMC and for leave-one-out cross-validation with long time series data are also proposed. Finally, an unbiased estimator of the likelihood is developed for the Bad Environment-Good Environment model, a complex GARCH-type model, which permits exact Bayesian inference not previously available in the literature. | statistics |
The paper examines the early growth of supermassive black holes (SMBHs) in cosmological hydrodynamic simulations with different BH seeding scenarios. Employing the constrained Gaussian realization technique, we reconstruct the initial conditions in the large-volume BlueTides simulation and run them to $z=6$ to cross-validate that the method reproduces the first quasars and their environments. Our constrained simulations in a volume of $(15\, h^{-1}{\rm Mpc})^3$ successfully recover the evolution of the large-scale structure and the stellar and BH masses in the vicinity of a $\sim10^{12}\, M_{\odot}$ halo which we identified in BlueTides at $z\sim7$ hosting a $\sim10^9\, M_{\odot}$ SMBH. Among our constrained simulations, only the ones with a low tidal field and a high density peak in the initial conditions induce the fast BH growth required to explain the $z>6$ quasars. We run two sets of simulations with different BH seed masses of $5\times10^3$, $5\times10^4$, and $5\times10^5\, h^{-1}M_{\odot}$, (a) with the same ratio of halo to BH seed mass and (b) with the same halo threshold mass. At $z=6$, all the SMBHs converge in mass to $\sim10^9\, M_{\odot}$ except for the one with the smallest seed in (b), which undergoes critical BH growth and reaches $10^8$ -- $10^9\, M_{\odot}$, albeit with most of the growth in (b) delayed compared to set (a). The finding of eight BH mergers in the small-seed scenario (four with masses $10^4$ -- $10^6\, M_{\odot}$ at $z>12$), six in the intermediate-seed scenario, and zero in the large-seed scenario suggests that BHs in the small-seed scenario merge frequently during the early phases of the growth of SMBHs. The increased BH merger rate for the low-mass BH seed and halo threshold scenario provides an exciting prospect for discriminating BH formation mechanisms with the advent of multi-messenger astrophysics and next-generation gravitational wave facilities. | astrophysics |
We present the sympathetic eruption of a standard and a blowout coronal jet originating from two adjacent coronal bright points (CBP1 and CBP2) in a polar coronal hole, using soft X-ray and extreme ultraviolet observations taken by Hinode and the Solar Dynamics Observatory, respectively. In the event, a collimated jet with obvious westward lateral motion was first launched from CBP1, during which a small bright point appeared around CBP1's east end, and magnetic flux cancellation was observed within the eruption source region. Based on these characteristics, we interpret the observed jet as a standard jet associated with photospheric magnetic flux cancellation. About 15 minutes later, the westward-moving jet spire interacted with CBP2 and resulted in magnetic reconnection between them, which caused the formation of the second jet above CBP2 and the appearance of a bright loop system in between the two CBPs. In addition, we observed the writhing, kinking, and violent eruption of a small kink structure close to CBP2's west end but inside the jet base, which made the second jet brighter and broader than the first one. These features suggest that the second jet should be a blowout jet triggered by the magnetic reconnection between CBP2 and the spire of the first jet. We conclude that the two successive jets were physically connected to each other rather than a temporal coincidence, and this observation also suggests that coronal jets can be triggered by external eruptions or disturbances, besides internal magnetic activities or magnetohydrodynamic instabilities. | astrophysics |
In a previous paper, Affine symmetries of the equivariant quantum cohomology of rational homogeneous spaces, a general formula was given for multiplication by some special Schubert classes in the quantum cohomology of any homogeneous space. Although this formula is correct in the non-equivariant setting, the stated equivariant version was wrong. We provide corrections for the equivariant formula, thus giving a correct argument for the non-equivariant formula. We also give new formulas in the equivariant homology of the affine Grassmannian that could lead to Pieri-type formulas. | mathematics |
We present an investigation into the optical properties of Nd$_{4}$Ni$_{3}$O$_{8}$ at different temperatures from 300 down to 5~K over a broad frequency range. The optical conductivity at 5~K is decomposed into IR-active phonons, a far-infrared band $\alpha$, a mid-infrared band $\beta$, and a high-energy absorption edge. By comparing the measured optical conductivity to first-principles calculations and the optical response of other nickelates, we find that Nd$_{4}$Ni$_{3}$O$_{8}$ features evident charge-stripe fluctuations. While the $\alpha$ band may be related to impurities, the $\beta$ band and the high-frequency absorption edge can be attributed to electronic transitions between the gapped Ni-$d_{x^2-y^2}$ bands due to fluctuating charge stripes and the onset of transitions involving other high-energy bands, respectively. Furthermore, an analysis of the temperature-dependent optical spectral weight reveals a $T^{2}$ law, which is likely to originate from strong correlation effects. | condensed matter |
In dark matter axion searches, quantum uncertainty manifests as a fundamental noise source, limiting the measurement of the quadrature observables used for detection. We use vacuum squeezing to circumvent the quantum limit in a search for a new particle. By preparing a microwave-frequency electromagnetic field in a squeezed state and near-noiselessly reading out only the squeezed quadrature, we double the search rate for axions over a mass range favored by recent theoretical projections. We observe no signature of dark matter axions in the combined $16.96-17.12$ and $17.14-17.28\space\mu\text{eV}/c^2$ mass window for axion-photon couplings above $g_{\gamma} = 1.38\times g_{\gamma}^\text{KSVZ}$, reporting exclusion at the 90% level. | quantum physics |
Given a finitely generated group $\Gamma$ with finite generating set $R$, we introduce the notion of an $R$-directed Anosov representation. This is a weakening of the notion of Anosov representations. Our main theorem gives a procedure to construct $R$-directed Anosov representations using Fock-Goncharov positivity. As an application of our main theorem, we construct large families of primitive stable representations from $F_2$ to $\mathrm{PGL}(V)$, including non-discrete and non-faithful examples. | mathematics |
We concisely derive the chiral magnetic effect through the Wigner function approach for a chiral fermion system. We then derive the chiral magnetic effect by solving the Landau levels of chiral fermions in detail. The procedures of second quantization and ensemble averaging lead to the equation of the chiral magnetic effect for right-handed and left-handed fermion systems. The chiral magnetic effect comes only from the contribution of the lowest Landau level. We carefully analyze the lowest Landau level and find that all right-handed (chirality $+1$) fermions move along the positive z-direction and all left-handed (chirality $-1$) fermions move along the negative z-direction. From this picture, the chiral magnetic effect can be explained clearly in a microscopic way. | high energy physics theory |
The virtues of resolvent algebras, compared to other approaches for the treatment of canonical quantum systems, are exemplified by infinite systems of non-relativistic bosons. Within this framework, equilibrium states of trapped and untrapped bosons are defined on a fixed C*-algebra for all physically meaningful values of the temperature and chemical potential. Moreover, the algebra provides the tools for their analysis without having to rely on 'ad hoc' prescriptions for the test of pertinent features, such as the appearance of Bose-Einstein condensates. The method is illustrated in the case of non-interacting systems in any number of spatial dimensions and sheds new light on the appearance of condensates. Yet the framework also covers interactions and thus provides a universal basis for the analysis of bosonic systems. | quantum physics |
We present a python-based tool to detect the occultation of background sources by foreground Solar coronal mass ejections. The tool takes as input standard celestial coordinates of the source and translates those to the Helioprojective plane, and is thus well suited for use with a wide variety of background astronomical sources. This tool provides an easy means to search through a large archival dataset for such crossings and relies on the well-tested Astropy and Sunpy modules. | astrophysics |
We study the scalar, electromagnetic and gravitational perturbations of planar AdS$_4$ black holes with NUT charge. In the context of the AdS/CFT correspondence, these solutions describe a thermal quantum field theory embedded in a G\"odel-type universe with closed time-like curves. For a given temperature and NUT charge, two different planar Taub-NUT solutions exist, but we show that only the one with a positive specific heat contributes to the Euclidean saddle point in the path integral. By using the Newman-Penrose formalism, we then derive the master equations satisfied by scalar, electromagnetic and gravitational perturbations in this background, and show that the corresponding equations are separable. Interestingly, the solutions pile up in the form of Landau levels, and hence are characterized by a single quantum number $q$. We determine the appropriate boundary conditions satisfied by the master variables and using these we compute the quasinormal modes of scalar and gravitational perturbations. On the other hand, electromagnetic perturbations depend on a free parameter whose determination is problematic. We find that all the scalar and gravitational QNM frequencies lie in the lower half of the complex plane, indicating that these Taub-NUT spacetimes are stable. We discuss the implications of these results in the light of the AdS/CFT correspondence. | high energy physics theory |
Radiofrequency field inhomogeneity is a significant issue when imaging large fields of view in high- and ultrahigh-field MRI. Passive shimming with coupled coils or dielectric pads is the most common approach at 3 T. We introduce and test a light and compact metasurface that provides the same homogeneity improvement in clinical abdominal imaging at 3 T as a conventional dielectric pad. The metasurface, comprising a periodic structure of copper strips and parallel-plate capacitive elements printed on a flexible polyimide substrate, supports the propagation of slow electromagnetic waves similar to a high-permittivity slab. We compare the metasurface operating inside a transmit body birdcage coil to the state-of-the-art pad by numerical simulations and an in vivo study on healthy volunteers. Numerical simulations with different body models show that the local minimum of B1+ causing a dark void in the abdominal region is removed by the metasurface, with resulting homogeneity comparable to that of the pad and without a noticeable SAR change. In vivo results confirm a similar homogeneity improvement and demonstrate stability with respect to body mass index. The light, flexible, and cheap metasurface can replace the relatively heavy and expensive pad based on an aqueous suspension of barium titanate in abdominal imaging at 3 T. | physics |
We study the error in the number of unimodular lattice points that fall into a dilated and translated parallelogram. Using a result of Skriganov, we see that this error can be compared to an ergodic sum that involves the discrete geodesic flow over the space of unimodular lattices. With the right normalization, we show, by using tools from a previous work of Fayad and Dolgopyat, that a certain point process converges in law towards a Poisson process and deduce that the ergodic sum converges towards a centered Cauchy law when the unimodular lattice is distributed according to the normalized Haar measure. Building on this, we apply the same kind of approach, with additional difficulties, to the study of the asymptotic behaviour of the error and show that this error, normalized by $\log(t)$ with $t$ the dilation factor of the parallelogram, also converges in law towards a centered Cauchy law when the dilation parameter tends to infinity and when the lattice and the vector of translation are random. In a forthcoming article, we will show that, in the case of a ball in dimension $d$ greater than or equal to $2$, the error, normalized by $t^{\frac{d-1}{2}}$ with $t$ the dilation factor of the ball, converges in law when $t \rightarrow \infty$ and the limit law admits a moment of order $1$. | mathematics |
Natural muscles provide mobility in response to nerve impulses. Electromyography (EMG) measures the electrical activity of muscles in response to a nerve's stimulation. In the past few decades, EMG signals have been used extensively in the identification of user intention to potentially control assistive devices such as smart wheelchairs, exoskeletons, and prosthetic devices. In the design of conventional assistive devices, developers optimize multiple subsystems independently. Feature extraction and feature description are essential subsystems of this approach. Therefore, researchers have proposed various hand-crafted features to interpret EMG signals. However, the performance of conventional assistive devices is still unsatisfactory. In this paper, we propose a deep learning approach to control prosthetic hands with raw EMG signals. We use a novel deep convolutional neural network to eschew the feature-engineering step. Removing the feature extraction and feature description steps is an important step toward the paradigm of end-to-end optimization. Fine-tuning and personalization are additional advantages of our approach. The proposed approach is implemented in Python with the TensorFlow deep learning library, and it runs in real time on the general-purpose graphics processing unit of an NVIDIA Jetson TX2 developer kit. Our results demonstrate the ability of our system to predict finger positions from raw EMG signals. We anticipate our EMG-based control system to be a starting point for the design of more sophisticated prosthetic hands. For example, a pressure measurement unit can be added to transfer the perception of the environment to the user. Furthermore, our system can be modified for other prosthetic devices. | electrical engineering and systems science |
Inferring the causal effect of a treatment on an outcome in an observational study requires adjusting for observed baseline confounders to avoid bias. However, adjusting for all observed baseline covariates, when only a subset are confounders of the effect of interest, is known to yield potentially inefficient and unstable estimators of the treatment effect. Furthermore, it raises the risk of finite-sample bias and bias due to model misspecification. For these stated reasons, confounder (or covariate) selection is commonly used to determine a subset of the available covariates that is sufficient for confounding adjustment. In this article, we propose a confounder selection strategy that focuses on stable estimation of the treatment effect. In particular, when the propensity score model already includes covariates that are sufficient to adjust for confounding, then the addition of covariates that are associated with either treatment or outcome alone, but not both, should not systematically change the effect estimator. The proposal, therefore, entails first prioritizing covariates for inclusion in the propensity score model, then using a change-in-estimate approach to select the smallest adjustment set that yields a stable effect estimate. The ability of the proposal to correctly select confounders, and to ensure valid inference of the treatment effect following data-driven covariate selection, is assessed empirically and compared with existing methods using simulation studies. We demonstrate the procedure using three different publicly available datasets commonly used for causal inference. | statistics |
We show that the presence of a magnetic monopole in position space gives rise to a violation of the fermion number conservation in chiral matter. Using the chiral kinetic theory, we derive a model-independent expression of such a violation in nonequilibrium many-body systems of chiral fermions. In local thermal equilibrium at finite temperature and chemical potential, in particular, this violation is proportional to the chemical potential with a topologically quantized coefficient. These consequences are due to the interplay between the Dirac monopole in position space and the Berry monopole in momentum space. Our mechanism can be applied to study the roles of magnetic monopoles in the nonequilibrium evolution of the early Universe. | high energy physics theory |
Missing data is a crucial issue when applying machine learning algorithms to real-world datasets. Starting from the simple assumption that two batches extracted randomly from the same dataset should share the same distribution, we leverage optimal transport distances to quantify that criterion and turn it into a loss function to impute missing data values. We propose practical methods to minimize these losses using end-to-end learning, with or without parametric assumptions on the underlying distributions of values. We evaluate our methods on datasets from the UCI repository, in MCAR, MAR and MNAR settings. These experiments show that OT-based methods match or outperform state-of-the-art imputation methods, even for high percentages of missing values. | statistics
We consider a quantum-mechanical system, finite or extended, initially in its ground-state, exposed to a time-dependent potential pulse, with a slowly varying envelope and a carrier frequency $\omega_0$. By working out a rigorous solution of the time-dependent Schr\"odinger equation in the high-$\omega_0$ limit, we show that the linear response is completely suppressed after the switch-off of the pulse. We show, at the same time, that to the lowest order in $\omega_0^{-1}$, observables are given in terms of the linear density response function $\chi(\mathbf{r},\mathbf{r}',\omega)$, despite the problem's nonlinearity. We propose a new spectroscopic technique based on these findings, which we name the Nonlinear High-Frequency Pulsed Spectroscopy (NLHFPS). An analysis of the jellium slab and sphere models reveals very high surface sensitivity of NLHFPS, which produces a richer excitation spectrum than accessible within the linear-response regime. Combining the advantages of the extraordinary surface sensitivity, the absence of constraints by the conventional dipole selection rules, and the ease of theoretical interpretation by means of the linear response time-dependent density functional theory, NLHFPS has the potential to evolve into a powerful characterization method in nanoscience and nanotechnology. | condensed matter
Quantum thermodynamics aims at investigating both the emergence and the limits of the laws of thermodynamics from a quantum mechanical microscopic approach. In this scenario, thermodynamic processes with no heat exchange, namely, adiabatic transformations, can be implemented through quantum evolutions in closed systems, even though the notion of a closed system is always an idealization and approximation. Here, we begin by theoretically discussing thermodynamic adiabatic processes in open quantum systems, which evolve non-unitarily under decoherence due to their interaction with the surrounding environment. From a general approach for adiabatic non-unitary evolution, we establish heat and work in terms of the underlying Liouville superoperator governing the quantum dynamics. As a consequence, we derive the conditions under which an adiabatic open-system quantum dynamics implies the absence of heat exchange, providing a connection between quantum and thermal adiabaticity. Moreover, we determine families of decohering systems exhibiting the same maximal heat exchange, which define classes of thermodynamic adiabaticity in open systems. We then approach the problem experimentally using a hyperfine energy-level quantum bit of an Ytterbium $^{171}$Yb$^+$ trapped ion, which provides a work substance for thermodynamic processes, allowing for the analysis of heat and internal energy throughout a controllable engineered dynamics. | quantum physics
We address continuous-time quantum walks on graphs, and discuss whether and how quantum-limited measurements on the walker may extract information on the tunnelling amplitude between the nodes of the graphs. For a few remarkable families of graphs, we evaluate the ultimate quantum bound to precision, i.e. we compute the quantum Fisher information (QFI), and assess the performances of incomplete measurements, i.e. measurements performed on a subset of the graph's nodes. We also optimize the QFI over the initial preparation of the walker and find the optimal measurement achieving the ultimate precision in each case. As the topology of the graph is changed, a non-trivial interplay between the connectivity and the achievable precision is uncovered. | quantum physics |
We successfully demonstrated experimentally the electrical-field-mediated control of the spin of electrons confined in an SOI Quantum Dot (QD) device fabricated with a standard CMOS process flow. Furthermore, we show that the Back-Gate control in SOI devices enables switching a quantum bit (qubit) between an electrically-addressable, yet charge noise-sensitive configuration, and a protected configuration. | condensed matter |
In this paper we introduce a Transformer-based approach to video object segmentation (VOS). To address compounding error and scalability issues of prior work, we propose a scalable, end-to-end method for VOS called Sparse Spatiotemporal Transformers (SST). SST extracts per-pixel representations for each object in a video using sparse attention over spatiotemporal features. Our attention-based formulation for VOS allows a model to learn to attend over a history of multiple frames and provides suitable inductive bias for performing correspondence-like computations necessary for solving motion segmentation. We demonstrate the effectiveness of attention-based networks over recurrent networks in the spatiotemporal domain. Our method achieves competitive results on YouTube-VOS and DAVIS 2017 with improved scalability and robustness to occlusions compared with the state of the art. Code is available at https://github.com/dukebw/SSTVOS. | computer science
We consider the effects of electron-hole interaction, 2D confinement and applied electric field on direct allowed transitions in III-V semiconductors, with InGaAs as a study case. Instead of the Coulomb interaction, we use a Gaussian potential. It is finite at the origin and has a finite effective range, which allows for a more efficient numerical solution of the Schr\"{o}dinger equation. Yet, we can expect electroabsorption phenomena to remain qualitatively similar to the ones observed for Coulomb excitons. Moreover, we use variation of parameters to fit both the position and magnitude of the first absorption peak to the Coulomb case. We combine and compare several numerical and approximate methods, including spectral expansion, finite differences, separation of variables and variational approximation. We find that the separation of variables approach works only for quantum well widths smaller than the exciton radius. After separation of variables, a finite-difference solution of the resulting interaction equation gives a much better agreement with the full spectral solution than the naive variational approximation. We observe that the electric field has a critical effect on the magnitudes of exciton absorption peaks, suppressing previously allowed transitions and enhancing forbidden ones. Moreover, for excited states, initially suppressed transitions are enhanced again at higher field strengths. | physics
Embedding is a useful technique to project a high-dimensional feature into a low-dimensional space, and it has many successful applications including link prediction, node classification and natural language processing. Current approaches mainly focus on static data, which usually lead to unsatisfactory performance in applications involving large changes over time. How to dynamically characterize the variation of the embedded features is still largely unexplored. In this paper, we introduce a dynamic variational embedding (DVE) approach for sequence-aware data based on recent advances in recurrent neural networks. DVE can model the node's intrinsic nature and temporal variation explicitly and simultaneously, which are crucial for exploration. We further apply DVE to sequence-aware recommender systems, and develop an end-to-end neural architecture for link prediction. | computer science |
It has been suggested that the cosmic history might repeat in cycles, with an infinite series of similar aeons in the past and the future. Here, we instead propose that the cosmic history repeats itself exactly, constructing a universe on a periodic temporal history, which we call Periodic Time Cosmology. In particular, the primordial power spectrum, convolved with the transfer function throughout the cosmic history, would form the next aeon's primordial power spectrum. By matching the big bang to the infinite future using a conformal rescaling (a la Penrose), we uniquely determine the primordial power spectrum, in terms of the transfer function up to two free parameters. While nearly scale invariant with a red tilt on large scales, using Planck and Baryonic Acoustic Oscillation observations, we find the minimal model is disfavoured compared to a power-law power spectrum at $5.1\sigma$. However, extensions of $\Lambda$CDM cosmic history change the large scale transfer function and can provide better relative fits to the data. For example, the best fit seven parameter model for our Periodic Time Cosmology, with $w=-1.024$ for dark energy equation of state, is only disfavoured relative to a power-law power spectrum (with the same number of parameters) at $1.8\sigma$ level. Therefore, consistency between cosmic history and initial conditions provides a viable description of cosmological observations in the context of Periodic Time Cosmology. | astrophysics |
Quasiparticle states in Dirac systems with complex impurity potentials are investigated. It is shown that an impurity site with loss leads to a nontrivial distribution of the local density of states (LDOS). While the real part of defect potential induces a well-pronounced peak in the density of states (DOS), the DOS is either weakly enhanced at small frequencies or even forms a peak at the zero frequency for a lattice in the case of non-Hermitian impurity. As for the spatial distribution of the LDOS, it is enhanced in the vicinity of impurity but shows a dip at a defect itself when the potential is sufficiently strong. The results for a two-dimensional hexagonal lattice demonstrate the characteristic trigonal-shaped profile for the LDOS. The latter acquires a double-trigonal pattern in the case of two defects placed at neighboring sites. The effects of non-Hermitian impurities could be tested both in photonic lattices and certain condensed matter setups. | condensed matter |
We evaluate analytically the quantum discord for a large family of multi-qubit states. It is interesting to note that the quantum discord of three qubits and five qubits is the same, as is the quantum discord of two qubits and six qubits. We discover that the quantum discord of this family of states falls into three categories. The level surfaces of the quantum discord in the three categories are shown through images. Furthermore, we investigate the dynamic behavior of quantum discord under decoherence. For odd-partite systems, we prove that the frozen phenomenon of quantum discord does not exist under the phase-flip channel, while it can be found in even-partite systems. | quantum physics
In this article we study a theory of support varieties over a skew complete intersection $R$, i.e. a skew polynomial ring modulo an ideal generated by a sequence of regular normal elements. We compute the derived braided Hochschild cohomology of $R$ relative to the skew polynomial ring and show its action on $\mathrm{Ext}_R(M,N)$ is noetherian for finitely generated $R$-modules $M$ and $N$ respecting the braiding of $R$. When the parameters defining the skew polynomial ring are roots of unity we use this action to define a support theory. In this setting applications include a proof of the Generalized Auslander-Reiten Conjecture and that $R$ possesses symmetric complexity. | mathematics |
The long-standing quest to determine the superconducting order of Sr$_2$RuO$_4$ (SRO) has received renewed attention after recent nuclear magnetic resonance (NMR) Knight shift experiments have cast doubt on the possibility of spin-triplet pairing in the superconducting state. As a putative solution, encompassing a body of experiments conducted over the years, a $d+ig$-wave order parameter caused by an accidental near-degeneracy has been suggested [S. A. Kivelson et al., npj Quantum Materials $\bf{5}$, 43 (2020)]. Here we develop a general Ginzburg--Landau theory for multiband superconductors. We apply the theory to SRO and predict the relative size of the order parameter components. The heat capacity jump expected at the onset of the second order parameter component is found to be above the current threshold deduced by the experimental absence of a second jump. Our results tightly restrict theories of $d+ig$ order, and other candidates caused by a near-degeneracy, in SRO. We discuss possible solutions to the problem. | condensed matter |
Janus solutions are constructed in $d=3$, ${\cal N}=8$ gauged supergravity. We find explicit half-BPS solutions where two scalars in the $SO(8,1)/SO(8)$ coset have a nontrivial profile. These solutions correspond on the CFT side to an interface with a position-dependent expectation value for a relevant operator and a source which jumps across the interface for a marginal operator. | high energy physics theory |
Accurate chemical abundance measurements of X-ray emitting atmospheres pervading massive galaxies, galaxy groups, and clusters provide essential information on the star formation and chemical enrichment histories of these large scale structures. Although the collisionally ionised nature of the intracluster medium (ICM) makes these abundance measurements relatively easy to derive, underlying spectral models can rely on different atomic codes, which brings additional uncertainties on the inferred abundances. Here, we provide a simple, yet comprehensive comparison between the codes SPEXACT v3.0.5 (cie model) and AtomDB v3.0.9 (vapec model) in the case of moderate, CCD-like resolution spectroscopy. We show that, in cool plasmas ($kT \lesssim 2$ keV), systematic differences up to $\sim$20% for the Fe abundance and $\sim$45% for the O/Fe, Mg/Fe, Si/Fe, and S/Fe ratios may still occur. Importantly, these discrepancies are also found to be instrument-dependent, at least for the absolute Fe abundance. Future improvements in these two codes will be necessary to better address questions on the ICM enrichment. | astrophysics |
We calculate contributions to the one-loop renormalization in the spinor sector of the minimal Lorentz-violating extended QED in the second order in Lorentz-breaking parameters. From the renormalizability viewpoint, we show that the inclusion of some of the Lorentz-breaking terms in the model is linked to the presence of others. We also demonstrate that the Ward identities are satisfied up to this order. | high energy physics theory |
We give a construction of general holomorphic quarter BPS operators in $ \mathcal{N}=4$ SYM at weak coupling with $U(N)$ gauge group at finite $N$. The construction employs the M\"obius inversion formula for set partitions, applied to multi-symmetric functions, alongside computations in the group algebras of symmetric groups. We present a computational algorithm which produces an orthogonal basis for the physical inner product on the space of holomorphic operators. The basis is labelled by a $U(2)$ Young diagram, a $U(N)$ Young diagram and an additional plethystic multiplicity label. We describe precision counting results of quarter BPS states which are expected to be reproducible from dual computations with giant gravitons in the bulk, including a symmetry relating sphere and AdS giants within the quarter BPS sector. In the case $n \leq N$ ($n$ being the dimension of the composite operator) the construction is analytic, using multi-symmetric functions and $U(2)$ Clebsch-Gordan coefficients. Counting and correlators of the BPS operators can be encoded in a two-dimensional topological field theory based on permutation algebras and equipped with appropriate defects. | high energy physics theory |
Many famous combinatorial numbers can be placed in the following generalized triangular array $[T_{n,k}]_{n,k\ge 0}$ satisfying the recurrence relation: \begin{equation*} T_{n,k}=\lambda(a_0n+a_1k+a_2)T_{n-1,k}+(b_0n+b_1k+b_2)T_{n-1,k-1}+\frac{d(da_1-b_1)}{\lambda}(n-k+1)T_{n-1,k-2} \end{equation*} with $T_{0,0}=1$ and $T_{n,k}=0$ unless $0\le k\le n$. For $n\geq0$, denote by $T_n(q)$ its row-generating functions. In this paper, we consider the $\textbf{x}$-Stieltjes moment property and $3$-$\textbf{x}$-log-convexity of $(T_n(q))_{n\geq0}$ and the linear transformation of $T_{n,k}$ preserving Stieltjes moment properties of sequences. Using total positivity, we develop various criteria for the $\textbf{x}$-Stieltjes moment property and $r$-$\textbf{x}$-log-convexity based on a four-term recursive array and Jacobi continued fraction expressions of generating functions. We apply our criteria to the $\textbf{x}$-Stieltjes moment property and $3$-$\textbf{x}$-log-convexity of $T_n(q)$ after we obtain the Jacobi continued fraction expression of $\sum_{n\geq0}T_n(q)t^n$. With the help of a criterion of Wang and Zhu [Adv. in Appl. Math. (2016)], we show that the corresponding linear transformation of $T_{n,k}$ preserves Stieltjes moment properties of sequences. Finally, we present some related famous examples including factorial numbers, Whitney numbers, Stirling permutations, minimax trees and peak statistics. | mathematics
We numerically compute the density of states (DOS) of interacting disordered zigzag graphene nanoribbon (ZGNR) having midgap states showing $e/2$ fractional edge charges. The computed Hartree-Fock DOS is linear at the critical disorder strength where the gap vanishes. This implies an $I\mbox{-}V$ curve of $I\propto V^2$. Thus, $I\mbox{-}V$ curve measurement may yield evidence of fractional charges in interacting disordered ZGNR. We show that even a weak disorder potential acts as a singular perturbation on zigzag edge electronic states, producing drastic changes in the energy spectrum. Spin-charge separation and fractional charges play a key role in the reconstruction of edge antiferromagnetism. Our results show that an interacting disordered ZGNR is a topologically ordered Mott-Anderson insulator. | condensed matter |
We have developed spin-resolved resonant electron energy-loss spectroscopy (SR-rEELS) in the primary energy of 0.3--1.5 keV, which corresponds to the core excitations of $2p\to3d$ absorption of transition metals and $3d\to4f$ absorption of rare earths. Element-specific carrier and valence plasmons can be observed by using the resonance enhancement of core absorptions. Spin-resolved plasmons were also observed using a spin-polarized electron source from a GaAs/GaAsP strained superlattice photocathode. Furthermore, this primary energy corresponds to an electron penetration depth of 1 to 10 nm and thus provides bulk-sensitive EELS spectra. The methodology is expected to complement the element-selective observation of elementary excitations by resonant inelastic x-ray scattering and resonant photoelectron spectroscopy. | physics |
We propose a new factorized approach to QED radiative corrections (RCs) for inclusive and semi-inclusive deep-inelastic scattering to systematically account for QED and QCD radiation contributions to both processes on equal footing. The new treatment utilizes factorization to achieve this by resumming logarithmically enhanced QED radiation into universal lepton distribution and fragmentation (or jet) functions. Our framework provides a uniform treatment of RCs for extracting three-dimensional hadron structure from high-energy lepton-hadron scattering at current and future facilities, such as the Electron-Ion Collider. | high energy physics phenomenology |
The description of the long-term dynamics of highly elliptic orbits under third-body perturbations may require an expansion of the disturbing function in series of the semi-major axes ratio up to higher orders. To avoid dealing with long series in trigonometric functions, we refer the motion to the apsidal frame and efficiently remove the short-period effects of this expansion in vectorial form up to an arbitrary order. We then provide the variation equations of the two fundamental vectors of the Keplerian motion by analogous vectorial recurrences, which are free from singularities and take a compact form useful for the numerical propagation of the flow in mean elements. | astrophysics |
We propose an analogue of Dubrovin's conjecture for the case where Fano manifolds have quantum connections of exponential type. It includes the case where the quantum cohomology rings are not necessarily semisimple. The conjecture is described as an isomorphism of two linear algebraic structures, which we call "mutation systems". Given such a Fano manifold $X$, one of the structures is given by the Stokes structure of the quantum connection of $X$, and the other is given by a semiorthogonal decomposition of the derived category of coherent sheaves on $X$. We also prove the conjecture for a class of smooth Fano complete intersections in a projective space. | mathematics |
Graph-based causal discovery methods aim to capture conditional independencies consistent with the observed data and differentiate causal relationships from indirect or induced ones. Successful construction of graphical models of data depends on the assumption of causal sufficiency: that is, that all confounding variables are measured. When this assumption is not met, learned graphical structures may become arbitrarily incorrect and effects implied by such models may be wrongly attributed, carry the wrong magnitude, or misrepresent the direction of correlation. Wide application of graphical models to increasingly less curated "big data" draws renewed attention to the unobserved confounder problem. We present a novel method that aims to control for the latent space when estimating a DAG by iteratively deriving proxies for the latent space from the residuals of the inferred model. Under mild assumptions, our method improves structural inference of Gaussian graphical models and enhances identifiability of the causal effect. In addition, when the model is being used to predict outcomes, it un-confounds the coefficients on the parents of the outcomes and leads to improved predictive performance when the out-of-sample regime is very different from the training data. We show that any improvement of prediction of an outcome is intrinsically capped and cannot rise beyond a certain limit as compared to the confounded model. We extend our methodology beyond GGMs to ordinal variables and nonlinear cases. Our R package provides both PCA and autoencoder implementations of the methodology, suitable for GGMs with some guarantees and for better performance in general cases but without such guarantees. | statistics
External pilot trials of complex interventions are used to help determine if and how a confirmatory trial should be undertaken, providing estimates of parameters such as recruitment, retention and adherence rates. The decision to progress to the confirmatory trial is typically made by comparing these estimates to pre-specified thresholds known as progression criteria, although the statistical properties of such decision rules are rarely assessed. Such assessment is complicated by several methodological challenges, including the simultaneous evaluation of multiple endpoints, complex multi-level models, small sample sizes, and uncertainty in nuisance parameters. In response to these challenges, we describe a Bayesian approach to the design and analysis of external pilot trials. We show how progression decisions can be made by minimising the expected value of a loss function, defined over the whole parameter space to allow for preferences and trade-offs between multiple parameters to be articulated and used in the decision making process. The assessment of preferences is kept feasible by using a piecewise constant parameterisation of the loss function, the parameters of which are chosen at the design stage to lead to desirable operating characteristics. We describe a flexible, yet computationally intensive, nested Monte Carlo algorithm for estimating operating characteristics. The method is used to revisit the design of an external pilot trial of a complex intervention designed to increase the physical activity of care home residents. | statistics |
In this paper, a special decision surface for the weakly-supervised sound event detection (SED) and a disentangled feature (DF) for the multi-label problem in polyphonic SED are proposed. We approach SED as a multiple instance learning (MIL) problem and utilize a neural network framework with a pooling module to solve it. General MIL approaches include two kinds: the instance-level approaches and embedding-level approaches. We present a method of generating instance-level probabilities for the embedding-level approaches, which tend to perform better than the instance-level approaches in terms of bag-level classification but cannot provide instance-level probabilities in current approaches. Moreover, we further propose a specialized decision surface (SDS) for the embedding-level attention pooling. We analyze and explain why an embedding-level attention module with SDS is better than other typical pooling modules from the perspective of the high-level feature space. As for the problem of the unbalanced dataset and the co-occurrence of multiple categories in the polyphonic event detection task, we propose a DF to reduce interference among categories, which optimizes the high-level feature space by disentangling it based on class-wise identifiable information and obtaining multiple different subspaces. Experiments on the dataset of DCASE 2018 Task 4 show that the proposed SDS and DF significantly improve the detection performance of the embedding-level MIL approach with an attention pooling module and outperform the first place system in the challenge by 6.6 percentage points. | computer science
In this paper we study classical solutions to the zero--flux attraction--repulsion chemotaxis--system \begin{equation}\label{ProblemAbstract} \tag{$\Diamond$} \begin{cases} u_{ t}=\Delta u -\chi \nabla \cdot (u\nabla v)+\xi \nabla \cdot (u\nabla w) & \textrm{in }\Omega\times (0,t^*), \\ 0=\Delta v+\alpha u-\beta v & \textrm{in } \Omega\times (0,t^*),\\ 0=\Delta w+\gamma u-\delta w & \textrm{in } \Omega\times (0,t^*),\\ \end{cases} \end{equation} where $\Omega$ is a smooth and bounded domain of $\mathbb{R}^2$, $t^*$ is the blow--up time and $\alpha,\beta,\gamma,\delta,\chi,\xi$ are positive real numbers. From the literature it is known that under a proper interplay between the above parameters and suitable smallness assumptions on the initial data $u({\bf x},0)=u_0\in C^0(\bar{\Omega})$, system \eqref{ProblemAbstract} has a unique classical solution which becomes unbounded as $t\nearrow t^*$. The main result of this investigation is to provide an explicit lower bound for $t^*$ estimated in terms of $\int_\Omega u_0^2 d{\bf x}$ and attained by means of well--established techniques based on ordinary differential inequalities. | mathematics |
Self-consistent-field (SCF) approximations formulated using Hartree-Fock (HF) or Kohn-Sham Density Functional Theory (KS-DFT) both have the potential to yield multiple solutions. However, the formal relationship between multiple solutions identified using HF or KS-DFT remains generally unknown. We investigate the connection between multiple SCF solutions for HF or KS-DFT by introducing a parametrised functional that scales between the two representations. Using the hydrogen molecule and a model of electron transfer, we continuously map multiple solutions from the HF potential to a KS-DFT description. We discover that multiple solutions can coalesce and vanish as the functional changes, forming a direct analogy with the disappearance of real HF solutions along a change in molecular structure. To overcome this disappearance of solutions, we develop a complex-analytic extension of DFT - the "holomorphic DFT" approach - that allows every SCF stationary state to be analytically continued across all molecular structures and exchange-correlation functionals. | physics |
In this paper we study the loss of precision of numerical methods discretizing anisotropic problems and propose alternative approaches free from this drawback. The deterioration of the accuracy is observed when the coordinates and the mesh are unrelated to the anisotropy direction. While this issue is commonly addressed by increasing the scheme approximation order, we demonstrate that, though the gains are evident, the precision of these numerical methods remains far from optimal and is limited to moderate anisotropy strengths. This is analysed and explained by an amplification of the approximation error related to the anisotropy strength. We propose an approach consisting in the introduction of an auxiliary variable aimed at removing the amplification of the discretization error. By this means the precision of the numerical approximation is demonstrated to be independent of the anisotropy strength. | mathematics
We have measured the hot-electron induced demagnetization of a [Co/Pt]2 multilayer in M(x nm)/Cu(100 nm)/[Co(0.6 nm)/Pt(1.1 nm)]2 samples depending on the nature of the capping layer M and its thickness x. We found out that a Pt layer is more efficient than [Co/Pt]X, Cu or MgO layers in converting IR photon pulses into hot-electron pulses at a given laser power. We also found out that the maximum relative demagnetization amplitude is reached for M(x) = Pt (7 nm). Our experimental results show qualitative agreement with numerical simulations based on the superdiffusive spin transport model. We concluded that the maximum relative demagnetization amplitude, which corresponds to the highest photon conversion into hot-electrons, is an interplay between the IR penetration depth and the hot-electron inelastic mean free path within the capping layer. | condensed matter |
We perform micromagnetic simulations to investigate the propagation of spin-wave beams through spin-wave optical elements. Despite spin-wave propagation in magnetic media being strongly anisotropic, we use axicons to excite spin-wave Bessel-Gaussian beams and gradient-index lenses to focus spin waves in analogy to conventional optics with light in isotropic media. Moreover, we demonstrate spin-wave Fourier optics using gradient-index lenses. These results contribute to the growing field of spin-wave optics. | physics
M82 hosts two well-known ultraluminous X-ray sources (ULXs): X-1, an intermediate-mass black hole (IMBH) candidate, and X-2, an ultraluminous X-ray pulsar (ULXP). Here we present a broadband X-ray spectral analysis of both sources based on ten observations made simultaneously with Chandra and NuSTAR. Chandra provides the high spatial resolution to resolve the crowded field in the 0.5--8 keV band, and NuSTAR provides the sensitive hard X-ray spectral data, extending the bandpass of our study above 10 keV. The observations, taken in the period 2015--2016, cover a period of flaring from X-1, allowing us to study the spectral evolution of this source with luminosity. During four of these observations, X-2 was found to be at a low flux level, allowing an unambiguous view of the emission from X-1. We find that the broadband X-ray emission from X-1 is consistent with that seen in other ULXs observed in detail with NuSTAR, with a spectrum that includes a broadened disk-like component and a high-energy tail. We find that the luminosity of the disk scales with inner disk temperature as L~T^-3/2, contrary to expectations of a standard accretion disk and previous results. These findings rule out a thermal state for sub-Eddington accretion and therefore do not support M82 X-1 as an IMBH candidate. We also find evidence that the neutral column density of the material in the line of sight increases with L$_X$, perhaps due to an increased mass outflow with accretion rate. For X-2, we do not find any significant spectral evolution, but we find the spectral parameters of the phase-averaged broadband emission are consistent with the pulsed emission at the highest X-ray luminosities. | astrophysics
We consider $m$-divisible non-crossing partitions of $\{1,2,\ldots,mn\}$ with the property that for some $t\leq n$ no block contains more than one of the first $t$ integers. We give a closed formula for the number of multi-chains of such non-crossing partitions with prescribed number of blocks. Building on this result, we compute Chapoton's $M$-triangle in this setting and conjecture a combinatorial interpretation for the $H$-triangle. This conjecture is proved for $m=1$. | mathematics |
We demonstrate a quantum key distribution implementation over deployed dark telecom fibers with polarisation-entangled photons generated at the O-band. One photon of each pair is propagated through 10 km of deployed fiber while the other is detected locally. Polarisation drifts experienced by the photons propagating through the fibers are compensated with liquid crystal variable retarders. This ensures continuous and stable QKD operation with an average QBER of 6.4% and a final key rate of 109 bits/s. | quantum physics
We present the most detailed data-driven exploration of cloud opacity in a substellar object to date. We have tested over 60 combinations of cloud composition and structure, particle size distribution, scattering model, and gas phase composition assumptions against archival $1-15 {\rm \mu m}$ spectroscopy for the unusually red L4.5~dwarf 2MASSW~J2224438-015852 using the Brewster retrieval framework. We find that, within our framework, a model that includes enstatite and quartz cloud layers at shallow pressures, combined with a deep iron cloud deck, fits the data best. This model assumes a Hansen distribution for the particle sizes of each cloud, and Mie scattering. We retrieved particle effective radii of $\log_{10} a {\rm (\mu m)} = -1.41^{+0.18}_{-0.17}$ for enstatite, $-0.44^{+0.04}_{-0.20}$ for quartz, and $-0.77^{+0.05}_{-0.06}$ for iron. Our inferred cloud column densities suggest ${\rm (Mg/Si)} = 0.69^{+0.06}_{-0.08}$ if there are no other sinks for magnesium or silicon. Models that include forsterite alongside, or in place of, these cloud species are strongly rejected in favour of the above combination. We estimate a radius of $0.75 \pm 0.02$ Rjup, which is considerably smaller than predicted by evolutionary models for a field age object with the luminosity of 2M2224-0158. Models which assume vertically constant gas fractions are consistently preferred over models that assume thermochemical equilibrium. From our retrieved gas fractions we infer ${\rm [M/H]} = +0.38^{+0.07}_{-0.06}$ and ${\rm C/O} = 0.83^{+0.06}_{-0.07}$. Both these values are towards the upper end of the stellar distribution in the Solar neighbourhood, and are mutually consistent in this context. A composition toward the extremes of the local distribution is consistent with this target being an outlier in the ultracool dwarf population. | astrophysics
We present fully analytic results for all master integrals for the three-loop banana graph with four equal and non-zero masses. The results are remarkably simple and all integrals are expressed as linear combinations of iterated integrals of modular forms of uniform weight for the same congruence subgroup as for the two-loop equal-mass sunrise graph. We also show how to write the results in terms of elliptic polylogarithms evaluated at rational points. | high energy physics theory |
Recently, optical wireless communication (OWC) techniques based on a camera or an image sensor receiver have attracted particular interest in areas such as the internet of things, indoor localization, motion capture, and intelligent transportation systems. As a supplementary technique to high-speed OWC based on photo-detectors, communications hinging on image sensors as receivers do not need much modification to the current infrastructure, such that the implementation complexity and cost are quite low. Therefore, in this paper, we present a comprehensive survey of optical camera communication (OCC) techniques, and their use in localization, navigation, and motion capture. This survey is distinguishable from the existing reviews on this topic by covering multiple aspects of OCC and its various applications. The first part of the paper focuses on the standardization, channel characterization, modulation, coding, and synchronization of OCC systems while the second part of the article presents the literature on OCC based localization, navigation, and motion capture. Finally, in the last part of the paper, we present the challenges and future research directions of OCC. | electrical engineering and systems science
Spectral decomposition of the covariance operator is one of the main building blocks in the theory and applications of Gaussian processes. Unfortunately it is notoriously hard to derive in a closed form. In this paper we consider the eigenproblem for Gaussian bridges. Given a {\em base} process, its bridge is obtained by conditioning the trajectories to start and terminate at the given points. What can be said about the spectrum of a bridge, given the spectrum of its base process? We show how this question can be answered asymptotically for a family of processes, including the fractional Brownian motion. | mathematics |
Mass cytometry technology enables the simultaneous measurement of over 40 proteins on single cells. This has helped immunologists to increase their understanding of heterogeneity, complexity, and lineage relationships of white blood cells. Current statistical methods often collapse the rich single-cell data into summary statistics before proceeding with downstream analysis, discarding the information in these multivariate datasets. In this article, our aim is to exhibit the use of statistical analyses on the raw, uncompressed data thus improving replicability, and exposing multivariate patterns and their associated uncertainty profiles. We show that multivariate generative models are a valid alternative to univariate hypothesis testing. We propose two models: a multivariate Poisson log-normal mixed model and a logistic linear mixed model. We show that these models are complementary and that either model can account for different confounders. We use Hamiltonian Monte Carlo to provide Bayesian uncertainty quantification. Our models applied to a recent pregnancy study successfully reproduce key findings while quantifying increased overall protein-to-protein correlations between first and third trimester. | statistics |
Stars that pass too close to a super-massive black hole may be disrupted by strong tidal forces. OGLE16aaa is one such tidal disruption event (TDE) which rapidly brightened and peaked in the optical/UV bands in early 2016 and subsequently decayed over the rest of the year. OGLE16aaa was detected in an XMM-Newton X-ray observation on June 9, 2016 with a flux slightly below the Swift/XRT upper limits obtained during the optical light curve peak. Between June 16-21, 2016, Swift/XRT also detected OGLE16aaa and based on the stacked spectrum, we could infer that the X-ray luminosity had jumped up by more than a factor of ten in just one week. No brightening signal was seen in the simultaneous optical/UV data to cause the X-ray luminosity to exceed the optical/UV one. A further XMM-Newton observation on November 30, 2016 showed that almost a year after the optical/UV peak, the X-ray emission was still at an elevated level, while the optical/UV flux decay had already leveled off to values comparable to those of the host galaxy. In all X-ray observations, the spectra were nicely modeled with a 50-70 eV thermal component with no intrinsic absorption, with a weak X-ray tail seen only in the November 30 XMM-Newton observation. The late-time X-ray behavior of OGLE16aaa strongly resembles the tidal disruption events ASASSN-15oi and AT2019azh. We were able to pinpoint the time delay between the initial optical TDE onset and the X-ray brightening to $182 \pm 5$ days, which may possibly represent the timescale between the initial circularization of the disrupted star around the super-massive black hole and the subsequent delayed accretion. Alternatively, the delayed X-ray brightening could be related to a rapid clearing of a thick envelope that covers the central X-ray engine during the first six months. | astrophysics |
In this paper we introduce a family of stochastic gradient estimation techniques based on the perturbative expansion around the mean of the sampling distribution. We characterize the bias and variance of the resulting Taylor-corrected estimators using the Lagrange error formula. Furthermore, we introduce a family of variance reduction techniques that can be applied to other gradient estimators. Finally, we show that these new perturbative methods can be extended to discrete functions using analytic continuation. Using this technique, we derive a new gradient descent method for training stochastic networks with binary weights. In our experiments, we show that the perturbative correction improves the convergence of stochastic variational inference both in the continuous and in the discrete case. | statistics
Ultrasound imaging is safe, relatively affordable, and capable of real-time performance. One application of this technology is to visualize and characterize human tongue shape and motion during real-time speech to study healthy or impaired speech production. Due to the noisy, low-contrast nature of ultrasound images, recognizing organ shapes such as the tongue surface (dorsum) may require expertise that non-expert users lack. To alleviate this difficulty for quantitative analysis of tongue shape and motion, the tongue surface can be extracted, tracked, and visualized instead of the whole tongue region. Delineating the tongue surface from each frame is a cumbersome, subjective, and error-prone task. Furthermore, the rapidity and complexity of tongue gestures have made it a challenging task, and manual segmentation is not a feasible solution for real-time applications. Employing the power of state-of-the-art deep neural network models and training techniques, it is feasible to implement new fully-automatic, accurate, and robust segmentation methods capable of real-time performance and applicable to tracking tongue contours during speech. This paper presents two novel deep neural network models, named BowNet and wBowNet, that benefit from the global prediction ability of encoding-decoding models, integrated multi-scale contextual information, and the full-resolution (local) extraction capability of dilated convolutions. Experimental results using several ultrasound tongue image datasets revealed that combining localization and globalization searching could improve prediction results significantly. Assessment of the BowNet models through both qualitative and quantitative studies showed outstanding performance in terms of accuracy and robustness in comparison with similar techniques. | electrical engineering and systems science
Here we report the effect of Ru doping on the structural and superconducting properties of the telluride chalcogenide CuIr2Te4. XRD results suggest that CuIr2-xRuxTe4 maintains the disordered trigonal structure with space group P3m1 (No. 164) for x less than 0.3. The lattice constants, a and c, both decrease with increasing Ru content. Temperature-dependent resistivity, magnetic susceptibility and specific-heat measurements are performed to characterize the superconducting properties systematically. Our results suggest that the optimal doping level for superconductivity in CuIr2-xRuxTe4 is x = 0.05, where Tc is 2.79 K, the Sommerfeld constant gamma is 11.52 mJ mol-1 K-2, and the specific-heat anomaly at the superconducting transition is approximately 1.51, which is higher than the BCS value of 1.43, indicating that CuIr1.95Ru0.05Te4 is a strongly electron-phonon coupled superconductor. The values of the lower critical field and upper critical field calculated from isothermal magnetization and magneto-transport measurements are 0.98 kOe and 2.47 kOe respectively, signifying that the compound is clearly a type-II superconductor. Finally, a dome-like superconducting Tc vs. x phase diagram is established, in which the charge density wave disappears at x = 0.03 while the superconducting transition temperature (Tc) rises until it reaches its peak at x = 0.05 and then decreases as x approaches 0.3. This competition between the CDW and superconductivity could be caused by the tuning of the Fermi surface and density of states with Ru chemical doping. | condensed matter
We present a procedure to accelerate the relaxation of an open quantum system towards its equilibrium state. The control protocol, termed Shortcut to Equilibration, is obtained by reverse-engineering the non-adiabatic master equation. This is a non-unitary control task aimed at rapidly changing the entropy of the system. Such a protocol serves as a shortcut to an abrupt change in the Hamiltonian, i.e., a quench. As an example, we study the thermalization of a particle in a harmonic well. We observe that for short protocols there is a three orders of magnitude improvement in accuracy. | quantum physics |
In this note, we consider entanglement and Renyi entropies for spatial subsystems of a boundary conformal field theory (BCFT) or of a CFT in a state constructed using a Euclidean BCFT path integral. Holographic calculations suggest that these entropies undergo phase transitions as a function of time or parameters describing the subsystem; these arise from a change in topology of the RT surface. In recent applications to black hole physics, such transitions have been seen to govern whether or not the bulk entanglement wedge of a (B)CFT region includes a portion of the black hole interior and have played a crucial role in understanding the semiclassical origin of the Page curve for evaporating black holes. In this paper, we reproduce these holographic results via direct (B)CFT calculations. Using the replica method, the entropies are related to correlation functions of twist operators in a Euclidean BCFT. These correlation functions can be expanded in various channels involving intermediate bulk or boundary operators. Under certain sparseness conditions on the spectrum and OPE coefficients of bulk and boundary operators, we show that the twist correlators are dominated by the vacuum block in a single channel, with the relevant channel depending on the position of the twists. These transitions between channels lead to the holographically observed phase transitions in entropies. | high energy physics theory
Background: Deep learning has great potential to assist with detecting and triaging critical findings such as pneumoperitoneum on medical images. To be clinically useful, the performance of this technology still needs to be validated for generalizability across different types of imaging systems. Materials and Methods: This retrospective study included 1,287 chest X-ray images of patients who underwent initial chest radiography at 13 different hospitals between 2011 and 2019. The chest X-ray images were labelled independently by four radiologist experts as positive or negative for pneumoperitoneum. State-of-the-art deep learning models (ResNet101, InceptionV3, DenseNet161, and ResNeXt101) were trained on a subset of this dataset, and the automated classification performance was evaluated on the rest of the dataset by measuring the AUC, sensitivity, and specificity for each model. Furthermore, the generalizability of these deep learning models was assessed by stratifying the test dataset according to the type of the utilized imaging systems. Results: All deep learning models performed well for identifying radiographs with pneumoperitoneum, while DenseNet161 achieved the highest AUC of 95.7%, specificity of 89.9%, and sensitivity of 91.6%. The DenseNet161 model was able to accurately classify radiographs from different imaging systems (accuracy: 90.8%), even though it was trained on images captured from a specific imaging system from a single institution. This result suggests the generalizability of our model for learning salient features in chest X-ray images to detect pneumoperitoneum, independent of the imaging system. | electrical engineering and systems science