text | label |
---|---|
In theory, Polar codes do not exhibit an error floor under successive-cancellation (SC) decoding. In practice, a frame error rate (FER) down to $10^{-12}$ has never been reported for real SC list (SCL) decoder hardware. This paper presents an asymmetric adaptive SCL (A2SCL) decoder, implemented in real hardware, for high-throughput and ultra-reliable communications. We propose to concatenate multiple SC decoders with an SCL decoder, in which the numbers of SC/SCL decoders are balanced with respect to their area and latency. In addition, a novel unequal-quantization technique is adopted. The two optimizations are crucial for improving SCL throughput within a limited chip area. As an application, we build a link-level FPGA emulation platform to measure ultra-low FERs of 3GPP NR Polar codes (with parity-check and CRC bits). It is flexible enough to support all list sizes up to $8$, code lengths up to $1024$, and arbitrary code rates. With the proposed hardware, decoding is 7000 times faster than on a CPU core. For the first time, an FER as low as $10^{-12}$ is measured and the quantization effect is analyzed. | computer science |
Bose-Einstein correlations in proton-proton collisions at the LHC are well described by a formula with two different scales. It is shown for the first time that the pions are produced by a few small-size sources distributed over a much larger area in impact parameter space occupied by the interaction amplitude. The dependence of the two radii obtained in this procedure on the charged particle density and the mean transverse momentum of the pion/hadron in the correlated pair is discussed. | high energy physics phenomenology |
There is no complete description of the emission physics during the prompt phase in gamma-ray bursts. Spectral analyses, however, indicate that many spectra are narrower than what is expected for non-thermal emission models. Here, we reanalyse the sample of 37 bursts in \citet{Yu2019} by fitting the narrowest time-resolved spectrum in each burst. We perform model comparison between a photospheric and a synchrotron emission model based on Bayesian evidence. We choose to compare the shape of the narrowest expected spectra: emission from the photosphere in a non-dissipative flow and slow-cooled synchrotron emission from a narrow electron distribution. We find that the photospheric spectral shape is preferred by $54 \pm 8 \%$ of the spectra (20/37), while $38 \pm 8 \%$ of the spectra (14/37) prefer the synchrotron spectral shape; three spectra are inconclusive. We hence conclude that GRB spectra are indeed very narrow and that more than half of the bursts have a photospheric emission episode. We also find that a third of all analysed spectra not only prefer, but are also compatible with, a non-dissipative photosphere, confirming previous similar findings. Furthermore, we notice that the spectra that prefer the photospheric model all have low-energy power-law indices $\alpha > -0.5$. This means that $\alpha$ is a good estimator of which model is preferred by the data. Finally, we argue that the spectra that statistically prefer the synchrotron model could equally well be caused by subphotospheric dissipation. If that is the case, photospheric emission during the early, prompt phase would be even more dominant. | astrophysics |
Objectives: To evaluate the performance of an Artificial Intelligence (AI) system (Pegasus, Visulytix Ltd., UK) at the detection of Diabetic Retinopathy (DR) from images captured by a handheld portable fundus camera. Methods: A cohort of 6,404 patients (~80% with diabetes mellitus) was screened for retinal diseases using a handheld portable fundus camera (Pictor Plus, Volk Optical Inc., USA) at the Mexican Advanced Imaging Laboratory for Ocular Research (MAILOR). The images were graded for DR by specialists according to the Scottish DR grading scheme. The performance of the AI system was evaluated, retrospectively, in assessing Referable DR (RDR) and Proliferative DR (PDR), and compared to the performance on a publicly available desktop camera benchmark dataset. Results: For RDR detection, Pegasus achieved an Area Under the Receiver Operating Characteristic (AUROC) curve of 89.4% (95% CI: 88.0-90.7) on the MAILOR cohort, compared to an AUROC of 98.5% (95% CI: 97.8-99.2) on the benchmark dataset. This difference was statistically significant. Moreover, no statistically significant difference was found in performance for PDR detection, with Pegasus achieving an AUROC of 94.3% (95% CI: 91.0-96.9) on the MAILOR cohort and 92.2% (95% CI: 89.4-94.8) on the benchmark dataset. Conclusions: Pegasus showed good transferability for the detection of PDR from a curated desktop fundus camera dataset to real-world clinical practice with a handheld portable fundus camera. However, there was a substantial, and statistically significant, decrease in diagnostic performance for RDR when using the handheld device. | electrical engineering and systems science |
In this paper, we study the evaporation dynamics of the Sachdev-Ye-Kitaev (SYK) model, with an initial temperature $T_\chi$, by coupling it to a thermal bath with lower temperature $T_\psi<T_\chi$ modeled by a larger SYK model. The coupling between the small system and the bath is turned on at time $t=0$. Then the system begins to evolve and finally becomes thermalized. Using the Keldysh approach, we analyze the relaxation process of the system for different temperatures and couplings. For marginal or irrelevant coupling, after a short-time energy absorption, we find a smooth thermalization of the small system where the energy relaxes before the system becomes thermalized. The relaxation rate of the effective temperature is found to be bounded by $T$, while the energy thermalization rate increases without saturation when increasing the coupling strength. On the contrary, for the relevant coupling case, both the energy and the effective temperature show oscillations. We find the frequency of these oscillations to coincide with the excitation energy of a Majorana operator. | condensed matter |
We study the four-top final state at the LHC as a probe for New Physics (NP) effects due to new particles that couple predominantly to the top quark and whose masses are below the top-quark-pair production threshold. We consider simple NP models containing a new particle with either spin 0, spin 1, or spin 2, and find benchmark points compatible with current experimental results. We find that interference effects between NP and QED amplitudes can be large, pointing out the necessity of NLO contributions to be explicitly computed and taken into account when NP is present. We examine kinematic differences between these models and the Standard Model (SM) at the parton level and the reconstructed level. In the latter case, we focus on events selected requiring two same-sign leptons and multiple jets. We investigate how the different Lorentz structure of the light NP affects the kinematic hardness, the polarization, the spin correlations, and the angular distributions of the parton-level and/or final-state particles. We find that spin-2 light NP would be identified by harder kinematics than the SM. We also show that the angular separation between the same-sign leptons is a sensitive observable for spin-0 NP. The spin-0 and spin-2 NP cases would also yield a signal in $t\bar t \gamma\gamma$ with the invariant mass of the photons indicating the mass of the new particle. The spin-1 NP would be identified through an excess in the four-top signal and slight or no modification of other observables, as for instance the lack of signal in $t\bar t \gamma\gamma$ due to the Landau-Yang theorem. We comment on the opportunities that would open from the kinematic reconstruction of some of the top quarks in the four-top state. Our results provide new handles to probe for light top-philic NP as part of the ongoing experimental program of searches for four-top production at the LHC Run 2 and beyond. | high energy physics phenomenology |
We study the out-of-equilibrium dynamics of a two-dimensional paraxial fluid of light using a near-resonant laser propagating through a hot atomic vapor. We observe a double shock-collapse instability: a shock (gradient catastrophe) for the velocity, as well as an annular collapse singularity for the density. We find experimental evidence that this instability results from the combined effect of the nonlocal photon-photon interaction and the linear photon losses. The theoretical analysis based on the method of characteristics reveals the main counterintuitive result that dissipation (photon losses) is responsible for an unexpected enhancement of the collapse instability. Detailed analytical modeling makes it possible to evaluate the nonlocality range of the interaction. The nonlocality is controlled by adjusting the atomic vapor temperature and is seen to increase dramatically when the atomic density becomes much larger than one atom per cubic wavelength. Interestingly, such a large range of the nonlocal photon-photon interaction has not been observed in an atomic vapor so far and its microscopic origin is currently unknown. | condensed matter |
This paper proposes a distributed filter for spatially interconnected systems (SISs) that accounts for missing measurements in the sensors of sub-systems. An SIS is composed of many similar sub-systems that directly interact or communicate with connective neighbors. Although the interactions are simple and tractable, the overall SIS can exhibit rich and complex behaviors. In practical applications, sensors of sub-systems in a sensor network may break down, which unexpectedly makes parts of the measurements unavailable. In this work, the distributed characteristics of SISs are described by the Andrea model, and the losses of measurements are assumed to occur with known probabilities. Experimental results confirm that this filtering method can be effectively employed for the state estimation of SISs when missing measurements occur. | electrical engineering and systems science |
We investigate, both experimentally and theoretically, the static geometric properties of a harmonically trapped Bose-Einstein condensate of ${}^6$Li$_2$ molecules in laser speckle potentials. Experimentally, we measure the in-situ column density profiles and the corresponding transverse cloud widths over many laser speckle realizations. We compare the measured widths with a theory that is non-perturbative with respect to the disorder and includes quantum fluctuations. Importantly, for small disorder strengths we find quantitative agreement with the perturbative approach of Huang and Meng, which is based on Bogoliubov theory. For strong disorder our theory perfectly reproduces the geometric mean of the measured transverse widths. However, we also observe a systematic deviation of the individual measured widths from the theoretically predicted ones. In fact, the measured cloud aspect ratio monotonically decreases with increasing disorder strength, while the theory yields a constant ratio. We attribute this discrepancy to the utilized local density approximation, whose possible failure for strong disorder suggests a potential future improvement. | condensed matter |
The generation and manipulation of ultracold atomic ensembles in the quantum regime require the application of dynamically controllable microwave fields with ultra-low noise performance. Here, we present a low-phase-noise microwave source with two independently controllable output paths. Both paths generate frequencies in the range of $6.835\,$GHz $\pm$ $25\,$MHz for hyperfine transitions in $^{87}$Rb. The presented microwave source combines two commercially available frequency synthesizers: an ultra-low-noise oscillator at $7\,$GHz and a direct digital synthesizer for radiofrequencies. We demonstrate a low integrated phase noise of $580\,\mu$rad in the range of $10\,$Hz to $100\,$kHz and fast updates of frequency, amplitude and phase in sub-$\mu$s time scales. The highly dynamic control enables the generation of shaped pulse forms and the deployment of composite pulses to suppress the influence of various noise sources. | quantum physics |
Cancelable biometric schemes generate secure biometric templates by combining user-specific tokens and biometric data. The main objective is to create irreversible, unlinkable, and revocable templates, with high accuracy in matching. In this paper, we cryptanalyze two recent cancelable biometric schemes based on a particular locality sensitive hashing function, index-of-max (IoM): Gaussian Random Projection-IoM (GRP-IoM) and Uniformly Random Permutation-IoM (URP-IoM). As originally proposed, these schemes were claimed to be resistant to reversibility, authentication, and linkability attacks under the stolen token scenario. We propose several attacks against GRP-IoM and URP-IoM, and argue that both schemes are severely vulnerable to authentication and linkability attacks. We also propose better, but not yet practical, reversibility attacks against GRP-IoM. The correctness and practical impact of our attacks are verified over the same dataset provided by the authors of these two schemes. | computer science |
The first stable object to develop in the low-mass star formation process has long been predicted to be the first hydrostatic core (FHSC). Despite much effort, it has yet to be definitively observed in nature. More specific observational signatures are required to enable observers to distinguish the FHSC from young, faint, but more evolved protostars. Here we present synthetic spectral line observations for CO, SO, CS and HCO$^+$ that were calculated from radiation (magneto)hydrodynamical models, chemical modelling and Monte Carlo radiative transfer. HCO$^+$ $(1-0)$ and SO $(8_7 - 7_6)$ spectra of the FHSC show variations for observations at a low inclination which may allow a candidate FHSC to be distinguished from a more evolved object. We find that the FHSC outflow is unlikely to be detectable with ALMA, which would discount the observed sources with slow outflows that are currently identified as candidate FHSCs. We compare the results of simulated ALMA observations with observed candidate FHSCs and recommend Oph A SM1N and N6-mm as the most promising candidates to follow up. | astrophysics |
Intensity squeezing, i.e., photon number fluctuations below the shot noise limit, is a fundamental aspect of quantum optics and has wide applications in quantum metrology. It was predicted in 1979 that intensity squeezing could be observed in resonance fluorescence from a two-level quantum system. Yet, its experimental observation in the solid state was hindered by inefficiencies in generating, collecting and detecting resonance fluorescence. Here, we report intensity squeezing in a single-mode fibre-coupled resonance fluorescence single-photon source based on a quantum dot-micropillar system. We detect pulsed single-photon streams with 22.6% system efficiency, which show sub-shot-noise intensity fluctuations with an intensity squeezing of 0.59 dB. We estimate a corrected squeezing of 3.29 dB at the first lens. The observed intensity squeezing provides the last piece of the fundamental picture of resonance fluorescence; it can be used as a new standard for optical radiation and in scalable quantum metrology with indistinguishable single photons. | quantum physics |
We study the van der Waals interaction between Rydberg alkali-metal atoms with fine structure ($n^2L_j$; $L\leq 2$) and heteronuclear alkali-metal dimers in the ground rovibrational state ($X^1\Sigma^+$; $v=0$, $J=0$). We compute the associated $C_6$ dispersion coefficients of atom-molecule pairs involving $^{133}$Cs and $^{85}$Rb atoms interacting with KRb, LiCs, LiRb, and RbCs molecules. The obtained dispersion coefficients can be accurately fitted to a state-dependent polynomial $O(n^7)$ over the range of principal quantum numbers $40\leq n\leq 150$. For all atom-molecule pairs considered, Rydberg states $n^2S_j$ and $n^2P_j$ result in attractive $1/R^6$ potentials. In contrast, $n^2D_j$ states can give rise to repulsive potentials for specific atom-molecule pairs. The interaction energy at the LeRoy distance approximately scales as $n^{-5}$ for $n>40$. For intermediate values of $n\lesssim40$, both repulsive and attractive interaction energies on the order of $10$-$100\,\mu$K can be achieved with specific atomic and molecular species. The accuracy of the reported $C_6$ coefficients is limited by the quality of the atomic quantum defects, with relative errors $\Delta C_6/C_6$ estimated to be no greater than 1\% on average. | physics |
The coexistence of ferroelectric and topological orders in two-dimensional (2D) atomic crystals allows non-volatile and switchable quantum spin Hall states. Here we offer a general design principle for 2D bilayer heterostructures that can host ferroelectricity and nontrivial band topology simultaneously using only topologically trivial building blocks. The built-in electric field arising from the out-of-plane polarization across the heterostructure enables a robust control of the band gap size and band inversion strength, which can be utilized to manipulate topological phase transitions. Using first-principles calculations, we demonstrate that a series of bilayer heterostructures are 2D ferroelectric topological insulators (2DFETIs) characterized by a direct coupling between band topology and polarization state. We propose a few 2DFETI-based quantum electronic devices, including domain-wall quantum circuits and topological memristors. | condensed matter |
Motivated by recent experimental findings, we study the contribution of a quantum critical optical phonon branch to the thermal conductivity of a paraelectric system. We consider the proximity of the optical phonon branch to the transverse acoustic phonon branch and calculate its contribution to the thermal conductivity within the Kubo formalism. We find a low temperature power-law dependence of the thermal conductivity as $T^{\alpha}$ with $1 < \alpha < 2$ (i.e., a lower power than the $T^3$ behavior) due to optical phonons near the quantum critical point. This result is in accord with the experimental findings and indicates the importance of quantum fluctuations in the thermal conduction in these materials. | condensed matter |
The Wadge hierarchy was originally defined and studied only in the Baire space (and some other zero-dimensional spaces). We extend it here to arbitrary topological spaces by providing a set-theoretic definition of all its levels. We show that our extension behaves well in second countable spaces and especially in quasi-Polish spaces. In particular, all levels are preserved by continuous open surjections between second countable spaces which implies e.g. several Hausdorff-Kuratowski-type theorems in quasi-Polish spaces. In fact, many results hold not only for the Wadge hierarchy of sets but also for its extension to Borel functions from a space to a countable better quasiorder Q. | mathematics |
Suppose that we wish to estimate a user's preference vector $w$ from paired comparisons of the form "does user $w$ prefer item $p$ or item $q$?," where both the user and items are embedded in a low-dimensional Euclidean space with distances that reflect user and item similarities. Such observations arise in numerous settings, including psychometrics and psychology experiments, search tasks, advertising, and recommender systems. In such tasks, queries can be extremely costly and subject to varying levels of response noise; thus, we aim to actively choose pairs that are most informative given the results of previous comparisons. We provide new theoretical insights into the benefits and challenges of greedy information maximization in this setting, and develop two novel strategies that maximize lower bounds on information gain and are, respectively, simpler to analyze and simpler to compute. Using simulated responses from a real-world dataset, we validate our strategies: they perform comparably to greedy information maximization, and they estimate preferences better than state-of-the-art selection methods and random queries. | statistics |
We review an elegant mechanism to generate masses for all the neutrinos in the Minimal Supersymmetric Standard Model (MSSM) without breaking $R$-parity. As a consequence, we obtain a viable axion as a Dark Matter candidate and at the same time solve the $\mu$-problem in a similar way as is done in the Next-to-Minimal Supersymmetric Standard Model (NMSSM). | high energy physics phenomenology |
Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization. We show that one can decompose adversarial examples into an architecture-dependent component, data-dependent component, and noise-dependent component and that these components behave intuitively. For example, noise-dependent components transfer poorly to all other models, while architecture-dependent components transfer better to retrained models with the same architecture. In addition, we demonstrate that these components can be recombined to improve transferability without sacrificing efficacy on the original model. | statistics |
This paper proposes a framework to select the best-suited battery for co-optimizing peak demand shaving, energy arbitrage, and increased self-sufficiency in the context of the power network in Madeira, Portugal. The feed-in tariff for the electricity network in Madeira is zero, which implies that consumers with excess production should locally consume the excess generation rather than wasting it. Further, the power network operator applies a peak power contract for consumers, which imposes an upper bound on the peak power seen by the power grid at the energy meter interface. We investigate the value of storage in Madeira using four different types of prosumers, categorized based on the relationship between their inelastic load and renewable generation. We observe that the marginal increase in the value of storage deteriorates with increasing size and ramping capabilities. We propose the use of profit per cycle per unit of battery capacity and the expected payback period as indices for selecting the best-suited storage parameters to ensure profitability. This mechanism takes into account the consumption and generation patterns, profit, storage degradation, and the cycle and calendar life of the battery. We also propose the inclusion of a friction coefficient in the original co-optimization formulation to increase the value of storage by reducing the operational cycles and eliminating low-return transactions. | electrical engineering and systems science |
Kernel methods offer the flexibility to learn complex relationships in modern, large data sets while enjoying strong theoretical guarantees on quality. Unfortunately, these methods typically require cubic running time in the data set size, a prohibitive cost in the large-data setting. Random feature maps (RFMs) and the Nyström method both consider low-rank approximations to the kernel matrix as a potential solution. But, in order to achieve desirable theoretical guarantees, the former may require a prohibitively large number of features $J_+$, and the latter may be prohibitively expensive for high-dimensional problems. We propose to combine the simplicity and generality of RFMs with a data-dependent feature selection scheme to achieve the desirable theoretical approximation properties of Nyström with just $O(\log J_+)$ features. Our key insight is to begin with a large set of random features, then reduce them to a small number of weighted features in a data-dependent, computationally efficient way, while preserving the statistical guarantees of using the original large set of features. We demonstrate the efficacy of our method with theory and experiments--including on a data set with over 50 million observations. In particular, we show that our method achieves small kernel matrix approximation error and better test set accuracy with provably fewer random features than state-of-the-art methods. | statistics |
The increasing penetration of embedded renewables makes forecasting net-load, i.e. consumption less embedded generation, a significant and growing challenge. Here a framework for producing probabilistic forecasts of net-load is proposed, with particular attention given to the tails of predictive distributions, which are required for managing risk associated with low-probability events. Only small volumes of data are available in the tails, by definition, so estimation of predictive models and forecast evaluation requires special attention. We propose a solution based on a best-in-class load forecasting methodology adapted for net-load, and model the tails of predictive distributions with the Generalised Pareto Distribution, allowing its parameters to vary smoothly as functions of covariates. The resulting forecasts are shown to be calibrated and sharper than those produced with unconditional tail distributions. In a use-case inspired evaluation exercise based on reserve setting, the conditional tails are shown to reduce the overall volume of reserve required to manage a given risk. Furthermore, they identify periods of high risk not captured by other methods. The proposed method therefore enables users to both reduce costs and avoid excess risk. | statistics |
Novel low-power wireless technologies and IoT applications open the door to the Industrial Internet of Things (IIoT). In this new paradigm, Wireless Sensor Networks (WSNs) must fulfil, despite energy and transmission power limitations, the challenging communication requirements of advanced manufacturing processes and technologies. In industrial networks, this is possible thanks to the availability of network infrastructure and the presence of a network coordinator that efficiently allocates the available radio resources. In this work, we consider a WSN that simultaneously transmits measurements of Networked Control Systems' (NCSs) dynamics to remote state estimators over a shared packet-erasure channel. We develop a minimum transmission power control (TPC) policy for the coordination of the wireless medium by formulating an infinite horizon Markov decision process (MDP) optimization problem. We compute the policy using an approximate value iteration algorithm and provide an extensive evaluation of its parameters in different interference scenarios and NCSs dynamics. The evaluation results present a comprehensive characterization of the algorithm's performance, proving that it can flexibly adapt to arbitrary use cases. | electrical engineering and systems science |
In electronic health records (EHRs), latent subgroups of patients may exhibit distinctive patterning in their longitudinal health trajectories. For such data, growth mixture models (GMMs) enable classifying patients into different latent classes based on individual trajectories and hypothesized risk factors. However, the application of GMMs is hindered by the special missing data problem in EHRs, which manifests two patient-led missing data processes: the visit process and the response process for an EHR variable conditional on a patient visiting the clinic. If either process is associated with the process generating the longitudinal outcomes, then valid inferences require accounting for a nonignorable missing data mechanism. We propose a Bayesian shared parameter model that links GMMs of multiple longitudinal health outcomes, the visit process, and the response process of each outcome given a visit using a discrete latent class variable. Our focus is on multiple longitudinal health outcomes for which there can be a clinically prescribed visit schedule. We demonstrate our model in EHR measurements on early childhood weight and height z-scores. Using data simulations, we illustrate the statistical properties of our method with respect to subgroup-specific or marginal inferences. We built the R package EHRMiss for model fitting, selection, and checking. | statistics |
Molybdenum disulfide (MoS$_2$) nanosheet is a two-dimensional material with high electron mobility and high potential for applications in catalysis and electronics. We synthesized MoS$_2$ nanosheets using a one-pot wet-chemical synthesis route with and without Re-doping. Atom probe tomography revealed that 3.8 at.% Re is homogeneously distributed within the Re-doped sheets. Other impurities are also found integrated within the material: light elements including C, N, O, and Na, locally enriched up to 0.1 at.%, as well as heavy elements such as V and W. Analysis of the non-doped sample reveals that the W and V likely originate from the Mo precursor. | condensed matter |
Programmable control of the inductive electric field enables advanced operations of reversed-field pinch (RFP) plasmas in the Madison Symmetric Torus (MST) device and further develops the technical basis for ohmically heated fusion RFP plasmas. MST's poloidal and toroidal magnetic fields ($B_\text{p}$ and $B_\text{t}$) can be sourced by programmable power supplies (PPSs) based on insulated-gate bipolar transistors (IGBTs). In order to provide real-time simultaneous control of both $B_\text{p}$ and $B_\text{t}$ circuits, a time-independent integrated model is developed. The actuators considered for the control are the $B_\text{p}$ and $B_\text{t}$ primary currents produced by the PPSs. The control system goal will be tracking two particular demand quantities that can be measured at the plasma surface ($r=a$): the plasma current, $I_\text{p} \sim B_\text{p}(a)$, and the RFP reversal parameter, $F\sim B_\text{t}(a)/\Phi$, where $\Phi$ is the toroidal flux in the plasma. The edge safety factor, $q(a)\propto B_\text{t}(a)$, tends to track $F$ but not identically. To understand the responses of $I_\text{p}$ and $F$ to the actuators and to enable systematic design of control algorithms, dedicated experiments are run in which the actuators are modulated, and a linearized dynamic data-driven model is generated using a system identification method. We perform a series of initial real-time experiments to test the designed feedback controllers and validate the derived model predictions. The feedback controllers show systematic improvements over simpler feedforward controllers. | physics |
The detection of almost 100% linearly polarized emission from the fast radio burst source FRB 121102 implies coherent emission of relativistic electrons moving perpendicular to the ambient magnetic field. The origin of such a particle distribution is very intriguing. Given that FRB 121102 is likely driven by a neutron star, we explored orbits of charged particles trapped in a dipole magnetic field (the St\"ormer problem). Most previous studies focused on particles with relatively low energies so that the guiding center approximation may be applied. High energy particles usually have chaotic orbits except those on a periodic orbit or near stable periodic orbits. Via evaluation of the maximum Lyapunov exponent of orbits of particles launched from the equatorial plane with an axial velocity (the angular velocity sets the length and energy scales of the system), we found prominent regions of quasi-periodic orbits around stable periodic orbits in the equatorial plane at high energies. Particles in these orbits oscillate around the equatorial plane and their radial distance from the dipole can vary by a factor of 2. Relativistic electrons in such orbits may be responsible for the almost 100% polarized emission from FRB 121102. | high energy physics phenomenology |
Purpose: Magnetic resonance imaging (MRI) exams include multiple series with varying contrast and redundant information. For instance, T2-FLAIR contrast is based upon tissue T2 decay and the presence of water, also present in T2- and diffusion-weighted contrasts. T2-FLAIR contrast can be hypothetically modeled through deep learning models trained with diffusion- and T2-weighted acquisitions. Methods: Diffusion-, T2-, T2-FLAIR-, and T1-weighted brain images were acquired in 15 individuals. A convolutional neural network was developed to generate a T2-FLAIR image from other contrasts. Two datasets were withheld from training for validation. Results: Inputs with physical relationships to T2-FLAIR contrast most significantly impacted performance. The best model yielded results similar to acquired T2-FLAIR images, with a structural similarity index of 0.909, and reproduced pathology excluded from training. Synthetic images qualitatively exhibited lower noise and increased smoothness compared to acquired images. Conclusion: This suggests that, with optimal inputs, deep-learning-based contrast generation performs well in creating synthetic T2-FLAIR images. Feature engineering on neural network inputs, based upon the physical basis of contrast, impacts the generation of synthetic contrast images. A larger, prospective clinical study is needed. | physics |
We propose a method using supervised machine learning to estimate velocity fields from particle images having missing regions due to experimental limitations. As a first example, a velocity field around a square cylinder at a Reynolds number of ${\rm Re}_D=300$ is considered. To train machine learning models, we utilize artificial particle images (APIs) as the input data, which mimic the images of particle image velocimetry (PIV). The output data are the velocity fields, and the correct answers for them are given by a direct numerical simulation (DNS). We examine two types of input data: APIs without missing regions (i.e., full APIs) and APIs with missing regions (lacked APIs). The missing regions in the lacked APIs are chosen following the exact experimental situation in our wind tunnel setup. The velocity fields estimated from both full and lacked APIs are in great agreement with the reference DNS data in terms of various statistical assessments. We further apply these machine-learned models trained with the DNS data to experimental particle images so that their applicability to the exact experimental situation can be investigated. The velocity fields estimated by the machine-learned models contain approximately 40 times denser data than those obtained with the conventional cross-correlation method. This finding suggests that we may be able to obtain finer and hidden structures of the flow field which cannot be resolved with the conventional cross-correlation method. We also found that even when complex flow structures are hidden due to the alignment of two square cylinders, the machine-learned model is able to estimate the field in the missing region reasonably well. The present results indicate a great potential of the proposed machine-learning-based method as a new data reconstruction method for PIV. | physics |
In the multiple testing context, we utilize vine copulae for optimizing the effective number of tests. It is well known that for the calibration of multiple tests (for control of the family-wise error rate) the dependencies between the marginal tests are of utmost importance. It has been shown in previous work that positive dependencies between the marginal tests can be exploited in order to derive a relaxed Sidak-type multiplicity correction. This correction can conveniently be expressed by calculating the corresponding "effective number of tests" for a given (global) significance level. This methodology can also be applied to blocks of test statistics, so that the effective number of tests can be calculated as the sum of the effective numbers of tests for each block. In the present work, we demonstrate how the power of the multiple test can be optimized by taking blocks with high inner-block dependencies. The determination of those blocks is performed by means of an estimated vine copula model. An algorithm is presented which uses the information of the estimated vine copula to make a data-driven choice of appropriate blocks in terms of (estimated) dependencies. Numerical experiments demonstrate the usefulness of the proposed approach. | statistics |
Some new findings for chaos-based wireless communication systems have been identified recently. First, chaos has proven to be the optimal communication waveform because chaotic signals can achieve the maximum signal-to-noise ratio at the receiver with the simplest matched filter. Second, the information transmitted in chaotic signals is not modified by the multipath wireless channel. Third, chaos properties can be used to relieve inter-symbol interference (ISI) caused by multipath propagation. Although recent work has reported a method for obtaining the optimal threshold to eliminate the ISI in chaos-based wireless communication, its practical implementation is still a challenge: it requires knowing the channel parameters and all symbols in advance, especially the future symbols to be transmitted, which is almost impossible in practical communication systems. Owing to recent developments in artificial intelligence (AI), a convolutional neural network (CNN) with a deep learning structure is proposed here to predict future symbols based on the received signal, so as to further reduce ISI and obtain better bit error rate (BER) performance compared to that achieved with the existing sub-optimal threshold. The features of the method are the prediction of the future symbol and the acquisition of a better threshold suitable for time-variant channels. Numerical simulation and experimental results validate our theory and the superiority of the proposed method. | electrical engineering and systems science |
Non-orthogonal multiple access (NOMA) is considered to be one of the best candidates for future networks due to its ability to serve multiple users using the same resource block. Although early studies have focused on transmission reliability and energy efficiency, recent works are considering cooperation among the nodes. The cooperative NOMA techniques allow the user with a better channel (near user) to act as a relay between the source and the user experiencing a poor channel (far user). This paper considers the link security aspect of energy harvesting cooperative NOMA users. In particular, the near user applies the decode-and-forward (DF) protocol for relaying the message of the source node to the far user in the presence of an eavesdropper. Moreover, we consider that all the devices use a power-splitting architecture for energy harvesting and information decoding. We derive the analytical expression of the intercept probability. Next, we employ deep learning based optimization to find the optimal power allocation factor. The results show the robustness and superiority of deep learning optimization over a conventional iterative search algorithm. | electrical engineering and systems science |
Monocular depth prediction is an important task in scene understanding. It aims to predict the dense depth of a single RGB image. With the development of deep learning, the performance of this task has made great improvements. However, two issues remain unresolved: (1) The deep feature encodes the wrong farthest region in a scene, which leads to a distorted 3D structure of the predicted depth; (2) The low-level features are insufficiently utilized, which makes it even harder to estimate the depth near the edge with sudden depth change. To tackle these two issues, we propose the Boundary-induced and Scene-aggregated network (BS-Net). In this network, the Depth Correlation Encoder (DCE) is first designed to obtain the contextual correlations between the regions in an image, and perceive the farthest region by considering the correlations. Meanwhile, the Bottom-Up Boundary Fusion (BUBF) module is designed to extract the accurate boundary that indicates depth change. Finally, the Stripe Refinement module (SRM) is designed to refine the dense depth induced by the boundary cue, which improves the boundary accuracy of the predicted depth. Several experimental results on the NYUD v2 dataset and the iBims-1 dataset illustrate the state-of-the-art performance of the proposed approach. The SUN-RGBD dataset is employed to evaluate the generalization of our method. Code is available at https://github.com/XuefengBUPT/BS-Net. | computer science |
ICME approaches provide decision support for materials design by establishing quantitative process-structure-property relations. Confidence in the decision support, however, must be achieved by establishing uncertainty bounds in ICME model chains. Yet the quantification and propagation of uncertainty remain a rather unexplored aspect of computational materials science approaches. Moreover, traditional uncertainty propagation frameworks tend to be limited in cases with computationally expensive simulations. A rather common and important model chain is that of CALPHAD-based thermodynamic models of phase stability coupled to phase field models for microstructure evolution. Propagation of uncertainty in these cases is challenging not only due to the sheer computational cost of the simulations but also because of the high dimensionality of the input space. In this work, we present a framework for the quantification and propagation of uncertainty in a CALPHAD-based elasto-chemical phase field model. We motivate our work by investigating the microstructure evolution in Mg$_2$(Si$_x$Sn$_{1-x}$) thermoelectric materials. We first carry out a Markov Chain Monte Carlo-based inference of the CALPHAD model parameters for this pseudobinary system and then use advanced sampling schemes to propagate uncertainties across a high-dimensional simulation input space. Through high-throughput phase field simulations we generate 200,000 time series of synthetic microstructures and use machine learning approaches to understand the effects of propagated uncertainties on the microstructure landscape of the system under study. The microstructure dataset has been curated in the Open Phase-field Microstructure Database (OPMD), available at \href{http://microstructures.net}{http://microstructures.net}. | condensed matter |
In the framework of a general scalar-tensor theory, where the scalar field is non-minimally coupled to the five-dimensional Ricci scalar curvature, we investigate the emergence of complete brane-world solutions. By assuming a variety of forms for the coupling function, we solve the field equations in the bulk, and determine in an analytic way the form of the gravitational background and scalar field in each case. The solutions are always characterized by a regular scalar field, a finite energy-momentum tensor, and an exponentially decaying warp factor even in the absence of a negative bulk cosmological constant. The space-time on the brane is described by the Schwarzschild solution leading to either a non-homogeneous black-string solution in the bulk, when the mass parameter $M$ is non-zero, or a regular anti-de Sitter space-time, when $M=0$. We construct physically-acceptable solutions by demanding in addition a positive effective gravitational constant on our brane, a positive total energy-density for our brane and the validity of the weak energy condition in the bulk. We find that, although the theory does not allow for all three conditions to be simultaneously satisfied, a plethora of solutions emerge which satisfy the first two, and most fundamental, conditions. | high energy physics theory |
Large scale complex systems, such as social networks, electrical power grid, database structure, consumption pattern or brain connectivity, are often modeled using network graphs. Valuable insight can be gained by measuring the similarity between network graphs in order to make quantitative comparisons. Since these networks can be very large, scalability and efficiency of the algorithm are key concerns. More importantly, for graphs with unknown labeling, this graph similarity problem requires exponential time to solve using existing algorithms. In this paper, we propose a quantum walk inspired algorithm, which provides a solution to the graph similarity problem without prior knowledge on graph labeling. This algorithm is capable of distinguishing between minor structural differences, such as between strongly regular graphs with the same parameters. The algorithm has polynomial complexity, scaling with $O(n^9)$. | quantum physics |
We show that an extremely generic class of two-dimensional conformal field theories (CFTs) contains a sector described by the Schwarzian theory. This applies to theories with no additional symmetries and large central charge, but does not require a holographic dual. Specifically, we use bootstrap methods to show that in the grand canonical ensemble, at low temperature with a chemical potential sourcing large angular momentum, the density of states and correlation functions are determined by the Schwarzian theory, up to parametrically small corrections. In particular, we compute out-of-time-order correlators in a controlled approximation. For holographic theories, these results have a gravitational interpretation in terms of large, near-extremal rotating BTZ black holes, which have a near horizon throat with nearly AdS$_2 \times S^1$ geometry. The Schwarzian describes strongly coupled gravitational dynamics in the throat, which can be reduced to Jackiw-Teitelboim (JT) gravity interacting with a $U(1)$ field associated to transverse rotations, coupled to matter. We match the physics in the throat to observables at the AdS$_3$ boundary, reproducing the CFT results. | high energy physics theory |
We propose a method to suppress the chemical reactions between ultracold bosonic ground-state $^{23}$Na$^{87}$Rb molecules based on optical shielding. By applying a laser with a frequency blue-detuned from the transition between the lowest rovibrational level of the electronic ground state $X^1\Sigma^+ (v_X=0, j_X=0)$, and the long-lived excited level $b^3\Pi_0 (v_b=0, j_b=1)$, the long-range dipole-dipole interaction between the colliding molecules can be engineered, leading to a dramatic suppression of reactive and photoinduced inelastic collisions, for both linear and circular laser polarizations. We demonstrate that the spontaneous emission from $b^3\Pi_0 (v_b=0, j_b=1)$ does not deteriorate the shielding process. This opens the possibility for a strong increase of the lifetime of cold molecule traps, and for an efficient evaporative cooling. We also anticipate that the proposed mechanism is valid for alkali-metal diatomics with sufficiently large dipole-dipole interactions. | condensed matter |
The nature of the pseudogap phase of cuprates remains a major puzzle. Although there are indications that this phase breaks various symmetries, there is no consensus on its fundamental nature. Although Fermi-surface, transport and thermodynamic signatures of the pseudogap phase are reminiscent of a transition into a phase with antiferromagnetic order, there is no evidence for an associated long-range magnetic order. Here we report measurements of the thermal Hall conductivity $\kappa_{\rm xy}$ in the normal state of four different cuprates (Nd-LSCO, Eu-LSCO, LSCO, and Bi2201) and show that a large negative $\kappa_{\rm xy}$ signal is a property of the pseudogap phase, appearing with the onset of that phase at the critical doping $p^*$. Since it is not due to charge carriers -- as it persists when the material becomes an insulator, at low doping -- or magnons -- as it exists in the absence of magnetic order -- or phonons -- since skew scattering is very weak, we attribute this $\kappa_{\rm xy}$ signal to exotic neutral excitations, presumably with spin chirality. The thermal Hall conductivity in the pseudogap phase of cuprates is reminiscent of that found in insulators with spin-liquid states. In the Mott insulator LCO, it attains the highest known magnitude of any insulator. | condensed matter |
In this paper, a new parametrization of the relative motion between two satellites orbiting a central body is presented. The parametrization is based on the nodal elements: a set of angles describing the orbit geometry with respect to the relative line of nodes. These are combined with classical orbital elements to yield a nonsingular relative motion description. The exact nonlinear, perturbed dynamic model resulting from the new parametrization is established. The proposed parameter set captures the fundamental Keplerian invariants, while retaining a simple relationship with local orbital coordinates. An angles-only relative navigation filter and a collision avoidance scheme are devised by exploiting these features. The navigation solution is validated on a case study of an asteroid flyby mission. It is shown that a collision can be detected early on in the estimation process, which allows one to issue a timely evasive maneuver. | electrical engineering and systems science |
We discuss how introducing an equilibrium frame, in which a given Hamiltonian has balanced loss and gain terms, can reveal PT symmetry hidden in non-Hermitian Hamiltonians of dissipative systems. Passive PT-symmetric Hamiltonians, in which only loss is present and gain is absent, can also display exceptional points, just like PT-symmetric systems, and are therefore extensively investigated. We demonstrate that non-Hermitian Hamiltonians which can be divided into a PT-symmetric term and a term commuting with the Hamiltonian possess hidden PT symmetries. These symmetries become apparent in the equilibrium frame. We also show that the number of eigenstates having exactly the same value at an exceptional point is usually smaller in the initial frame than in the equilibrium frame. This property is associated with a constant of motion embedded in the second part of the Hamiltonian. | quantum physics |
Most MRI liver segmentation methods use a structural 3D scan as input, such as a T1- or T2-weighted scan. Segmentation performance may be improved by utilizing both structural and functional information, as contained in dynamic contrast enhanced (DCE) MR series. Dynamic information can be incorporated in a segmentation method based on convolutional neural networks in a number of ways. In this study, the optimal input configuration of DCE-MR images for convolutional neural networks (CNNs) is investigated by evaluating the performance of three different input configurations on a liver segmentation task. The three configurations are I) one phase image of the DCE-MR series as input image; II) the separate phases of the DCE-MR as input images; and III) the separate phases of the DCE-MR as channels of one input image. The three input configurations are fed into a dilated fully convolutional network and into a small U-net. The CNNs were trained using 19 annotated DCE-MR series and tested on another 19 annotated DCE-MR series. The performance of the three input configurations for both networks is evaluated against manual annotations. The results show that both neural networks perform better when the separate phases of the DCE-MR series are used as channels of an input image in comparison to one phase as input image or the separate phases as input images. No significant difference between the performances of the two network architectures was found for the separate phases as channels of an input image. | electrical engineering and systems science |
Stochastic gradient descent with momentum (SGDm) is one of the most popular optimization algorithms in deep learning. While there is a rich theory of SGDm for convex problems, the theory is considerably less developed in the context of deep learning where the problem is non-convex and the gradient noise might exhibit a heavy-tailed behavior, as empirically observed in recent studies. In this study, we consider a \emph{continuous-time} variant of SGDm, known as the underdamped Langevin dynamics (ULD), and investigate its asymptotic properties under heavy-tailed perturbations. Supported by recent studies from statistical physics, we argue both theoretically and empirically that the heavy-tails of such perturbations can result in a bias even when the step-size is small, in the sense that \emph{the optima of stationary distribution} of the dynamics might not match \emph{the optima of the cost function to be optimized}. As a remedy, we develop a novel framework, which we coin as \emph{fractional} ULD (FULD), and prove that FULD targets the so-called Gibbs distribution, whose optima exactly match the optima of the original cost. We observe that the Euler discretization of FULD has noteworthy algorithmic similarities with \emph{natural gradient} methods and \emph{gradient clipping}, bringing a new perspective on understanding their role in deep learning. We support our theory with experiments conducted on a synthetic model and neural networks. | statistics |
Light, beyond-the-standard-model particles $X$ in the 1-100 MeV mass range can be produced in nuclear and hadronic reactions but would have to decay electromagnetically. We show that simple and well-understood low-energy hadronic processes can be used as a tool to study $X$ production and decay. In particular, the pion capture process $\pi^- p \to X n \to e^+ e^- n$ can be used in a new experimental setup to search for anomalies in the angular distribution of the electron-positron pair, which could signal the appearance of dark photons, axion-like particles and other exotic states. This process can be used to decisively test the hypothesis of a new particle produced in the $^7{\rm Li}+p$ reaction. We also discuss a variety of other theoretically clean hadronic processes, such as $p+{\rm D(T)}$ fusion, as a promising source of $X$ particles. | high energy physics phenomenology |
We apply XCLUMPY, an X-ray spectral model from a clumpy torus in an active galactic nucleus (AGN), to the broadband X-ray spectra of 10 obscured AGNs observed with both Suzaku and NuSTAR. The infrared spectra of these AGNs were analyzed with the CLUMPY code. Since XCLUMPY adopts the same clump distribution as that in the CLUMPY, we can directly compare the torus parameters obtained from the X-ray spectra and those from the infrared ones. The torus angular widths determined from the infrared spectra ($\sigma_{\mathrm{IR}}$) are systematically larger than those from the X-ray data ($\sigma_{\mathrm{X}}$); the difference ($\sigma_{\mathrm{IR}}-\sigma_{\mathrm{X}}$) correlates with the inclination angle determined from the X-ray spectrum. These results can be explained by the contribution from dusty polar outflows to the observed infrared flux, which becomes more significant at higher inclinations (more edge-on views). The ratio of the hydrogen column density and V-band extinction in the line of sight absorber shows large scatter ($\simeq$1 dex) around the Galactic value, suggesting that a significant fraction of AGNs have dust-rich circumnuclear environments. | astrophysics |
We propose a learning-based, distributionally robust model predictive control approach towards the design of adaptive cruise control (ACC) systems. We model the preceding vehicle as an autonomous stochastic system, using a hybrid model with continuous dynamics and discrete, Markovian inputs. We estimate the (unknown) transition probabilities of this model empirically using observed mode transitions and simultaneously determine sets of probability vectors (ambiguity sets) around these estimates, that contain the true transition probabilities with high confidence. We then solve a risk-averse optimal control problem that assumes the worst-case distributions in these sets. We furthermore derive a robust terminal constraint set and use it to establish recursive feasibility of the resulting MPC scheme. We validate the theoretical results and demonstrate desirable properties of the scheme through closed-loop simulations. | electrical engineering and systems science |
Lately there has been a lot of discussion about why deep learning algorithms perform better than we would theoretically suspect. To get insight into this question, it helps to improve our understanding of how learning works. We explore the core problem of generalization and show that long-accepted Occam's razor and parsimony principles are insufficient to ground learning. Instead, we derive and demonstrate a set of relativistic principles that yield clearer insight into the nature and dynamics of learning. We show that concepts of simplicity are fundamentally contingent, that all learning operates relative to an initial guess, and that generalization cannot be measured or strongly inferred, but that it can be expected given enough observation. Using these principles, we reconstruct our understanding in terms of distributed learning systems whose components inherit beliefs and update them. We then apply this perspective to elucidate the nature of some real world inductive processes including deep learning. | computer science |
Deep learning has achieved notable success in various fields, including image and speech recognition. One of the factors in the successful performance of deep learning is its high feature extraction ability. In this study, we focus on the adaptivity of deep learning; consequently, we treat the variable exponent Besov space, which has a different smoothness depending on the input location $x$. In other words, the difficulty of the estimation is not uniform within the domain. We analyze the general approximation error of the variable exponent Besov space and the approximation and estimation errors of deep learning. We note that the improvement based on adaptivity is remarkable when the region upon which the target function has less smoothness is small and the dimension is large. Moreover, the superiority to linear estimators is shown with respect to the convergence rate of the estimation error. | statistics |
Wireless data aggregation (WDA), referring to aggregating data distributed at devices (e.g., sensors and smartphones), is a common operation in 5G-and-beyond machine-type communications to support the Internet-of-Things (IoT), which lays the foundation for diversified applications such as distributed sensing, learning, and control. Conventional WDA techniques that are designed based on a separated-communication-and-computation principle encounter difficulty in accommodating massive access under the limited radio resources and stringent latency constraints imposed by emerging applications (e.g., auto-driving). To address this issue, over-the-air computation (AirComp) is being developed as a new WDA solution by seamlessly integrating computation and communication. By exploiting the waveform superposition property of a multiple-access channel, AirComp turns the air into a computer for computing and communicating functions of distributed data at many devices, thereby allowing low-latency WDA over massive devices. In view of the growing interest in AirComp, this article provides a timely overview of the technology by introducing basic principles, discussing advanced techniques and applications, and identifying promising research opportunities. | computer science |
We study $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k\times {\tilde S}^3/\mathbb{Z}_{k'}$ solutions to M-theory preserving $\mathcal{N}=(0,4)$ supersymmetries, arising as near-horizon limits of M2-M5 brane intersections ending on M5'-branes, with both types of five-branes placed on A-type singularities. Solutions in this class asymptote locally to $\mathrm{AdS}_7/\mathbb{Z}_k\times {\tilde S}^3/\mathbb{Z}_{k'}$, and can thus be interpreted as holographic duals to surface defect CFTs within the $\mathcal{N}=(1,0)$ 6d CFT dual to this solution. Upon reduction to Type IIA, we obtain a new class of solutions of the form $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k\times S^2 \times \Sigma_2$ preserving (0,4) supersymmetries. We construct explicit 2d quiver CFTs dual to these solutions, describing D2-D4 surface defects embedded within the 6d (1,0) quiver CFT dual to the $\mathrm{AdS}_7/\mathbb{Z}_k$ solution to massless IIA. Finally, in the massive case, we show that the recently constructed $\mathrm{AdS}_3\times S^2\times \mathrm{CY}_2$ solutions with $\mathcal{N}=(0,4)$ supersymmetries gain a defect interpretation when $\mathrm{CY}_2=T^4$ as surface CFTs originating from D2-NS5-D6 defects embedded within the 5d CFT dual to the Brandhuber-Oz $\mathrm{AdS}_6$ background. | high energy physics theory |
The computational kernel in solving the $S_N$ transport equations is the parallel sweep, which corresponds to directly inverting a block lower triangular linear system that arises in discretizations of the linear transport equation. Existing parallel sweep algorithms are fairly efficient on structured grids, but still have polynomial scaling, $P^{1/d}$ for $d$ dimensions and $P$ processors. Moreover, an efficient scalable parallel sweep algorithm for use on general unstructured meshes remains elusive. Recently, a classical algebraic multigrid (AMG) method based on approximate ideal restriction (AIR) was developed for nonsymmetric matrices and shown to be an effective solver for linear transport. Motivated by the superior scalability of AMG methods (logarithmic in $P$) as well as the simplicity with which AMG methods can be used in most situations, including on arbitrary unstructured meshes, this paper investigates the use of parallel AIR (pAIR) for solving the $S_N$ transport equations with source iteration in place of parallel sweeps. Results presented in this paper show that pAIR is a robust and scalable solver. Although sweeps are still shown to be much faster than pAIR on a structured mesh of a unit cube, pAIR is shown to perform similarly on both a structured and unstructured mesh, and offers a new, simple, black box alternative to parallel transport sweeps. | physics |
In this study, X-ray absorption spectroscopy (XAS) experiments for the Ni$_{45}$Co$_5$Mn$_{36.7}$In$_{13.3}$ metamagnetic shape memory alloy were performed under high magnetic fields up to 12 T using a pulsed magnet. Field-induced reverse transformation to the austenite phase caused considerable changes in the magnetic circular dichroism (MCD) signals, and the magnetic moments of the ferromagnetic coupling between Mn, Ni, and Co were determined. The spin magnetic moment, $M_{\rm spin}$, and orbital magnetic moment, $M_{\rm orb}$, of the Mn atom in the induced austenite ferromagnetic phase, estimated based on the magneto-optical sum rule, were 3.2 and 0.13 $\mu_{\rm B}$, respectively, resulting in an $M_{\rm orb}/M_{\rm spin}$ ratio of 0.04. In the element-specific magnetization curves recorded at 150 K, metamagnetic behavior associated with the field-induced reverse transformation is clearly observed, and the reverse-transformation finishing magnetic field and the martensitic-transformation starting magnetic field are detected. There was almost no difference in the magnetically averaged XAS spectrum at the Mn $L_{2,3}$ edges between the martensite and the magnetic-field-induced austenite phases; however, a difference was visible for Ni, indicating that Ni 3d electrons mainly contribute to the martensitic transformation. | condensed matter |
The potential of composition-graded AlMgSi wires for an optimized combination of electrical conductivity and torsion strength has been investigated. Composition-graded wires were obtained by co-drawing commercially pure Al with an AlMgSi alloy, followed by diffusion annealing. Diffusion gradients and the local hardening response to precipitation treatments were evaluated by means of nano-indentation measurements. The resulting microstructures, with spatial gradients of nanoscaled precipitates, were characterized by transmission electron microscopy. Finally, it is shown that such graded structures give rise to an improved combination of electrical conductivity and mechanical strength in torsion as compared to predictions based on a classical rule of mixtures. | condensed matter |
The quantum sine-Gordon model is the simplest massive interacting integrable quantum field theory whose two-particle scattering matrix is generally non-diagonal. As such, it is a model that has been extensively studied, especially in the context of the bootstrap programme. In this paper we compute the form factors of a special local field known as the branch point twist field, whose correlation functions are building blocks for measures of entanglement. We consider the attractive regime where the theory possesses a particle spectrum consisting of a soliton, an antisoliton (of opposite $U(1)$ charges) and several (neutral) breathers. In the breather sector we exploit the fusion procedure to compute form factors of heavier breathers from those of lighter ones. We apply our results to the study of the entanglement dynamics after a small mass quench and for short times. We show that in the presence of two or more breathers the von Neumann and R\'enyi entropies display undamped oscillations in time, whose frequencies are proportional to the even breather masses and whose amplitudes are proportional to the breather's one-particle form factor. | high energy physics theory |
Swampland Conjectures have attracted quite some interest in the cosmological community. They have been shown to have wide-ranging implications, such as constraints on inflationary models and primordial black holes, to name a few. A particularly revealing insight on dark energy also shows that the dark energy equation of state in a quintessence scenario can be significantly different from $-1$ once one takes into account the refined dS conjecture. Another interesting issue with the swampland conjectures is that they have been shown to be incompatible with single-field inflationary models in GR-based cosmology. In our previous work we showed, however, that single-field inflationary models are quite compatible with the swampland conjectures in their usual string-theoretic form in a large class of modified cosmological scenarios. Building on that work, we now show that in modified cosmological scenarios where the early-universe expansion was driven by single-field inflation, one can have the dark energy equation of state significantly different from $-1$ even if we just take into account the original dS conjecture, let alone its refined form. We thereby show that one does not need to apply a step-function approach towards inflation in order to have an observable distinction between constant and non-constant dark energy models in the context of the swampland conjectures. | astrophysics |
Compressed Sensing (CS) based channel estimation techniques have recently emerged as an effective way to acquire the channel of millimeter-wave (mmWave) systems with a small number of measurements. These techniques, however, are based on prior knowledge of transmit and receive array manifolds, and assume perfect antenna arrays at both the transmitter and the receiver. In the presence of antenna imperfections, the geometry and response of the arrays are modified. This distorts the CS measurement matrix and results in channel estimation errors. This paper studies the effects of both transmit and receive antenna imperfections on the mmWave channel estimate. A relay-aided solution which corrects for errors caused by faulty transmit arrays is then proposed. Simulation results demonstrate the effectiveness of the proposed solution and show that comparable channel estimates can be obtained when compared to systems with perfect antennas without the need for additional training overhead. | electrical engineering and systems science |
Sequence discriminative training criteria have long been a standard tool in automatic speech recognition for improving the performance of acoustic models over their maximum likelihood / cross entropy trained counterparts. While previously a lattice approximation of the search space has been necessary to reduce computational complexity, recently proposed methods use other approximations to dispense with the need for the computationally expensive step of separate lattice creation. In this work we present a memory efficient implementation of the forward-backward computation that allows us to use uni-gram word-level language models in the denominator calculation while still doing a full summation on GPU. This allows for a direct comparison of lattice-based and lattice-free sequence discriminative training criteria such as MMI and sMBR, both using the same language model during training. We compared performance, speed of convergence, and stability on large vocabulary continuous speech recognition tasks like Switchboard and Quaero. We found that silence modeling seriously impacts the performance in the lattice-free case and needs special treatment. In our experiments lattice-free MMI comes on par with its lattice-based counterpart. Lattice-based sMBR still outperforms all lattice-free training criteria. | electrical engineering and systems science |
Many important problems in astrophysics, space physics, and geophysics involve flows of (possibly ionized) gases in the vicinity of a spherical object, such as a star or planet. The geometry of such a system naturally favors numerical schemes based on a spherical mesh. Despite its orthogonality property, the polar (latitude-longitude) mesh is ill suited for computation because of the singularity on the polar axis, leading to a highly non-uniform distribution of zone sizes. The consequences are (a) loss of accuracy due to large variations in zone aspect ratios, and (b) poor computational efficiency from severe limitations on the time stepping. Geodesic meshes, based on a central projection using a Platonic solid as a template, solve the anisotropy problem, but increase the complexity of the resulting computer code. We describe a new finite volume implementation of Euler and MHD systems of equations on a triangular geodesic mesh (TGM) that is accurate up to fourth order in space and time and conserves the divergence of the magnetic field to machine precision. The paper discusses in detail the generation of a TGM, the domain decomposition techniques, three-dimensional conservative reconstruction, and time stepping. | physics |
The Transactional Interpretation has been subject at various times to a challenge based on a type of thought experiment first proposed by Maudlin. It has been argued by several authors that such experiments do not in fact constitute a significant problem for the transactional picture. The purpose of this work is to point out that, when the relativistic level of the interpretation is considered, Maudlin-type challenges cannot even be mounted, since the putative 'slow-moving offer wave,' taken as subject to contingent confirmation, does not exist. This is a consequence of the Davies relativistic quantum-mechanical version of the direct-action theory together with the asymmetry between fermionic field sources and bosonic fields. The Maudlin challenge therefore evaporates completely when the relativistic level of the theory is taken into account. | quantum physics |
Certain classes of strongly correlated systems promise high thermopower efficiency, but a full understanding of correlation effects on the Seebeck coefficient is lacking. This is partly due to limitations of Boltzmann-type approaches. One needs a formula for the thermopower that allows separate investigations of the kinetic and potential energy contributions to the evolution with temperature and doping of the thermopower. Here we address this issue by deriving, for Hubbard-like interactions, a formula for the thermopower that separates the potential from the kinetic energy contribution and facilitates a better understanding of correlation effects on the Seebeck coefficient. As an example, the thermopower of the one-band Hubbard model is calculated from dynamical mean-field theory. For interactions in both the intermediate and strong correlation limits, the contributions from kinetic and potential energy nearly cancel. | condensed matter |
One emerging application of machine learning methods is the inference of galaxy cluster masses. In this note, machine learning is used to directly combine five simulated multiwavelength measurements in order to find cluster masses. This is in contrast to finding mass estimates for each observable, normally by using a scaling relation, and then combining these scaling law based mass estimates using a likelihood. We also illustrate how the contributions of each observable to the accuracy of the resulting mass measurement can be compared via model-agnostic Importance Permutation values. Thirdly, as machine learning relies upon the accuracy of the training set in capturing observables, their correlations, and the observational selection function, and as the machine learning training set originates from simulations, two tests of whether a simulation's correlations are consistent with observations are suggested and explored as well. | astrophysics |
Nonadiabatic geometric quantum computation (NGQC) has been developed to realize fast and robust geometric gates. However, in conventional NGQC all of the gates are performed in exactly the same amount of time, whether the geometric rotation angle is large or small, due to the limitation of the cyclic condition. Here, we propose an unconventional scheme, called nonadiabatic noncyclic geometric quantum computation (NNGQC), in which arbitrary single- and two-qubit geometric gates can be constructed via the noncyclic non-Abelian geometric phase. Consequently, this scheme makes it possible to accelerate the implemented geometric gates against the effects from environmental decoherence. Furthermore, this extensible scheme can be applied in various quantum platforms, such as superconducting qubits and Rydberg atoms. Specifically, for the single-qubit gate, we make simulations with practical parameters in a neutral atom system to show the robustness of NNGQC and also compare with NGQC using recent experimental parameters to show that NNGQC can significantly suppress the decoherence error. In addition, we also demonstrate that a nontrivial two-qubit geometric gate can be realized via the unconventional Rydberg blockade regime within current experimental technologies. Therefore, our scheme provides a promising way for fast and robust neutral-atom-based quantum computation. | quantum physics |
At the present time, there are a number of measurements of $B$-decay observables that disagree with the predictions of the standard model. These discrepancies have been seen in processes governed by two types of decay: (i) $b \to s \mu^+ \mu^-$ and (ii) $b \to c \tau^- {\bar\nu}$. In this talk, I review the experimental results, as well as the proposed new-physics explanations. We may be seeing the first signs of physics beyond the standard model. | high energy physics phenomenology |
Recent years have seen a surge in the number of data leaks despite aggressive information-containment measures deployed by cloud providers. When attackers acquire sensitive data in a secure cloud environment, covert communication channels are a key tool to exfiltrate the data to the outside world. While the bulk of prior work focused on covert channels within a single CPU, such channels require the spy (transmitter) and the receiver to share the CPU, which might be difficult to achieve in a cloud environment with hundreds or thousands of machines. This work presents Bankrupt, a high-rate highly clandestine channel that enables covert communication between the spy and the receiver running on different nodes in an RDMA network. In Bankrupt, the spy communicates with the receiver by issuing RDMA network packets to a private memory region allocated to it on a different machine (an intermediary). The receiver similarly allocates a separate memory region on the same intermediary, also accessed via RDMA. By steering RDMA packets to a specific set of remote memory addresses, the spy causes deep queuing at one memory bank, which is the finest addressable internal unit of main memory. This exposes a timing channel that the receiver can listen on by issuing probe packets to addresses mapped to the same bank but in its own private memory region. The Bankrupt channel delivers 74 Kb/s throughput in CloudLab's public cloud while remaining undetectable to existing monitoring capabilities, such as CPU and NIC performance counters. | computer science |
Joint optimization of scheduling and estimation policies is considered for a system with two sensors and two non-collocated estimators. Each sensor produces an independent and identically distributed sequence of random variables, and each estimator forms estimates of the corresponding sequence in the mean-squared-error sense. The data generated by the sensors is transmitted to the corresponding estimators over a bandwidth-constrained wireless network that can support a single packet per time slot. The access to the limited communication resources is determined by a scheduler who decides which sensor measurement to transmit based on both observations. The scheduler has an energy-harvesting battery of limited capacity, which couples the decision-making problem in time. Despite the overall lack of convexity of the team decision problem, it is shown that this system admits globally optimal scheduling and estimation strategies under the assumption that the distributions of the random variables at the sensors are symmetric and unimodal. Additionally, the optimal scheduling policy has a structure characterized by a threshold function that depends on the time index and energy level. A recursive algorithm for threshold computation is provided. | electrical engineering and systems science |
Quantum deletions, which are harder to correct than erasure errors, occur in many realistic settings. It is therefore pertinent to develop quantum coding schemes for quantum deletion channels. To date, not much is known about which explicit quantum error correction codes can combat quantum deletions. We note that {\em any} permutation-invariant quantum code that has a distance of $t+1$ can correct $t$ quantum deletions for any positive integer $t$, in both the qubit and the qudit setting. Leveraging on coding properties of permutation-invariant quantum codes under erasure errors, we derive corresponding coding bounds for permutation-invariant quantum codes under quantum deletions. We focus our attention on a specific family of $N$-qubit permutation-invariant quantum codes, which we call shifted gnu codes, and show that their encoding and decoding algorithms can be performed in $O(N)$ and $O(N^2)$ time, respectively. | quantum physics |
Martinus Veltman was the first to point out the inconsistency of the experimental value for the decay rate of $\pi^0\rightarrow\gamma\gamma$, and of its calculation by J. Steinberger, with the very successful concept of the pion as the (pseudo)Nambu-Goldstone boson of the spontaneously broken global axial symmetry of strong interactions. That inconsistency has been resolved by J. Bell and R. Jackiw in their famous paper on the chiral anomalies. We review the connection between the decay amplitudes of an axion into two gauge bosons in Abelian vector-like and chiral gauge theories. The axion is the Nambu-Goldstone boson of a spontaneously broken axial global symmetry of the theory. Similarly to the vector-like gauge theory, in the chiral one the axion decay amplitude is uniquely determined by the anomaly of the current of that global symmetry. A certain subtlety in the calculation of the anomaly in chiral gauge theories is emphasised. | high energy physics phenomenology |
Percolation, describing critical behaviors of phase transition in a geometrical context, prompts wide investigations in natural and social networks as a fundamental model. The introduction of quantum-intrinsic interference and tunneling brings percolation into quantum regime with more fascinating phenomena and unique features, which, however, hasn't been experimentally explored yet. Here we present an experimental demonstration of quantum transport in hexagonal percolation lattices by successfully mapping such large-scale porous structures into a photonic chip using femtosecond laser direct writing techniques. A quantum percolation threshold of 80% is observed in the prototyped laser-written lattices with up to 1,600 waveguides, which is significantly larger than the classical counterpart of 63%. We also investigate the spatial confinement by localization parameters and exhibit the transition from ballistic to diffusive propagation with the decrease of the occupation probability. Direct observation of quantum percolation may deepen the understanding of the relation among materials, quantum transport, geometric quenching, disorder and localization, and inspire applications for quantum technologies. | quantum physics |
Rapid advancement in the observation of cosmic strings has been made in recent years placing increasingly stringent constraints on their properties, with $G\mu\lesssim 10^{-11}$ from Pulsar Timing Array (PTA). Cosmic string loops with low string tension clump in the Galaxy due to slow loop decay and low gravitational recoil, resulting in great enhancement to loop abundance in the Galaxy. With an average separation of down to just a fraction of a kpc, and the total power of gravitational wave (GW) emission dominated by harmonic modes spanning a wide angular scale, resolved loops located in proximity are powerful, persistent, and highly monochromatic sources of GW with a harmonic signature not replicated by any other sources, making them prime targets for direct detection by the upcoming Laser Interferometer Space Antenna (LISA), whose frequency range is well-matched. Unlike detection of bursts where the detection rate scales with loop abundance, the detection rate for harmonic signal is the result of a complex interplay between the strength of GW emission, loop abundance, and other sources of noise, and is most suitably studied through numerical simulations. We develop a robust and flexible framework for simulating loops in the Galaxy for predicting direct detection of harmonic signal from resolved loops by LISA. Our simulation reveals that the most accessible region in the parameter space consists of large loops $\alpha=0.1$ with low tension $10^{-21}\lesssim G\mu\lesssim 10^{-19}$. Direct detection of field theory cosmic strings is unlikely, with the detection probability $p_{\mathrm{det}}\lesssim 2\%$ for a 1-year mission. An extension suggests that direct detection of cosmic superstrings with a low intercommutation probability is very promising. Searching for harmonic GW signal from resolved loops through LISA observations will potentially place physical constraints on string theory. | astrophysics |
We study the phase diagram and quantum critical region of one of the fundamental models for electronic correlations: the periodic Anderson model. Employing the recently developed dynamical vertex approximation, we find a phase transition between a zero-temperature antiferromagnetic insulator and a Kondo insulator. In the quantum critical region, we determine a critical exponent $\gamma\!=\!2$ for the antiferromagnetic susceptibility. At higher temperatures, we have free spins with $\gamma\!=\!1$ instead, whereas at lower temperatures, there is an even stronger increase and suppression of the susceptibility below and above the quantum critical point, respectively. | condensed matter |
Quantum hypothesis testing is one of the most fundamental problems in quantum information theory, with crucial implications in areas like quantum sensing, where it has been used to prove quantum advantage in a series of binary photonic protocols, e.g., for target detection or memory cell readout. In this work, we generalize this theoretical model to the multi-partite setting of barcode decoding and pattern recognition. We start by defining a digital image as an array or grid of pixels, each pixel corresponding to an ensemble of quantum channels. Specializing each pixel to a black and white alphabet, we naturally define an optical model of barcode. In this scenario, we show that the use of quantum entangled sources, combined with suitable measurements and data processing, greatly outperforms classical coherent-state strategies for the tasks of barcode data decoding and classification of black and white patterns. Moreover, introducing relevant bounds, we show that the problem of pattern recognition is significantly simpler than barcode decoding, as long as the minimum Hamming distance between images from different classes is large enough. Finally, we theoretically demonstrate the advantage of using quantum sensors for pattern recognition with the nearest neighbor classifier, a supervised learning algorithm, and numerically verify this prediction for handwritten digit classification. | quantum physics |
We investigate the bursts of electromagnetic and scalar radiation resulting from the collision and merger of oscillons made from axion-like particles, using 3+1 dimensional lattice simulations of the coupled axion-gauge field system. The radiation into photons is suppressed before the merger. However, it becomes the dominant source of energy loss after the merger if a resonance condition is satisfied. Conversely, the radiation in scalar waves is dominant during the initial merger phase but suppressed after the merger. The backreaction of scalar and electromagnetic radiation is included in our simulations. We evolve the system long enough to see that the resonant photon production extracts a significant fraction of the initial axion energy, and again falls out of the resonance condition. We provide a parametric understanding of the time and energy scales involved in the process and discuss observational prospects of detecting the electromagnetic signal. | astrophysics |
Machine learning techniques enhanced by noisy intermediate-scale quantum (NISQ) devices, and especially variational quantum circuits (VQC), have recently attracted much interest and have already been benchmarked for certain problems. Inspired by classical deep learning, VQCs are trained by gradient descent methods which allow for efficient training over big parameter spaces. For NISQ-sized circuits, such methods show good convergence. There are, however, still many open questions related to the convergence of the loss function and to the trainability of these circuits in situations of vanishing gradients. Furthermore, it is not clear how "good" the minima are in terms of generalization and stability against perturbations of the data, and there is, therefore, a need for tools to quantitatively study the convergence of the VQCs. In this work, we introduce a way to compute the Hessian of the loss function of VQCs and show how to characterize the loss landscape with it. The eigenvalues of the Hessian give information on the local curvature, and we discuss how this information can be interpreted and compared to classical neural networks. We benchmark our results on several examples, starting with a simple analytic toy model to provide some intuition about the behavior of the Hessian, then going to bigger circuits, and also train VQCs on data. Finally, we show how the Hessian can be used to adjust the learning rate for faster convergence during the training of variational circuits. | quantum physics |
We theoretically study the dynamical phase diagram of the Dicke model in both classical and quantum limits using large, experimentally relevant system sizes. Our analysis elucidates that the model features dynamical critical points that are distinct from previously investigated excited-state equilibrium transitions. Moreover, our numerical calculations demonstrate that mean-field features of the dynamics remain valid in the exact quantum dynamics, but we also find that in regimes where quantum effects dominate signatures of the dynamical phases and chaos can persist in purely quantum metrics such as entanglement and correlations. Our predictions can be verified in current quantum simulators of the Dicke model including arrays of trapped ions. | quantum physics |
Searching for two-body resonance decays is a central component of the high energy physics energy frontier research program. While many of the possibilities are covered when the two bodies are Standard Model (SM) particles, there are still significant gaps. If one or both of the bodies are themselves non-SM particles, there is very little coverage from existing searches. We review the status of two-body searches and motivate the need to search for the missing combinations. It is likely that the search program of the future will be able to cover all possibilities with a combination of dedicated and model agnostic search approaches. | high energy physics phenomenology |
Recently, Transformer has gained success in the automatic speech recognition (ASR) field. However, it is challenging to deploy a Transformer-based end-to-end (E2E) model for online speech recognition. In this paper, we propose the Transformer-based online CTC/attention E2E ASR architecture, which contains the chunk self-attention encoder (chunk-SAE) and the monotonic truncated attention (MTA) based self-attention decoder (SAD). Firstly, the chunk-SAE splits the speech into isolated chunks. To reduce the computational cost and improve the performance, we propose the state reuse chunk-SAE. Secondly, the MTA based SAD truncates the speech features monotonically and performs attention on the truncated features. To support online recognition, we integrate the state reuse chunk-SAE and the MTA based SAD into the online CTC/attention architecture. We evaluate the proposed online models on the HKUST Mandarin ASR benchmark and achieve a 23.66% character error rate (CER) with a 320 ms latency. Our online model yields as little as 0.19% absolute CER degradation compared with the offline baseline, and achieves significant improvement over our prior work on Long Short-Term Memory (LSTM) based online E2E models. | electrical engineering and systems science |
We consider infinite horizon optimal control problems with time averaging and time discounting criteria and give estimates for the Ces\`aro and Abel limits of their optimal values in the case when they depend on the initial conditions. We establish that these limits are bounded from above by the optimal value of a certain infinite dimensional (ID) linear programming (LP) problem and that they are bounded from below by the optimal value of the corresponding dual problem. (These estimates imply, in particular, that the Ces\`aro and Abel limits exist and are equal to each other if there is no duality gap.) In addition, we obtain IDLP-based optimality conditions for the long run average optimal control problem, and we illustrate these conditions by an example. | mathematics |
For conformal field theories, it is shown how the Ward identity corresponding to dilatation invariance arises in a Wilsonian setting. In so doing, several points which are opaque in textbook treatments are clarified. Exploiting the fact that the Exact Renormalization Group furnishes a representation of the conformal algebra allows dilatation invariance to be stated directly as a property of the action, despite the presence of a regulator. This obviates the need for formal statements that conformal invariance is recovered once the regulator is removed. Furthermore, the proper subset of conformal primary fields for which the Ward identity holds is identified for all dimensionalities. | high energy physics theory |
The correct quark and charged lepton mass matrices along with a nearly correct CKM matrix may be naturally accommodated in a Pati-Salam model constructed from intersecting D6 branes on a $T^6/(\mathbb{Z}_2 \times \mathbb{Z}_2)$ orientifold. Furthermore, near-tribimaximal mixing for neutrinos may arise naturally due to the structure of the Yukawa matrices. Consistency with the quark and charged lepton mass matrices in combination with obtaining near-tribimaximal mixing fixes the Dirac neutrino mass matrix completely. Then, applying the seesaw mechanism for different choices of right-handed neutrino masses and running the obtained neutrino parameters down to the electroweak scale via the RGEs, we are able to make predictions for the neutrino masses and mixing angles. We obtain lepton mixing angles which are close to the observed values, $\theta_{12} =33.8^{\circ}\pm1.2^{\circ}$, $\theta_{23}=46.9^{\circ}\pm0.9^{\circ}$, and $\theta_{13}=8.56^{\circ}\pm0.20^{\circ}$. In addition, the neutrino mass-squared differences are found to be $\Delta m^2_{32} = 0.0025\pm0.0001$~eV$^2$ and $\Delta m^2_{21} = 0.000075\pm0.000003$~eV$^2$, with $m_1=0.0150\pm0.0002$~eV, $m_2=0.0173\pm0.0002$~eV, and $m_3=0.053\pm 0.002$~eV, so that $\sum_i m_i = 0.085\pm0.002$~eV, consistent with experimental observations. | high energy physics phenomenology |
Context: The internal characteristics of stars, such as their core rotation rates, are obtained via asteroseismic observations. A comparison of core rotation rates found in this way with core rotation rates as predicted by stellar evolution models demonstrates a large discrepancy. This means that there must be a process of angular momentum transport missing in the current theory of stellar evolution. A new formalism was recently proposed to fill in for this missing process, which has the Tayler instability as its starting point (hereafter referred to as the `Fuller-formalism'). Aims: We investigate the effect of the Fuller-formalism on the internal rotation of stellar models with an initial mass of 2.5 $M_\odot$. Methods: Stellar evolution models, including the Fuller-formalism, of intermediate-mass stars were calculated to make a comparison between asteroseismically obtained core rotation rates in the core He burning phase and in the white dwarf phase. Results: Our main results show that models including the Fuller-formalism can match the core rotation rates obtained for the core He burning phases. However, these models are unable to match the rotation rates obtained for white dwarfs. When we exclude the Fuller-formalism at the end of the core He burning phase, the white dwarf rotation rates of the models match the observed rates. Conclusions: We conclude that in its present form, the Fuller-formalism cannot be the sole solution for the missing process of angular momentum transport in intermediate-mass stars. | astrophysics |
Motivated by the need for effectively summarising, modelling, and forecasting the distributional characteristics of intra-daily returns, as well as the recent work on forecasting histogram-valued time-series in the area of symbolic data analysis, we develop a time-series model for forecasting quantile-function-valued (QF-valued) daily summaries for intra-daily returns. We call this model the dynamic quantile function (DQF) model. Instead of a histogram, we propose to use a $g$-and-$h$ quantile function to summarise the distribution of intra-daily returns. We work with a Bayesian formulation of the DQF model in order to make statistical inference while accounting for parameter uncertainty; an efficient MCMC algorithm is developed for sampling-based posterior inference. Using ten international market indices and approximately 2,000 days of out-of-sample data from each market, the performance of the DQF model compares favourably, in terms of forecasting VaR of intra-daily returns, against the interval-valued and histogram-valued time-series models. Additionally, we demonstrate that the QF-valued forecasts can be used to forecast VaR measures at the daily timescale via a simple quantile regression model on daily returns (QR-DQF). In certain markets, the resulting QR-DQF model is able to provide competitive VaR forecasts for daily returns. | statistics |
This paper is based on the opening lecture given at the 2017 edition of the Evry Schatzman school on high-angular resolution imaging of stars and their direct environment. Two relevant observing techniques, long baseline interferometry and adaptive optics fed high-contrast imaging, produce data whose overall aspect is dominated by the phenomenon of diffraction. The proper interpretation of such data requires an understanding of the coherence properties of astrophysical sources, that is, the ability of light to produce interference. This theory is used to describe high-contrast imaging in more detail. The paper introduces the rationale for ideas such as apodization and coronagraphy and describes how they interact with adaptive optics. The incredible precision brought by the latest generation adaptive optics systems makes observations particularly sensitive to subtle instrumental biases that must be accounted for, up until now using post-processing techniques. The ability to directly measure the coherence of the light in the focal plane of high-contrast imaging instruments using focal-plane based wavefront control techniques will be the next step to further enhance our ability to directly detect extrasolar planets. | astrophysics |
As platforms of Majorana modes, topological insulator (quantum anomalous Hall insulator)/superconductor (SC) heterostructures have attracted tremendous attention over the past decade. Here we substitute the topological insulator by its higher-order counterparts. Concretely, we consider second-order topological insulators (SOTIs) without time-reversal symmetry and investigate SOTI/SC heterostructures in both two and three dimensions. Remarkably, we find that such novel heterostructures provide natural realizations of second-order topological superconductors (SOTSCs) which host Majorana corner modes in two dimensions and chiral Majorana hinge modes in three dimensions. As here the realization of SOTSCs requires neither special pairings nor magnetic fields, such SOTI/SC heterostructures are outstanding platforms of Majorana modes and may have wide applications in future. | condensed matter |
We construct traversable wormholes by starting with simple four-dimensional classical solutions respecting the null energy condition and containing a pair of oppositely charged black holes connected by a non-traversable wormhole. We then consider the perturbative back-reaction of bulk quantum fields in Hartle-Hawking states. Our geometries have zero cosmological constant and are asymptotically flat except for a cosmic string stretching to infinity that is used to hold the black holes apart. Another cosmic string wraps the non-contractible cycle through the wormhole, and its quantum fluctuations provide the negative energy needed for traversability. Our setting is closely related to the non-perturbative construction of Maldacena, Milekhin, and Popov (MMP), but the analysis is complementary. In particular, we consider cases where back-reaction slows, but fails to halt, the collapse of the wormhole interior, so that the wormhole is traversable only at sufficiently early times. For non-extremal backgrounds, we find the integrated null energy along the horizon of the classical background to be exponentially small, and thus traversability to be exponentially fragile. Nevertheless, if there are no larger perturbations, and for appropriately timed signals, a wormhole with mouths separated by a distance $d$ becomes traversable with a minimum transit time $t_{\text{min transit}} = d + \text{logs}$. Thus $\frac{t_{\text{min transit}}}{d}$ is smaller than for the eternally traversable MMP wormholes by more than a factor of 2, and approaches the value that, at least in higher dimensions, would be the theoretical minimum. For contrast we also briefly consider a `cosmological wormhole' solution where the back-reaction has the opposite sign, so that negative energy from quantum fields makes the wormhole harder to traverse. | high energy physics theory |
In the case where the Standard Model is extended by one heavy Majorana fermion, the branching fractions of semileptonic meson decays into same-sign and opposite-sign dileptons are expected to be of the same order. As we discuss here, this need not be the case in extensions by at least two sterile fermions, due to the possible destructive and constructive interferences that might arise. Depending on the $CP$ violating phases, one can have an enhancement of the lepton number violating modes and suppression of the lepton number conserving ones (and vice-versa). We explore for the first time the interference effects in semileptonic decays, and illustrate them for a future observation of kaon decays at NA62. We also argue that a non-observation of a given mode need not be interpreted in terms of reduced active-sterile mixings, but that it could instead be understood in terms of interference effects due to the presence of several sterile states; in particular, for different-flavour final state charged leptons, observing a lepton number conserving process and not a lepton number violating one does not rule out that the mediators are Majorana fermions. | high energy physics phenomenology |
The initial mass function (IMF) is an important, yet enigmatic aspect of the star formation process. The two major open questions regarding the IMF are: is the IMF constant regardless of environment? Is the IMF a universal property of star formation? The next generation of extremely large telescopes will allow us to observe further, fainter and more compact stellar clusters than is possible with current facilities. In these proceedings we present our study looking at just how much these future observatories will improve our knowledge of the IMF. | astrophysics |
In this paper we develop a new approach for studying overlapping iterated function systems. This approach is inspired by a famous result due to Khintchine from Diophantine approximation. This result shows that for a family of limsup sets, their Lebesgue measure is determined by the convergence or divergence of naturally occurring volume sums. For many parameterised families of overlapping iterated function systems, we prove that a typical member will exhibit similar Khintchine like behaviour. Families of iterated function systems our results apply to include those arising from Bernoulli convolutions, the $\{0,1,3\}$ problem, and affine contractions with varying translation parameter. As a by-product of our analysis we obtain new proofs of well known results due to Solomyak on the absolute continuity of Bernoulli convolutions, and when the attractor in the $\{0,1,3\}$ problem has positive Lebesgue measure. For each $t\in [0,1]$ we let $\Phi_t$ be the iterated function system given by $$\Phi_{t}:=\Big\{\phi_1(x)=\frac{x}{2},\phi_2(x)=\frac{x+1}{2},\phi_3(x)=\frac{x+t}{2},\phi_{4}(x)=\frac{x+1+t}{2}\Big\}.$$ We include a detailed study of this family. We prove that either $\Phi_t$ contains an exact overlap, or we observe Khintchine like behaviour. Our analysis of this family shows that by studying the metric properties of limsup sets, we can distinguish between the overlapping behaviour of iterated function systems in a way that is not available to us by simply studying properties of self-similar measures. Last of all, we introduce a property of an iterated function system that we call being consistently separated with respect to a measure. We prove that this property implies that the pushforward of the measure is absolutely continuous. We include several explicit examples of consistently separated iterated function systems. | mathematics |
Recent advancements in Convolutional Neural Networks have yielded super-human levels of performance in image recognition tasks [13, 25]; however, with increasing volumes of parcels crossing UK borders each year, classification of threats becomes integral to the smooth operation of UK borders. In this work we propose the first pipeline to effectively process Dual-Energy X-Ray scanner output, and perform classification capable of distinguishing between firearm families (Assault Rifle, Revolver, Self-Loading Pistol, Shotgun, and Sub-Machine Gun) from this output. With this pipeline we compare recent Convolutional Neural Network architectures against the X-Ray baggage domain via Transfer Learning and show ResNet50 to be the most suitable for classification, outlining a number of considerations for operational success within the domain. | computer science |
Understanding the interplay between disorder, environment and interactions is key to elucidating the transport properties of open quantum systems, from excitons in photosynthetic networks to qubits in ion traps. This interplay is studied here theoretically in the context of environment-assisted quantum transport (ENAQT), a unique situation in open system where an environment-induced dephasing can, counter-intuitively, enhance transport. First, we show a surprising situation where the particle current grows with increasing disorder, even without dephasing. Then, we suggest a specific mechanism for ENAQT (which we dub population uniformization) and demonstrate that it can explain the persistence of ENAQT deep into the disorder-induced localization regime. Finally, we show that repulsive interactions are detrimental to ENAQT, and lead to an environment-hampered quantum transport. Our predictions can readily be tested within the scope of particle current experimental capabilities. | quantum physics |
Total dominator total coloring of a graph is a total coloring of the graph such that each object of the graph is adjacent or incident to every object of some color class. The minimum number of color classes of a total dominator total coloring of a graph is called the total dominator total chromatic number of the graph. Here, we will find the total dominator total chromatic numbers of wheels, complete bipartite graphs and complete graphs. | mathematics |
If machine failures can be detected preemptively, then maintenance and repairs can be performed more efficiently, reducing production costs. Many machine learning techniques for performing early failure detection using vibration data have been proposed; however, these methods are often power- and data-hungry, susceptible to noise, and require large amounts of data preprocessing. Also, training is usually only performed once before inference, so they do not learn and adapt as the machine ages. Thus, we propose a method of performing online, real-time anomaly detection for predictive maintenance using Hierarchical Temporal Memory (HTM). Inspired by the human neocortex, HTMs learn and adapt continuously and are robust to noise. Using the Numenta Anomaly Benchmark, we empirically demonstrate that our approach outperforms state-of-the-art algorithms at preemptively detecting real-world cases of bearing failures and simulated 3D printer failures. Our approach achieves an average score of 64.71, surpassing state-of-the-art deep-learning (49.38) and statistical (61.06) methods. | computer science |
The flow of non-Newtonian fluids is ubiquitous in many applications in geological and industrial contexts. We focus here on yield stress fluids (YSF), i.e., materials that require a minimum stress to flow. We study numerically the flow of yield stress fluids in 2D porous media on a macroscopic scale in the presence of local heterogeneities. As with the microscopic problem, heterogeneities are of crucial importance because some regions will flow more easily than others. As a result, the flow is characterized by preferential flow paths with fractal features. These fractal properties are characterized by different scale exponents that will be determined and analyzed. One of the salient features of these results is that these exponents seem to be independent of the amplitude of heterogeneities for a log-normal distribution. In addition, these exponents appear to differ from those at the microscopic level, illustrating the fact that, although similar, the two scales are governed by different sets of equations. | physics |
In recent years, wsj0-2mix has become the reference dataset for single-channel speech separation. Most deep learning-based speech separation models today are benchmarked on it. However, recent studies have shown important performance drops when models trained on wsj0-2mix are evaluated on other, similar datasets. To address this generalization issue, we created LibriMix, an open-source alternative to wsj0-2mix, and to its noisy extension, WHAM!. Based on LibriSpeech, LibriMix consists of two- or three-speaker mixtures combined with ambient noise samples from WHAM!. Using Conv-TasNet, we achieve competitive performance on all LibriMix versions. In order to fairly evaluate across datasets, we introduce a third test set based on VCTK for speech and WHAM! for noise. Our experiments show that the generalization error is smaller for models trained with LibriMix than with WHAM!, in both clean and noisy conditions. Aiming towards evaluation in more realistic, conversation-like scenarios, we also release a sparsely overlapping version of LibriMix's test set. | electrical engineering and systems science |
The coupling of laser light to matter can exert sub-cycle coherent control over material properties, with optically induced currents and magnetism shown to be controllable on ultrafast femtosecond time scales. Here, by employing laser light consisting of both linear and circular pulses, we show that charge of specified spin and crystal momentum can be created with precision throughout the first Brillouin zone. Our hybrid pulses induce in a controlled way both adiabatic intraband motion as well as vertical interband excitation between valence and conduction bands, and require only a gapped spin-split valley structure for their implementation. This scenario is commonly found in the 2d semi-conductors, and we demonstrate our approach with monolayer WSe$_2$. We thus establish a route from laser light to local control over excitations in reciprocal space, opening the way to the preparation of momentum-specified excited states at ultrafast time scales. | physics |
A novel modulation scheme termed orthogonal frequency-division multiplexing with subcarrier number modulation (OFDM-SNM) has been proposed and regarded as one of the promising candidate modulation schemes for next generation networks. Although OFDM-SNM is capable of having a higher spectral efficiency (SE) than OFDM with index modulation (OFDM-IM) and plain OFDM under certain conditions, its reliability is relatively inferior to these existing schemes, because the number of active subcarriers varies. In this regard, we propose an enhanced OFDM-SNM scheme in this paper, which utilizes the flexibility of placing subcarriers to harvest a coding gain in the high signal-to-noise ratio (SNR) region. In particular, we stipulate a methodology that optimizes the subcarrier activation pattern (SAP) by subcarrier assignment using instantaneous channel state information (CSI) and therefore the subcarriers with higher channel power gains will be granted the priority to be activated, given the number of subcarriers is fixed. We also analyze the proposed enhanced OFDM-SNM system in terms of outage and error performance. The average outage probability and block error rate (BLER) are derived and approximated in closed-form expressions, which are further verified by numerical results generated by Monte Carlo simulations. The high-reliability nature of the enhanced OFDM-SNM makes it a promising candidate for implementing in the Internet of Things (IoT) with stationary machine-type devices (MTDs), which are subject to slow fading and supported by proper power supply. | electrical engineering and systems science |
Hasse diagrams provide a principled means for visualizing the structure of statistical designs constructed by crossing and nesting of experimental factors. They have long been applied for automated construction of linear models and their associated linear subspaces for complex designs. Here, we argue that they could also provide a central component for planning and teaching introductory or service courses in experimental design. Specifically, we show how Hasse diagrams allow constructing most elementary designs and finding many of their properties, such as degrees of freedom, error strata, experimental units and denominators for F-tests. Linear (mixed) models for analysis directly correspond to the diagrams, which facilitates both defining a model and specifying it in statistical software. We demonstrate how instructors can seamlessly use Hasse diagrams to construct designs by combining simple unit- and treatment structures, identify pseudo-replication, and discuss a design's randomization, unit-treatment versus treatment-treatment interactions, or complete confounding. These features commend Hasse diagrams as a powerful tool for unifying ideas and concepts. | statistics |
An explicit formula for the canonical bilinear form on the Grothendieck ring of the Lie supergroup $GL(n,m)$ is given. As an application we obtain an algorithm for the decomposition of Euler supercharacters in terms of supercharacters of irreducible modules in the category of partially polynomial modules. | mathematics |
We investigate the use of two or more linked registers, or lists, both for population size estimation and for investigating the relationship between variables appearing on all or only some registers. This relationship is usually not fully known, because some individuals appear in only some registers and some are not in any register. These two problems have been solved simultaneously using the EM algorithm. We extend this approach to estimate the size of the indigenous M\=aori population in New Zealand, leading to several innovations: (1) the approach is extended to four registers (including the population census), where the reporting of M\=aori status differs between registers; (2) some individuals in one or more registers have missing ethnicity, and we adapt the approach to handle this additional missingness; (3) some registers cover subsets of the population by design. We discuss under which assumptions such structural undercoverage can be ignored and provide a general result; (4) we treat the M\=aori indicator in each register as a variable measured with error, and embed a latent class model in the multiple system estimation to estimate the population size of a latent variable, interpreted as the true M\=aori status. Finally, we discuss estimating the M\=aori population size from administrative data only. Supplementary materials for our article are available online. | statistics |
The solar wind is found by Parker Solar Probe (PSP) to be abundant with Alfv\'enic velocity spikes and magnetic field kinks. Temperature enhancement is another remarkable feature associated with the Alfv\'enic spikes. How the prototype of these coincident phenomena is generated intermittently in the source region has become a hot topic of wide concern. Here we propose a new model introducing guide-field discontinuity into the interchange magnetic reconnection between open funnels and closed loops with different magnetic helicities. The modified interchange reconnection model not only can accelerate jet flows from the newly opening closed loop but also excite and launch Alfv\'enic wave pulses along the newly-reconnected and post-reconnected open flux tubes. We find that the modeling results can reproduce the following observational features: (1) the Alfv\'en disturbance is pulsive in time and asymmetric in space; (2) the Alfv\'enic pulse is compressible, with temperature enhancement and density variation inside the pulse. We point out that three physical processes co-occurring with Alfv\'en wave propagation can be responsible for the temperature enhancement: (a) convection of heated jet flow plasmas (decrease in density), (b) propagation of compressed slow-mode waves (increase in density), and (c) conduction of heat flux (weak change in density). We also suggest that the radial nonlinear evolution of the Alfv\'enic pulses should be taken into account to explain the formation of the magnetic switchback geometry. | astrophysics |