text | label |
---|---|
In this note, we give a generalization of Gossez's example of a maximally monotone multifunction such that the closure of its range is not convex, using more elementary techniques than in Gossez's original papers. We also discuss some new properties of Gossez's skew linear operator and its adjoint. While most of this paper uses elementary functional analysis, we correlate our results with those obtained by using the Stone-Cech compactification of the integers. | mathematics |
We construct a family of non-supersymmetric extremal black holes and their horizonless microstate geometries in four dimensions. The black holes can have finite angular momentum and an arbitrary charge-to-mass ratio, unlike their supersymmetric cousins. These features make them and their microstate geometries astrophysically relevant. Thus, they provide interesting prototypes to study deviations from Kerr solutions caused by new horizon-scale physics. In this paper, we compute the gravitational multipole structure of these solutions and compare them to Kerr black holes. The multipoles of the black hole differ significantly from Kerr as they depend non-trivially on the charge-to-mass ratio. The horizonless microstate geometries have the same multipoles as their corresponding black hole, with small deviations set by the scale of their microstructure. | high energy physics theory |
The COVID-19 coronavirus is one of the most devastating viruses according to the World Health Organization. This novel virus leads to pneumonia, an infection that inflames the air sacs of the lungs. One method of detecting this inflammation is chest X-ray imaging. In this paper, a pneumonia chest X-ray detection model based on generative adversarial networks (GAN) with fine-tuned deep transfer learning for a limited dataset is presented. The use of GAN improves the robustness of the proposed model, makes it less prone to overfitting, and helps generate more images from the dataset. The dataset used in this research consists of 5863 X-ray images in two categories: Normal and Pneumonia. This research uses only 10% of the dataset for training and generates 90% of the images using GAN to demonstrate the efficiency of the proposed model. AlexNet, GoogLeNet, SqueezeNet, and ResNet18 are selected as deep transfer learning models to detect pneumonia from chest X-rays. These models are chosen for the small number of layers in their architectures, which reduces model complexity as well as memory and time consumption. The combination of GAN and deep transfer models proved efficient according to the testing accuracy. The research concludes that ResNet18 is the most appropriate deep transfer model according to testing accuracy, achieving 99% along with strong values of the other performance metrics, such as precision, recall, and F1 score, while using GAN as an image augmenter. Finally, the results are compared with related work that used the same dataset, except that this research used only 10% of the original dataset for training. The presented work achieves superior testing accuracy compared to the related work. | electrical engineering and systems science |
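The evaluation metrics named in this abstract (accuracy, precision, recall, F1) can be sketched from a binary confusion matrix; the labels below are illustrative toy data, not the paper's actual predictions.

```python
# Sketch: accuracy, precision, recall, and F1 for a binary
# Normal-vs-Pneumonia classifier, computed from raw label lists.

def binary_metrics(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# 1 = Pneumonia, 0 = Normal (toy example)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```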
We construct Markov processes for modeling the rupture of edges in a two-dimensional foam. We first describe a network model for tracking topological information of foam networks with a state space of combinatorial embeddings. Through a mean-field rule for randomly selecting neighboring cells of a rupturing edge, we consider a simplified version of the network model in the sequence space $\ell_1(\mathbb N)$ which counts total numbers of cells with $n\ge 3$ sides ($n$-gons). Under a large cell limit, we show that number densities of $n$-gons in the mean field model are solutions of an infinite system of nonlinear kinetic equations. This system is comparable to the Smoluchowski coagulation equation for coalescing particles under a multiplicative collision kernel, suggesting gelation behavior. Numerical simulations reveal gelation in the mean-field model, and also comparable statistical behavior between the network and mean-field models. | condensed matter |
We give a description of double parton scattering with measured transverse momenta in the final state, extending the formalism for factorisation and resummation developed by Collins, Soper and Sterman for the production of colourless particles. After a detailed analysis of their colour structure, we derive and solve evolution equations in rapidity and renormalisation scale for the relevant soft factors and double parton distributions. We show how in the perturbative regime, transverse momentum dependent double parton distributions can be expressed in terms of simpler nonperturbative quantities and compute several of the corresponding perturbative kernels at one-loop accuracy. We then show how the coherent sum of single and double parton scattering can be simplified for perturbatively large transverse momenta, and we discuss to which order resummation can be performed with presently available results. As an auxiliary result, we derive a simple form for the square root factor in the Collins construction of transverse momentum dependent parton distributions. | high energy physics phenomenology |
Objective: Computed Tomography (CT) plays an important role in detecting lung lesions related to Covid-19. The purpose of this work is to obtain diagnostic findings from Ultra-Low Dose (ULD) chest CT images and compare them with routine-dose chest CT. Material and Methods: Patients suspected of Covid-19 infection were scanned successively with routine-dose and ULD protocols, the latter with 98% or 94% dose reduction. Axial images of routine and ULD chest CT were evaluated qualitatively by two expert radiologists and quantitatively by Signal-to-Noise Ratio (SNR) and pixel-by-pixel noise measurement. Results: Both the ULD and routine-dose chest CT images could detect Covid-19-related lung lesions in patients with a positive PCR test. SNR and pixel noise values were also comparable between the two protocols. Conclusion: ULD chest CT with 98% dose reduction can be used in non-pandemic situations as a substitute for chest radiographs for screening and follow-up. The routine chest CT protocol can be replaced by ULD with 94% dose reduction to detect patients suspected of Covid-19 at an early stage and for follow-up. | physics |
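The two quantitative measures mentioned above can be sketched simply: SNR as mean signal over noise standard deviation in a region of interest, and pixel noise from a difference image of two repeated scans. The pixel values are illustrative, not real CT data, and the difference-image estimator assumes independent, identically distributed noise in both scans.

```python
# Sketch of SNR and pixel-noise measurement on toy pixel arrays.
import math

def roi_snr(roi):
    """SNR of a region of interest: mean / population std."""
    n = len(roi)
    mean = sum(roi) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in roi) / n)
    return mean / std

def pixel_noise(scan_a, scan_b):
    """Noise from a difference image: std of (a - b) / sqrt(2),
    assuming i.i.d. noise of equal strength in both scans."""
    diff = [a - b for a, b in zip(scan_a, scan_b)]
    n = len(diff)
    mean = sum(diff) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in diff) / n)
    return std / math.sqrt(2)
```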
We describe a technique to reveal speed-of-sound (SoS) variations within an echogenic sample. The technique uses the same receive data as standard pulse-echo imaging based on plane-wave compounding, and can be operated in parallel. Point-like scatterers randomly distributed throughout the sample serve as local probes of the downstream transmit-beam phase shifts caused by aberrating structures within the sample. Phase shifts are monitored in a differential manner, providing signatures of transverse gradients of the local sample SoS. The contrast of the signatures is augmented by a method of angular compounding, which provides ``focus" control of the image sharpness, which, in turn, enables a visual localization of aberrating inclusions within the sample on the fly. The localization can be performed in 2D when operated with standard B-mode imaging, or in 3D when operated with C-mode imaging. Finally, we present a wave-acoustic forward model that provides insight into the principle of differential phase contrast (DPC) imaging, and roughly recapitulates experimental results obtained with an elastography phantom. In particular, we demonstrate that our technique easily reveals relative SoS variations as small as 0.5\% in real time. Such imaging may ultimately be useful for clinical diagnosis of pathologies in soft tissue. | electrical engineering and systems science |
Under semiclassical evolution, black holes retain a smooth horizon but fail to return information. Yet, the Ryu-Takayanagi prescription computes the boundary entropy expected from unitary CFT evolution. We demonstrate this in a novel setting with an asymptotic bulk detector, eliminating an assumption about the entanglement wedge of auxiliary systems. We consider three interpretations of this result. (i) At face value, information is lost in the bulk but not in the CFT. This conflicts with the AdS/CFT dictionary. (ii) No unique QFT state (pure or mixed) governs all detector responses to the bulk Hawking radiation. This conflicts with the existence of an S-matrix. (iii) Nonlocal couplings to the black hole interior cause asymptotic detectors to respond as though the radiation was pure, even though it is naively thermal. This invalidates the standard interpretation of the semiclassical state, including its smoothness at the horizon. We conclude that unitary boundary evolution requires asymptotic bulk detectors to become unambiguously pure at late times. We ask whether the RT prescription can still reproduce the boundary entropy in this bulk scenario. We find that this requires a substantial failure of semiclassical gravity in a low-curvature region, such as a firewall that purifies the Hawking radiation. Finally, we allow that the dual to semiclassical gravity may be an ensemble of unitary theories. This appears to relax the tensions we found: the ensemble average of out-states would be mixed, but the ensemble average of final entropies would vanish. | high energy physics theory |
Conventional CMOS technology operated at cryogenic conditions has recently attracted interest for its uses in low-noise electronics. We present one of the first characterizations of 180 nm CMOS technology at a temperature of 100 mK, extracting I/V characteristics, threshold voltages, and transconductance values, as well as observing their temperature dependence. We find that CMOS devices remain fully operational down to these temperatures, although we observe hysteresis effects in some devices. The measurements described in this paper can be used to inform the future design of CMOS devices intended to be operated in this deep cryogenic regime. | physics |
In this research, a model is proposed to learn from an event log and predict future events of a system. The proposed PEDF model learns from event sequences, durations, and extra features. The PEDF model is built from a network of standard clusterers and classifiers, and it offers high flexibility for updating the model iteratively. The model requires extracting two sets of data from log files: transition differences and cumulative features. The model has one layer of memory, meaning that each transition depends on both the current event and the previous event. To evaluate the performance of the proposed model, it is compared to Recurrent Neural Network and Sequential Prediction models, and it outperforms them. Since established performance measures for event log prediction models are lacking, three measures are proposed. | computer science |
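The two feature sets named above can be sketched as follows: transition differences (the duration between consecutive events, keyed by the event pair) and cumulative features (running totals up to each event). The field names and the toy log are illustrative, not the paper's actual schema.

```python
# Sketch: extracting transition differences and cumulative features
# from a timestamp-ordered event log.

def extract_features(events):
    """events: list of (timestamp, event_name), sorted by timestamp."""
    transitions = []   # (previous event, current event, duration)
    cumulative = []    # running totals up to each event
    total_time = 0.0
    for i in range(1, len(events)):
        dt = events[i][0] - events[i - 1][0]
        total_time += dt
        transitions.append((events[i - 1][1], events[i][1], dt))
        cumulative.append({"n_events": i + 1, "elapsed": total_time})
    return transitions, cumulative

log = [(0.0, "start"), (1.5, "load"), (4.0, "process"), (4.5, "done")]
trans, cum = extract_features(log)
```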
We study the maximally-helicity-violating (MHV) six-gluon scattering amplitude in planar $\mathcal{N} = 4$ super-Yang-Mills theory at finite coupling when all three cross ratios are small. It exhibits a double logarithmic scaling in the cross ratios, controlled by a handful of "anomalous dimensions" that are functions of the coupling constant alone. Inspired by known seven-loop results at weak coupling and the integrability-based pentagon OPE, we present conjectures for the all-order resummation of these anomalous dimensions. At strong coupling, our predictions agree perfectly with the string theory analysis. Intriguingly, the simplest of these anomalous dimensions coincides with one describing the light-like limit of the octagon, namely the four-point function of large-charge BPS operators. | high energy physics theory |
Quantum computing hardware has received worldwide attention and made considerable progress recently. YIG thin films host spin wave (magnon) modes with low dissipation and reliable control, suitable for quantum information processing. However, coherent coupling between a quantum device and a YIG thin film has not yet been demonstrated. Here, we propose a scheme to achieve strong coupling between superconducting flux qubits and magnon modes in a YIG thin film. Unlike the direct $\sqrt{N}$ enhancement factor in coupling to the Kittel mode or other spin ensembles, with $N$ the total number of spins, an additional spatially dependent phase factor needs to be considered when the qubits are magnetically coupled to magnon modes of finite wavelength. To avoid undesirable cancellation of the coupling caused by the symmetric boundary condition, a thin CoFeB layer is added to one side of the YIG film to break the symmetry. Our numerical simulations demonstrate avoided crossing and coherent transfer of quantum information between the flux qubits and the standing spin waves in YIG thin films. We show that the YIG thin film can be used as a tunable switch between two flux qubits whose shapes are modified so that the direct inductive coupling between them is small. Our results show that it is possible to couple flux qubits while suppressing undesirable cross-talk. | quantum physics |
In this paper, we address the noncommutative rank (nc-rank) computation of a linear symbolic matrix \[ A = A_1 x_1 + A_2 x_2 + \cdots + A_m x_m, \] where each $A_i$ is an $n \times n$ matrix over a field $\mathbb{K}$, and $x_i$ $(i=1,2,\ldots,m)$ are noncommutative variables. For this problem, polynomial time algorithms were given by Garg, Gurvits, Oliveira, and Wigderson for $\mathbb{K} = \mathbb{Q}$, and by Ivanyos, Qiao, and Subrahmanyam for an arbitrary field $\mathbb{K}$. We present a significantly different polynomial time algorithm that works on an arbitrary field $\mathbb{K}$. Our algorithm is based on a combination of submodular optimization on modular lattices and convex optimization on CAT(0) spaces. | mathematics |
We dimensionally reduce the spacetime action of bosonic string theory, and that of the bosonic sector of heterotic string theory after truncating the Yang-Mills gauge fields, on a $d$-dimensional torus including all higher-derivative corrections to first order in $\alpha'$. A systematic procedure is developed that brings this action into a minimal form in which all fields except the metric carry only first order derivatives. This action is shown to be invariant under ${\rm O}(d,d,\mathbb{R})$ transformations that acquire $\alpha'$-corrections through a Green-Schwarz type mechanism. We prove that, up to a global pre-factor, the first order $\alpha'$-corrections are uniquely determined by ${\rm O}(d,d,\mathbb{R})$ invariance. | high energy physics theory |
For $A \subseteq \{1,2,\ldots\}$, we consider $R(A) = \{a/a' : a,a' \in A\}$. If $A$ is the set of nonzero values assumed by a quadratic form, when is $R(A)$ dense in the $p$-adic numbers? We show that for a binary quadratic form $Q$, $R(A)$ is dense in $\mathbb{Q}_{p}$ if and only if the discriminant of $Q$ is a nonzero square in $\mathbb{Q}_{p}$, and for a quadratic form in at least three variables, $R(A)$ is always dense in $\mathbb{Q}_{p}$. This answers a question posed by several authors in 2017. | mathematics |
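The binary-form criterion above can be checked numerically in the simplest case: for an odd prime $p$ not dividing the discriminant $d$, $d$ is a nonzero square in $\mathbb{Q}_p$ exactly when $d$ is a quadratic residue mod $p$ (Euler's criterion). The full $p$-adic square test for $p \mid d$ or $p = 2$ is more delicate and omitted here; the example form is illustrative.

```python
# Sketch: testing whether a discriminant coprime to an odd prime p
# is a square in Q_p via Euler's criterion.

def is_square_in_Qp(d, p):
    """d: nonzero integer coprime to the odd prime p."""
    return pow(d % p, (p - 1) // 2, p) == 1

# Q(x, y) = x^2 + y^2 has discriminant -4, so R(A) is dense in Q_5
# (-4 = 1 mod 5 is a square) but not in Q_7 (3 is a non-residue mod 7).
dense_in_Q5 = is_square_in_Qp(-4, 5)
dense_in_Q7 = is_square_in_Qp(-4, 7)
```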
Scheduling precedence-constrained tasks is a classical problem that has been studied for more than fifty years. However, little progress has been made in the setting where there are communication delays between tasks. Results for the case of identical machines were derived nearly thirty years ago, and yet no results for related machines have followed. In this work, we propose a new scheduler, Generalized Earliest Time First (GETF), and provide the first provable, worst-case approximation guarantees for the goals of minimizing both the makespan and total weighted completion time of tasks with precedence constraints on related machines with machine-dependent communication times. | computer science |
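A simplified baseline for this setting can be sketched as a greedy earliest-time-first list scheduler on related machines (machines with different speeds). Note this ignores communication delays, unlike the paper's GETF, and the task sizes, speeds, and DAG below are illustrative.

```python
# Sketch: greedy earliest-time-first list scheduling of
# precedence-constrained tasks on related machines.

def list_schedule(sizes, preds, speeds):
    """sizes[t]: work of task t; preds[t]: list of predecessor tasks;
    speeds[m]: speed of machine m. Tasks must be topologically ordered."""
    machine_free = [0.0] * len(speeds)
    finish = {}
    for t, work in enumerate(sizes):
        ready = max((finish[p] for p in preds[t]), default=0.0)
        # Assign the task to the machine where it finishes earliest.
        best = min(range(len(speeds)),
                   key=lambda m: max(ready, machine_free[m]) + work / speeds[m])
        start = max(ready, machine_free[best])
        finish[t] = start + work / speeds[best]
        machine_free[best] = finish[t]
    return max(finish.values())  # makespan

# Chain 0 -> 2 plus an independent task 1, on machines of speed 2 and 1.
makespan = list_schedule([4.0, 2.0, 2.0], [[], [], [0]], [2.0, 1.0])
```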
We propose and demonstrate a novel scheme to produce ultrashort and ultrastable MeV electron beams. In this scheme, the electron beam produced in a photocathode radio-frequency (rf) gun first expands under its own Coulomb force, whereby a positive energy chirp is imprinted in the beam's longitudinal phase space. The beam is then sent through a double-bend achromat with positive longitudinal dispersion, where electrons at the bunch tail with lower energies follow shorter paths and thus catch up with the bunch head, leading to longitudinal bunch compression. We show that with optimized parameter sets, the whole beam path from the electron source to the compression point can be made isochronous, such that the time of flight of the electron beam is immune to fluctuations of the rf amplitude. With a laser-driven THz deflector, the bunch length and arrival-time jitter of a 20 fC beam after bunch compression are measured to be about 29 fs (FWHM) and 22 fs (FWHM), respectively. Such an ultrashort and ultrastable electron beam allows us to achieve 50 femtosecond (FWHM) resolution in MeV ultrafast electron diffraction, where the lattice oscillation at 2.6 THz corresponding to the Bismuth A1g mode is clearly observed without correcting for either the short-term timing jitter or the long-term timing drift. Furthermore, an oscillating weak diffuse-scattering signal related to phonon coupling and decay is also clearly resolved thanks to the improved temporal resolution and increased electron flux. We expect that this technique will have a strong impact on emerging ultrashort-electron-beam-based facilities and applications. | physics |
In millimeter-wave communications, multiple-input multiple-output (MIMO) systems use large antenna arrays to achieve high gain and spectral efficiency. These massive MIMO systems employ hybrid beamformers to reduce the power consumption associated with fully digital beamforming in large arrays. Further savings in cost and power are possible through the use of subarrays. Unlike prior works that resort to high-latency methods such as optimization and greedy search for subarray selection, we propose a deep-learning-based approach to overcome the complexity issue without significant performance loss. We formulate antenna selection and hybrid beamformer design as a classification/prediction problem for convolutional neural networks (CNNs). For antenna selection, the CNN accepts the channel matrix as input and outputs the subarray with optimal spectral efficiency. The resulting subarray channel matrix is then fed to a second CNN to obtain the analog and baseband beamformers. We train the CNNs with several noisy channel matrices having different channel statistics in order to achieve robust performance at the network output. Numerical experiments show that our CNN framework provides an order of magnitude better spectral efficiency and is 10 times faster than conventional techniques. Further investigations with quantized CNNs show that the proposed network, stored with no more than 5 bits per weight, is also suited for digital mobile devices. | electrical engineering and systems science |
We prescribe a method to study the effects of the self-gravity of an accretion disk around a black hole associated with long Gamma Ray Bursts (GRBs) in an evolving background Kerr metric. This is an extension of our previous work, where we presented possible constraints on the final masses and spins of these astrophysical black holes. Incorporating the self-force of the accreting cloud around the black hole is very important due to the transient nature of the event, in which a huge amount of mass is accreted and changes the fundamental black hole parameters, i.e. its mass and spin, during the process. Understanding the GRB engine is important because GRBs are possible sources of high-energy particles and gravitational waves, as most of the energy released during the dynamical evolution is in the form of gravitational radiation. Here, we describe the analytical framework we developed for use in our numerical model. The numerical studies are planned for future work. | astrophysics |
The robust Chinese remainder theorem (CRT) has been recently proposed for robustly reconstructing a large nonnegative integer from erroneous remainders. It has found many applications in signal processing, including phase unwrapping and frequency estimation under sub-Nyquist sampling. Motivated by the applications in multidimensional (MD) signal processing, in this paper we propose the MD-CRT and robust MD-CRT for integer vectors. Specifically, by rephrasing the abstract CRT for rings in number-theoretic terms, we first derive the MD-CRT for integer vectors with respect to a general set of integer matrix moduli, which provides an algorithm to uniquely reconstruct an integer vector from its remainders, if it is in the fundamental parallelepiped of the lattice generated by a least common right multiple of all the moduli. For some special forms of moduli, we present explicit reconstruction formulae. Moreover, we derive the robust MD-CRT for integer vectors when the remaining integer matrices of all the moduli left divided by their greatest common left divisor (gcld) are pairwise commutative and coprime. Two different reconstruction algorithms are proposed, and accordingly, two different conditions on the remainder error bound for the reconstruction robustness are obtained, which are related to a quarter of the minimum distance of the lattice generated by the gcld of all the moduli or the Smith normal form of the gcld. | computer science |
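A one-dimensional warm-up for the MD-CRT above is the classical CRT for scalar integers, which reconstructs an integer from its remainders modulo pairwise coprime moduli (here via Garner's iterative algorithm); the integer-matrix version in the paper generalizes this to lattices. The moduli and the reconstructed value are illustrative.

```python
# Sketch: classical CRT reconstruction of an integer from its remainders
# modulo pairwise coprime moduli (Garner's algorithm).

def crt(remainders, moduli):
    """Return the unique x with x = r_i (mod m_i) and 0 <= x < prod(m_i),
    assuming the m_i are pairwise coprime."""
    x, m = 0, 1
    for r, mi in zip(remainders, moduli):
        # Solve x + m*t = r (mod mi) using the inverse of m mod mi.
        t = ((r - x) * pow(m, -1, mi)) % mi
        x += m * t
        m *= mi
    return x

secret = 1000                      # must lie below prod(moduli) = 1155
moduli = [3, 5, 7, 11]
rems = [secret % mi for mi in moduli]
recovered = crt(rems, moduli)
```

The modular inverse uses `pow(m, -1, mi)`, which requires Python 3.8+.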
In this work we investigate the hadronic effects on the $X_{J=0,1} (2900)$ states in heavy-ion collisions. We make use of Effective Lagrangians to estimate the cross sections and their thermal averages of the processes $X_J \pi \to \bar{D}^{*} K , K^{*} \bar{D} $, as well as those of the corresponding inverse processes, considering also the possibility of different isospin assignments ($I=0,1$). We complete the analysis by solving the rate equation to follow the time evolution of the $X_J (2900)$ multiplicities and determine how they are affected by the considered reactions during the expansion of the hadronic matter. We also perform a comparison of the $X_J (2900)$ abundances considering them as hadronic molecular states ($J=0$ as a $S$-wave and $J=1$ as a $P$-wave) and tetraquark states at kinetic freeze-out. | high energy physics phenomenology |
We consider a certain two-parameter family of automorphisms of the affine plane over a complete, locally compact non-Archimedean field. Each of these automorphisms admits a chaotic attractor on which it is topologically conjugate to a full two-sided shift map, and the attractor supports a unit Borel measure which describes the distribution of the forward orbit of Haar-almost all points in the basin of attraction. We also compute the Hausdorff dimension of the attractor, which is non-integral. | mathematics |
We provide an efficient method of blowing up to compute leading-order contributions to the recently introduced stringy canonical forms. The method is related to Hironaka's well-known polyhedra game, and the given algorithm is also useful for similar problems, e.g. sector decomposition. | high energy physics theory |
Neutrinos lose coherence as they propagate, which leads to the fading away of oscillations. In this work, we model neutrino decoherence induced in open quantum systems from their interaction with the environment. We first present two different models in the quantum mechanical framework, in which the environment is modeled as forced harmonic oscillators with white noise interactions, or two-level systems with stochastic phase kicks. We then look at the decoherence process in the quantum field theoretic framework induced by elastic scatterings with environmental particles. The exponential decay is obtained as a common feature for all models, which shows the universality of the decoherence processes. We discuss connections to the GKSL master equation approach and give a clear physical meaning of the Lindblad operators. We demonstrate that the universality of exponential decay of coherence is based on the Born-Markov approximation. The models in this work are suitable to be extended to describe real physical processes that could be non-Markovian. | high energy physics phenomenology |
Thanks to the spontaneous interaction between noble metals and biological scaffolds, nanomaterials with unique features can be achieved following relatively straightforward and cost-efficient synthetic procedures. Here, plasmonic silver nanorings are synthesized on a ring-like Peroxiredoxin (PRX) protein and used to assemble large arrays of functional nanostructures. The PRX protein drives the seeding growth of metal silver under wet reducing conditions, yielding nanorings with outer and inner diameters down to 28 and 3 nm, respectively. The obtained hybrid nanostructures can be deposited onto a solid-state membrane in order to prepare plasmonic nanopores. In particular, the interaction between graphene and PRX allows for the simple preparation of ordered arrays of plasmonic nanorings on a 2D-material membrane. This fabrication process can be finalized by drilling a nanometer scale pore in the middle of the ring. Fluorescence spectroscopic measurements have been used to demonstrate the plasmonic enhancement induced by the metallic ring. Finally, we support the experimental findings with some numerical simulations showing that the nanorings are endowed with a remarkable plasmonic field within the cavity. Our results represent a proof of concept of a fabrication process that could be suitable for nanopore-based technologies such as next-generation sequencing and single-molecule detection. | physics |
A LiDAR misalignment as small as a few degrees can cause significant errors in obstacle detection and mapping, leading to safety and quality issues. In this paper, an accurate inspection system is proposed for estimating the LiDAR alignment error after sensor attachment on a mobility system such as a vehicle or robot. The proposed method uses only a single target board at a fixed position to estimate the three orientations (roll, tilt, and yaw) and the horizontal position of the LiDAR attachment with sub-degree and millimeter-level accuracy. After the proposed preprocessing steps, the feature beam points closest to each target corner are extracted and used to calculate the sensor attachment pose with respect to the target board frame using a nonlinear optimization method at low computational cost. The performance of the proposed method is evaluated using a test bench that can control the reference yaw and horizontal translation of the LiDAR within ranges of 3 degrees and 30 millimeters, respectively. The experimental results for a low-resolution 16-channel LiDAR (Velodyne VLP-16) confirm that the misalignment can be estimated with an accuracy within 0.2 degrees and 4 mm. The high accuracy and simplicity of the proposed system make it practical for large-scale industrial applications, such as automobile or robot manufacturing processes that inspect the sensor attachment for safety quality control. | electrical engineering and systems science |
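A minimal planar analogue of this pose estimation can be sketched in closed form: given target-board corner coordinates in the board frame and the same corners measured in the sensor frame, recover yaw and horizontal translation via the 2-D Kabsch/Procrustes solution. The paper solves the full 3-D problem (roll, tilt, yaw) by nonlinear optimization; the corner coordinates below are illustrative.

```python
# Sketch: closed-form planar (yaw + translation) pose from matched
# corner points, a 2-D analogue of the target-board pose estimation.
import math

def planar_pose(board_pts, lidar_pts):
    n = len(board_pts)
    cbx = sum(x for x, _ in board_pts) / n
    cby = sum(y for _, y in board_pts) / n
    clx = sum(x for x, _ in lidar_pts) / n
    cly = sum(y for _, y in lidar_pts) / n
    s_sin = s_cos = 0.0
    for (bx, by), (lx, ly) in zip(board_pts, lidar_pts):
        bx -= cbx; by -= cby; lx -= clx; ly -= cly
        s_cos += bx * lx + by * ly      # dot of centered correspondences
        s_sin += bx * ly - by * lx      # cross of centered correspondences
    yaw = math.atan2(s_sin, s_cos)      # rotation board -> lidar frame
    tx = clx - (cbx * math.cos(yaw) - cby * math.sin(yaw))
    ty = cly - (cbx * math.sin(yaw) + cby * math.cos(yaw))
    return yaw, tx, ty

# Synthetic check: rotate and translate a unit-square target, then recover.
true_yaw, ttx, tty = 0.05, 0.10, -0.02
board = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
c, s = math.cos(true_yaw), math.sin(true_yaw)
lidar = [(x * c - y * s + ttx, x * s + y * c + tty) for x, y in board]
yaw, tx, ty = planar_pose(board, lidar)
```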
We propose theoretically how unconventional superconducting pairing in a repulsively interacting Hubbard ladder can be enhanced via the application of a Floquet driving. Initially the Hubbard ladder is prepared in its charge-density-wave dominated ground state. A periodic Floquet drive is applied which modulates oppositely the energy offset of the two chains and effectively reduces the tunneling along the rungs. This modulation of the energy offsets might be caused by the excitation of a suitable phononic mode in solids or a superlattice modulation in cold atomic gases. We use state-of-the-art density matrix renormalization group methods to monitor the resulting real-time dynamics of the system. We find an enormous enhancement of the unconventional superconducting pair correlations by approximately one order of magnitude. | condensed matter |
Here we consider a one-dimensional $q$-state Potts model with an external magnetic field and an anisotropic interaction that acts only between neighboring sites that are both in spin state 1. The model exhibits unusual behavior in the low-temperature region, where we observe an anomalously sharp change in the entropy at a given temperature. The entropy as a function of temperature shows a steep rise quite similar to a first-order discontinuity, yet there is no jump in the entropy. Similarly, second-derivative quantities such as the specific heat and magnetic susceptibility exhibit strong, sharp peaks rather similar to a second-order phase-transition divergence, but once again there is no singularity at this point. The correlation length confirms this anomalous behavior at the same temperature, showing a strong and sharp peak that one may easily confuse with a divergence. We call the temperature at which this anomalous feature occurs the pseudo-critical temperature. We have analyzed physical quantities such as the correlation length, entropy, magnetization, specific heat, magnetic susceptibility, and distant-pair correlation functions. Furthermore, we analyze the pseudo-critical exponents, which satisfy a universality class previously identified in the literature for other one-dimensional models: for the correlation length $\nu=1$, for the specific heat $\alpha=3$, and for the magnetic susceptibility $\mu=3$. | condensed matter |
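One-dimensional models of this type are solvable by the transfer-matrix method: for a periodic chain of $N$ sites, the partition function is $Z = \mathrm{Tr}\, T^N$. Below is a sketch for a simplified variant of the model above (coupling $J$ active only when both neighbors are in state 1, field $h$ on state 1; states relabeled so state 1 is index 0), checked against a brute-force sum over all $q^N$ configurations. The parameter values are illustrative.

```python
# Sketch: transfer-matrix partition function for a 1-D q-state chain with
# an anisotropic coupling on state 0 and a field on state 0, verified
# against brute-force enumeration.
import itertools, math

def transfer_matrix(q, J, h, beta):
    # T[a][b] picks up the bond energy of (a, b) and the field on site b.
    return [[math.exp(beta * (J * (a == 0 and b == 0) + h * (b == 0)))
             for b in range(q)] for a in range(q)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def partition_tm(q, J, h, beta, N):
    T = transfer_matrix(q, J, h, beta)
    P = T
    for _ in range(N - 1):
        P = mat_mul(P, T)
    return sum(P[i][i] for i in range(q))   # Z = Tr T^N

def partition_brute(q, J, h, beta, N):
    Z = 0.0
    for state in itertools.product(range(q), repeat=N):
        E = sum(J * (state[i] == 0 and state[(i + 1) % N] == 0)
                + h * (state[i] == 0) for i in range(N))
        Z += math.exp(beta * E)
    return Z

Z_tm = partition_tm(3, 1.0, 0.3, 0.8, 4)
Z_bf = partition_brute(3, 1.0, 0.3, 0.8, 4)
```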
Reinforcement learning has been shown to be highly successful at many challenging tasks. However, success heavily relies on well-shaped rewards. Intrinsically motivated RL attempts to remove this constraint by defining an intrinsic reward function. Motivated by the self-consciousness concept in psychology, we make a natural assumption that the agent knows what constitutes itself, and propose a new intrinsic objective that encourages the agent to have maximum control on the environment. We mathematically formalize this reward as the mutual information between the agent state and the surrounding state under the current agent policy. With this new intrinsic motivation, we are able to outperform previous methods, including being able to complete the pick-and-place task for the first time without using any task reward. A video showing experimental results is available at https://youtu.be/AUCwc9RThpk. | computer science |
We have calculated one loop quantum electrodynamic corrections to the process $ \gamma\gamma\rightarrow\mu^+\mu^-\gamma $, where all photons are on mass shell and the muon mass is taken into account. The result is obtained in the analytical form and is implemented as functions in the C programming language, which can be used to calculate the cross-section, the differential cross section, and to construct generators. We also present numerical results for corrections to the cross section and to the differential cross section. | high energy physics phenomenology |
Photonic measurement-based quantum computation (MBQC) is a promising route towards fault-tolerant universal quantum computing. A central challenge in this effort is the huge overhead in the resources required for the construction of large photonic clusters using probabilistic linear-optics gates. Although strong single-photon nonlinearity ideally enables deterministic construction of such clusters, it is challenging to realise in a scalable way. Here we explore the prospects of using moderate nonlinearity (with conditional phase shifts smaller than $\pi$) to boost photonic quantum computing and significantly reduce its resources overhead. The key element in our scheme is a nonlinear router that preferentially directs photonic wavepackets to different output ports depending on their intensity. As a relevant example, we analyze the nonlinearity provided by Rydberg blockade in atomic ensembles, in which the trade-off between the nonlinearity and the accompanying loss is well understood. We present protocols for efficient Bell measurement and GHZ-state preparation -- both key elements in the construction of cluster states, as well as for the CNOT gate and quantum factorization. Given the large number of entangling operations involved in fault-tolerant MBQC, the increase in success probability provided by our protocols already at moderate nonlinearity can result in a dramatic reduction in the required resources. | quantum physics |
About half of nearby galaxies have a central surface brightness >1 magnitude below that of the sky. The overall properties of these low-surface-brightness galaxies (LSBGs) remain understudied, and in particular we know very little about their massive black hole population. This gap must be closed to determine the frequency of massive black holes at z=0 as well as to understand their role in regulating galaxy evolution. Here we investigate the incidence and intensity of nuclear, accretion-powered X-ray emission in a sample of 32 nearby LSBGs with the Chandra X-ray Observatory. A nuclear X-ray source is detected in 4 galaxies (12.5%). Based on an X-ray binary contamination assessment technique developed for normal galaxies, we conclude that the detected X-ray nuclei indicate low-level accretion from massive black holes. The active fraction is consistent with that expected from the stellar mass distribution of the LSBGs, but not their total baryonic mass, when using a scaling relation from an unbiased X-ray survey of normal galaxies. This suggests that their black holes co-evolved with their stellar population. In addition, the apparent agreement nearly doubles the number of galaxies available within ~100 Mpc for which a measurement of nuclear activity can efficiently constrain the frequency of black holes as a function of stellar mass. We conclude by discussing the feasibility of measuring this occupation fraction to a few percent precision below 1e10 solar masses with high-resolution, wide-field X-ray missions currently under consideration. | astrophysics |
Biosensing based on whispering-gallery mode (WGM) resonators has been continuously studied with great attention due to its excellent sensitivity guaranteeing the label-free detection. However, its practical impact is insignificant to date despite notable achievements in academic research. Here, we demonstrate a novel practical platform of on-chip WGM sensors integrated with microfluidic channels. By placing silicon nanoclusters as a stable active compound in micro-resonators, the sensor chip can be operated with a remote pump and readout, which simplifies the chip integration and connection to the external setup. In addition, silicon nanoclusters having large absorption cross-section over broad wavelength range allow active sensing for the first time with an LED pump in a top-illumination scheme which significantly reduces the complexity and cost of the measurement setup. The nano-slot structure of 25 nm gap width is embedded in the resonator where the target bio-molecules are selectively detected with the sensitivity enhanced by strongly confined mode-field. The sensitivity confirmed by real-time measurements for the streptavidin-biotin complex is 0.012 nm/nM, improved over 20 times larger than the previously reported WGM sensors with remote readout. | physics |
We introduce two algorithms for nonconvex regularized finite sum minimization, where typical Lipschitz differentiability assumptions are relaxed to the notion of relative smoothness. The first one is a Bregman extension of Finito/MISO, studied for fully nonconvex problems when the sampling is random, or under convexity of the nonsmooth term when it is essentially cyclic. The second algorithm is a low-memory variant, in the spirit of SVRG and SARAH, that also allows for fully nonconvex formulations. Our analysis is made remarkably simple by employing a Bregman Moreau envelope as Lyapunov function. In the randomized case, linear convergence is established when the cost function is strongly convex, yet with no convexity requirements on the individual functions in the sum. For the essentially cyclic and low-memory variants, global and linear convergence results are established when the cost function satisfies the Kurdyka-\L ojasiewicz property. | mathematics |
This paper is on the normal approximation of singular subspaces when the noise matrix has i.i.d. entries. Our contributions are three-fold. First, we derive an explicit representation formula of the empirical spectral projectors. The formula is neat and holds for deterministic matrix perturbations. Second, we calculate the expected projection distance between the empirical singular subspaces and true singular subspaces. Our method allows obtaining arbitrary $k$-th order approximations of the expected projection distance. Third, we prove the non-asymptotic normal approximation of the projection distance with different levels of bias corrections. By the $\lceil \log(d_1+d_2)\rceil$-th order bias corrections, asymptotic normality holds under the optimal signal-to-noise ratio (SNR) condition, where $d_1$ and $d_2$ denote the matrix sizes. In addition, it shows that higher order approximations are unnecessary when $|d_1-d_2|=O((d_1+d_2)^{1/2})$. Finally, we provide comprehensive simulation results to support our theoretical findings. Unlike the existing results, our approach is non-asymptotic and the convergence rates are established. Our method allows the rank $r$ to diverge as fast as $o((d_1+d_2)^{1/3})$. Moreover, our method requires no eigen-gap condition (except the SNR) and no constraints between $d_1$ and $d_2$. | mathematics |
We present correlations involving the central intensity ratio (CIR) of 52 early type galaxies, including 24 ellipticals and 28 lenticulars, selected from low density environments in the nearby (< 30 Mpc) universe. The CIR is found to be negatively and significantly correlated with the mass of the central supermassive black hole, central velocity dispersion, absolute B band magnitude, stellar bulge mass and central Mg2 index of the host galaxy. The study proposes the use of the CIR as a simple, fast and efficient photometric tool for exploring the co-evolution scenario existing in galaxies. | astrophysics |
Time-series forecasting is important for many applications. Forecasting models are usually trained using time-series data in a specific target task. However, sufficient data in the target task might be unavailable, which leads to performance degradation. In this paper, we propose a few-shot learning method that forecasts a future value of a time-series in a target task given a few time-series in the target task. Our model is trained using time-series data in multiple training tasks that are different from target tasks. Our model uses a few time-series to build a forecasting function based on a recurrent neural network with an attention mechanism. With the attention mechanism, we can retrieve useful patterns in a small number of time-series for the current situation. Our model is trained by minimizing an expected test error of forecasting next timestep values. We demonstrate the effectiveness of the proposed method using 90 time-series datasets. | statistics |
Limited-angle CT reconstruction is an under-determined linear inverse problem that requires appropriate regularization techniques to be solved. In this work we study how pre-trained generative adversarial networks (GANs) can be used to clean noisy, highly artifact-laden reconstructions from conventional techniques, by effectively projecting onto the inferred image manifold. In particular, we use a robust version of the popularly used GAN prior for inverse problems, based on a recent technique called corruption mimicking, that significantly improves the reconstruction quality. The proposed approach operates in the image space directly, as a result of which it does not need to be trained or require access to the measurement model, is scanner agnostic, and can work over a wide range of sensing scenarios. | electrical engineering and systems science |
The standard way of training video models entails sampling at each iteration a single clip from a video and optimizing the clip prediction with respect to the video-level label. We argue that a single clip may not have enough temporal coverage to exhibit the label to recognize, since video datasets are often weakly labeled with categorical information but without dense temporal annotations. Furthermore, optimizing the model over brief clips impedes its ability to learn long-term temporal dependencies. To overcome these limitations, we introduce a collaborative memory mechanism that encodes information across multiple sampled clips of a video at each training iteration. This enables the learning of long-range dependencies beyond a single clip. We explore different design choices for the collaborative memory to ease the optimization difficulties. Our proposed framework is end-to-end trainable and significantly improves the accuracy of video classification at a negligible computational overhead. Through extensive experiments, we demonstrate that our framework generalizes to different video architectures and tasks, outperforming the state of the art on both action recognition (e.g., Kinetics-400 & 700, Charades, Something-Something-V1) and action detection (e.g., AVA v2.1 & v2.2). | computer science |
In this work we consider a generalized bilevel optimization framework for solving inverse problems. We introduce the fractional Laplacian as a regularizer to improve the reconstruction quality, and compare it with the total variation regularization. We emphasize that the key advantage of using the fractional Laplacian as a regularizer is that it leads to a linear operator, as opposed to the total variation regularization which results in a nonlinear degenerate operator. Inspired by residual neural networks, to learn the optimal strength of regularization and the exponent of the fractional Laplacian, we develop a dedicated bilevel optimization neural network with a variable depth for a general regularized inverse problem. We also draw some parallels between an activation function in a neural network and regularization. We illustrate how to incorporate various regularizer choices into our proposed network. As an example, we consider tomographic reconstruction as a model problem and show an improvement in reconstruction quality, especially for limited data, via fractional Laplacian regularization. We successfully learn the regularization strength and the fractional exponent via our proposed bilevel optimization neural network. We observe that the fractional Laplacian regularization outperforms total variation regularization. This is especially encouraging, and important, in the case of limited and noisy data. | electrical engineering and systems science |
Using cross-correlation current noise spectroscopy, we have investigated carrier dynamics in methylammonium lead triiodide solar cells. This method provides a space selectivity for devices with planar multi-layered structure, effectively amplifying current noise contributions coming from the most resistive element of the stack. In the studied solar cells, we observe near full-scale shot noise, indicating the dominance of noise generation by a single source, likely the interface between the perovskite and the spiro-OMeTAD hole-transport layer. We argue that the strong 1/f noise term has contributions both from the perovskite layer and interfaces. It displays a non-ideal dependence on photocurrent, $S \propto I^{1.4}$ (instead of the usual $S \propto I^2$), which is likely due to current-induced halide migration. Finally, we observe generation-recombination noise. The relaxation time of this process grows linearly with photocurrent, which allows us to attribute this contribution to bimolecular recombination in the perovskite bulk absorption layer. Extrapolating our results, we estimate that at the standard 1 sun illumination the electron-hole recombination time is 5 microseconds. | physics |
The class of autoregressive (AR) processes is extensively used to model temporal dependence in observed time series. Such models are easily available and routinely fitted using freely available statistical software like R. A potential caveat in analyzing short time series is that commonly applied estimators for the coefficients of AR processes are severely biased. This paper suggests a model-based approach for bias correction of well-known estimators for the coefficients of first and second-order stationary AR processes, taking the sampling distribution of the original estimator into account. This is achieved by modeling the relationship between the true and estimated AR coefficients using weighted orthogonal polynomial regression, fitted to a huge number of simulations. The finite-sample distributions of the new estimators are approximated using transformations of skew-normal densities and their properties are demonstrated by simulations and in the analysis of a real ecological data set. The new estimators are easily available in our accompanying R-package ARbiascorrect for time series of length n = 10, 11, ... , 50, where original estimates are found using exact or conditional maximum likelihood, Burg's method or the Yule-Walker equations. | statistics |
Charge-order stripes of different types occur when copper oxides are doped with either heterovalent metal, like $La_{2-x}Sr_xCuO_4$, or oxygen, like $YBa_2Cu_3O_{6+y}$. The difference shows up in the doping dependence of their incommensurability: $q_c(x) \propto \sqrt{x-\check{p}}$ but $q_c(y) \approx 0.3$. The square-root dependence in the former compound family results from Coulomb repulsion between doped holes (or electrons), residing pairwise in lattice-site $O$ (or $Cu$) atoms of the $CuO_2$ planes. The almost constant $q_c(y)$ value in the second family results from the aggregation of ozone-like molecules, formed from $O^{2-}$ ions of the host with embedded oxygen atoms, $O_i$, at interstitial sites in the $CuO_2$ planes. The magnetic moments, $\mathbf{m}(O)$, of the lattice-defect $O$ atoms in the first family arrange antiferromagnetically, which gives rise to accompanying magnetization stripes of incommensurability $q_m(x) = q_c(x)/2$. The ozone complexes have a vanishing magnetic moment, $\mathbf{m}=0$, which explains the absence of accompanying magnetization stripes in the second family. Embedding excess oxygen as $O_i$ atoms in $CuO_2$ planes is likewise assumed for $HgBa_2CuO_{4+\delta}$ and oxygen-enriched bismuth cuprates. A combination of characteristics from both families is present in oxygen-enriched $La_2CuO_{4+y}$. The validity of determining the hole density in oxygen-enriched cuprates with the universal-dome method is independently confirmed. Besides causing different types of stripes, the two types of lattice-defect oxygen may also cause different types of superconductivity. This could explain the much higher $T_{c,max}$ in oxygen-enriched than $Sr$-doped cuprates, as well as the cusped cooling-curves of X-ray intensity diffracted by stripes in the former family. | physics |
We present a general approach to model an integrated source of counterpropagating continuous-variable entangled states based on a coupled-resonator optical waveguide that is pumped by a classical pulsed source incident from above the waveguide. This paper is an extension of our previous work~(Ref. \cite{PhysRevA.100.033839}), where we analytically investigated the generation and propagation of continuous-variable entangled states in this coupled-cavity system in the presence of intrinsic loss. In this work, however, we employ a numerical method to implement the Schmidt decomposition rather than pursuing analytical methods. We show that not only does this give us a much higher degree of freedom in choosing the pumping parameters, which was not possible to investigate analytically, but it also enables us to go beyond some of the approximations we had made to derive analytical expressions before. | quantum physics |
We study holographic entanglement entropy in four-dimensional quantum gravity with negative cosmological constant. By using the replica trick and evaluating path integrals in the minisuperspace approximation, in conjunction with the Wheeler-DeWitt equation, we compute quantum corrections to the holographic entanglement entropy for a circular entangling surface on the boundary three sphere. Similarly to our previous work on the sphere partition function, the path integrals are dominated by a replica version of asymptotically AdS conic geometries at saddle points. As expected from a general CFT argument, the final result is minus the free energy on the three sphere which agrees with the logarithm of the Airy partition function for the ABJM theory that sums up all perturbative $1/N$ corrections despite the absence of supersymmetries. The all-order holographic entanglement entropy cleanly splits into two parts, (1) the $1/N$-corrected Ryu-Takayanagi minimal surface area and (2) the bulk entanglement entropy across the minimal surface, as suggested in the earlier literature. It is explicitly shown that the former comes from the localized conical singularity of the replica geometries and the latter from the replication of the bulk volume. | high energy physics theory |
The electric field induced quantum phase transition from topological to conventional insulator has been proposed as the basis of a topological field effect transistor [1-4]. In this scheme an electric field can switch 'on' the ballistic flow of charge and spin along dissipationless edges of the two-dimensional (2D) quantum spin Hall insulator [5-9], and when 'off' is a conventional insulator with no conductive channels. Such a topological transistor is promising for low-energy logic circuits [4], which would necessitate electric field-switched materials with conventional and topological bandgaps much greater than room temperature, significantly greater than proposed to date [6-8]. Topological Dirac semimetals (TDS) are promising systems in which to look for topological field-effect switching, as they lie at the boundary between conventional and topological phases [3,10-16]. Here we use scanning probe microscopy/spectroscopy (STM/STS) and angle-resolved photoelectron spectroscopy (ARPES) to show that mono- and bilayer films of the TDS Na3Bi [3,17] are 2D topological insulators with bulk bandgaps >400 meV in the absence of electric field. Upon application of electric field by doping with potassium or by close approach of the STM tip, the bandgap can be completely closed then re-opened with a conventional gap greater than 100 meV. The large bandgaps in both the conventional and quantum spin Hall phases, much greater than the thermal energy kT = 25 meV at room temperature, suggest that ultrathin Na3Bi is suitable for room temperature topological transistor operation. | condensed matter |
The recent detection of GW190521 stimulated ideas on how to populate the predicted black hole pair-instability mass gap. One proposed scenario is the dynamical merger of two stars below the pair instability regime forming a star with a small core and an over-sized envelope. We explore this scenario with detailed stellar evolution calculations, starting with ad-hoc initial conditions enforcing no core growth during the merger. We outline the main challenges this scenario has to overcome, in particular the requirement to retain enough of its mass at merger time, in the subsequent evolution, and at core-collapse. We found that these massive merger products are likely helium-rich, and spend most of their remaining lifetime within regions of the Hertzsprung-Russell diagram where envelope instabilities akin to luminous blue variable (LBV) eruptions are expected. An energetic estimate of the amount of mass loss neglecting the back-reaction of the star suggests that the total amount of mass that can be removed at low metallicity is $\lesssim 1\,M_\odot$. This is small enough that at core-collapse our models are retaining sufficient mass to form black holes in the pair-instability gap similar to the recent ones detected by LIGO/Virgo. However, mass loss at the time of merger and the neutrino-driven mass loss at core collapse still need to be quantified for these models in order to confirm the viability of this scenario. | astrophysics |
We give a geometric interpretation of color-kinematics duality between tree-level scattering amplitudes of gauge and gravity theories. Using their representation as intersection numbers we show how to obtain Bern-Carrasco-Johansson numerators in a constructive way as residues around boundaries of the moduli space. In this language the kinematic Jacobi identity between each triple of numerators is a residue theorem in disguise. | high energy physics theory |
Liu et al. [Phys. Rev. B 98, 241109 (2018)] used Monte Carlo sampling of the physical degrees of freedom of a Projected Entangled Pair State (PEPS) type wave function for the $S=1/2$ frustrated $J_1$-$J_2$ Heisenberg model on the square lattice and found a non-magnetic state argued to be a gapless spin liquid when the coupling ratio $g=J_2/J_1$ is in the range $g \in [0.42,0.6]$. Here we show that their definition of the order parameter for another candidate ground state within this coupling window---a spontaneously dimerized state---is problematic. The order parameter as defined will not detect dimer order when lattice symmetries are broken due to open boundaries or asymmetries originating from the calculation itself. Thus, a dimerized phase for some range of $g$ cannot be excluded (and is likely based on several other recent works). | condensed matter |
We study the optical reflectivity of three-dimensional (3D) photonic band gap crystals with increasing thickness. The crystals consist of GaAs plates with nanorod arrays that are assembled by an advanced stacking method into high-quality 3D woodpile structures. We observe intense and broad reflectivity peaks with stop bands that correspond to a broad gap in the photonic band structures. The maximum reflectivity quickly reaches high values even for a few crystal layers. Remarkably, the bandwidth of the stop bands hardly decreases with increasing crystal thickness, in good agreement with FDTD simulations. This behavior differs remarkably from the large changes observed earlier in weakly interacting 3D photonic crystals. The nearly constant bandwidth and high reflectivity are rationalized by multiple Bragg interference that occurs in strongly interacting photonic band gap crystals, whereby the incident light scatters from multiple reciprocal lattice vectors simultaneously, in particular from oblique ones that are parallel to a longer crystal dimension and thus experience hardly any finite size effects. Our new insights have favorable consequences for the application of 3D photonic band gap crystals, notably since even thin structures reveal the full band gap functionality, including devices that shield quantum bits from vacuum fluctuations. | physics |
Estimating the directions of arrival (DOAs) of multiple sources from a single snapshot obtained by a coherent antenna array is a well-known problem, which can be addressed by sparse signal reconstruction methods, where the DOAs are estimated from the peaks of the recovered high-dimensional signal. In this paper, we consider a more challenging DOA estimation task where the array is composed of non-coherent sub-arrays (i.e., sub-arrays that observe different unknown phase shifts due to using low-cost unsynchronized local oscillators). We formulate this problem as the reconstruction of a joint sparse and low-rank matrix and solve its convex relaxation. While the DOAs can be estimated from the solution of the convex problem, we further show how an improvement is obtained if instead one estimates from this solution the phase shifts, creates "phase-corrected" observations and applies another final (plain, coherent) sparsity-based DOA estimation. Numerical experiments show that the proposed approach outperforms strategies that are based on non-coherent processing of the sub-arrays as well as other sparsity-based methods. | electrical engineering and systems science |
This paper extends the formalism for quantizing field theories via a microcanonical quantum field theory and Hamilton's principle to classical evolution equations. These are based on the well-known correspondence under a Wick rotation between quantum field theories and 4-D statistical mechanical theories. By placing quantum field theories on a 4+1-D spacetime, which under Wick rotation becomes 5-D, expectations of observables are calculated for a microcanonical field theory by averaging Hamiltonian flow over a fifth spacelike dimension, a technique common in lattice gauge simulations but not in perturbation theory. In a novel demonstration, averaging pairs of external lines in the classical Feynman diagrams over the fifth dimension generates diagrams with loops and vacuum fluctuations identical to Standard Model diagrams. Because it is microcanonical, this approach, while equivalent for standard quantum field theories in the Standard Model, is able to quantize theories that have no canonical quantization. It is also unique in representing expectations as averages over solutions to an ordinary, classical PDE rather than a path integral or operator-based approaches. Hence, this approach draws a clear connection between quantum field theory and classical field theory in higher dimensions, which has implications for how quantum effects are interpreted. In particular, it raises questions about how violations of the ergodic hypothesis could influence quantum measurements even in standard, non-statistical quantum field theory. | high energy physics theory |
The Matanza-Riachuelo basin recovery is one of the most ambitious environmental projects under construction in Argentina. In this context, the sanitary bureau of the metropolitan area of Buenos Aires (AySA) is building a sewage collection network to transport the wastewater of the population in the southern area of the city, composed of almost five million people. The most complex tunnel in this big project is named \textit{Lot 3}, an outfall EPB-TBM tunnel starting at a shaft located at the \textit{Rio de la Plata} margin and running under the river 12 km to a discharge area. The tunnel runs through soft clay belonging to the \textit{post-pampeano} formation and dense sands of the \textit{Puelchese} formation. In operation, it will be pressurized by a pumping station which will produce a piezometric head that, in the first 2000 m, might eventually be higher than the confining pressure around the tunnel. This paper presents the numerical analysis of the structural forces acting on the tunnel rings using a risk-oriented approach that considers the stochastic nature of materials, stratigraphy and tunnel-ground interaction. The compression of the lining is evaluated and compared with field measurements in order to predict the structural forces and the risk of the rings going into tension beyond the structural capacity of the system. | physics |
Choosing the number of mixture components remains an elusive challenge. Model selection criteria can be either overly liberal or conservative and return poorly-separated components of limited practical use. We formalize non-local priors (NLPs) for mixtures and show how they lead to well-separated components with non-negligible weight, interpretable as distinct subpopulations. We also propose an estimator for posterior model probabilities under local and non-local priors, showing that Bayes factors are ratios of posterior to prior empty-cluster probabilities. The estimator is widely applicable and helps set thresholds to drop unoccupied components in overfitted mixtures. We suggest default prior parameters based on multi-modality for Normal/T mixtures and minimal informativeness for categorical outcomes. We characterise theoretically the NLP-induced sparsity, derive tractable expressions and algorithms. We fully develop Normal, Binomial and product Binomial mixtures but the theory, computation and principles hold more generally. We observed a serious lack of sensitivity of the Bayesian information criterion (BIC), insufficient parsimony of the AIC and a local prior, and a mixed behavior of the singular BIC. We also considered overfitted mixtures, their performance was competitive but depended on tuning parameters. Under our default prior elicitation NLPs offered a good compromise between sparsity and power to detect meaningfully-separated components. | statistics |
In this work, we explore the dynamics of entanglement of an isolated quantum system consisting of two time-dependent, coupled harmonic oscillators. Through the use of a numerical method that relies on the estimation of the system's Wigner representation by a specific Gaussian function, we investigate the time evolution of the entanglement entropy after an instant quench in the inherent parameters of the system. Moreover, by comparing the results obtained from the analytical expression for the time-dependent von Neumann entropy with the numerically computed entropy data, the effectiveness of the numerical method is tested for a variety of angular frequency combinations. We also analyze how the entropy of entanglement changes as a function of time. | quantum physics |
Eigenstate preparation is ubiquitous in quantum computing, and a standard approach for generating the lowest-energy states of a given system is by employing adiabatic state preparation (ASP). In the present work, we investigate a variational method for determining the optimal scheduling procedure within the context of ASP. In the absence of quantum error correction, running a quantum device for any meaningful amount of time causes a system to become susceptible to the loss of relevant information. Therefore, if accurate quantum states are to be successfully generated, it is crucial to find techniques that shorten the time of individual runs during iterations of annealing. We demonstrate our variational method toward this end by investigating the hydrogen and P4 molecules, as well as the Ising model problem on a two-dimensional triangular lattice. In both cases, the time required for one iteration to produce accurate results is reduced by several orders of magnitude in comparison to what is achievable via standard ASP. As a result, the required quantum coherence time to perform such a calculation on a quantum device becomes much less stringent with the implementation of this algorithm. In addition, our variational method is found to exhibit resilience against control errors, which are commonly encountered within the realm of quantum computing. | quantum physics |
We study primordial cosmology with two scalar fields that participate in inflation at the same time, by coupling quantum gravity (i.e., the theory $R+R^{2}+C^{2}$ with the fakeon prescription/projection for $C^{2}$) to a scalar field with a quadratic potential. We show that there exists a perturbative regime that can be described by an asymptotically de Sitter, cosmic RG flow in two couplings. Since the two scalar degrees of freedom mix in nontrivial ways, the adiabatic and isocurvature perturbations are not RG invariant on superhorizon scales. It is possible to identify the correct perturbations by using RG invariance as a guiding principle. We work out the resulting power spectra of the tensor and scalar perturbations to the NNLL and NLL orders, respectively. An unexpected consequence of RG invariance is that the theory remains predictive. Indeed, the scalar mixing affects only the subleading corrections, so the predictions of quantum gravity with single-field inflation are confirmed to the leading order. | high energy physics theory |
We consider a flavor model based on the A4 modular group to account for both lepton and quark parameters (masses and mixing). The inverse seesaw mechanism is considered to produce the light neutrino masses. Lepton masses and mixing are obtained in terms of Yukawa coupling ratios and values of the modulus $\tau$ near some fixed points for the inverted neutrino mass hierarchy. The quark masses and mixing arise at the same $\tau$ values used for the inverted neutrino mass hierarchy and are in agreement with the recent data. | high energy physics phenomenology |
The introduction of graphene-related materials (GRMs) into carbon fibre-reinforced polymers (CFRP) has been shown to enhance their mechanical and electrical properties. However, methodologies to produce the three-phase materials (multiscale composites) at an industrial scale and in an efficient manner are still lacking. In this paper, multiscale CFRP composites containing different GRMs have been manufactured following standard procedures currently used in the aerospace industry with the aim of evaluating their potential application. Graphite nanoplatelets (GNPs), in situ exfoliated graphene oxide (GO) and reduced graphene oxide (rGO) have been dispersed into an epoxy resin to subsequently impregnate aeronautical-grade carbon fibre tape. The resulting prepregs have been used for manufacturing laminates by hand lay-up and autoclave curing at 180 {\deg}C. A broad characterization campaign has been carried out to understand the behaviour of the different multiscale laminates manufactured. The degree of cure, glass transition temperature and degradation temperature have been evaluated by thermal evolution techniques. Similarly, their mechanical properties (tensile, flexural, in-plane shear, interlaminar shear and mode I interlaminar fracture toughness) have been analysed together with their electrical conductivity. The manufacturing process proved appropriate for producing three-phase laminates, and their quality was as good as that of conventional CFRPs. The addition of GO and rGO resulted in an enhancement of the in-plane shear properties and delamination resistance, while the addition of GNP improved the electrical conductivity. | physics
Robotic-assisted orthopaedic surgeries demand accurate, automated leg manipulation for improved spatial accuracy to reduce iatrogenic damage. In this study, we propose novel rigid body designs and an optical tracking volume setup for tracking of the femur, tibia and surgical instruments. Anatomical points inside the leg are measured using Computed Tomography with an accuracy of 0.3mm. Combined with kinematic modelling, we can express these points relative to any frame and across joints to sub-millimetre accuracy. It enables the setup of vectors on the mechanical axes of the femur and tibia for kinematic analysis. Cadaveric experiments are used to verify the tracking of internal anatomies and joint motion analysis. The proposed integrated solution is a first step in the automation of leg manipulation and can be used as a ground-truth for future robot-assisted orthopaedic research. | computer science |
A point set $\mathrm X_N$ on the unit sphere is a spherical $t$-design if and only if the nonnegative quantity $A_{N,t+1}$ vanishes. We show that if $\mathrm X_N$ is a stationary point set of $A_{N,t+1}$ and the minimal singular value of the basis matrix is positive, then $\mathrm X_N$ is a spherical $t$-design. Moreover, spherical $t$-designs can be constructed numerically using the Barzilai-Borwein method. We obtain numerical spherical $t$-designs with $t+1$ up to $127$ at $N=(t+2)^2$. | mathematics
Objective: Researchers often use model-based multiple imputation to handle missing-at-random data to minimize bias while making the best use of all available data. However, there are sometimes constraints within the data that make model-based imputation difficult and may result in implausible values. In these contexts, we describe how to use random hot deck imputation to allow for plausible multiple imputation in longitudinal studies. Study Design and Setting: We illustrate random hot deck multiple imputation using The Childhood Health, Activity, and Motor Performance School Study Denmark (CHAMPS-DK), a prospective cohort study that measured weekly sports participation for 1700 Danish schoolchildren. We matched records with missing data to several observed records, generated probabilities for matched records using observed data, and sampled from these records based on the probability of each occurring. Because imputed values are generated randomly, multiple complete datasets can be created and analyzed similarly to model-based multiple imputation. Conclusion: Multiple imputation using random hot deck imputation is an alternative method when model-based approaches are infeasible, specifically where there are constraints within and between covariates. | statistics
Neural-network-based vocoders have recently demonstrated a powerful ability to synthesize high-quality speech. These models usually generate samples by conditioning on spectral features, such as the Mel-spectrum. However, these features are extracted by a speech analysis module that involves processing based on human knowledge. In this work, we propose RawNet, a truly end-to-end neural vocoder, which uses a coder network to learn a higher-level representation of the signal, and an autoregressive voder network to generate speech sample by sample. The coder and voder together act like an auto-encoder network and can be jointly trained directly on the raw waveform without any human-designed features. Experiments on copy-synthesis tasks show that RawNet achieves synthesized speech quality comparable to LPCNet, with a smaller model architecture and faster speech generation at the inference step. | electrical engineering and systems science
Deep convolutional neural networks have proven to be well suited for image classification applications. However, if there is distortion in the image, the classification accuracy can be significantly degraded, even with state-of-the-art neural networks. The accuracy cannot be significantly improved by simply training with distorted images. Instead, this paper proposes a multiple neural network topology referred to as a selective deep convolutional neural network. By modifying existing state-of-the-art neural networks in the proposed manner, it is shown that a similar level of classification accuracy can be achieved, but at a significantly lower cost. The cost reduction is obtained primarily through the use of fewer weight parameters. Using fewer weights reduces the number of multiply-accumulate operations and also reduces the energy required for data accesses. Finally, it is shown that the effectiveness of the proposed selective deep convolutional neural network can be further improved by combining it with previously proposed network cost reduction methods. | computer science |
The topological spin-Hall effect causes different spins to propagate in opposite directions based on Hermitian physics. The non-Hermitian skin effect causes the localization of all modes of a system at its edges. Although these effects have separately attracted tremendous interest, their completely different origins have made them hard to observe in a single system. Here we propose a system based on exciton-polariton elliptical micropillars hosting both the effects. The polarization splitting and different orientation of the elliptical micropillars naturally give rise to the topological spin-Hall effect in a one dimensional lattice. By making the effective decay rates of the different spin ($\sigma_\pm$) modes unequal, the system makes a phase transition from the Hermitian regime to non-Hermitian regime showing the non-Hermitian skin effect. | condensed matter |
We introduce an open quantum battery protocol using dark states to achieve both superextensive capacity and power density, with non-interacting spins coupled to a reservoir. Further, our power density actually scales with the number of spins $N$ in the battery. We show that the enhanced capacity and power are correlated with entanglement. Whilst connected to the charger, the charged state of the battery is a steady state, stabilized through quantum interference in the open system. | quantum physics
We assess the accuracy of a recently introduced nonlinear interference model for general dual-polarization 4D formats. Unlike previous models for polarization-multiplexed 2D formats, an average gap from split-step Fourier simulations within 0.1 dB is demonstrated. | electrical engineering and systems science
We study the problem where a group of agents aim to collaboratively learn a common latent function through streaming data. We propose a Resource-aware Gaussian process regression algorithm that is cognizant of agents' limited capabilities in communication, computation and memory. We quantify the improvement that limited inter-agent communication brings to the transient and steady-state performance in predictive variance and predictive mean. A set of simulations is conducted to evaluate the developed algorithm. | computer science |
Type Iax supernovae (SNe~Iax) are the most common class of peculiar SNe. While they are thought to be thermonuclear white-dwarf (WD) SNe, SNe~Iax are observationally similar to, but distinct from SNe~Ia. Unlike SNe~Ia, where roughly 30\% occur in early-type galaxies, only one SN~Iax has been discovered in an early-type galaxy, suggesting a relatively short delay time and a distinct progenitor system. Furthermore, one SN~Iax progenitor system has been detected in pre-explosion images with its properties consistent with either of two models: a short-lived (<100 Myr) progenitor system consisting of a WD primary and a He-star companion, or a singular Wolf-Rayet progenitor star. Using deep \textit{Hubble Space Telescope} images of nine nearby SN~Iax host galaxies, we measure the properties of stars within 200 pc of the SN position. The ages of local stars, some of which formed with the SN progenitor system, can constrain the time between star formation and SN, known as the delay time. We compare the local stellar properties to synthetic photometry of single-stellar populations, fitting to a range of possible delay times for each SN. With this sample, we uniquely constrain the delay-time distribution for SNe~Iax, with a median and $1-\sigma$ confidence interval delay time of $63_{- 15}^{+ 58} \times 10^{6}$ years. The measured delay-time distribution provides an excellent constraint on the progenitor system for the class, indicating a preference for a WD progenitor system over a Wolf-Rayet progenitor star. | astrophysics |
We explore, employing the renormalization-group theory, the critical scaling behavior of the permutation symmetric three-vector model that obeys non-conserving dynamics and has a relevant anisotropic perturbation which drives the system into a non-equilibrium steady state. We explicitly find the independent critical exponents with corrections up to two loops. They include the static exponents $\nu$ and $\eta$, the off equilibrium exponent $\widetilde{\eta}$, the dynamic exponent $z$ and the strong anisotropy exponent $\Delta$. We also express the other anisotropy exponents in terms of these. | condensed matter |
Confining sound is of significant importance for the manipulation and routing of acoustic waves. For this purpose, we propose a Helmholtz resonator (HR) based subwavelength sound channel formed at the interface of two metamaterials. The confinement is quantified through (i) a substantial reduction of the pressure, and (ii) an increase in the specific acoustic impedance (defined by the ratio of the local pressure to the sound velocity) to a very large value outside the channel. The sound confinement is robust to frequency as well as spatial disorder at the interface, as long as the interface-related edge mode is situated within the band gap. A closed acoustic circuit was formed by introducing controlled disorder in the HR units at the corners, indicating the possibility of confining sound to a point. | physics
Tsirelson's problem asks whether the commuting operator model for two-party quantum correlations is equivalent to the tensor-product model. We give a negative answer to this question by showing that there are non-local games which have perfect commuting-operator strategies, but do not have perfect tensor-product strategies. The weak Tsirelson problem, which is known to be equivalent to Connes embedding problem, remains open. The examples we construct are instances of (binary) linear system games. For such games, previous results state that the existence of perfect strategies is controlled by the solution group of the linear system. Our main result is that every finitely-presented group embeds in some solution group. As an additional consequence, we show that the problem of determining whether a linear system game has a perfect commuting-operator strategy is undecidable. | quantum physics |
Spin transistors based on a semiconducting channel attached to ferromagnetic electrodes suffer from fast spin decay and extremely low spin injection/detection efficiencies. Here, we propose an alternative all-in-one spin device whose operation principle relies on electric manipulation of the spin lifetime in two-dimensional (2D) SnTe, in which the sizable spin Hall effect eliminates the need for using ferromagnets. In particular, we explore the persistent spin texture (PST) intrinsically present in the ferroelectric phase which protects the spin from decoherence and supports extraordinarily long spin lifetime. Our first-principles calculations followed by symmetry arguments revealed that such a spin wave mode can be externally detuned by perpendicular electric field, leading to spin randomization and decrease in spin lifetime. We further extend our analysis to ultrathin SnTe films and confirm the emergence of PST as well as a moderate enhancement of intrinsic spin Hall conductivity. The recent room-temperature observation of the ferroelectric phase in 2D-SnTe suggests that novel all-electric spintronics devices are within reach. | condensed matter |
In this work, we present an efficiently computable lower bound on the quantum Fisher information (QFI). This bound itself is of interest, as we show that it satisfies the canonical criteria of a QFI measure. Specifically, it is essentially a QFI measure for sub-normalized states, and hence it generalizes the standard QFI in this sense. Our bound employs the generalized fidelity applied to the truncated state, which is constructed via the $m$ largest eigenvalues and their corresponding eigenvectors of the probe quantum state $\rho_\theta$. Focusing on the unitary families of probe states, we analyze the properties of our proposed lower bound, which we believe will be useful in efficiently estimating the QFI. | quantum physics |
Computing the Sparse Fast Fourier Transform (sFFT) of a K-sparse signal of size N has long been a critical topic. sFFT algorithms decrease the runtime and sampling complexity by taking advantage of the inherent characteristic that a large number of signals are sparse in the frequency domain (e.g., sensor, video, audio, and medical-image data). The first stage of sFFT is frequency bucketization through one of several filters: the Dirichlet kernel filter, the flat filter, the aliasing filter, etc. Compared to other sFFT algorithms, those using the flat filter are more convenient and efficient because the filtered signal is concentrated both in the time domain and the frequency domain. Three sFFT algorithms (sFFT1.0, sFFT2.0, and sFFT3.0) were proposed by the Massachusetts Institute of Technology (MIT) in 2013, but the sFFT4.0 algorithm, which uses the multiscale approach, had not yet been implemented. This paper discusses this algorithm comprehensively in theory and implements it in practice. It is proved that the performance of the sFFT4.0 algorithm depends on two parameters: the runtime and sampling complexity are in direct ratio to the multiscale parameter and in inverse ratio to the extension parameter, while the robustness is in direct ratio to the extension parameter and in inverse ratio to the multiscale parameter. Compared with three similar algorithms and four other types of algorithms, the sFFT4.0 algorithm has excellent runtime and sampling complexity, ten to one hundred times better than the fftw algorithm, although its robustness is medium. | electrical engineering and systems science
Meteorological ensembles are a collection of scenarios for future weather delivered by a meteorological center. Such ensembles form the main source of valuable information for probabilistic forecasting, which aims at producing a predictive probability distribution of the quantity of interest instead of a single best-guess estimate. Unfortunately, ensembles cannot generally be considered as a sample from such a predictive probability distribution without a preliminary post-processing treatment to calibrate the ensemble. Two main families of post-processing methods, either competitive, such as BMA, or collaborative, such as EMOS, can be found in the literature. This paper proposes a mixed effect model belonging to the collaborative family. The structure of the model is based on the hypothesis of invariance under the relabelling of the ensemble members. Its interesting specificities are as follows: 1) exchangeability, which contributes to parsimony, with a latent pivot variable synthesizing the essential meteorological features of the ensembles, 2) a multi-ensemble implementation, allowing to take advantage of various information so as to increase the sharpness of the forecasting procedure. Focus is cast onto Normal statistical structures, first with a direct application for temperatures, then with its Tobit extension for precipitation. Inference is performed by EM algorithms with recourse made to stochastic conditional simulations in the precipitation case. After checking its good behavior on artificial data, the proposed post-processing technique is applied to temperature and precipitation ensemble forecasts produced over five river basins managed by Hydro-Québec. These ensemble forecasts were extracted from the THORPEX Interactive Grand Global Ensemble (TIGGE) database. The results indicate that the post-processed ensembles are calibrated and generally sharper than the raw ensembles. | statistics
By working in QED, we obtain the electron, positron, and photon Parton Distribution Functions (PDFs) of the unpolarised electron at the next-to-leading logarithmic accuracy. The PDFs account for all of the universal effects of initial-state collinear origin, and are key ingredients in the calculations of cross sections in the so-called structure-function approach. We present both numerical and analytical results, and show that they agree extremely well with each other. The analytical predictions are defined by means of an additive formula that matches a large-$z$ solution that includes all orders in the QED coupling constant $\alpha$, with a small- and intermediate-$z$ solution that includes terms up to ${\cal O}(\alpha^3)$. | high energy physics phenomenology |
We revisit the perturbative S-matrix of c=1 string theory from the worldsheet perspective. We clarify the origin of the leg pole factors, the non-analyticity of the string amplitudes, and the validity as well as limitations of earlier computations based on resonance momenta. We compute the tree level 4-point amplitude and the genus one 2-point reflection amplitude by numerically integrating Virasoro conformal blocks with DOZZ structure constants on the sphere and on the torus, with sufficiently generic complex Liouville momenta, and find agreement with known answers from the c=1 matrix model. | high energy physics theory |
Deep generative models have shown great promise when it comes to synthesising novel images. While they can generate images that look convincing on a higher level, generating fine-grained details is still a challenge. In order to foster research on more powerful generative approaches, this paper proposes a novel task: generative modelling of 2D tree skeletons. Trees are an interesting shape class because they exhibit complexity and variations that are well-suited to measure the ability of a generative model to generate detailed structures. We propose a new dataset for this task and demonstrate that state-of-the-art generative models fail to synthesise realistic images on our benchmark, even though they perform well on current datasets like MNIST digits. Motivated by these results, we propose a novel network architecture based on combining a variational autoencoder using Recurrent Neural Networks and a convolutional discriminator. The network, error metrics and training procedure are adapted to the task of fine-grained sketching. Through quantitative and perceptual experiments, we show that our model outperforms previous work and that our dataset is a valuable benchmark for generative models. We will make our dataset publicly available. | computer science
This paper studies the possibility of detecting and isolating topology failures (including link failures and node failures) of a networked system from subsystem measurements, in which subsystems are of fixed high-order linear dynamics, and the exact interaction weights among them are unknown. We prove that in such class of networked systems with the same network topologies, the detectability and isolability of a given topology failure (set) are generic properties, indicating that it is the network topology that dominates the property of being detectable or isolable for a failure (set). We first give algebraic conditions for detectability and isolability of arbitrary parameter perturbations for a lumped plant, and then derive graph-theoretical necessary and sufficient conditions for generic detectability and isolability of topology failures for the networked systems. On the basis of these results, we consider the problems of deploying the smallest set of sensors for generic detectability and isolability. We reduce the associated sensor placement problems to the hitting set problems, which can be effectively solved by greedy algorithms with guaranteed approximation performances. | electrical engineering and systems science |
In this paper, we propose a model-based machine-learning approach for dual-polarization systems by parameterizing the split-step Fourier method for the Manakov-PMD equation. The resulting method combines hardware-friendly time-domain nonlinearity mitigation via the recently proposed learned digital backpropagation (LDBP) with distributed compensation of polarization-mode dispersion (PMD). We refer to the resulting approach as LDBP-PMD. We train LDBP-PMD on multiple PMD realizations and show that it converges within 1% of its peak dB performance after 428 training iterations on average, yielding a peak effective signal-to-noise ratio of only 0.30 dB below the PMD-free case. Similar to state-of-the-art lumped PMD compensation algorithms in practical systems, our approach does not assume any knowledge about the particular PMD realization along the link, nor any knowledge about the total accumulated PMD. This is a significant improvement compared to prior work on distributed PMD compensation, where knowledge about the accumulated PMD is typically assumed. We also compare different parameterization choices in terms of performance, complexity, and convergence behavior. Lastly, we demonstrate that the learned models can be successfully retrained after an abrupt change of the PMD realization along the fiber. | electrical engineering and systems science |
The ability to switch magnetic elements by spin-orbit-induced torques has recently attracted much attention as a path towards high-performance, non-volatile memories with low power consumption. Realizing efficient spin-orbit-based switching requires harnessing both new materials and novel physics to obtain high charge-to-spin conversion efficiencies, thus making the choice of spin source crucial. Here we report the observation of spin-orbit torque switching in bilayers consisting of a semimetallic film of 1T'-MoTe2 adjacent to permalloy. Deterministic switching is achieved without external magnetic fields at room temperature, and the switching occurs with currents one order of magnitude smaller than those typical in devices using the best-performing heavy metals. The thickness dependence can be understood if the interfacial spin-orbit contribution is considered in addition to the bulk spin Hall effect. A further threefold reduction in the switching current is demonstrated by resorting to dumbbell-shaped magnetic elements. These findings foretell exciting prospects of using MoTe2 for low-power semimetal-based spin devices. | condensed matter
Deriving interpretable prognostic features from deep-learning-based prognostic histopathology models remains a challenge. In this study, we developed a deep learning system (DLS) for predicting disease specific survival for stage II and III colorectal cancer using 3,652 cases (27,300 slides). When evaluated on two validation datasets containing 1,239 cases (9,340 slides) and 738 cases (7,140 slides) respectively, the DLS achieved a 5-year disease-specific survival AUC of 0.70 (95%CI 0.66-0.73) and 0.69 (95%CI 0.64-0.72), and added significant predictive value to a set of 9 clinicopathologic features. To interpret the DLS, we explored the ability of different human-interpretable features to explain the variance in DLS scores. We observed that clinicopathologic features such as T-category, N-category, and grade explained a small fraction of the variance in DLS scores (R2=18% in both validation sets). Next, we generated human-interpretable histologic features by clustering embeddings from a deep-learning based image-similarity model and showed that they explain the majority of the variance (R2 of 73% to 80%). Furthermore, the clustering-derived feature most strongly associated with high DLS scores was also highly prognostic in isolation. With a distinct visual appearance (poorly differentiated tumor cell clusters adjacent to adipose tissue), this feature was identified by annotators with 87.0-95.5% accuracy. Our approach can be used to explain predictions from a prognostic deep learning model and uncover potentially-novel prognostic features that can be reliably identified by people for future validation studies. | electrical engineering and systems science |
The three body problem is a special case of the n body problem where one takes the initial positions and velocities of three point masses and attempts to predict their motion over time according to Newtonian laws of motion and universal gravitation. Though analytical solutions have been found for special cases, the general problem remains unsolved; the solutions that do exist are impractical. Fortunately, for many applications, we may not need to solve the problem completely, i.e., predicting with reasonable accuracy for some time steps may be sufficient. Recently, Breen et al. attempted to approximately solve the three body problem using a simple neural network. Although their methods appear to achieve some success in reducing the computational overhead, their model is extremely restricted, applying to a specialized 2D case. The authors provide no explanations for the critical decisions taken in their experimental design, no details on their model or architecture, and do not publish their code. Moreover, the model does not generalize well to unseen cases. In this paper, we propose a detailed experimental setup to determine the feasibility of using neural networks to solve the three body problem up to a certain number of time steps. We establish a benchmark on the dataset size and set an accuracy threshold to measure the viability of our results for practical applications. Then, we build our models from this class of NNs using a dataset generated from standard numerical integrators. We gradually increase the complexity of our dataset to determine whether NNs can learn a representation of the chaotic three body problem well enough to replace numerical integrators in real life scenarios. | computer science
This review aims at gathering the most relevant quantum multi-parameter estimation methods that go beyond the direct use of the Quantum Fisher Information concept. We discuss in detail the Holevo Cram\'er-Rao bound, the Quantum Local Asymptotic Normality approach as well as Bayesian methods. Even though the fundamental concepts in the field have been laid out more than forty years ago, a number of important results have appeared much more recently. Moreover, the field drew increased attention recently thanks to advances in practical quantum metrology proposals and implementations that often involve estimation of multiple parameters simultaneously. Since these topics are spread in the literature and often served in a very formal mathematical language, one of the main goals of this review is to provide a largely self-contained work that allows the reader to follow most of the derivations and get an intuitive understanding of the interrelations between different concepts using a set of simple yet representative examples involving qubit and Gaussian shift models. | quantum physics |
A generally applicable approach for the calculation of relativistic one-electron properties with two-component wave functions is presented. The formalism is explicitly evaluated for the example of quasi-relativistic wavefunctions obtained within the zeroth order regular approximation (ZORA). The wide applicability of the scheme is demonstrated for the calculation of parity ($\mathcal{P}$) and time-reversal ($\mathcal{T}$) symmetry violating properties, which are important for searches of physics beyond the standard model of particle physics. The quality of the ZORA results is shown exemplarily for the molecules RaF and TlF by comparison to data from four-component calculations as far as available. Finally, the applicability of RaF in experiments that search for $\mathcal{P,T}$-violation not only in the electronic but also in quark sector is demonstrated. | physics |
The explosive growth of data and its related energy consumption is pushing the need to develop energy-efficient brain-inspired schemes and materials for data processing and storage. Here, we demonstrate experimentally that Co/Pt films can be used as artificial synapses by manipulating their magnetization state using circularly-polarized ultrashort optical pulses at room temperature. We also show an efficient implementation of supervised perceptron learning on an opto-magnetic neural network, built from such magnetic synapses. Importantly, we demonstrate that the optimization of synaptic weights can be achieved using a global feedback mechanism, such that the learning does not rely on external storage or additional optimization schemes. These results suggest there is high potential for realizing artificial neural networks using optically-controlled magnetization in technologically relevant materials, that can learn not only quickly but also energy-efficiently. | computer science
The flux of cosmic-ray high-energy positrons has recently been measured by AMS with unprecedented precision. This flux is well above the expectation from secondary positrons made by the observed fluxes of nuclear cosmic rays impinging on the interstellar medium. Various authors have pointed out that the positron excess may originate at the primary cosmic-ray source itself, rather than in the more local ISM, thus avoiding the temptation to invoke a dark-matter decay or annihilation origin, or nearby pulsars. We investigate the possibility that the source is the one of a comprehensive model of gamma-ray bursts and cosmic rays, proposed two decades ago. The result, based on the original unmodified priors of the model --and with no fitting of parameters-- very closely reproduces the shape and magnitude of the AMS observations. | high energy physics phenomenology |
In this work, we study the elastic scattering behavior of electron vortices when propagating through amorphous samples. We use a formulation of the multislice approach in cylindrical coordinates to theoretically investigate the redistribution of intensity between different angular momentum components due to scattering. To corroborate and elaborate on our theoretical results, we perform extensive numerical simulations on three model systems (Si$_3$N$_4$, Fe$_{0.8}$B$_{0.2}$, Pt) for a wide variety of experimental parameters to quantify the purity of the vortices, the net angular momentum transfer, and the variability of the results with respect to the random relative position between the electron beam and the scattering atoms. These results will help scientists to further improve the creation of electron vortices and enhance applications involving them. | condensed matter |
Bacteria commonly live in structured communities that affect human health and influence ecological systems. Heterogeneous populations, such as motile and non-motile populations, often coexist in bacteria communities. Motile subpopulations in microbial communities are believed to be important to dispersal, quest for food, and material transport. However, except in circumstances where motile cells drive colony expansion (e.g. bacterial swarming), the physiological functions of motile subpopulations in bacterial communities are largely unclear. Here we discovered that motile cells in routinely cultured sessile colonies of peritrichously flagellated bacteria can self-organize into two adjacent, centimeter-scale motile rings surrounding the entire colony. The motile rings arise from spontaneous segregation of a homogeneous swimmer suspension that mimics a phase separation; the process is mediated by intercellular interactions and shear-induced depletion. As a result of this self-organization, cells drive fluid flows to circulate around the colony at a constant peak speed of approximately 30 microns per second, providing a stable and high-speed avenue for directed material transport at the macroscopic scale. These findings present a unique form of bacterial self-organization that influences population structure and material distribution in bacterial communities. | physics |
We investigate the encoding of higher-dimensional logic into quantum states. To that end we introduce finite-function-encoding (FFE) states which encode arbitrary $d$-valued logic functions and investigate their structure as an algebra over the ring of integers modulo $d$. We point out that the polynomiality of the function is the deciding property for associating hypergraphs to states. Given a polynomial, we map it to a tensor-edge hypergraph, where each edge of the hypergraph is associated with a tensor. We observe how these states generalize the previously defined qudit hypergraph states, especially through the study of a group of finite-function-encoding Pauli stabilizers. Finally, we investigate the structure of FFE states under local unitary operations, with a focus on the bipartite scenario and its connections to the theory of complex Hadamard matrices. | quantum physics |
Los Alamos is currently developing novel particle accelerator controls and diagnostics algorithms to enable higher quality beams with lower beam losses than is currently possible. The purpose of this workshop was to consider tuning and optimization challenges of a wide range of particle accelerators including linear proton accelerators such as the Los Alamos Neutron Science Center (LANSCE), rings such as the Advanced Photon Source (APS) synchrotron, free electron lasers (FEL) such as the Linac Coherent Light Source (LCLS) and LCLS-II, the European X-ray Free Electron Laser (EuXFEL), the Swiss FEL, and the planned MaRIE FEL, and plasma wake-field accelerators such as FACET, FACET-II, and AWAKE at CERN. One major challenge is the ability to quickly create very high quality, extremely intense, custom current and energy profile beams while working with limited real time non-invasive diagnostics and utilizing time-varying uncertain initial beam distributions and accelerator components. Currently, a few individual accelerator labs have been developing and applying their own diagnostics tools and custom control and ML algorithms for automated machine tuning and optimization. The goal of this workshop was to bring together a group of accelerator physicists and accelerator related control and ML experts in order to define which controls and diagnostics would be most useful for existing and future accelerators and to create a plan for developing a new family of algorithms that can be shared and maintained by the community. | physics
In radar accurate localization of unmanned aerial vehicle (UAV) swarms, the high density, similar motion parameters, small radar cross-section (RCS), strong noise and far range place stringent requirements on radar resolution and transmitting power. In this paper, by using advantages of the long-time integration (LTI) technique and gridless sparse method, we construct a super-resolution framework for radar accurate localization of UAV swarms without changing radar hardware and system parameters. Thereafter, based on this framework, a range super-resolution method is proposed to realize the radar accurate localization of UAV swarms. Mathematical analyses and numerical simulations are performed and demonstrate that, compared to the keystone transform (KT)-based LTI method, MUSIC-based method and reweighted atomic-norm minimization (RAM)-based method, the range super-resolution method is more robust and practical for radar accurate localization of UAV swarms under a noisy environment. Additionally, a real experiment with an X-band radar is also conducted to verify the effectiveness of the range super-resolution method. | electrical engineering and systems science
Reduced complexity faster-than-Nyquist (FTN) signaling systems are gaining increased attention as they provide improved bandwidth utilization for an acceptable level of detection complexity. In order to have a better understanding of the tradeoff between performance and complexity of the reduced complexity FTN detection techniques, it is necessary to study these techniques in the presence of channel coding. In this paper, we investigate the performance of a polar coded FTN system which uses a reduced complexity FTN detection, namely, the recently proposed successive symbol-by-symbol with go-back-K sequence estimation (SSSgbKSE) technique. Simulations are performed for various intersymbol-interference (ISI) levels and for various go-back-K values. Bit error rate (BER) performance of Bahl-Cocke-Jelinek-Raviv (BCJR) detection and SSSgbKSE detection techniques are studied for both uncoded and polar coded systems. Simulation results reveal that polar codes can compensate some of the performance loss incurred in the reduced complexity SSSgbKSE technique and assist in closing the performance gap between BCJR and SSSgbKSE detection algorithms. | electrical engineering and systems science
We propose local polynomial estimators for the conditional mean of a continuous response when only pooled response data are collected under different pooling designs. Asymptotic properties of these estimators are investigated and compared. Extensive simulation studies are carried out to compare finite sample performance of the proposed estimators under various model settings and pooling strategies. We apply the proposed local polynomial regression methods to two real-life applications to illustrate practical implementation and performance of the estimators for the mean function. | statistics |
This work is an introduction to the classical and quantum information theory of geometric flows of (relativistic) Lagrange--Hamilton mechanical systems. Basic geometric and physical properties of the canonical nonholonomic deformations of G. Perelman entropy functionals and geometric flows evolution equations of classical mechanical systems are described. Projections of such F- and W-functionals on Lorentz spacetime manifolds and on three-dimensional spacelike hypersurfaces are studied. These functionals are used for elaborating relativistic thermodynamic models for Lagrange--Hamilton geometric evolution and respective generalized R. Hamilton geometric flow and nonholonomic Ricci flow equations. The concept of nonholonomic W-entropy is developed as a complementary one for the classical Shannon entropy and the quantum von Neumann entropy. Geometric flow generalizations of approaches based on classical and quantum relative entropy, conditional entropy, mutual information, and related thermodynamic models are considered. Such basic ingredients and topics of quantum geometric flow information theory are elaborated using the formalism of density matrices and measurements with quantum channels for the evolution of quantum mechanical systems. | physics
We perform a numerical study of a spin-1/2 model with $\mathbb{Z}_2 \times \mathbb{Z}_2$ symmetry in one dimension which demonstrates an interesting similarity to the physics of two-dimensional deconfined quantum critical points (DQCP). Specifically, we investigate the quantum phase transition between Ising ferromagnetic and valence bond solid (VBS) symmetry-breaking phases. Working directly in the thermodynamic limit using uniform matrix product states, we find evidence for a direct continuous phase transition that lies outside of the Landau-Ginzburg-Wilson paradigm. In our model, the continuous transition is found everywhere on the phase boundary. We find that the magnetic and VBS correlations show very close power law exponents, which is expected from the self-duality of the parton description of this DQCP. Critical exponents vary continuously along the phase boundary in a manner consistent with the predictions of the field theory for this transition. We also find a regime where the phase boundary splits, as suggested by the theory, introducing an intermediate phase of coexisting ferromagnetic and VBS order parameters. Interestingly, we discover a transition involving this coexistence phase which is similar to the DQCP, being also disallowed by Landau-Ginzburg-Wilson symmetry-breaking theory. | condensed matter |
The possibility of making an object invisible for detectors has become a topic of considerable interest over the past decades. Most of the studies so far focused on reducing the visibility by reshaping the electromagnetic scattering in the spatial domain. In fact, by manipulating the electromagnetic scattering in the time domain, the visibility of an object can also be reduced. Importantly, unlike previous studies on phase-switched screens and time-varying metasurfaces, where the effect is narrowband due to the dispersive resonance, here we introduce a broadband switchable metasurface for the microwave frequency range, integrated with p-i-n diodes. The reflection phase of the metasurface can be changed by approximately {\pi} over a fractional bandwidth of 76%. By modulating the metasurface quasirandomly in the time domain, the incident narrow-band signal is spread into a white-noiselike spectrum upon reflection, creating a spectral camouflage. The broadband feature of the proposed time-varying metasurface can provide practical insight for various applications, including radar stealth and ultrawide-band wireless communication. | physics
We examine whether galaxy environments directly affect the triggering of nuclear activity in Sloan Digital Sky Survey (SDSS) local spiral galaxies using a volume-limited sample with the $r$-band absolute magnitude $M_{r} < -19.0$ and $0.02 < z < 0.055$ selected from the SDSS Data Release 7. To avoid incompleteness of the central velocity dispersion $\sigma$ of the volume-limited sample and to fix the black hole mass affecting AGN activity, we limit the sample to a narrow $\sigma$ range of $130$ km s$^{-1}<\sigma<200$ km s$^{-1}$. We define a variety of environments as a combination of neighbour interactions and local density on a galaxy. After the central star formation rate (which is closely related to AGN activity level) is additionally restricted, the direct impact of the environment is unveiled. In the outskirts of rich clusters, red spiral galaxies show a significant excess of the AGN fraction despite the lack of central gas. We argue that they have been pre-processed before entering the rich clusters, and due to mergers or strong encounters in the in-fall region, their remaining gases efficiently lose angular momentum. We investigate an environment in which many star-forming galaxies coexist with a few starburst-AGN composite hosts having the highest [OIII] luminosity. We claim that they are a gas-rich merger product in groups or are group galaxies in-falling into clusters, indicating that many AGN signatures may be obscured following the merger events. | astrophysics
As the world we live in becomes smaller and more interconnected, with people and goods traveling for thousands of kilometers to reach their destinations, the reliability and efficiency of transportation systems have become critical. Indeed, trans-continental highways need particular attention due to their important role in sustaining globalization. In this context, intelligent transportation systems (ITS) can actively enhance the safety, mobility, productivity, and comfort of trans-continental highways. However, ITS efficiency depends greatly on the roads where they are deployed, on the availability of power and connectivity, and on the integration of future connected and autonomous vehicles. To this end, high altitude platform station (HAPS) systems, due to their mobility, sustainability, payload capacity, and communication/caching/computing capabilities, are seen as a key enabler of future ITS services for trans-continental highways; this paradigm is referred to as HAPS-ITS. The latter is envisioned as an active component of ITS systems to support a plethora of transportation applications, such as traffic monitoring, accident reporting, and platooning. This paper discusses how HAPS systems can enable advanced ITS services for trans-continental highways, presenting the main requirements of HAPS-ITS and a detailed case study of the Trans-Sahara highway. | electrical engineering and systems science |