text (string, lengths 11-9.77k) | label (string, lengths 2-104) |
---|---|
We propose a scheme for investigating the nonequilibrium aspects of small-polaron physics using an array of superconducting qubits and microwave resonators. This system, which can be realized with transmon or gatemon qubits, serves as an analog simulator for a lattice model describing a nonlocal coupling of a quantum particle (excitation) to dispersionless phonons. We study its dynamics following an excitation-phonon (qubit-resonator) interaction quench using a numerically exact approach based on a Chebyshev-moment expansion of the time-evolution operator of the system. We thereby glean heretofore unavailable insights into the process of the small-polaron formation resulting from strongly momentum-dependent excitation-phonon interactions, most prominently about its inherent dynamical timescale. To further characterize this complex process, we evaluate the excitation-phonon entanglement entropy and show that initially prepared bare-excitation Bloch states here dynamically evolve into small-polaron states that are close to being maximally entangled. Finally, by computing the dynamical variances of the phonon position and momentum quadratures, we demonstrate a pronounced non-Gaussian character of the latter states, with a strong antisqueezing in both quadratures.
|
quantum physics
|
We construct black hole solutions to the leading order of the string effective action in five dimensions with the source given by a dilaton and a magnetically charged antisymmetric gauge $B$-field. The presence of the considered $B$-field leads to unusual asymptotic behavior of the solutions, which are neither asymptotically flat nor asymptotically (A)dS. We take the three-dimensional spatial part to correspond to the Bianchi classes, so the horizons of these topological black hole solutions are modeled by seven homogeneous Thurston geometries of $E^3$, $S^3$, $H^3$, $H^2 \times E^1$, $\widetilde{{SL_2R}}$, nilgeometry, and solvegeometry. Calculating the quasi-local mass, temperature, entropy, dilaton charge, and magnetic potential, we show that the first law of black hole thermodynamics is satisfied by these quantities and that the dilaton hair is of the secondary type. Furthermore, for Bianchi type $V$, the $T$-dual black hole solution is obtained, which carries no charge associated with the $B$-field and possesses a dilaton hair of the secondary kind. Also, the entropy turns out to be invariant under the $T$-duality.
|
high energy physics theory
|
Central exclusive diffractive (CED) production of meson resonances is potentially a factory producing new particles, in particular a glueball. The produced resonances lie on trajectories with vacuum quantum numbers, essentially on the pomeron trajectory. A tower of resonance recurrences, the production cross section, and the resonance widths are predicted. A new feature is the form of the non-linear pomeron trajectory, producing resonances (glueballs) with increasing widths. At LHC energies, in the nearly forward direction, the $t$-channel in elastic scattering, in single or double diffraction dissociation, as well as in CED, is dominated by pomeron exchange (the role of secondary trajectories is negligible, although a small contribution from the odderon may be present).
|
high energy physics phenomenology
|
Congruent Procrustes analysis aims to find the best matching between two point sets through rotation, reflection and translation. We formulate the Procrustes problem for hyperbolic spaces, review the canonical definition of the center of point sets, and give a closed form solution for the optimal isometry for noise-free measurements. We also analyze the performance of the proposed method under measurement noise.
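For orientation, the classical Euclidean Procrustes problem has a closed-form SVD solution; the sketch below (plain NumPy, hypothetical names, a baseline illustration rather than the hyperbolic-space solution of the abstract) shows the centering-plus-SVD recipe that the hyperbolic formulation generalizes.

```python
import numpy as np

def euclidean_procrustes(X, Y):
    """Best orthogonal map R (rotation/reflection) and translation t minimizing ||R @ X + t - Y||_F.

    X, Y: (d, n) arrays of corresponding points (columns are points).
    Classical Euclidean closed form; a baseline sketch, not the hyperbolic method of the paper.
    """
    x_bar = X.mean(axis=1, keepdims=True)   # centroids play the role of the
    y_bar = Y.mean(axis=1, keepdims=True)   # "center of point sets"
    U, _, Vt = np.linalg.svd((Y - y_bar) @ (X - x_bar).T)
    R = U @ Vt                              # optimal orthogonal map (may include a reflection)
    t = y_bar - R @ x_bar                   # optimal translation
    return R, t

# toy usage: recover a known isometry from noise-free correspondences
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Y = R_true @ X + np.array([[1.0], [2.0], [3.0]])
R_hat, t_hat = euclidean_procrustes(X, Y)
assert np.allclose(R_hat, R_true, atol=1e-8)
```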
|
electrical engineering and systems science
|
High-energy (>20keV) X-ray photon detection at high quantum yield, high spatial resolution and short response time has long been an important area of study in physics. Scintillation is a prevalent method but limited in various ways. Directly detecting high-energy X-ray photons has been a challenge to this day, mainly due to low photon-to-photoelectron conversion efficiencies. Commercially available state-of-the-art Si direct detection products such as the Si charge-coupled device (CCD) are inefficient for >10keV photons. Here, we present Monte Carlo simulation results and analyses to introduce a highly effective yet simple high-energy X-ray detection concept with significantly enhanced photon-to-electron conversion efficiencies composed of two layers: a top high-Z photon energy attenuator layer (PAL) and a bottom Si detector. We use the principle of photon energy down conversion, where high-energy X-ray photon energies are attenuated down to and below 10keV via inelastic scattering suitable for efficient photoelectric absorption by Si. Our Monte Carlo simulation results demonstrate that 10-30x increase in quantum yield can be achieved using PbTe PAL on Si, potentially advancing high-resolution, high-efficiency X-ray detection using PAL-enhanced Si CMOS image sensors.
|
physics
|
We propose a distributed algorithm for multiagent systems that aim to optimize a common objective when agents differ in their estimates of the objective-relevant state of the environment. Each agent keeps an estimate of the environment and a model of the behavior of other agents. The model of other agents' behavior assumes agents choose their actions randomly based on a stationary distribution determined by the empirical frequencies of past actions. At each step, each agent takes the action that maximizes its expectation of the common objective computed with respect to its estimate of the environment and its model of others. We propose a weighted averaging rule with non-doubly stochastic weights for agents to estimate the empirical frequency of past actions of all other agents by exchanging their estimates with their neighbors over a time-varying communication network. Under this averaging rule, we show agents' estimates converge to the actual empirical frequencies fast enough. This implies convergence of actions to a Nash equilibrium of the game with identical payoffs given by the expectation of the common objective with respect to an asymptotically agreed estimate of the state of the environment.
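The following is a minimal, schematic sketch of a consensus-style weighted-averaging step of the kind described above, written in NumPy with hypothetical shapes and a row-stochastic (not necessarily doubly stochastic) weight matrix; it illustrates the flavor of the rule only, not the authors' exact update or its convergence analysis.

```python
import numpy as np

def averaging_step(estimates, W, own_actions, t):
    """One round of weighted averaging of empirical action frequencies.

    estimates:   (n_agents, n_agents, n_actions); estimates[i, j] is agent i's estimate
                 of agent j's empirical action frequencies.
    W:           (n_agents, n_agents) nonnegative weights matching the current communication
                 graph (assumed row-stochastic here; need not be doubly stochastic).
    own_actions: length-n_agents array of the actions just played.
    t:           current time step (>= 1), used for the running-average correction.
    Schematic sketch only, not the paper's exact rule.
    """
    n_agents, _, n_actions = estimates.shape
    # mix neighbors' estimates with the (possibly non-doubly-stochastic) weights
    mixed = np.einsum('ik,kjl->ijl', W, estimates)
    # each agent folds in the action it just observed itself taking
    for i in range(n_agents):
        onehot = np.zeros(n_actions)
        onehot[own_actions[i]] = 1.0
        mixed[i, i] = ((t - 1) * mixed[i, i] + onehot) / t
    return mixed
```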
|
electrical engineering and systems science
|
The runtime performance of modern SAT solvers is deeply connected to the phase transition behavior of CNF formulas. While CNF solving has witnessed significant runtime improvement over the past two decades, the same does not hold for several other classes such as the conjunction of cardinality and XOR constraints, denoted as CARD-XOR formulas. The problem of determining the satisfiability of CARD-XOR formulas is a fundamental problem with a wide variety of applications ranging from discrete integration in the field of artificial intelligence to maximum likelihood decoding in coding theory. The runtime behavior of random CARD-XOR formulas is unexplored in prior work. In this paper, we present the first rigorous empirical study to characterize the runtime behavior of 1-CARD-XOR formulas. We show empirical evidence of a surprising phase-transition that follows a non-linear tradeoff between CARD and XOR constraints.
|
computer science
|
Bi$_2$O$_2$Se is a promising material for next-generation semiconducting electronics. It exhibits premature metallicity on the introduction of a tiny amount of electrons, the physics behind which remains elusive. Here we report on transport and dielectric measurements in Bi$_2$O$_2$Se single crystals at various carrier densities. The temperature-dependent resistivity ($\rho$) indicates a smooth evolution from the semiconducting to the metallic state. The critical concentration for the metal-insulator transition (MIT) to occur is extraordinarily low ($n_\textrm{c}\sim10^{16}$ cm$^{-3}$). The relative permittivity of the insulating sample is huge ($\epsilon_\textrm{r}\approx155(10)$) and varies slowly with temperature. Combined with the light effective mass, a long effective Bohr radius ($a_\textrm{B}^*\approx36(2)$ $\textrm{nm}$) is derived, which provides a reasonable interpretation of the premature metallicity according to Mott's criterion for MITs. The high electron mobility ($\mu$) at low temperatures may result from the screening of ionized scattering centers due to the huge $\epsilon_\textrm{r}$. Our findings shed light on the electron dynamics in two-dimensional (2D) Bi$_2$O$_2$Se devices.
|
condensed matter
|
Context: The globalisation of activities associated with software development and use has introduced many challenges in practice and for research. While the predominant approach to research in software engineering has followed a positivist science model, this approach may be sub-optimal when addressing problems with a dominant social or cultural dimension, such as those frequently encountered when studying work practices in a globally distributed team setting. The investigation of such a team reported in this paper provides one example of an alternative approach to research in a global context, through a longitudinal interpretive field study seeking to understand how global virtual teams mediated the use of technology. Objective: Our focus in this paper is on the conduct of research in the context of global software activities, particularly as applied to the actions and interactions of global virtual teams. Method: We describe how we undertook a substantial field study of global virtual teams, and highlight how the adopted structuration theory enabled us to deliver effectively against our goals. Results: We believe that the approach taken suited a research context in which situated practices were occurring over time in a highly complex domain, ensuring that our results were both strongly grounded and relevant to practice. It has resulted in the generation of substantive theory and techniques that have been adapted and applied on a pilot basis in further field settings. Conclusion: We conclude that globally distributed teamwork presents a complex context which demands new research approaches, beyond the limited set customarily applied by software engineering researchers. We advocate experimenting with different research methodologies and methods so that we have a more rounded repertoire to address the most important and relevant issues in global software development research.(Abridged)
|
computer science
|
Using elementary techniques, we prove sharp anisotropic Hardy-Littlewood inequalities for positive multilinear forms. In particular, we recover an inequality proved by F. Bayart in 2018.
|
mathematics
|
We describe two new generation mechanisms for Dark Matter composed of sterile neutrinos with ${\cal O}(1)$ keV mass. The model contains a light scalar field which coherently oscillates in the early Universe and modulates the Majorana mass of the sterile neutrino. In a region of model parameter space, the oscillations between active and sterile neutrinos are resonantly enhanced. This mechanism allows us to produce sterile neutrino DM with a small mixing angle with active neutrinos, thus evading the X-ray constraints. At the same time the spectrum of produced DM is much cooler than in the case of ordinary oscillations in plasma, opening a window of lower mass DM, which is otherwise forbidden by structure formation considerations. In other regions of the model parameter space, where the resonance does not appear, another mechanism can operate: the large field suppresses the active-sterile oscillations, but instead sterile neutrinos are produced by the oscillating scalar field when the effective fermion mass crosses zero. In this case the DM component is cold, and even a 1 keV neutrino is consistent with cosmic structure formation.
|
high energy physics phenomenology
|
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks. Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep, which limits scalability. In this work, we propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism, i.e., node self-attention, neighborhood attention, and layer memory attention. We explain why the proposed attentive modules can improve GNN for few-shot learning with theoretical analysis and illustrations. Extensive experiments show that the proposed Attentive GNN model achieves promising results compared to state-of-the-art GNN- and CNN-based methods for few-shot learning tasks on the mini-ImageNet and tiered-ImageNet benchmarks, under ConvNet-4 and ResNet-based backbones with both inductive and transductive settings. The codes will be made publicly available.
|
computer science
|
We study a new model of quintessential inflation which is inspired by supergravity and string theory. The model features a kinetic pole, which gives rise to the inflationary plateau, and a runaway quintessential tail. We envisage a coupling between the inflaton and the Peccei-Quinn (PQ) field which terminates the roll of the runaway inflaton and traps the latter at an enhanced symmetry point (ESP), thereby breaking the PQ symmetry. The kinetic density of the inflaton is transferred to the newly created thermal bath of the hot big bang due to the decay of PQ particles. The model successfully accounts for the observations of inflation and dark energy without any fine-tuning, while also resolving the strong CP problem of QCD and generating axionic dark matter, without isocurvature perturbations. Trapping the inflaton at the ESP ensures that the model does not suffer from the infamous 5th force problem, which typically plagues quintessence.
|
high energy physics phenomenology
|
Assessing tumor tissue heterogeneity via ultrasound has recently been suggested for predicting early response to treatment. The ultrasound backscattering characteristics can assist in better understanding the tumor texture by highlighting the local concentration and spatial arrangement of tissue scatterers. However, it is challenging to quantify the various tissue heterogeneities, ranging from fine to coarse, of the echo envelope peaks in tumor texture. Local parametric fractal features extracted via maximum likelihood estimation from five well-known statistical model families are evaluated for the purpose of ultrasound tissue characterization. The fractal dimension (self-similarity measure) was used to characterize the spatial distribution of scatterers, while the lacunarity (sparsity measure) was applied to determine scatterer number density. Performance was assessed based on 608 cross-sectional clinical ultrasound RF images of liver tumors (230 and 378 demonstrating respondent and non-respondent cases, respectively). Cross-validation via leave-one-tumor-out and different k-fold methodologies using a Bayesian classifier was employed for validation. The fractal properties of the backscattered echoes based on the Nakagami model (Nkg) and its extended four-parameter Nakagami-generalized inverse Gaussian (NIG) distribution achieved the best results, with nearly similar performance, for characterizing liver tumor tissue. Accuracy, sensitivity, and specificity for the Nkg/NIG were: 85.6%/86.3%, 94.0%/96.0%, and 73.0%/71.0%, respectively. Other statistical models, such as the Rician, Rayleigh, and K-distribution, were found to be less effective in characterizing the subtle changes in tissue texture as an indication of response to treatment. Employing the most relevant and practical statistical model could have potential consequences for the design of an early and effective clinical therapy.
|
electrical engineering and systems science
|
We investigate the asymptotic behaviour of networks of interacting non-linear Hawkes processes modeling a homogeneous population of neurons in the large population limit. In particular, we prove a functional central limit theorem for the mean spike activity, thereby characterizing the asymptotic fluctuations in terms of a stochastic Volterra integral equation. Our approach differs from previous approaches in making use of the associated resolvent in order to represent the fluctuations as Skorokhod continuous mappings of weakly converging martingales. Since the Lipschitz properties of the resolvent are explicit, our analysis in principle also allows one to derive approximation errors in terms of the driving martingales.
|
mathematics
|
Adversarial adaptation models have demonstrated significant progress towards transferring knowledge from a labeled source dataset to an unlabeled target dataset. Partial domain adaptation (PDA) investigates the scenarios in which the source domain is large and diverse, and the target label space is a subset of the source label space. The main purpose of PDA is to identify the shared classes between the domains and promote learning transferable knowledge from these classes. In this paper, we propose a multi-class adversarial architecture for PDA. The proposed approach jointly aligns the marginal and class-conditional distributions in the shared label space by minimaxing a novel multi-class adversarial loss function. Furthermore, we incorporate effective regularization terms to encourage selecting the most relevant subset of source domain classes. In the absence of target labels, the proposed approach is able to effectively learn domain-invariant feature representations, which in turn can enhance the classification performance in the target domain. Comprehensive experiments on three benchmark datasets Office-31, Office-Home, and Caltech-Office corroborate the effectiveness of the proposed approach in addressing different partial transfer learning tasks.
|
computer science
|
We propose a model where the anapole appears as a hidden photon that is coupled to visible matter through a kinetic mixing. For low momentum $|{\bf p}| \ll M$, where $M$ is the cutoff of the model (the soft hidden photons limit), the model reduces to the Ho-Scherrer description. We show that the hidden gauge boson is stable and therefore the hidden photons, indeed, are candidates for dark matter. Our approach shows that the anapole and kinetic mixing terms are equivalent descriptions seen from different scales of energy.
|
high energy physics phenomenology
|
We prove the stability of contact discontinuities without shear, a family of special discontinuous solutions for the three-dimensional full Euler system, in the class of vanishing dissipation limits of the corresponding Navier-Stokes-Fourier system. We also show that solutions of the Navier-Stokes-Fourier system converge to the contact discontinuity when the initial datum converges to the contact discontinuity itself. This implies the uniqueness of the contact discontinuity in the class that we are considering. Our results give an answer to the open question of whether the contact discontinuity is unique for the multi-D compressible Euler system. Our proof is based on the relative entropy method, together with the theory of $a$-contraction up to a shift.
|
mathematics
|
Extracting robust and general 3D local features is key to downstream tasks such as point cloud registration and reconstruction. Existing learning-based local descriptors are either sensitive to rotation transformations, or rely on classical handcrafted features which are neither general nor representative. In this paper, we introduce a new, yet conceptually simple, neural architecture, termed SpinNet, to extract local features which are rotationally invariant whilst sufficiently informative to enable accurate registration. A Spatial Point Transformer is first introduced to map the input local surface into a carefully designed cylindrical space, enabling end-to-end optimization with SO(2) equivariant representation. A Neural Feature Extractor which leverages the powerful point-based and 3D cylindrical convolutional neural layers is then utilized to derive a compact and representative descriptor for matching. Extensive experiments on both indoor and outdoor datasets demonstrate that SpinNet outperforms existing state-of-the-art techniques by a large margin. More critically, it has the best generalization ability across unseen scenarios with different sensor modalities. The code is available at https://github.com/QingyongHu/SpinNet.
|
computer science
|
The aim of the paper is to describe a model of the development of the Covid-19 contamination of the population of a country or a region. For this purpose a special branching process with two types of individuals is considered. This model is intended to use only the observed daily statistics to estimate the main parameter of the contamination and to give a prediction of the mean value of the non-observed population of contaminated individuals. This is a serious advantage in comparison with other more complicated models where the observed official statistics are not sufficient. In this way the specific development of the Covid-19 epidemic is considered for different countries.
|
statistics
|
In this paper, we study a continuous-time discounted jump Markov decision process with both controlled actions and observations. The observation is only available at a discrete set of time instances. At each time of observation, one has to select an optimal timing for the next observation and a control trajectory for the time interval between two observation points. We provide a theoretical framework that the decision maker can utilize to find the optimal observation epochs and the optimal actions jointly. Two cases are investigated. One is gated queueing systems, in which we explicitly characterize the optimal action and the optimal observation, where the optimal observation is shown to be independent of the state. The other is an inventory control problem with a Poisson arrival process, in which we obtain the optimal action and observation numerically. The results show that it is optimal to observe more frequently in a region of states where the optimal action adapts constantly.
|
mathematics
|
We investigate the nonequilibrium evolution of the quark-meson model using two-particle irreducible effective action techniques. Our numerical simulations, which include the full dynamics of the order parameter of chiral symmetry, show how the model thermalizes into different regions of its phase diagram. In particular, by studying quark and meson spectral functions, we shed light on the real-time dynamics approaching the crossover transition, revealing e.g. the emergence of light effective fermionic degrees of freedom in the infrared. At late times in the evolution, the fluctuation-dissipation relation emerges naturally among both meson and quark degrees of freedom, confirming that the simulation successfully reaches thermal equilibrium.
|
high energy physics phenomenology
|
With the stringent requirement of receiving video from unmanned aerial vehicles (UAVs) from anywhere in the stadium of sports events and the significantly high per-cell throughput for video transmission to virtual reality (VR) users, a promising solution is a cell-free multi-group broadcast (CF-MB) network with cooperative reception and broadcast access points (APs). To explore the benefit of broadcasting user-correlated decode-dependent video resources to spatially correlated VR users, the network should dynamically schedule the video and cluster APs into virtual cells for different groups of VR users with overlapped video requests. By decomposing the problem into scheduling and association sub-problems, we first introduce conventional non-learning-based scheduling and association algorithms, and a centralized deep reinforcement learning (DRL) association approach based on the rainbow agent with a convolutional neural network (CNN) to generate decisions from observations. To reduce its complexity, we then decompose the association problem into multiple sub-problems, resulting in a networked-distributed partially observable Markov decision process (ND-POMDP). To solve it, we propose a multi-agent DRL algorithm. To jointly solve the coupled association and scheduling problems, we further develop a hierarchical federated DRL algorithm with the scheduler as meta-controller and association as the controller. Our simulation results show that our CF-MB network can effectively handle real-time video transmission from UAVs to VR users. Our proposed learning architectures are effective and scalable for a high-dimensional cooperative association problem with increasing numbers of APs and VR users. Also, our proposed algorithms outperform non-learning-based methods with significant performance improvement.
|
electrical engineering and systems science
|
We consider radial solutions to the cubic Schr{\"o}dinger equation on the Heisenberg group $$i\partial_t u - \Delta_{\mathbb{H}^1} u = |u|^2u, \quad\Delta_{\mathbb{H}^1} = \frac{1}{4}(\partial_x^2+\partial_y^2) + (x^2+y^2)\partial_s^2, \quad(t,x,y,s) \in \mathbb{R}\times\mathbb{H}^1.$$ This equation is a model for totally non-dispersive evolution equations. We show existence of ground state traveling waves with speed $\beta \in (-1,1)$. When the speed $\beta$ is sufficiently close to $1$, we prove their uniqueness up to symmetries and their smoothness along the parameter $\beta$. The main ingredient is the emergence of a limiting system as $\beta$ tends to the limit $1$, for which we establish linear stability of the ground state traveling wave.
|
mathematics
|
Direct imaging surveys have found that long-period super-Jupiters are rare. By contrast, recent modeling of the widespread gaps in protoplanetary disks revealed by ALMA suggests an abundant population of smaller Neptune to Jupiter-mass planets at large separations. The thermal emission from such lower-mass planets is negligible at optical and near-infrared wavelengths, leaving only their weak signals in reflected light. Planets do not scatter enough light at these large orbital distances, but there is a natural way to enhance their reflecting area. Each of the four giant planets in our solar system hosts swarms of dozens of irregular satellites, gravitationally captured planetesimals that fill their host planets' spheres of gravitational influence. What we see of them today are the leftovers of an intense collisional evolution. At early times, they would have generated bright circumplanetary debris disks. We investigate the properties and detectability of such irregular satellite disks (ISDs) following models for their collisional evolution from Kennedy and Wyatt 2011. We find that ISD brightnesses peak in the $10-100$ AU range probed by ALMA, and can render planets detectable over a wide range of parameters with upcoming high-contrast instrumentation. We argue that future instruments with wide fields of view could simultaneously characterize the atmospheres of known close-in planets, and reveal the population of long-period $\sim$ Neptune-Jupiter mass exoplanets inaccessible to other detection methods. This provides a complementary and compelling science case that would elucidate the early lives of planetary systems.
|
astrophysics
|
The vector boson scattering (VBS) processes in Large Hadron Collider (LHC) experiments offer a unique opportunity to probe the anomalous quartic gauge couplings (aQGCs). We study the dimension-8 operators contributing to the anomalous $\gamma\gamma WW$ coupling and the corresponding unitarity bounds via the exclusive $\gamma\gamma \to W^+W^-$ production in $pp$ collisions at the LHC for a center of mass energy of $\sqrt{s}=13$ TeV. By analysing the kinematical features of the signal, we propose an event selection strategy to highlight the aQGC contributions. Based on the event selection strategy, the statistical significance of the signals is analyzed in detail, and the constraints on the coefficients of the anomalous quartic gauge operators are obtained.
|
high energy physics phenomenology
|
The identification of the electromagnetic counterpart candidate ZTF19abanrhr to the binary black hole merger GW190521 opens the possibility to infer cosmological parameters from this standard siren with a uniquely identified host galaxy. The distant merger allows for cosmological inference beyond the Hubble constant. Here we show that the three-dimensional spatial location of ZTF19abanrhr calculated from the electromagnetic data remains consistent with the updated sky localization of GW190521 provided by the LIGO-Virgo Collaboration. If ZTF19abanrhr is associated with the GW190521 merger and assuming a flat wCDM model, we find that $H_0 =48^{+24}_{-10}$ km/s/Mpc, $\Omega_m =0.39^{+0.38}_{-0.29}$, and $w_0 = -1.29^{+0.63}_{-0.50}$ (median and 68% credible interval). If we use the Hubble constant value inferred from another gravitational-wave event, GW170817, as a prior for our analysis, together with the assumption of a flat ${\Lambda}$CDM model and the model-independent constraint on the physical matter density ${\omega}_m$ from Planck, we find $H_0 = 69.1^{+8.7}_{-6.0}$ km/s/Mpc.
|
astrophysics
|
Edge computing offers an additional layer of compute infrastructure closer to the data source before raw data from privacy-sensitive and performance-critical applications is transferred to a cloud data center. Deep Neural Networks (DNNs) are one class of applications that are reported to benefit from collaboratively computing between the edge and the cloud. A DNN is partitioned such that specific layers of the DNN are deployed onto the edge and the cloud to meet performance and privacy objectives. However, there is limited understanding of: (a) whether and how evolving operational conditions (increased CPU and memory utilization at the edge or reduced data transfer rates between the edge and the cloud) affect the performance of already deployed DNNs, and (b) whether a new partition configuration is required to maximize performance. A DNN that adapts to changing operational conditions is referred to as an 'adaptive DNN'. This paper investigates whether there is a case for adaptive DNNs in edge computing by considering four questions: (i) Are DNNs sensitive to operational conditions? (ii) How sensitive are DNNs to operational conditions? (iii) Do individual or a combination of operational conditions equally affect DNNs? (iv) Is DNN partitioning sensitive to hardware architectures on the cloud/edge? The exploration is carried out in the context of 8 pre-trained DNN models, and the results presented are from analyzing nearly 8 million data points. The results highlight that network conditions affect DNN performance more than CPU or memory related operational conditions. Repartitioning is noted to provide a performance gain in a number of cases, but no specific trend was noted relating the gain to the underlying hardware architecture. Nonetheless, the need for adaptive DNNs is confirmed.
|
computer science
|
This study develops an online predictive optimization framework for dynamically operating a transit service in an area of crowd movements. The proposed framework integrates demand prediction and supply optimization to periodically redesign the service routes based on recently observed demand. To predict demand for the service, we use Quantile Regression to estimate the marginal distribution of movement counts between each pair of serviced locations. The framework then combines these marginals into a joint demand distribution by constructing a Gaussian copula, which captures the structure of correlation between the marginals. For supply optimization, we devise a linear programming model, which simultaneously determines the route structure and the service frequency according to the predicted demand. Importantly, our framework both preserves the uncertainty structure of future demand and leverages this for robust route optimization, while keeping both components decoupled. We evaluate our framework using a real-world case study of autonomous mobility in a university campus in Denmark. The results show that our framework often obtains the ground truth optimal solution, and can outperform conventional methods for route optimization, which do not leverage full predictive distributions.
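As a rough illustration of the prediction step, the sketch below combines given marginal quantile functions (stand-ins for the quantile-regression estimates) into joint demand samples through a Gaussian copula. All names and the toy marginals are hypothetical; this is not the paper's pipeline or its route-optimization component.

```python
import numpy as np
from scipy.stats import norm

def sample_joint_demand(marginal_quantile_fns, corr, n_samples, rng):
    """Draw joint demand samples from given marginals tied together by a Gaussian copula.

    marginal_quantile_fns: list of callables q_i(u) mapping uniforms in (0, 1) to movement
                           counts for each origin-destination pair (e.g. inverse CDFs
                           estimated by quantile regression).
    corr:                  correlation matrix of the Gaussian copula.
    A minimal sketch with hypothetical inputs.
    """
    d = len(marginal_quantile_fns)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_samples, d)) @ L.T   # correlated standard Gaussians
    u = norm.cdf(z)                                 # copula step: dependent uniforms
    return np.column_stack([marginal_quantile_fns[j](u[:, j]) for j in range(d)])

# usage with toy empirical marginals for two origin-destination pairs
rng = np.random.default_rng(1)
hist_a = rng.poisson(4.0, size=500)      # stand-ins for historical movement counts
hist_b = rng.poisson(9.0, size=500)
q_a = lambda u: np.quantile(hist_a, u)   # empirical inverse CDFs as simple marginals
q_b = lambda u: np.quantile(hist_b, u)
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
demand = sample_joint_demand([q_a, q_b], corr, n_samples=1000, rng=rng)
```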
|
statistics
|
The new technologies emerging in the energy sector pose new requirements for both the regulation and operation of the electricity grid. Revised tariff structures and the introduction of local markets are two approaches that could tackle the issues resulting from the increasing number of active end-users. However, a smooth transition from the traditional schemes is critical, thus creating the need for an architecture that can be implemented under the current circumstances. This paper proposes a local market concept and a corresponding dynamic tariff system, which can be operated in parallel with the current retail market. The participants of the market can trade energy peer-to-peer via a platform that allocates proper network charges to all transactions. The calculated tariffs consider the physical effect of the transactions on the grid in terms of nodal voltage deviations, branch current flows, and overall system losses. The proposed method is tested on the IEEE European LV test feeder through market simulations. The results imply that with proper tuning of the DNUT (Dynamic Network Usage Tariff) components, the end-users can realize a surplus, while the security of network operation is also ensured.
|
electrical engineering and systems science
|
We present the first theoretical study of medium modifications of the global geometrical pattern, i.e., the transverse sphericity ($S_{\perp}$) distribution, of jet events with parton energy loss in relativistic heavy-ion collisions. In our investigation, POWHEG+PYTHIA is employed to make an accurate description of transverse sphericity in the p+p baseline, which combines the next-to-leading order (NLO) pQCD calculations with the matched parton shower (PS). The Linear Boltzmann Transport (LBT) model of parton energy loss is implemented to simulate the in-medium evolution of jets. We calculate the event-normalized transverse sphericity distribution in central Pb+Pb collisions at the LHC, and give its medium modifications. An enhancement of the transverse sphericity distribution in the small $S_{\perp}$ region but a suppression in the large $S_{\perp}$ region are observed in A+A collisions as compared to their p+p references, which indicates that, overall, the geometry of jet events in Pb+Pb becomes more pencil-like. We demonstrate that for events with 2 jets in the final state of heavy-ion collisions, jet quenching makes the geometry more sphere-like through medium-induced gluon radiation. However, for events with $\ge 3$~jets, parton energy loss in the QCD medium makes the events more pencil-like due to jet number reduction, where less energetic jets may lose their energies and then fall below the jet selection kinematic cut. These two effects offset each other and in the end result in more jetty events in heavy-ion collisions relative to p+p.
|
high energy physics phenomenology
|
In the present work, an embedded PI controller is designed for speed regulation of a DC servomotor over a wireless network. The embedded controller integrates a PI controller with a proposed time-delay estimator and an adaptive digital Smith predictor for real-time operation, which is the novelty of the work. The real-time, or rather online, operation of the developed embedded controller over the wireless network is possible mainly due to the contributions of a new empirical formula for time-delay estimation, a viable approximation of the time delay, and a discretization scheme for developing the proposed adaptive Smith predictor. The proposed embedded design validates that the deterioration of control performance due to random network-induced delay is mitigated in real time. The embedded controller is designed and tested for a short range of 100 meters, and the wireless technology chosen is suitable for biomedical applications.
|
electrical engineering and systems science
|
We study a wireless ad-hoc sensor network (WASN) where $N$ sensors gather data from the surrounding environment and transmit their sensed information to $M$ fusion centers (FCs) via multi-hop wireless communications. This node deployment problem is formulated as an optimization problem to make a trade-off between the sensing uncertainty and energy consumption of the network. Our primary goal is to find an optimal deployment of sensors and FCs to minimize a Lagrange combination of the sensing uncertainty and energy consumption. To support arbitrary routing protocols in WASNs, the routing-dependent necessary conditions for the optimal deployment are explored. Based on these necessary conditions, we propose a routing-aware Lloyd algorithm to optimize node deployment. Simulation results show that, on average, the proposed algorithm outperforms the existing deployment algorithms.
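For context, the sketch below is the textbook Lloyd iteration (nearest-node assignment alternated with centroid updates) that routing-aware variants build on; it ignores energy and routing terms entirely and uses hypothetical inputs, so it is not the algorithm proposed in the abstract.

```python
import numpy as np

def lloyd_deployment(points, n_nodes, n_iters=100, seed=0):
    """Classical Lloyd iteration for node placement.

    points:  (m, 2) array of sampled environment locations (a proxy for the sensing density).
    Returns (n_nodes, 2) node positions minimizing (locally) the expected squared distance
    to the nearest node. Baseline sketch only; no energy or routing awareness.
    """
    rng = np.random.default_rng(seed)
    nodes = points[rng.choice(len(points), n_nodes, replace=False)].copy()
    for _ in range(n_iters):
        d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)  # squared distances
        owner = d2.argmin(axis=1)                                     # nearest-node partition
        for k in range(n_nodes):
            mask = owner == k
            if mask.any():
                nodes[k] = points[mask].mean(axis=0)                  # move node to cell centroid
    return nodes
```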
|
computer science
|
We study the macroscopics of 2d $\mathcal{N}=(0,4)$ SCFTs arising from F-theory constructions. The class of 2d SCFTs we consider live on black strings which are obtained by wrapping D3-branes on a curve in the base of a possibly singular elliptically fibered Calabi-Yau threefold. In addition, we allow the D3-branes to probe ALE or ALF spaces transversely. We compute anomaly coefficients of these SCFTs by determining Chern-Simons terms in the 3d action resulting from the reduction of 6d $\mathcal{N}=(1,0)$ supergravity on the compact space surrounding the black string. Essential contributions to these coefficients are from one-loop induced Chern-Simons terms arising from integrating out massive Kaluza-Klein modes.
|
high energy physics theory
|
Recently, deep learning-based positioning systems have gained attention due to their higher performance relative to traditional methods. However, obtaining the expected performance of deep learning-based systems requires large amounts of data to train the model. Obtaining this data is usually a tedious process, which hinders the utilization of such deep learning approaches. In this paper, we introduce a number of techniques for addressing the data collection problem for deep learning-based cellular localization systems. The basic idea is to generate synthetic data that reflects the typical pattern of the wireless data as observed from a small collected dataset. Evaluation of the proposed data augmentation techniques using different Android phones in a cellular localization case study shows that we can enhance the performance of the localization systems in both indoor and outdoor scenarios by 157% and 50.5%, respectively. This highlights the promise of the proposed techniques for enabling deep learning-based localization systems.
|
electrical engineering and systems science
|
Knowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated to predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground truth negatives show our contribution leads to well-calibrated models when compared to the gold standard of using negatives. All calibration methods yield significantly better results than the uncalibrated models. We show isotonic regression offers the best performance overall, though not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.
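A minimal sketch of the two calibration tools named above, using scikit-learn on synthetic scores and labels (hypothetical data; in the paper's setting the negatives would themselves be synthetic, and calibration would be fit on held-out data rather than in-sample as here):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

# raw_scores: uncalibrated triple scores in [0, 1]; labels: 1 = true triple, 0 = corrupted.
# Synthetic, deliberately miscalibrated data for illustration only.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=2000)
raw_scores = np.clip(0.3 * labels + 0.4 * rng.random(2000), 0.0, 1.0)

# Platt scaling: a one-dimensional logistic regression on the raw score
platt = LogisticRegression().fit(raw_scores.reshape(-1, 1), labels)
platt_prob = platt.predict_proba(raw_scores.reshape(-1, 1))[:, 1]

# Isotonic regression: a monotone, piecewise-constant recalibration map
iso = IsotonicRegression(out_of_bounds="clip").fit(raw_scores, labels)
iso_prob = iso.predict(raw_scores)
```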
|
computer science
|
We describe the generation of sequences of random bits from the parity of photon counts produced by polarization measurements on a polarization-entangled state. The resulting sequences are bias free, pass the applicable tests in the NIST battery of statistical randomness tests, and are shown to be Borel normal, without the need for experimental calibration stages or postprocessing of the output. Because the photon counts are produced in the course of a measurement of the violation of the Clauser-Horne-Shimony-Holt inequality, we are able to concurrently verify the nonclassical nature of the photon statistics and estimate a lower bound on the min-entropy of the bit-generating source. The rate of bit production in our experiment is around 13 bits/s.
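A toy numerical sketch of the parity-extraction step is given below, with Poissonian counts standing in for the measured photon counts and a simple monobit check rather than the NIST battery or a Borel-normality test; names and parameters are illustrative only.

```python
import numpy as np

def bits_from_counts(photon_counts):
    """Map photon counts to bits via their parity (least significant bit)."""
    return np.asarray(photon_counts) & 1

# toy usage: Poissonian counts stand in for the measured counts
rng = np.random.default_rng(7)
bits = bits_from_counts(rng.poisson(20.0, size=100_000))

# simple monobit sanity check on the bias of the resulting bit stream
bias = abs(bits.mean() - 0.5)
print(f"empirical bias from 1/2: {bias:.4f}")
```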
|
quantum physics
|
We consider a system of nonlinear PDEs modeling nematic electrolytes, and construct a dissipative solution with the help of its implementable, structure-inheriting space-time discretization. Computational studies are performed to study the mutual effects of electric, elastic, and viscous effects onto the molecules in a nematic electrolyte.
|
mathematics
|
We have constructed a type of particle very different from any presently known. It is a boson and resides in the $(1/2,0)\oplus(0,1/2)$ representation space. The associated local field has mass dimension three halves. These new bosons can only be created or destroyed in pairs. When paired with a fermion, the total zero-point energy of the boson-fermion system identically vanishes. This communication presents the new quantum field theoretic construct.
|
high energy physics theory
|
We consider the problem of learning high-level controls over the global structure of generated sequences, particularly in the context of symbolic music generation with complex language models. In this work, we present the Transformer autoencoder, which aggregates encodings of the input data across time to obtain a global representation of style from a given performance. We show it is possible to combine this global representation with other temporally distributed embeddings, enabling improved control over the separate aspects of performance style and melody. Empirically, we demonstrate the effectiveness of our method on various music generation tasks on the MAESTRO dataset and a YouTube dataset with 10,000+ hours of piano performances, where we achieve improvements in terms of log-likelihood and mean listening scores as compared to baselines.
|
computer science
|
We consider a simple Preferential Attachment graph process, which begins with a finite graph, and in which a new $(t+1)$st vertex is added at each subsequent time step $t$, and connected to each previous vertex $u \leq t$ with probability $\frac{d_u(t)}{t}$ where $d_u(t)$ is the degree of $u$ at time $t$. We analyse the graph obtained as the infinite limit of this process, and show that so long as the initial finite graph is neither edgeless nor complete, with probability 1 the outcome will be a copy of the Rado graph augmented with a finite number of either isolated or universal vertices.
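A short finite-horizon simulation of this process can be written directly from the definition; the sketch below uses NumPy with hypothetical names and an arbitrary seed, whereas the result stated above concerns the infinite limit of the process.

```python
import numpy as np

def grow(adj, n_steps, rng):
    """Simulate the preferential-attachment process described above.

    adj: initial adjacency matrix (symmetric 0/1, neither edgeless nor complete).
    At each step, with t the current number of vertices, a new vertex joins and is
    connected to each existing vertex u independently with probability deg(u, t) / t.
    Finite-horizon sketch only.
    """
    A = np.array(adj, dtype=int)
    for _ in range(n_steps):
        t = A.shape[0]
        deg = A.sum(axis=1)
        new_edges = (rng.random(t) < deg / t).astype(int)
        A = np.block([[A, new_edges[:, None]],
                      [new_edges[None, :], np.zeros((1, 1), int)]])
    return A

rng = np.random.default_rng(3)
A0 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])   # a single edge plus an isolated vertex
A = grow(A0, n_steps=500, rng=rng)
```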
|
mathematics
|
A challenge in the Gauss sums factorization scheme is the presence of ghost factors - non-factors that behave similarly to actual factors of an integer - which might lead to the misidentification of non-factors as factors or vice versa, especially in the presence of noise. We investigate Type II ghost factors, which are the class of ghost factors that cannot be suppressed with techniques previously laid out in the literature. The presence of Type II ghost factors and the coherence time of the qubit set an upper limit for the total experiment time, and hence the largest factorizable number with this scheme. Discernability is a figure of merit introduced to characterize this behavior. We introduce preprocessing as a strategy to increase the discernability of a system, and demonstrate the technique with a transmon qubit. This can bring the total experiment time of the system closer to its decoherence limit, and increase the largest factorizable number.
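For reference, a purely numerical sketch of the truncated Gauss-sum signal underlying such schemes is shown below: true factors give |A| = 1, non-factors ideally cancel towards 0, and ghost factors are the non-factors whose signal stays close to 1. The form of the sum and the threshold are the commonly used ones from the Gauss-sum factorization literature, not details taken from this paper, and the example integer is arbitrary.

```python
import numpy as np

def gauss_sum_signal(N, candidates, M):
    """|A_N^(M)(l)| for each trial factor l, with
    A_N^(M)(l) = (1/(M+1)) * sum_{m=0}^{M} exp(2*pi*i * m^2 * N / l).
    A numerical sketch of the signal only, not the qubit experiment."""
    m = np.arange(M + 1)
    return {l: abs(np.exp(2j * np.pi * (m ** 2) * N / l).mean()) for l in candidates}

N = 263193  # = 3 * 7 * 83 * 151, an arbitrary example
signal = gauss_sum_signal(N, candidates=range(2, 200), M=10)
flagged = [l for l, s in signal.items() if s > 0.9]  # flags true factors and ghost factors alike
```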
|
quantum physics
|
The perplexing mystery of what maintains the solar coronal temperature at about a million K, while the visible disc of the Sun is only at 5800 K, has been a long-standing problem in solar physics. A recent study by Mondal (2020) has provided the first evidence for the presence of numerous ubiquitous impulsive emissions at low radio frequencies from quiet Sun regions, which could hold the key to solving this mystery. These features occur at rates of about five hundred events per minute, and their strength is only a few percent of the background steady emission. One of the next steps for exploring the feasibility of this resolution to the coronal heating problem is to understand the morphology of these emissions. To meet this objective we have developed a technique based on an unsupervised machine learning approach for characterising the morphology of these impulsive emissions. Here we present the results of applying this technique to over 8000 images spanning 70 minutes of data, in which about 34,500 features could robustly be characterised as 2D elliptical Gaussians.
|
astrophysics
|
In blind quantum computation (BQC), a client delegates her quantum computation to a server with universal quantum computers that learns nothing about the client's private information. In the measurement-based BQC model, entangled states are generally used to realize quantum computing. However, generating a large-scale entangled state in experiment remains a challenging issue. In the circuit-based BQC model, single-qubit gates can be realized precisely, but entangled gates succeed only probabilistically, and realizing entangled gates deterministically remains a challenge in some systems. To solve the above two problems, we propose the first hybrid universal BQC protocol based on measurements and circuits, where the client prepares single-qubit states and the server performs universal quantum computing. We analyze and prove the correctness, blindness, and verifiability of the proposed protocol.
|
quantum physics
|
Accurate channel state information (CSI) feedback plays a vital role in improving the performance gain of massive multiple-input multiple-output (m-MIMO) systems, where the dilemma is excessive CSI overhead versus limited feedback bandwidth. By considering the noisy CSI due to imperfect channel estimation, we propose a novel deep neural network architecture, namely AnciNet, to conduct the CSI feedback with limited bandwidth. AnciNet extracts noise-free features from the noisy CSI samples to achieve effective CSI compression for the feedback. Experimental results verify that the proposed AnciNet approach outperforms the existing techniques under various conditions.
|
electrical engineering and systems science
|
The recent experimental observation of dissipation-induced structural instability provides new opportunities for exploring the competition mechanism between stationary and nonstationary dynamics [Science 366, 1496 (2019)]. In that study, two orthogonal quadratures of the cavity field are coupled to two different Zeeman states of a spinor Bose-Einstein condensate (BEC). Here we propose a novel scheme to couple two density-wave degrees of freedom of a BEC to two quadratures of the cavity field. In drastic contrast to previous studies, the light-matter quadrature coupling in our model is endowed with a tunable coupling angle. Apart from the uniform and self-organized phases, we unravel a dynamically unstable state induced by the cavity dissipation. Interestingly, the dissipation defines a particular coupling angle, across which the instabilities disappear. Moreover, at this critical coupling angle, one of the two atomic density waves can be independently excited without affecting the other. It is also found that our system can be mapped into a reduced three-level model under the commonly used low-excitation-mode approximation. However, the effectiveness of this approximation is shown to be broken by the dissipative nature of the system for some special system parameters, hinting that the low-excitation-mode approximation is insufficient in capturing some dissipation-sensitive physics. Our work enriches the quantum simulation toolbox in the cavity-quantum-electrodynamics system and broadens the frontiers of light-matter interaction.
|
quantum physics
|
Parkinson's disease (PD) is a progressive degenerative disorder of the central nervous system characterized by motor and non-motor symptoms. As the disease progresses, patients alternate between periods in which motor symptoms are mitigated due to medication intake (ON state) and periods with motor complications (OFF state). The time that patients spend in the OFF condition is currently the main parameter employed to assess pharmacological interventions and to evaluate the efficacy of different active principles. In this work, we present a system that combines automatic speech processing and deep learning techniques to classify the medication state of PD patients by leveraging personal speech-based bio-markers. We devise a speaker-dependent approach and investigate the relevance of different acoustic-prosodic feature sets. Results show an accuracy of 90.54% in a test task with mixed speech and an accuracy of 95.27% in a semi-spontaneous speech task. Overall, the experimental assessment shows the potential of this approach towards the development of reliable, remote daily monitoring and scheduling of medication intake of PD patients.
|
electrical engineering and systems science
|
Deep Learning has gained immense success in pushing today's artificial intelligence forward. To solve the challenge of limited labeled data in the supervised learning world, unsupervised learning was proposed years ago, but its low accuracy hinders realistic applications. Generative adversarial networks (GANs) have emerged as an unsupervised learning approach with promising accuracy and are under extensive study. However, the execution of GANs is extremely memory and computation intensive and results in ultra-low speed and high power consumption. In this work, we propose a holistic solution for fast and energy-efficient GAN computation through a memristor-based neuromorphic system. First, we exploit a hardware and software co-design approach to map the computation blocks in GANs efficiently. We also propose an efficient data flow for optimal parallelism training and testing, depending on the computation correlations between different computing blocks. To compute the unique and complex loss of GANs, we develop a diff-block with optimized accuracy and performance. The experimental results on big data show that our design achieves 2.8x speedup and 6.1x energy saving compared with the traditional GPU accelerator, as well as 5.5x speedup and 1.4x energy saving compared with the previous FPGA-based accelerator.
|
computer science
|
We investigate the potential for quantum computers to contribute to the analysis of the results of dark matter direct detection experiments. Careful experimental design could distinguish between the several distinct couplings, as allowed by effective field theory, between dark matter and ordinary matter, and quantum computation and quantum simulation has the potential to aid in the analysis of the experiments. To gain insight into the uncertainties in the relevant quantum calculations, we develop circuits to implement variational quantum eigensolver (VQE) algorithms for the Lipkin-Meshkov-Glick model, a highly symmetric model for which exact solutions are known, and implement the VQE for a 2-particle system on the IBM Quantum Experience. We identify initialization and two-qubit gates as the largest sources of error, pointing to the hardware improvements needed so that quantum computers can play an important role in the analysis of dark matter direct detection experiments.
|
high energy physics theory
|
We review the current status of model building for light dark matter in theories of QCD-like gauge groups in the hidden sector. The focus is upon the dark mesons with the $SU(3)_V$ flavor symmetry in scenarios of Strongly Interacting Massive Particles. We show the production mechanism and the kinetic equilibrium condition for dark mesons and discuss a unitarization of dark chiral perturbation theory with vector mesons in the scheme of hidden gauge symmetry.
|
high energy physics phenomenology
|
We provide a quantitative theory of discrimination between objects with the same color temperature but different angular spectra by intensity interferometry. The two-point correlation function of the black-body image with an extended angular spectrum differs significantly from the correlation function of a black body with a narrow angular spectrum.
|
physics
|
An analysis of accelerated kaon decays based on the Unruh effect shows a slight decrease in a CP-violation parameter for very high accelerations, as a consequence of the previously known increase in decay rates for non-inertial systems. Its consequences for the understanding of the relation between thermal and non-inertial phenomena are briefly discussed.
|
high energy physics phenomenology
|
The rapidly growing demand for electricity has led to the integration of renewable energy sources into the power system. Due to the intermittent nature of renewables, this also brings challenges for the research community during the planning and operation stages of the power system. Therefore, it is a primary necessity for the community to develop accurate forecasting techniques to address the intermittency problem. In this report, a forecasting technique is proposed based on an ensemble of state-of-the-art forecasting techniques. For performance comparison among the techniques, GEFCom2014 meteorological data are used to predict the photovoltaic power, and the obtained results are included in this report.
|
electrical engineering and systems science
|
Orthogonal time frequency space (OTFS) modulation is a promising candidate for supporting reliable information transmission in high-mobility vehicular networks. In this paper, we consider the employment of the integrated (radar) sensing and communication (ISAC) technique for assisting OTFS transmission in both uplink and downlink vehicular communication systems. Benefiting from the OTFS-ISAC signals, the roadside unit (RSU) is capable of simultaneously transmitting downlink information to the vehicles and estimating the sensing parameters of vehicles, e.g., locations and speeds, based on the reflected echoes. Then, relying on the estimated kinematic parameters of the vehicles, the RSU can construct the topology of the vehicular network, which enables the prediction of the vehicle states at the following time instant. Consequently, the RSU can effectively formulate the transmit downlink beamformers according to the predicted parameters to counteract the channel adversity such that the vehicles can directly detect the information without the need of performing channel estimation. As for the uplink transmission, the RSU can infer the delays and Dopplers associated with different channel paths based on the aforementioned dynamic topology of the vehicular network. Thus, inserting a guard space, as in conventional methods, is not needed for uplink channel estimation, which removes the required training overhead. Finally, an efficient uplink detector is proposed by taking into account the channel estimation uncertainty. Through numerical simulations, we demonstrate the benefits of the proposed ISAC-assisted OTFS transmission scheme.
|
electrical engineering and systems science
|
We study the risk of the minimum-norm linear least squares estimator when the number of parameters $d$ depends on $n$ and $\frac{d}{n} \rightarrow \infty$. We assume that the data has an underlying low-rank structure by restricting ourselves to spike covariance matrices, where a fixed finite number of eigenvalues grow with $n$ and are much larger than the rest of the eigenvalues, which are (asymptotically) of the same order. We show that in this setting the risk of the minimum-norm least squares estimator vanishes compared to the risk of the null estimator. We give asymptotic and non-asymptotic upper bounds for this risk, and also leverage the spike-model assumption to analyze the bias, which leads to tighter bounds than in previous works.
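The regime can be illustrated numerically with a toy spiked design: the sketch below (NumPy; arbitrary toy dimensions and spike strength, not the paper's assumptions or bounds) compares the prediction risk of the minimum-norm interpolator with that of the null estimator.

```python
import numpy as np

# Toy illustration of d >> n with a single spiked eigenvalue growing with n.
rng = np.random.default_rng(0)
n, d = 50, 2000
cov_sqrt = np.ones(d)
cov_sqrt[0] = np.sqrt(n) * 5.0                  # one large eigenvalue (sqrt scale)
X = rng.standard_normal((n, d)) * cov_sqrt      # rows ~ N(0, diag(cov_sqrt**2))
beta = np.zeros(d)
beta[0] = 1.0                                   # signal aligned with the spike
y = X @ beta + rng.standard_normal(n)

beta_mn = np.linalg.lstsq(X, y, rcond=None)[0]  # minimum-norm solution since d > n
risk_mn = np.sum((cov_sqrt * (beta_mn - beta)) ** 2)  # E[(x^T(beta_hat - beta))^2]
risk_null = np.sum((cov_sqrt * beta) ** 2)            # risk of beta_hat = 0
print(risk_mn / risk_null)                      # expected to be well below 1 in this toy regime
```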
|
statistics
|
The fast-growing market of autonomous vehicles, unmanned aerial vehicles, and fleets in general necessitates the design of smart and automatic navigation systems that account for the stochastic latency along different paths in the traffic network. The longstanding shortest path problem in a deterministic network, whose counterpart in a congestion game setting is the Wardrop equilibrium, has been studied extensively, but it is well known that defining and finding an optimal path is challenging in a traffic network with stochastic arc delays. In this work, we propose three classes of risk-averse equilibria for an atomic stochastic congestion game in its general form, where the arc delay distributions are load dependent and not necessarily independent of each other. The three classes are the risk-averse equilibrium (RAE), the mean-variance equilibrium (MVE), and the conditional value at risk level $\alpha$ equilibrium (CVaR$_\alpha$E), whose notions of risk-averse best responses are based on maximizing the probability of taking the shortest path, minimizing a linear combination of mean and variance of path delay, and minimizing the expected delay at a specified risky quantile of the delay distributions, respectively. We prove that for any finite stochastic atomic congestion game, the risk-averse, mean-variance, and CVaR$_\alpha$ equilibria exist. We show that for risk-averse travelers, the Braess paradox may not occur to the extent presented originally, since players do not necessarily travel along the shortest path in expectation but take the uncertainty of travel time into consideration as well. We show through some examples that the price of anarchy can be improved when players are risk-averse and travel according to one of the three classes of risk-averse equilibria rather than the Wardrop equilibrium.
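As a small companion to the CVaR$_\alpha$ notion, the sketch below computes the conditional value at risk of sampled path delays and contrasts a steady path with a heavy-tailed one; the distributions and parameters are arbitrary illustrations, not part of the paper's model.

```python
import numpy as np

def cvar(delays, alpha):
    """Conditional value at risk: the expected delay in the worst (1 - alpha) tail.

    delays: samples of a path's travel time; alpha: confidence level, e.g. 0.95.
    Sketch of the risk functional only, not the equilibrium computation.
    """
    delays = np.asarray(delays, dtype=float)
    var = np.quantile(delays, alpha)       # value at risk (the alpha-quantile)
    return delays[delays >= var].mean()    # mean of the tail beyond the quantile

# a risk-averse traveler may prefer the path with the smaller CVaR even if its mean is larger
rng = np.random.default_rng(2)
path_a = rng.normal(30.0, 2.0, size=10_000)                # steady path, mean 30
path_b = rng.gamma(shape=2.0, scale=14.0, size=10_000)     # mean 28 but heavy right tail
print(cvar(path_a, 0.95), cvar(path_b, 0.95))
```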
|
computer science
|
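A minimal sketch of one ingredient of the CVaR$_\alpha$ best response described in the congestion game abstract above: estimating the conditional value at risk of a sampled path-delay distribution. The delay distributions and the level $\alpha$ are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def cvar(delays, alpha=0.9):
    """Conditional value at risk: mean delay in the worst (1 - alpha) tail."""
    delays = np.sort(np.asarray(delays, dtype=float))
    var = np.quantile(delays, alpha)          # value at risk at level alpha
    tail = delays[delays >= var]
    return tail.mean()

# Example: compare two stochastic paths by expected delay vs. CVaR
rng = np.random.default_rng(1)
path_a = rng.normal(10.0, 1.0, 10_000)        # moderate mean, light tail
path_b = rng.lognormal(2.2, 0.5, 10_000)      # similar mean, heavier right tail

for name, d in [("A", path_a), ("B", path_b)]:
    print(name, round(d.mean(), 2), round(cvar(d, 0.9), 2))
```

A risk-averse traveler of the CVaR type ranks paths by the tail statistic rather than the mean, which is why the equilibria in the abstract can differ from the Wardrop equilibrium.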
A square matrix $A$ is completely positive if $A=BB^T$, where $B$ is a (not necessarily square) nonnegative matrix. In general, a completely positive matrix may have many, even infinitely many, such CP factorizations. But in some cases a unique CP factorization exists. We prove a simple necessary and sufficient condition for a completely positive matrix whose graph is triangle-free to have a unique CP factorization. This implies uniqueness of the CP factorization for some other matrices on the boundary of the cone $\mathcal{CP}_n$ of $n\times n$ completely positive matrices. We also describe the minimal face of $\mathcal{CP}_n$ containing a given completely positive matrix $A$. If $A$ has a unique CP factorization, this face is polyhedral.
|
mathematics
|
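To make the definition in the abstract above concrete, here is a small hedged sketch that builds a completely positive matrix from a nonnegative factor and checks the factorization numerically; the example matrix is an arbitrary illustration, not one from the paper.

```python
import numpy as np

# A simple completely positive matrix and one of its CP factorizations.
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])          # entrywise nonnegative, not square
A = B @ B.T                              # A = [[5, 2], [2, 10]]

def is_cp_factorization(A, B, tol=1e-10):
    """Check that B is entrywise nonnegative and that A = B B^T."""
    return bool(np.all(B >= -tol) and np.allclose(A, B @ B.T, atol=tol))

print(A)
print(is_cp_factorization(A, B))         # True

# Whether this nonnegative factor is the *unique* one (up to trivial symmetries)
# is exactly the question the abstract settles for matrices with triangle-free graphs.
```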
Braiding defects in topological stabiliser codes can be used to fault-tolerantly implement logical operations. Twists are defects corresponding to the end-points of domain walls and are associated with symmetries of the anyon model of the code. We consider twists in multiple copies of the 2d surface code and identify necessary and sufficient conditions for considering these twists as anyons: namely that they must be self-inverse and that all charges which can be localised by the twist must be invariant under its associated symmetry. If both of these conditions are satisfied the twist and its set of localisable anyonic charges reproduce the behaviour of an anyonic model belonging to a hierarchy which generalises the Ising anyons. We show that the braiding of these twists results in either (tensor products of) the S gate or (tensor products of) the CZ gate. We also show that for any number of copies of the 2d surface code the application of H gates within a copy and CNOT gates between copies is sufficient to generate all possible twists.
|
quantum physics
|
Although convolutional neural networks (CNN) achieve high diagnostic accuracy for detecting Alzheimer's disease (AD) dementia based on magnetic resonance imaging (MRI) scans, they are not yet applied in clinical routine. One important reason for this is a lack of model comprehensibility. Recently developed visualization methods for deriving CNN relevance maps may help to fill this gap. We investigated whether models with higher accuracy also rely more on discriminative brain regions predefined by prior knowledge. We trained a CNN for the detection of AD in N=663 T1-weighted MRI scans of patients with dementia and amnestic mild cognitive impairment (MCI) and verified the accuracy of the models via cross-validation and in three independent samples including N=1655 cases. We evaluated the association of relevance scores and hippocampus volume to validate the clinical utility of this approach. To improve model comprehensibility, we implemented an interactive visualization of 3D CNN relevance maps. Across three independent datasets, group separation showed high accuracy for AD dementia vs. controls (AUC$\geq$0.92) and moderate accuracy for MCI vs. controls (AUC$\approx$0.75). Relevance maps indicated that hippocampal atrophy was considered as the most informative factor for AD detection, with additional contributions from atrophy in other cortical and subcortical regions. Relevance scores within the hippocampus were highly correlated with hippocampal volumes (Pearson's r$\approx$-0.81). The relevance maps highlighted atrophy in regions that we had hypothesized a priori. This strengthens the comprehensibility of the CNN models, which were trained in a purely data-driven manner based on the scans and diagnosis labels. The high hippocampus relevance scores and high performance achieved in independent samples support the validity of the CNN models in the detection of AD-related MRI abnormalities.
|
electrical engineering and systems science
|
We investigate the maximum value of the spin-independent cross section ($\sigma_\text{SI}$) in a dark matter (DM) model called the two-Higgs doublet model + a (THDM+a). This model can explain the measured value of the DM energy density by the freeze-out mechanism. Also, $\sigma_\text{SI}$ is suppressed by the momentum transfer at the tree level, and loop diagrams give the leading contribution to it. The model prediction of $\sigma_\text{SI}$ depends strongly on the values of $c_1$ and $c_2$, the quartic couplings between the gauge singlet CP-odd state ($a_0$) and the Higgs doublet fields ($H_1$ and $H_2$), $c_1 a_0^2 H_1^\dagger H_1$ and $c_2 a_0^2 H_2^\dagger H_2$. We discuss the upper and lower bounds on $c_1$ and $c_2$ by studying the stability of the electroweak vacuum, the condition for the potential to be bounded from below, and perturbative unitarity. We find that the condition for the stability of the electroweak vacuum gives upper bounds on $c_1$ and $c_2$. The condition for the potential to be bounded from below gives lower bounds on $c_1$ and $c_2$. It also constrains the mixing angle between the two CP-odd states. The perturbative unitarity bound gives the upper bound on the Yukawa coupling between the dark matter and $a_0$ and on the quartic coupling of $a_0$. Under these theoretical constraints, we find that the maximum value of $\sigma_\text{SI}$ is $\sim 5\times 10^{-47}$ cm$^2$ for $m_A = 600$ GeV, and the LZ and XENONnT experiments can see the DM signal predicted in this model in the near future.
|
high energy physics phenomenology
|
Many predictions are probabilistic in nature; for example, a prediction could be for precipitation tomorrow, but with only a 30 percent chance. Given both the predictions and the actual outcomes, "reliability diagrams" (also known as "calibration plots") help detect and diagnose statistically significant discrepancies between the predictions and the outcomes. The canonical reliability diagrams are based on histogramming the observed and expected values of the predictions; several variants of the standard reliability diagrams propose to replace the hard histogram binning with soft kernel density estimation using smooth convolutional kernels of widths similar to the widths of the bins. In all cases, an important question naturally arises: which widths are best (or are multiple plots with different widths better)? Rather than answering this question, plots of the cumulative differences between the observed and expected values largely avoid the question, by displaying miscalibration directly as the slopes of secant lines for the graphs. Slope is easy to perceive with quantitative precision even when the constant offsets of the secant lines are irrelevant. There is no need to bin or perform kernel density estimation with a somewhat arbitrary kernel.
|
statistics
|
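A hedged sketch of the cumulative-differences idea summarized in the reliability diagram abstract above: sorting by predicted probability and plotting the cumulative sum of (observed minus expected) displays miscalibration as the slope of the curve, with no bin width or kernel width to choose. The synthetic predictions and the mild miscalibration below are assumptions for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 5_000
p_pred = rng.uniform(0.05, 0.95, n)                      # predicted probabilities
p_true = np.clip(p_pred + 0.1 * (p_pred - 0.5), 0, 1)    # mildly miscalibrated truth
y = rng.binomial(1, p_true)                              # observed 0/1 outcomes

order = np.argsort(p_pred)                               # sort by predicted probability
cum_diff = np.cumsum(y[order] - p_pred[order]) / n       # cumulative (observed - expected)
frac = np.arange(1, n + 1) / n

plt.plot(frac, cum_diff)
plt.xlabel("cumulative fraction of predictions (sorted by predicted probability)")
plt.ylabel("cumulative (observed - expected) / n")
plt.title("Miscalibration appears as slope of secant lines")
plt.show()
```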
We study the problem of estimating the parameters (i.e., infection rate and recovery rate) governing the spread of epidemics in networks. Such parameters are typically estimated by measuring various characteristics (such as the number of infected and recovered individuals) of the infected populations over time. However, these measurements also incur certain costs, depending on the population being tested and the times at which the tests are administered. We thus formulate the epidemic parameter estimation problem as an optimization problem, where the goal is to either minimize the total cost spent on collecting measurements, or to optimize the parameter estimates while remaining within a measurement budget. We show that these problems are NP-hard to solve in general, and then propose approximation algorithms with performance guarantees. We validate our algorithms using numerical examples.
|
electrical engineering and systems science
|
When spin-polarised electrons flow through a magnetic texture, a spin-transfer torque is generated. We examine the effect of this torque on skyrmions and skyrmion bags, skyrmionic structures of arbitrary integer topological degree, in thin ferromagnetic films. Using micromagnetic simulations and analysis based on the well-known Thiele equation, we explore the potential for sorting or binning skyrmions of varying degrees mechanically. We investigate the applicability of the Thiele equation to problems of this nature and derive a theory of skyrmion deflection ordered by topological degree.
|
condensed matter
|
Bell's theorem implies that any completion of quantum mechanics which uses hidden variables (that is, preexisting values of all observables) must be nonlocal in the Einstein sense. This customarily indicates that knowledge of the hidden variables would permit superluminal communication. Such superluminal signaling, akin to the existence of a preferred reference frame, is to be expected. However, here we provide a protocol that allows an observer with knowledge of the hidden variables to communicate with her own causal past, without superluminal signaling. That is, such knowledge would contradict causality, irrespective of the validity of relativity theory. Among the ways we propose for bypassing the paradox is the possibility of hidden variables that change their values even when the state does not, which means that signaling backwards in time is prohibited in Bohmian mechanics.
|
quantum physics
|
As the second stage of the CEPC-SPPC project, the SPPC (Super Proton-Proton Collider) aims at exploring new physics beyond the Standard Model. The key design goal for the SPPC accelerator complex is to reach 75 TeV in center-of-mass energy with a circumference of 100 km for the collider and an injector chain of four accelerators in cascade to support the collider. As an important part of the SPPC conceptual study, the longitudinal beam dynamics was studied systematically, including the dynamics in the collider and in its complex injector chain. First, the bunch filling scheme of the SPPC complex was designed on the basis of various constraints, such as the technical challenges of the kicker magnets and the limited extraction energy per injection. Next, the study of the longitudinal dynamics in the collider focused on the RF scheme to meet the requirements for luminosity and mitigate relevant instabilities. A higher-harmonic RF system (800 MHz) was employed together with the basic RF system (400 MHz) to form a dual-harmonic RF system, which mitigates collective instabilities and increases the luminosity by producing shorter bunches. In addition, the longitudinal matching between the bunches in the different accelerator stages was studied, with special attention to the space charge effects and the beam loading effect in the two lower energy rings (p-RCS and MSS), which results in an optimization of the RF schemes. A set of self-consistent beam and RF parameters for the SPPC complex was obtained. The collider and the three proton synchrotrons of the injector chain have unprecedented features, thus this study demonstrates what a future proton-proton collider complex would look like.
|
physics
|
The weak-value (WV) measurement proposed by Aharonov, Albert and Vaidman (AAV) has attracted a great deal of interest in connection with quantum metrology. In this work, we extend the analysis beyond the AAV limit and obtain a few main results. (i) We obtain a non-perturbative result for the signal-to-noise ratio (SNR). In contrast to AAV's prediction, we find that the SNR asymptotically gets worse when the AAV WV $A_w$ becomes large, i.e., in the case $g|A_w|^2 \gg 1$, where $g$ is the measurement strength. (ii) As $g$ increases (while remaining small), we find that the SNR is comparable to the result in the AAV limit, and both can reach -- indeed the former can slightly exceed -- the SNR of the standard measurement. However, with a further increase of $g$, the WV technique becomes less efficient than the standard measurement, despite the increased postselection probability. (iii) We find that the Fisher information characterizes the estimation precision qualitatively as well as the SNR does, yet their difference becomes more prominent as $g$ increases. (iv) We derive analytic expressions for the SNR in the presence of technical noise and illustrate the particular advantage of the imaginary WV measurement. The non-perturbative result for the SNR reveals a favorable range of the noise strength and allows an optimal determination.
|
quantum physics
|
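For reference, the textbook AAV weak value that the abstract above denotes $A_w$ is recalled below; this is only the standard definition, not the paper's non-perturbative SNR result. A nearly orthogonal pre- and postselection makes $|A_w|$ large, which is the regime $g|A_w|^2 \gg 1$ examined in the abstract.

```latex
A_w \;=\; \frac{\langle \psi_f \,|\, \hat{A} \,|\, \psi_i \rangle}{\langle \psi_f \,|\, \psi_i \rangle}
```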
We present compact analytic formulae for all one-loop amplitudes representing the production of a Higgs boson in association with two jets, mediated by a colour triplet scalar particle. Many of the integral coefficients present for scalar mediators are identical to the case when a massive fermion circulates in the loop, reflecting a close relationship between the two theories. The calculation is used to study Higgs boson production in association with two jets in a simplified supersymmetry (SUSY) scenario in which the dominant additional contributions arise from loops of top squarks. The results presented here facilitate an indirect search for top squarks in this channel, by a precision measurement of the corresponding cross-section. However, we find that the potential for improved discrimination between the SM and SUSY cases suggested by the pattern of results in the 1- and 2-jet samples is unlikely to be realized due to the loss in statistical power compared to an inclusive analysis.
|
high energy physics phenomenology
|
We study the learning properties of nonparametric ridge-less least squares. In particular, we consider the common case of estimators defined by scale-dependent kernels, and focus on the role of the scale. These estimators interpolate the data, and the scale can be shown to control their stability through the condition number. Our analysis shows that there are different regimes depending on the interplay between the sample size, the data dimension, and the smoothness of the problem. Indeed, when the sample size is less than exponential in the data dimension, the scale can be chosen so that the learning error decreases. As the sample size becomes larger, the overall error stops decreasing, but interestingly the scale can be chosen in such a way that the variance due to noise remains bounded. Our analysis combines probabilistic results with a number of analytic techniques from interpolation theory.
|
statistics
|
We investigate the self-adjointness of the two-dimensional Dirac operator $D$, with quantum-dot and Lorentz-scalar $\delta$-shell boundary conditions, on piecewise $C^2$ domains with finitely many corners. For both models, we prove the existence of a unique self-adjoint realization whose domain is included in the Sobolev space $H^{1/2}$, the formal form domain of the free Dirac operator. The main part of our paper consists of a description of the domain of $D^*$ in terms of the domain of $D$ and the set of harmonic functions that verify some mixed boundary conditions. Then, we give a detailed study of the problem on an infinite sector, where explicit computations can be made: we find the self-adjoint extensions for this case. The result is then translated to general domains by a coordinate transformation.
|
mathematics
|
Neural Processes (NPs; Garnelo et al., 2018a,b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes. We provide a rigorous analysis of the standard maximum-likelihood objective used to train conditional NPs. Moreover, we propose a new member to the Neural Process family called the Gaussian Neural Process (GNP), which models predictive correlations, incorporates translation equivariance, provides universal approximation guarantees, and demonstrates encouraging performance.
|
statistics
|
We construct a generalized linear sigma model as an effective field theory (EFT) to describe nearly conformal gauge theories at low energies. The work is motivated by recent lattice studies of gauge theories near the conformal window, which have shown that the lightest flavor-singlet scalar state in the spectrum ($\sigma$) can be much lighter than the vector state ($\rho$) and nearly degenerate with the PNGBs ($\pi$) over a large range of quark masses. The EFT incorporates this feature. We highlight the crucial role played by the terms in the potential that explicitly break chiral symmetry. The explicit breaking can be large enough so that a limited set of additional terms in the potential can no longer be neglected, with the EFT still weakly coupled in this new range. The additional terms contribute importantly to the scalar and pion masses. In particular, they relax the inequality $M_{\sigma}^2 \ge 3 M_{\pi}^2$, allowing for consistency with current lattice data.
|
high energy physics phenomenology
|
Minimizing the Action integral of a Lagrangian provides the Euler-Lagrange equation of motion in the elegant machinery of Lagrangian Mechanics. However, two relations define the divergence of current and energy-momentum, and provide an alternative motivation for the Euler-Lagrange equation without invoking the considerable machinery required for the principle of least Action. The derivation of these two relations proceeds with only local differential operations on the Lagrangian, and without a global integration defining an Action. The two relations connect local continuity equations (as a vanishing divergence) for current and energy-momentum to the Lorentz force, symmetry, and the Euler-Lagrange equation. The Euler-Lagrange equation is common to both relations, which provides sufficient motivation for its acceptance as the equation of motion. The essential relationships between the central concepts of energy-momentum, force, current, symmetry and equation of motion provide pedagogically interesting clarity. The student will see how these concepts relate to each other, as well as how each concept is defined in isolation.
|
physics
|
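A standard identity of the kind the abstract above alludes to, stated here only as a hedged illustration for a scalar field with a Lagrangian density $\mathcal{L}(\phi,\partial\phi)$ having no explicit coordinate dependence: the divergence of the canonical energy-momentum tensor is proportional to the Euler-Lagrange expression, so demanding the continuity equation $\partial_\mu T^{\mu}{}_{\nu}=0$ wherever $\partial_\nu\phi \neq 0$ motivates the Euler-Lagrange equation without invoking the Action integral.

```latex
T^{\mu}{}_{\nu} \;=\; \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)}\,\partial_\nu \phi
 \;-\; \delta^{\mu}_{\nu}\,\mathcal{L},
\qquad
\partial_\mu T^{\mu}{}_{\nu}
 \;=\; \left[\,\partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)}
 \;-\; \frac{\partial \mathcal{L}}{\partial \phi}\,\right] \partial_\nu \phi .
```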
Angular x-ray cross-correlation analysis (AXCCA) is a technique which allows quantitative measurement of the angular anisotropy of x-ray diffraction patterns and provides insights into the orientational order in the system under investigation. This method is based on the evaluation of the angular cross-correlation function of the scattered intensity distribution on a two-dimensional (2D) detector and further averaging over many diffraction patterns for enhancement of the anisotropic signal. Over the last decade, AXCCA was successfully used to study the anisotropy in various soft matter systems, such as solutions of anisometric particles, liquid crystals, colloidal crystals, superlattices composed by nanoparticles, etc. This review provides an introduction to the technique and gives a survey of the recent experimental work in which AXCCA in combination with micro- or nanofocused x-ray microscopy was used to study the orientational order in various soft matter systems.
|
condensed matter
|
Increasing evidence suggests that cities are complex systems, with structural and dynamical features responsible for a broad spectrum of emerging phenomena. Here we use a unique data set of human flows and couple it with information on the underlying street network to study, simultaneously, the structural and functional organisation of 10 world megacities. We quantify the efficiency of flow exchange between areas of a city in terms of integration and segregation using well defined measures. Results reveal unexpected complex patterns that shed new light on urban organisation. Large cities tend to be more segregated and less integrated, while their overall topological organisation resembles that of small world networks. At the same time, the heterogeneity of flows distribution might act as a catalyst for further integrating a city. Our analysis unravels how human behaviour influences, and is influenced by, the urban environment, suggesting quantitative indicators to control integration and segregation of human flows that can be used, among others, for restriction policies to adopt during emergencies and, as an interesting byproduct, allows us to characterise functional (dis)similarities of different metropolitan areas, countries, and cultures.
|
physics
|
The development of useful photon-photon interactions can trigger numerous breakthroughs in quantum information science; however, this has remained a considerable challenge spanning several decades. Here we demonstrate the first room-temperature implementation of large phase shifts ($\approx\pi$) on a single-photon-level probe pulse (1.5 $\mu$s) triggered by a simultaneously propagating few-photon-level signal field. This process is mediated by ${}^{87}$Rb vapor in a double-$\Lambda$ atomic configuration. We use homodyne tomography to obtain the quadrature statistics of the phase-shifted quantum fields and perform maximum-likelihood estimation to reconstruct their quantum state in the Fock state basis. For the probe field, we have observed input-output fidelities higher than 90\% for phase-shifted output states, and high overlap (over 90\%) with a theoretically perfect coherent state. Our noise-free, four-wave-mixing-mediated photon-photon interface is a key milestone towards developing quantum logic and nondemolition photon detection using schemes such as coherent photon conversion.
|
quantum physics
|
We study the dependence of the Banach-Mazur distance between two subspaces of vector-valued continuous functions on the scattered structure of their boundaries. In the spirit of a result of Gordon, we show that the constant $2$ appearing in the Amir-Cambern theorem may be replaced by $3$ for some class of subspaces. This we achieve by showing that the Banach-Mazur distance of two function spaces is at least 3, if the height of the set of weak peak points of one of the spaces is larger than the height of a closed boundary of the second space. Next we show that this estimate can be improved, if the considered heights are finite and significantly different. As a corollary, we obtain new results even for the case of $\mathcal{C}(K, E)$ spaces.
|
mathematics
|
Active feedback stabilization of the dominant resistive wall mode (RWM) for an ITER H-mode scenario at high plasma pressure using infinite-horizon model predictive control (MPC) is presented. The MPC approach is closely-related to linear-quadratic-Gaussian (LQG) control, improving the performance in the vicinity of constraints. The control-oriented model for MPC is obtained with model reduction from a high-dimensional model produced by CarMa code. Due to the limited time for on-line optimization, a suitable MPC formulation considering only input (coil voltage) constraints is chosen, and the primal fast gradient method is used for solving the associated quadratic programming problem. The performance is evaluated in simulation in comparison to LQG control. Sensitivity to noise, robustness to changes of unstable RWM dynamics, and size of the domain of attraction of the initial conditions of the unstable modes are examined.
|
electrical engineering and systems science
|
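A hedged sketch of the primal fast gradient method mentioned in the abstract above, applied to the box-constrained quadratic program that an input-constrained MPC step reduces to, $\min_u \tfrac12 u^\top H u + f^\top u$ subject to elementwise bounds on $u$. The Hessian, gradient vector, voltage limits, and iteration count below are illustrative assumptions, not the CarMa-derived controller.

```python
import numpy as np

def fast_gradient_box_qp(H, f, lb, ub, iters=200):
    """Nesterov's fast gradient method for min 0.5 u'Hu + f'u with lb <= u <= ub (H symmetric PD)."""
    eig = np.linalg.eigvalsh(H)
    L, mu = eig.max(), eig.min()                 # Lipschitz and strong convexity constants
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    u = np.clip(np.zeros(len(f)), lb, ub)
    y = u.copy()
    for _ in range(iters):
        grad = H @ y + f
        u_next = np.clip(y - grad / L, lb, ub)   # projected gradient step (box projection is cheap)
        y = u_next + beta * (u_next - u)         # momentum (extrapolation) step
        u = u_next
    return u

# Tiny illustrative problem: 3 coil-voltage increments with unit limits
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
f = np.array([-1.0, 2.0, -0.5])
u_opt = fast_gradient_box_qp(H, f, lb=-np.ones(3), ub=np.ones(3))
print(u_opt)
```

The cheap projection onto the box is what makes input-only constraints attractive when the optimization must finish within one control period.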
The surface quality of replicated CFRP mirrors is ideally expected to be as good as that of the mandrel from which they are manufactured. In practice, a number of factors produce surface imperfections in the final mirrors at different scales. To understand where these errors come from, and to develop improvements to the manufacturing process accordingly, a wide range of metrology techniques and quality control methods must be adopted. Mechanical and optical instruments are employed to characterise glass mandrels and CFRP replicas at different spatial frequency ranges. Modal analysis is used to identify large-scale aberrations, complemented with a spectral analysis at medium and small scales. It is seen that astigmatism is the dominant aberration in the CFRP replicas. On the medium and small scales, we have observed that fiber print-through and surface roughness can be improved significantly by an extra resin layer over the replica's surface, although some residual irregularities are still present.
|
astrophysics
|
We have studied the entanglement entropy and the Husimi $Q$ distribution as tools to explore chaos in the quantum two-photon Dicke model. With the increase of the energy of the system, the linear entanglement entropy of coherent states prepared in the classical chaotic and regular regions becomes more distinguishable, and the correspondence between the distribution of the time-averaged entanglement entropy and the classical Poincar\'{e} section improves markedly. Moreover, the Husimi $Q$ distribution for initial states corresponding to points in the chaotic region disperses more quickly in the higher-energy system than in the lower-energy system. Our results imply that higher system energy helps to distinguish the chaotic and regular behavior in the quantum two-photon Dicke model.
|
quantum physics
|
Neural networks are usually not the tool of choice for nonparametric high-dimensional problems where the number of input features is much larger than the number of observations. Though neural networks can approximate complex multivariate functions, they generally require a large number of training observations to obtain reasonable fits, unless one can learn the appropriate network structure. In this manuscript, we show that neural networks can be applied successfully to high-dimensional settings if the true function falls in a low dimensional subspace, and proper regularization is used. We propose fitting a neural network with a sparse group lasso penalty on the first-layer input weights. This results in a neural net that only uses a small subset of the original features. In addition, we characterize the statistical convergence of the penalized empirical risk minimizer to the optimal neural network: we show that the excess risk of this penalized estimator only grows with the logarithm of the number of input features; and we show that the weights of irrelevant features converge to zero. Via simulation studies and data analyses, we show that these sparse-input neural networks outperform existing nonparametric high-dimensional estimation methods when the data has complex higher-order interactions.
|
statistics
|
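A hedged NumPy sketch of the penalty structure described in the abstract above: a sparse group lasso on the first-layer weight matrix in which each input feature's column of outgoing weights forms one group, so the proximal step can zero out entire columns (i.e., drop irrelevant input features). The weight shapes, regularization constants, and the plain proximal update are illustrative assumptions rather than the authors' training procedure.

```python
import numpy as np

def sparse_group_lasso_penalty(W, lam, alpha=0.5):
    """Sparse group lasso on first-layer weights W (hidden_dim x n_features).
    Each input feature j owns one group: its column of outgoing weights W[:, j]."""
    group_norms = np.linalg.norm(W, axis=0)            # one 2-norm per input feature
    return lam * (alpha * group_norms.sum() + (1 - alpha) * np.abs(W).sum())

def sparse_group_lasso_prox(W, step, lam, alpha=0.5):
    """Proximal step: elementwise soft-threshold, then columnwise group shrinkage.
    Drives entire columns (irrelevant input features) exactly to zero."""
    V = np.sign(W) * np.maximum(np.abs(W) - step * lam * (1 - alpha), 0.0)
    norms = np.linalg.norm(V, axis=0, keepdims=True)
    scale = np.maximum(1.0 - step * lam * alpha / np.maximum(norms, 1e-12), 0.0)
    return V * scale

W = np.random.default_rng(3).standard_normal((16, 8)) * 0.1
print("penalty:", sparse_group_lasso_penalty(W, lam=0.4))
W_new = sparse_group_lasso_prox(W, step=0.5, lam=0.4)
print((np.linalg.norm(W_new, axis=0) == 0).sum(), "of 8 input features pruned")
```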
Non-thermal properties of galaxy clusters have been studied with detailed and deep radio images in comparison with X-ray data. While much progress has been made, most of the studied clusters are at a relatively low redshift (z < 0.3). We here investigate the evolutionary properties of the non-thermal cluster emission using two statistically complete samples at z > 0.3. We obtained short JVLA observations at L-band of the statistically complete sample of very X-ray luminous clusters from the Massive Cluster Survey (MACS) presented by Ebeling et al. (2010), and redshift range 0.3 - 0.5. We add to this list the complete sample of the 12 most distant MACS clusters (z > 0.5) presented in Ebeling et al. (2007). Most clusters show evidence of emission in the radio regime. We present the radio properties of all clusters in our sample and show images of newly detected diffuse sources. A radio halo is detected in 19 clusters, and five clusters contain a relic source. Most of the brightest cluster galaxies (BCG) in relaxed clusters show radio emission with powers typical of FRII radio galaxies, and some are surrounded by a radio mini-halo. The high frequency of radio emission from the BCG in relaxed clusters suggests that BCG feedback mechanisms are in place already at z about 0.6. The properties of radio halos and the small number of detected relics suggest redshift evolution in the properties of diffuse sources. The radio power (and size) of radio halos could be related to the number of past merger events in the history of the system. In this scenario, the presence of a giant and high-power radio halo is indicative of an evolved system with a large number of past major mergers.
|
astrophysics
|
High cadence transient surveys are able to capture supernovae closer to their first light than before. Applying analytical models to such early emission, we can constrain the progenitor star's properties. In this paper, we present observations of SN2018fif (ZTF18abokyfk). The supernova was discovered close to first light and monitored by the Zwicky Transient Facility (ZTF) and the Neil Gehrels Swift Observatory. Early spectroscopic observations suggest that the progenitor of SN2018fif was surrounded by relatively small amounts of circumstellar material (CSM) compared to all previous cases. This particularity, coupled with the high-cadence multiple-band coverage, makes it a good candidate to investigate using shock-cooling models. We employ the SOPRANOS code, an implementation of the model by Sapir & Waxman and its extension to early times by Morag, Sapir & Waxman. Compared with previous implementations, SOPRANOS has the advantage of including a careful account of the limited temporal validity domain of the shock-cooling model as well as allowing use of the entirety of the early UV data. We find that the progenitor of SN2018fif was a large red supergiant, with a radius of $R=744.0_{-128.0}^{+183.0}$ solar radii and an ejected mass of $M_\mathrm{ej}=9.3_{-5.8}^{+0.4}$ solar masses. Our model also gives information on the explosion epoch, the progenitor inner structure, the shock velocity and the extinction. The distribution of radii is double-peaked, with lower radii corresponding to lower values of the extinction, earlier recombination times and a better match to the early UV data. If these correlations persist in future objects, denser spectroscopic monitoring constraining the time of recombination, as well as accurate UV observations (e.g. with ULTRASAT), will help break the radius-extinction degeneracy and independently determine both.
|
astrophysics
|
Inspired by the recent observations of vector charmonium-like states by the BES III Collaboration and of $\psi(3842)$ by the LHCb Collaboration, we comb through the $D$-wave charmonium states in the present work. We first evaluate the possibility of $Y(4320)$ as $\psi(3^3D_1)$ by investigating its open charm decays in the quark-pair creation model, and we find that the width of $Y(4320)$ can be reproduced in a reasonable parameter range. Moreover, we take $\psi(3770)$, $\psi(4160)$ and $Y(4320)$ to set the scale of the $1D$, $2D$ and $3D$ charmonia in order to estimate the open charm decays of the other $D$-wave charmonia. The total and partial widths of the $D$-wave charmonium states have been predicted, which could be tested by further measurements by the LHCb and Belle II Collaborations.
|
high energy physics phenomenology
|
Microfading Spectrometry (MFS) is a method for assessing the light sensitivity of cultural heritage objects in terms of color (spectral) variations. Each measured point on the surface gives rise to a time series of stochastic observations that represents color fading over time. Color degradation is expected to be non-decreasing as a function of time and to stabilize eventually. These properties can be expressed in terms of the derivatives of the functions. In this work, we propose spatially correlated spline-based time-varying functions and their derivatives for modeling and predicting MFS data collected on the surface of rock art paintings. The correlation among the spline models is modeled using Gaussian process priors over the spline coefficients across time series. A multivariate covariance function in a Gaussian process allows the use of trichromatic image color variables jointly with spatial locations as inputs to evaluate the correlation among time series, and the colorimetric variables are demonstrated to be useful for predicting new color fading time series. Furthermore, modeling the derivative of the model and its sign proves to be beneficial in terms of both predictive performance and application-specific interpretability.
|
statistics
|
Motivated by the increasing penetration of distributed generators (DGs) and the fast development of micro-phasor measurement units ($\mu$PMUs), this paper proposes a novel graph-based faulted line identification algorithm using a limited number of $\mu$PMUs in distribution networks. The core of the proposed method is to apply advanced distribution system state estimation (DSSE) techniques integrating $\mu$PMU data to the fault location. We propose a distributed DSSE algorithm to efficiently restrict the searching region for the fault source in the feeder between two adjacent $\mu$PMUs. Based on the graph model of the feeder in the reduced searching region, we further perform the DSSE in a hierarchical structure and identify the location of the fault source. Also, the proposed approach captures the impact of DGs on distribution system operation and remains robust against high-level noise in measurements. Numerical simulations verify the accuracy and efficiency of the proposed method under various fault scenarios covering multiple fault types and fault impedances.
|
electrical engineering and systems science
|
Radio frequency (RF) signals can be relied upon for conventional wireless information transfer (WIT) and for challenging wireless power transfer (WPT), which triggers the significant research interest in the topic of simultaneous wireless information and power transfer (SWIPT). By further exploiting the advanced non-orthogonal-multiple-access (NOMA) technique, we are capable of improving the spectrum efficiency of the resource-limited SWIPT system. In our SWIPT system, a hybrid access point (H-AP) superimposes the modulated symbols destined to multiple WIT users by exploiting the power-domain NOMA, while WPT users are capable of harvesting the energy carried by the superposition symbols. In order to maximise the amount of energy transferred to the WPT users, we propose a joint design of the energy interleaver and the constellation rotation based modulator in the symbol-block level by constructively superimposing the symbols destined to the WIT users in the power domain. Furthermore, a transmit power allocation scheme is proposed to guarantee the symbol-error-ratio (SER) of all the WIT users. By considering the sensitivity of practical energy harvesters, the simulation results demonstrate that our scheme is capable of substantially increasing the WPT performance without any remarkable degradation of the WIT performance.
|
electrical engineering and systems science
|
Topologically interlocked material systems are two-dimensional assemblies of unit elements from which no element can be removed from the assembly without disassembly of the entire system. Consequently, such tile assemblies are able to carry transverse mechanical loads. Archimedean and Laves tilings are investigated as templates for the material system architecture. It is demonstrated under point loads that the architecture significantly affects the force-deflection response. Stiffness, load carrying capacity and toughness varied by a factor of at least three from the system with the poorest performance to the system with the best performance. Across all architectures stiffness, strength and toughness are found to be strongly and linearly correlated. Architecture characterizing parameters and their relationship to the mechanical behavior are investigated. It is shown that the measure of the smallest tile area in an assembly provides the best predictor of mechanical behavior. With small tiles present in the assembly the contact force network structure is well developed and the internal load path is channeled through these stiffest components of the assembly.
|
condensed matter
|
The Health Effects Institute (HEI) recently reported that deaths from the negative health effects of air pollution in the Middle East region amount to about 500,000 people. Therefore, this paper presents the design and development of a new portable system, called GASDUINO, that allows the user to measure air quality using the Internet of Things (IoT). The main components of the developed GASDUINO system are the Arduino microcontroller board, a gas sensor (MQ-135), and an Android user interface (UI) connected to all components via the Remote XY Arduino cloud. The developed system can alert users to dangerous levels of the air quality index (AQI) or parts-per-million (PPM) levels in the range of 0 to above 200 PPM. The developed GASDUINO system is considered an essential environmental module in the development and sustainability of future smart cities.
|
computer science
|
We analyze neutrino-induced leptoquark production on atomic nuclei. A leptoquark term in the Lagrangian admits the possibility that neutrinos interact with gluons. The current lower limits on the leptoquark masses are of the order of 1 TeV depending on the leptoquark quantum numbers and couplings. Such heavy states can be produced in ultra-high energy cosmic neutrino scattering processes. The four-momentum transfer squared and the Bjorken variable simultaneously probed in these processes may reach values kinematically inaccessible at present collider experiments. We study the impact of the gluon density in a nucleus on the cross section for the leptoquark production. We show that taking into account the nuclear parton distributions shifts the production threshold to significantly lower neutrino energies. As a particular case we consider the interaction with oxygen, which is abundant in water/ice neutrino telescopes.
|
high energy physics phenomenology
|
Layered three-dimensional (3D) topological semimetals have attracted intensive attention due to their exotic phenomena and abundant tunable properties. Here we report experimental evidence for the 3D topological semimetal phase in layered TaNiTe5 single crystals through quantum oscillations. Strong quantum oscillations have been observed on a diamagnetic background in TaNiTe5. By analyzing the de Haas-van Alphen oscillations, multi-periodic oscillations were extracted, consistent with magnetotransport measurements. Moreover, a nontrivial $\pi$ Berry phase with a 3D Fermi surface is identified, indicating the topologically nontrivial character of TaNiTe5. Additionally, we demonstrated that thin layers of TaNiTe5 crystals can readily be obtained by mechanical exfoliation, which offers a platform to explore exotic properties in low-dimensional topological semimetals and paves the way for potential applications in nanodevices.
|
condensed matter
|
Magnetic fields generated by a dynamo mechanism due to differential rotation during stellar mergers are often proposed as an explanation for the presence of strong fields in certain classes of magnetic stars, including high field magnetic white dwarfs (HFMWDs). In the case of the HFMWDs, the site of the differential rotation has been variously proposed to be the common envelope itself, the massive hot outer regions of a merged degenerate core, or an accretion disc formed by a tidally disrupted companion that is subsequently incorporated into a degenerate core. In the present study I explore the possibility that the origin of HFMWDs is consistent with stellar interactions during the common envelope evolution (CEE). In this picture the observed fields are caused by an $\alpha-\Omega$ dynamo driven by differential rotation. The strongest fields would arise when the differential rotation equals the critical break-up velocity and would occur from the merging of two stars during CEE or from double degenerate (DD) mergers in a post common envelope (CE) stage. Those systems that do not coalesce but emerge from the CE on a close orbit and are about to initiate mass transfer will evolve into magnetic cataclysmic variables (MCVs). The population synthesis calculations carried out in this work have shown that the origin of high fields in isolated white dwarfs (WDs) and in WDs in MCVs is consistent with stellar interaction during common envelope evolution. I compare the calculated field strengths to those observed and test the correlation between theory and observation by means of the Kolmogorov--Smirnov (K--S) test, and show that the resulting correlation is good for values of the CE energy efficiency parameter, $\alpha_{\rm CE}$, in the range 0.1--0.3.
|
astrophysics
|
We discuss in a statistical physics framework the idea that ``the whole is less than the parts'', as sometimes advocated by sociologists in view of the intrinsic complexity of humans, and try to reconcile this idea with the statistical physicists wisdom according to which ``the whole is more than the sum of its parts'' due to collective phenomena. We consider a simple mean-field model of interacting agents having an intrinsic complexity modeled by a large number of internal configurations. We show by analytically solving the model that interactions between agents lead, in some parameter range, to a `standardization' of agents in the sense that all agents collapse in the same internal state, thereby drastically suppressing their complexity. Slightly generalizing the model, we find that agents standardization may lead to a global order if appropriate interactions are included. Hence, in this simple model, both agents standardization and collective organization may be viewed as two sides of the same coin.
|
physics
|
On the basis of the canonical quantization procedure, in which we need the indefinite metric Hilbert space, we formulate field diagonal representation for the scalar Lee-Wick model. Euclidean path integral for the model is then constructed in terms of the eigenvector of field operators. Taking the quantum mechanical Lee-Wick model as an example, we demonstrate how to formulate path integrals for such systems in detail. We show that, despite the use of indefinite metric representation, integration contours for ghost degrees can be taken along the real axis.
|
high energy physics theory
|
We present an extension of the Kolmogorov-Smirnov (KS) two-sample test, which can be more sensitive to differences in the tails. Our test statistic is an integral probability metric (IPM) defined over a higher-order total variation ball, recovering the original KS test as its simplest case. We give an exact representer result for our IPM, which generalizes the fact that the original KS test statistic can be expressed in equivalent variational and CDF forms. For small enough orders ($k \leq 5$), we develop a linear-time algorithm for computing our higher-order KS test statistic; for all others ($k \geq 6$), we give a nearly linear-time approximation. We derive the asymptotic null distribution for our test, and show that our nearly linear-time approximation shares the same asymptotic null. Lastly, we complement our theory with numerical studies.
|
statistics
|
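For context, the classical two-sample Kolmogorov-Smirnov statistic, the simplest member of the family generalized in the abstract above, can be computed as the largest gap between the two empirical CDFs; the sketch below is a textbook implementation, not the authors' higher-order IPM. The heavier-tailed comparison illustrates the kind of difference the higher-order variants are designed to detect more sensitively.

```python
import numpy as np

def ks_two_sample(x, y):
    """Classical two-sample Kolmogorov-Smirnov statistic: sup_t |F_x(t) - F_y(t)|."""
    x, y = np.sort(x), np.sort(y)
    all_pts = np.concatenate([x, y])
    # Right-continuous empirical CDFs evaluated at every observed point
    F_x = np.searchsorted(x, all_pts, side="right") / len(x)
    F_y = np.searchsorted(y, all_pts, side="right") / len(y)
    return np.max(np.abs(F_x - F_y))

rng = np.random.default_rng(4)
a = rng.normal(0, 1, 2_000)
b = rng.standard_t(df=3, size=2_000)      # same center, heavier tails
print(ks_two_sample(a, b))
```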
We investigate the pure annihilation type radiative $B$ meson decays $B^0 \to \phi \gamma$ and $B_s \to \rho^0(\omega)\gamma$ in the soft-collinear effective theory. We consider three types of contributions to the decay amplitudes, including the direct annihilation topology, the contribution from the electro-magnetic penguin operator and the contribution of the neutral vector meson mixings. The numerical analysis shows that the decay amplitudes are dominated by the $\omega-\phi$ mixing effect in the $B^0 \to \phi\gamma$ and $B_s \to \omega\gamma$ modes. The corresponding decay branching ratios are enhanced about three orders of magnitudes relative to the pure annihilation type contribution in these two decay channels. The decay rate of $B_s \to \rho^0\gamma$ is much smaller than that of $B_s \to \omega\gamma$ because of the smaller $\rho^0-\phi$ mixing. The predicted branching ratios $B(B^{0}\rightarrow\phi\gamma)=(3.99^{+1.67}_{-1.46} )\times10^{-9},\,B(B_s\rightarrow\omega\gamma)=(2.01^{+0.81}_{-0.71} )\times10^{-7}$ are to be tested by the Belle-II and LHC-b experiments.
|
high energy physics phenomenology
|
Development of quantum architectures during the last decade has inspired hybrid classical-quantum algorithms in physics and quantum chemistry that promise simulations of fermionic systems beyond the capability of modern classical computers, even before the era of quantum computing fully arrives. Strong research efforts have been recently made to obtain minimal depth quantum circuits which could accurately represent chemical systems. Here, we show that unprecedented methods used in quantum chemistry, designed to simulate molecules on quantum processors, can be extended to calculate properties of periodic solids. In particular, we present minimal depth circuits implementing the variational quantum eigensolver algorithm and successfully use it to compute the band structure of silicon on a quantum machine for the first time. We are convinced that the presented quantum experiments performed on cloud-based platforms will stimulate more intense studies towards scalable electronic structure computation of advanced quantum materials.
|
quantum physics
|
The aim of a clinical decision support tool is to reduce the complexity of clinical decisions. However, when decision support tools are poorly implemented they may actually confuse physicians and complicate clinical care. This paper argues that information from decision support tools is often removed from the clinical context of the targeted decisions. Physicians largely depend on clinical context to handle the complexity of their day-to-day decisions. Clinical context enables them to take into account all ambiguous information and patient preferences. Decision support tools that provide analytic information to physicians, without its context, may then complicate the decision process of physicians. It is likely that the joint forces of physicians and technology will produce better decisions than either of them exclusively: after all, they do have different ways of dealing with the complexity of a decision and are thus complementary. Therefore, the future challenges of decision support do not only reside in the optimization of the predictive value of the underlying models and algorithms, but equally in the effective communication of information and its context to doctors.
|
computer science
|
We study the ergodic side of the many-body localization transition in its standard model, the disordered Heisenberg quantum spin chain. We show that the Thouless energy, extracted from long-range spectral statistics and the power-spectrum of the full momentum distribution fluctuations, is not large enough to guarantee thermalization. We find that both estimates coincide and behave non-monotonically, exhibiting a strong peak at an intermediate value of the disorder. Furthermore, we show that non-thermalizing initial conditions occur well within the ergodic phase with larger probability than expected. Finally, we propose a mechanism, driven by the Thouless energy and the presence of anomalous events, for the transition to the localized phase.
|
condensed matter
|
Mars has a thin (6 mbar) CO2 atmosphere currently. There is strong evidence for paleolakes and rivers formed by warm climates on Mars, including after 3.5 billion years (Ga) ago, which indicates that a CO2 atmosphere thick enough to permit a warm climate was present at these times. Since Mars no longer has a thick CO2 atmosphere, it must have been lost. One possibility is that Martian CO2 was lost to space. Oxygen escape rates from Mars are high enough to account for loss of a thick CO2 atmosphere, if CO2 was the main source of escaping O. But here, using H isotope ratios, O escape calculations, and quantification of the surface O sinks on Mars, we show for the first time that O escape from Mars after 3.5 Ga must have been predominantly associated with the loss of H2O, not CO2, and therefore it is unlikely that >250 mbar Martian CO2 has been lost to space in the last 3.5 Ga, because such results require highly unfavored O loss scenarios. It is possible that the presence of young rivers and lakes on Mars could be reconciled with limited CO2 loss to space if crater chronologies on Mars are sufficiently incorrect that all apparently young rivers and lakes are actually older than 3.5 Ga, or if climate solutions exist for sustained runoff on Mars with atmospheric CO2 pressure <250 mbar. However, our preferred solution to reconcile the presence of <3.5 Gya rivers and lakes on Mars with the limited potential for CO2 loss to space is a large, as yet undiscovered, geological C sink on Mars.
|
astrophysics
|
We study the dynamical mass generation in Pseudo Quantum Electrodynamics (PQED) coupled to the Gross-Neveu (GN) interaction, in (2+1) dimensions, at both zero and finite temperatures. We start with a gapless model and show that, under particular conditions, a dynamically generated mass emerges. In order to do so, we use a truncated Schwinger-Dyson equation, at the large-N approximation, in the imaginary-time formalism. In the instantaneous-exchange approximation (the static regime), we obtain two critical parameters, namely, the critical number of fermions $N_c(T)$ and the critical coupling constant $\alpha_c(T)$ as a function of temperature and of the cutoff $\Lambda$, which must be provided by experiments. In the dynamical regime, we find an analytical solution for the mass function $\Sigma(p,T)$ as well as a zero-external momentum solution for $p=0$. We compare our analytical results with numerical tests and a good agreement is found.
|
high energy physics theory
|