text (strings, 11 to 9.77k characters) | label (strings, 2 to 104 characters) |
---|---|
In this work, we investigate the application of Taylor expansions in reinforcement learning. In particular, we propose Taylor expansion policy optimization, a policy optimization formalism that generalizes prior work (e.g., TRPO) as a first-order special case. We also show that Taylor expansions intimately relate to off-policy evaluation. Finally, we show that this new formulation entails modifications which improve the performance of several state-of-the-art distributed algorithms. | computer science |
We consider the most general AdS$_3$ solutions of type IIB supergravity admitting a dynamical SU$(3)$ structure and preserving $\mathcal{N}=(0,2)$ supersymmetry. The analysis is broken into three distinct classes depending on whether the dynamical SU$(3)$ structure degenerates to a strict SU$(3)$ structure. The first class we consider allows for a holomorphically varying axio-dilaton consistent with the presence of $(p,q)$ 7-branes in addition to D3-branes and $(p,q)$ 5-branes. In the remaining two classes the axio-dilaton may vary but does not do so holomorphically. The second class of solution allows for 5-branes and 1-branes but no D3-branes, whilst in the final class all branes can be present. We illustrate our results with examples of such solutions including a new infinite family with all fluxes but the axion turned on. | high energy physics theory |
We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein's identity and a recently proposed kernelized Stein discrepancy, which is of independent interest. | statistics |
We discuss specific hypergeometric solutions of the conformal Ward identities (CWI's) of scalar 4-point functions of primary fields in momentum space, in $d$ spacetime dimensions. We determine such solutions using various dual conformal ans\"atze (DCA's). We start from a generic dual conformal correlator, and require it to be conformally covariant in coordinate space. The two requirements constrain such solutions to take a unique hypergeometric form. They describe correlators which are at the same time conformal and dual conformal in any dimension. These specific ans\"atze also show the existence of a link between 3- and 4-point functions of a CFT for such a class of exact solutions, similarly to what is found for planar ladder diagrams. We show that in $d=4$ only the box diagram and its melonic variants, in free field theory, satisfy such conditions, the remaining solutions being nonperturbative. We then turn to the analysis of some approximate high energy fixed angle solutions of the CWI's which also in this case take the form of generalized hypergeometric functions. We show that they describe the behaviour of the 4-point functions at large energy and momentum transfers, with a fixed $-t/s$. The equations, in this case, are solved by linear combinations of Lauricella functions of 3 variables and can be rewritten as generalized 4K integrals. In both cases the CWI's alone are sufficient to identify such solutions and their special connection with generalized hypergeometric systems of equations. | high energy physics theory |
We demonstrate a platform for synthetic dimensions based on coupled Rydberg levels in ultracold atoms, and we implement the single-particle Su-Schrieffer-Heeger (SSH) Hamiltonian. Rydberg levels are interpreted as synthetic lattice sites, with tunneling introduced through resonant millimeter-wave couplings. Tunneling amplitudes are controlled through the millimeter-wave amplitudes, and on-site potentials are controlled through detunings of the millimeter waves from resonance. Using alternating weak and strong tunneling with weak tunneling to edge lattice sites, we attain a configuration with symmetry-protected topological edge states. The band structure is probed through optical excitation to the Rydberg levels from the ground state, which reveals topological edge states at zero energy. We verify that edge-state energies are robust to perturbations of the tunneling rates, which preserve chiral symmetry, but can be shifted by the introduction of on-site potentials. | physics |
An algorithm for non-stationary spatial modelling using multiple secondary variables is developed. It combines Geostatistics with Quantile Random Forests to give a new interpolation and stochastic simulation algorithm. This paper introduces the method and shows that it has consistency results that are similar in nature to those applying to geostatistical modelling and to Quantile Random Forests. The method allows for embedding of simpler interpolation techniques, such as Kriging, to further condition the model. The algorithm works by estimating a conditional distribution for the target variable at each target location. The family of such distributions is called the envelope of the target variable. From this, it is possible to obtain spatial estimates, quantiles and uncertainty. An algorithm to produce conditional simulations from the envelope is also developed. As they sample from the envelope, realizations are therefore locally influenced by relative changes of importance of secondary variables, trends and variability. | statistics |
In this review I give an overview of the conceptual issues involved in the question of how to interpret so-called `direct top quark mass measurements', which are based on the kinematic reconstruction of top quark decay products at the Large Hadron Collider (LHC). These measurements quote the top mass parameter $m_t^{\rm MC}$ of Monte-Carlo event generators with current uncertainties of around $0.5$ GeV. At present, the problem of finding a rigorous relation between $m_t^{\rm MC}$ and top mass renormalization schemes defined in field theory is unresolved and touches perturbative as well as nonperturbative aspects and the limitations of state-of-the-art Monte-Carlo event generators. I review the status of LHC top mass measurements, illustrate how conceptual limitations enter and explain a controversy that has permeated the community in the context of the interpretation problem. Recent advances in acquiring first principle insights are summarized, and it is outlined what else has to be understood to fully resolve the issue. For the time being, I give a recommendation on how to deal with the interpretation problem when making top mass dependent theoretical predictions. | high energy physics phenomenology |
We study Feynman integrals and scattering amplitudes in ${\cal N}=4$ super-Yang-Mills by exploiting the duality with null polygonal Wilson loops. Certain Feynman integrals, including one-loop and two-loop chiral pentagons, are given by Feynman diagrams of a supersymmetric Wilson loop, where one can perform loop integrations and be left with simple integrals along edges. As the main application, we compute analytically, for the first time, the symbol of the generic ($n\geq 12$) double pentagon, which gives two-loop MHV amplitudes and components of NMHV amplitudes to all multiplicities. We represent the double pentagon as a two-fold $\mathrm{d} \log$ integral of a one-loop hexagon, and the non-trivial part of the integration lies in rationalizing square roots contained in the latter. We obtain remarkably compact "algebraic words" which contain $6$ algebraic letters for each of the $16$ square roots, and they all nicely cancel in combinations for MHV amplitudes and NMHV components which are free of square roots. In addition to $96$ algebraic letters, the alphabet consists of $152$ dual conformal invariant combinations of rational letters. | high energy physics theory |
The physical, observable spectrum in gauge theories is made up of gauge-invariant states. In the standard model, the Fr\"ohlich-Morchio-Strocchi mechanism allows these states to be mapped to the gauge-dependent elementary $W$, $Z$ and Higgs states. This is no longer necessarily the case in theories with a more general gauge group and Higgs sector. We classify and predict the physical spectrum for a wide range of such theories, with special emphasis on GUT-like cases, and show that discrepancies between the spectrum of elementary fields and physical particles frequently arise. | high energy physics phenomenology |
The ternary nitride CaZn2N2, composed only of earth-abundant elements, is a novel semiconductor with a band gap of 1.8 eV. First-principles calculations predict that continuous Mg substitution at the Zn site will change the optical band gap in a wide range from ~3.3 eV to ~1.9 eV for Ca(Mg1-xZnx)2N2 (x = 0-1). In this study, we demonstrate that a solid-state reaction at ambient pressure and a high-pressure synthesis at 5 GPa produce x = 0 and 0.12, and x = 0.12-1 polycrystalline samples, respectively. It is experimentally confirmed that the optical band gap can be continuously tuned from ~3.2 eV to ~1.8 eV, a range very close to that predicted by theory. Band-to-band photoluminescence is observed at room temperature in the ultraviolet to red region depending on x. A 2% Na doping at the Ca site of CaZn2N2 converts its highly resistive state to a p-type conducting state. Particularly, the x = 0.50 sample exhibits intense green emission with a peak at 2.45 eV (506 nm) without any other emission from deep-level defects. These features meet the demands of the III-V group nitride and arsenide/phosphide light-emitting semiconductors. | condensed matter |
We study the lepton portal dark matter (DM) model in which the relic abundance is determined by the portal coupling among the Majorana fermion DM candidate $\chi$, the singlet charged scalar mediator $S^\pm$ and the Standard Model (SM) right-handed lepton. The direct and indirect searches are not sensitive to this model. This article studies the lepton portal coupling as well as the scalar portal coupling (between $S^\pm$ and SM Higgs boson), as the latter is generally allowed in the Lagrangian. The inclusion of scalar portal coupling not only significantly enhances the LHC reach via the $gg\to h^*\to S^+S^-$ process, but also provides a few novel signal channels, such as the exotic decays and coupling deviations of the Higgs boson, offering new opportunities to probe the model. In addition, we also study the Drell-Yan production of $S^+S^-$ at future lepton colliders, and find out that the scenario where one $S^\pm$ is off-shell can be used to measure the lepton portal coupling directly. In particular, we are interested in the possibility that the scalar potential triggers a first-order phase transition and hence provides the stochastic gravitational wave (GW) signals. In this case, the terrestrial collider experiments and space-based GW detectors serve as complementary approaches to probe the model. | high energy physics phenomenology |
Electrical measurement of nano-scale devices and structures requires skills and hardware to make nano-contacts. Such measurements have been difficult for a number of laboratories due to the cost of probe stations and nano-probes. In the present work, we have demonstrated the possibility of assembling a low-cost probe station using a USB microscope (US $30) coupled with an in-house developed probe station. We have explored the effect of the shape of the etching electrodes on the geometry of the microprobes developed. The variation in the geometry of the copper wire electrode is observed to affect the probe length (0.58 mm to 2.15 mm) and its half cone angle (1.4 to 8.8 degrees). These developed probes were used to make contact on micro-patterned metal films and were used for electrical measurements along with a semiconductor parameter analyzer. These probes show low contact resistance (4 ohm) and follow ohmic behavior. Such probes can be used by laboratories involved in teaching, multidisciplinary research activities and Atomic Force Microscopy. | physics |
Objective: Epilepsy is a chronic neurological disorder characterized by the occurrence of spontaneous seizures, which affects about one percent of the world's population. Most of the current seizure detection approaches strongly rely on patient history records and thus fail in the patient-independent situation of detecting new patients. To overcome this limitation, we propose a robust and explainable epileptic seizure detection model that effectively learns from seizure states while eliminating inter-patient noise. Methods: A complex deep neural network model is proposed to learn the pure seizure-specific representation from the raw non-invasive electroencephalography (EEG) signals through adversarial training. Furthermore, to enhance the explainability, we develop an attention mechanism to automatically learn the importance of each EEG channel in the seizure diagnosis procedure. Results: The proposed approach is evaluated on the Temple University Hospital EEG (TUH EEG) database. The experimental results illustrate that our model outperforms the competitive state-of-the-art baselines with low latency. Moreover, the designed attention mechanism is demonstrated to be able to provide fine-grained information for pathological analysis. Conclusion and significance: We propose an effective and efficient patient-independent diagnosis approach for epileptic seizures based on raw EEG signals without manual feature engineering, which is a step toward the development of large-scale deployment for real-life use. | electrical engineering and systems science |
In this work, we examine the thermalon phase transition between AdS and dS vacua in Einstein-Gauss-Bonnet gravity by considering R\'{e}nyi statistics. The thermalon changes the asymptotic structure of spacetimes via the bubble nucleation of spherical thin shells which host a black hole in the interior. All relevant thermodynamical quantities are computed in terms of the R\'{e}nyi statistics in order to demonstrate the possible existence of the AdS to dS phase transition. In addition, we also comment on the behaviors of the phase transitions in the R\'{e}nyi statistics. | high energy physics theory |
We prove that analogues of the Hardy-Littlewood generalised twin prime conjecture for almost primes hold on average. Our main theorem establishes an asymptotic formula for the number of integers $n=p_1p_2 \leq X$ such that $n+h$ is a product of exactly two primes which holds for almost all $|h|\leq H$ with $\log^{19+\varepsilon}X\leq H\leq X^{1-\varepsilon}$, under a restriction on the size of $p_1$. Additionally, we consider correlations $n,n+h$ where $n$ is a prime and $n+h$ has exactly two prime factors, establishing an asymptotic formula which holds for almost all $|h| \leq H$ with $X^{1/6+\varepsilon}\leq H\leq X^{1-\varepsilon}$. | mathematics |
We show that a class of $\mathcal{PT}$ symmetric non-Hermitian Hamiltonians realizing the Yang-Lee edge singularity exhibits an entanglement transition in the long-time steady state evolved under the Hamiltonian. Such a transition is induced by a level crossing triggered by the critical point associated with the Yang-Lee singularity and hence is first-order in nature. At the transition, the entanglement entropy of the steady state jumps discontinuously from a volume-law to an area-law scaling. We exemplify this mechanism using a one-dimensional transverse field Ising model with additional imaginary fields, as well as the spin-1 Blume-Capel model and the three-state Potts model. We further make a connection to the forced-measurement induced entanglement transition in a Floquet non-unitary circuit subject to continuous measurements followed by post-selections. Our results demonstrate a new mechanism for entanglement transitions in non-Hermitian systems harboring a critical point. | condensed matter |
SUNNY is an Algorithm Selection (AS) technique originally tailored for Constraint Programming (CP). SUNNY makes it possible to schedule, from a portfolio of solvers, a subset of solvers to be run on a given CP problem. This approach has proved to be effective for CP problems, and its parallel version won many gold medals in the Open category of the MiniZinc Challenge -- the yearly international competition for CP solvers. In 2015, the ASlib benchmarks were released for comparing AS systems coming from disparate fields (e.g., ASP, QBF, and SAT) and SUNNY was extended to deal with generic AS problems. This led to the development of sunny-as2, an algorithm selector based on SUNNY for ASlib scenarios. A preliminary version of sunny-as2 was submitted to the Open Algorithm Selection Challenge (OASC) in 2017, where it turned out to be the best approach for the runtime minimization of decision problems. In this work, we present the technical advancements of sunny-as2, including: (i) wrapper-based feature selection; (ii) a training approach combining feature selection and neighbourhood size configuration; (iii) the application of nested cross-validation. We show how the performance of sunny-as2 varies depending on the considered AS scenarios, and we discuss its strengths and weaknesses. Finally, we also show how sunny-as2 improves on its preliminary version submitted to OASC. | computer science |
We investigate numerically $f(R)$ gravity effects on certain AdS/CFT tools, including holographic entanglement entropy and two-point correlation functions, for a charged single accelerated Anti-de Sitter black hole in four dimensions. We find that both holographic entanglement entropy and two-point correlation functions decrease with increasing acceleration parameter $A$, matching perfectly with the literature. Taking into account the $f(R)$ gravity parameter $\eta$, the decreasing behavior of the holographic quantities persists. However, we observe a transition-like point where the behavior of the holographic tools changes. Two regions meeting at such a transition-like point appear. In this nomenclature, the first one is associated with slowly accelerating black holes while the second one corresponds to a fast accelerating solution. In the first region, the holographic entanglement entropy and two-point correlation functions decrease with increasing $\eta$; the behavior is reversed in the second one. Moreover, a cross-comparison between the entropy and the holographic entanglement entropy is presented, providing another counter-example showing that these two quantities do not exhibit similar behaviors. | high energy physics theory |
Understanding, optimizing, and controlling the optical absorption process, exciton gemination, and electron-hole separation and conduction in low dimensional systems is a fundamental problem in materials science. However, robust and efficient methods capable of modelling the optical absorbance of low dimensional macromolecular systems and providing physical insight into the processes involved have remained elusive. We employ a highly efficient linear combination of atomic orbitals (LCAOs) representation of the Kohn--Sham (KS) orbitals within time dependent density functional theory (TDDFT) in the reciprocal space ($k$) and frequency ($\omega$) domains, as implemented within our LCAO-TDDFT-$k$-$\omega$ code, and apply the derivative discontinuity correction of the exchange functional $\Delta_x$ to the KS eigenenergies. In so doing we are able to provide a semi-quantitative description of the optical absorption, conductivity, and polarizability spectra for prototypical 0D, 1D, 2D, and 3D systems within the optical limit ($\|\bf{q}\|\to0^+$) as compared both to available measurements and to results from solving the Bethe-Salpeter equation with quasiparticle $G_0 W_0$ eigenvalues ($G_0 W_0$-BSE). Specifically, we consider 0D fullerene (C$_{60}$), 1D semiconducting (10,0) and metallic (10,10) single-walled carbon nanotubes (SWCNTs), 2D graphene (GR) and phosphorene (PN), and 3D rutile (R-TiO$_2$) and anatase (A-TiO$_2$). For each system, we also employ the spatially resolved electron-hole density to provide direct physical insight into the nature of their optical excitations. These results demonstrate the reliability, applicability, efficiency, and robustness of our LCAO-TDDFT-$k$-$\omega$ code, and open the pathway to the computational design of macromolecular systems for optoelectronic, photovoltaic, and photocatalytic applications $in$ $silico$. | condensed matter |
We show that, up to multiplication by a factor $\frac{1}{(cq;q)_{\infty}}$, the weighted words version of Capparelli's identity is a particular case of the weighted words version of Primc's identity. We prove this first using recurrences, and then bijectively. We also give finite versions of both identities. | mathematics |
In different many-body systems, the specific heat shows an anomalous temperature dependence which signals the onset of phase transitions or intrinsic features of the excitation spectrum. In a one-dimensional Bose gas, we reveal an intriguing anomaly, although phase transitions cannot occur and the complicated microscopic spectrum has so far not permitted a link with the thermodynamics. We find that the anomaly temperature is ruled by the dark soliton energy, corresponding to the maximum of the hole-excitation branch in the spectrum. We rely on the Bethe Ansatz to obtain the specific heat exactly and provide interpretations of the analytically tractable limits. The dynamic structure factor is computed with the Path Integral Monte Carlo method, gaining insight into the pattern of the excitations. This allows us to formulate a microscopic interpretation of the origin of the anomaly when the quantum and thermal effects are comparable. We provide indications for future observations and show how the anomaly can be employed for in-situ thermometry and for identifying different collisional regimes. The dark-soliton anomaly is a quantum simulator of other anomalies in solid-state, electronic and spin-chain systems. | condensed matter |
We formulate the optimal energy arbitrage problem for a piecewise linear cost function for energy storage devices using linear programming (LP). The LP formulation is based on the equivalent minimization of the epigraph. This formulation considers ramping and capacity constraints, charging and discharging efficiency losses of the storage, inelastic consumer load and local renewable generation in the presence of net-metering, which facilitates the selling of energy to the grid and incentivizes consumers to install renewable generation and energy storage. We consider the case where the consumer loads, electricity prices, and renewable generations at different instances are uncertain. These uncertain quantities are predicted using an Auto-Regressive Moving Average (ARMA) model and used in a model predictive control (MPC) framework to obtain the arbitrage decision at each instance. In the numerical results, we present a sensitivity analysis of storage performing arbitrage with batteries of varying ramp rates and different ratios of the selling and buying prices of electricity. | electrical engineering and systems science |
Prediction of human motions is key for safe navigation of autonomous robots among humans. In cluttered environments, several motion hypotheses may exist for a pedestrian, due to its interactions with the environment and other pedestrians. Previous works for estimating multiple motion hypotheses require a large number of samples which limits their applicability in real-time motion planning. In this paper, we present a variational learning approach for interaction-aware and multi-modal trajectory prediction based on deep generative neural networks. Our approach can achieve faster convergence and requires significantly fewer samples compared to state-of-the-art methods. Experimental results on real and simulation data show that our model can effectively learn to infer different trajectories. We compare our method with three baseline approaches and present performance results demonstrating that our generative model can achieve higher accuracy for trajectory prediction by producing diverse trajectories. | computer science |
In this work, we analyze the inertial migration of an electrophoretic particle in a 2-D Poiseuille flow with an electric field applied parallel to the walls. For a thin electrical double layer, the particle exhibits a slip-driven electrokinetic motion along the direction of the applied electric field, which causes the particle to lead or lag the flow (depending on its surface charge). The fluid disturbance caused by this slip-driven motion is characterized by a rapidly decaying source-dipole field which alters the inertial lift on the particle. We determine this inertial lift using the reciprocal theorem. Assuming no wall effects, we derive an analytical expression for a phoretic-lift which captures the modification to the inertial lift due to electrophoresis. We also take wall effects into account and find that the analytical expression is valid away from the walls. We find that for a leading particle, the phoretic-lift acts towards the regions of high shear (i.e. walls), while the reverse is true for a lagging particle. Using an order-of-magnitude analysis, we obtain different components which contribute to the inertial force and classify them on the basis of the interactions from which they emerge. We show that the dominant contribution to the phoretic-lift originates from the interaction of slip-driven source-dipole field with the stresslet field (generated due to particle's resistance to strain in the background flow). Furthermore, to contrast the slip-driven phenomenon from a force-driven phenomenon in terms of their influence on the inertial migration, we also study a non-neutrally buoyant particle. We show that the gravitational effects alter the inertial lift primarily through the interaction of background shear with buoyancy induced stokeslet field. | physics |
We introduce a special class of bimetric theories with preserved classical energy conditions at the quantized level. Our theory solves many open questions in physics such as the arrow of time, the matter-antimatter asymmetry, the weakness of gravity and the hierarchy problem. Moreover, it gives a logical explanation of the probabilistic nature of quantum mechanics through the construction of bimetric quantum mechanics. | high energy physics theory |
In multiagent dynamical systems, privacy protection corresponds to avoiding the disclosure of the initial states of the agents while accomplishing a distributed task. The system-theoretic framework described in this paper for this purpose, denoted dynamical privacy, relies on introducing output maps which act as masks, rendering the internal states of an agent indiscernible by the other agents as well as by external agents monitoring all communications. Our output masks are local (i.e., decided independently by each agent), time-varying functions asymptotically converging to the true states. The resulting masked system is also time-varying, and has the original unmasked system as its limit system. When the unmasked system has a globally exponentially stable equilibrium point, it is shown in the paper that the masked system has the same point as a global attractor. It is also shown that the existence of equilibrium points in the masked system is not compatible with dynamical privacy. Application of dynamical privacy to popular examples of multiagent dynamics, such as models of social opinions, average consensus and synchronization, is investigated in detail. | computer science |
We consider the quantum traversal time of an incident wave packet across a potential well using the theory of quantum time of arrival (TOA)-operators. This is done by constructing the corresponding TOA-operator across a potential well via quantization. The expectation value of the potential well TOA-operator is compared to the free particle case for the same incident wave packet. The comparison yields a closed-form expression of the quantum well traversal time which explicitly shows the classical contributions of the positive and negative momentum components of the incident wave packet and a purely quantum mechanical contribution significantly dependent on the well depth. An incident Gaussian wave packet is then used as an example. It is shown that for shallow potential wells, the quantum well traversal time approaches the classical traversal time across the well region when the incident wave packet is spatially broad and approaches the expected quantum free particle traversal time when the wave packet is localized. For deep potential wells, the quantum traversal time oscillates from positive to negative implying that the wave packet can be advanced or delayed. | quantum physics |
Films of titanate nanosheets (approx. 1.8-nm layer thickness and 200-nm size) having a lamellar structure can form electrolyte-filled semi-permeable channels containing tetrabutylammonium cations. By evaporation of a colloidal solution, persistent deposits are readily formed with approx. 10 micrometer thickness on a 6-micrometer-thick poly(ethylene-terephthalate) (PET) substrate with a 20 micrometer diameter microhole. When immersed in aqueous solution, the titanate nanosheets exhibit a p.z.c. of -37 mV, consistent with the formation of a cation conducting (semi-permeable) deposit. With a sufficiently low ionic strength in the aqueous electrolyte, ionic current rectification is observed (cationic diode behaviour). Currents can be dissected into (i) electrolyte cation transport, (ii) electrolyte anion transport and (iii) water heterolysis causing additional proton transport. For all types of electrolyte cations, a water heterolysis mechanism is observed. For Ca2+ and Mg2+ ions, water heterolysis causes ion current blocking, presumably due to localised hydroxide-induced precipitation processes. Aqueous NBu4+ is shown to invert the diode effect (from cationic to anionic diode). The potential for applications in desalination and/or ion sensing is discussed. | physics |
The volume of fluid (VOF) method is a sharp-interface method employed for simulations of two-phase flows. The interface in VOF is usually represented using piecewise linear line segments in each computational grid cell based on the volume fraction field. While VOF for Cartesian coordinates conserves mass exactly, existing algorithms do not show machine-precision mass conservation for axisymmetric coordinate systems. In this work, we propose analytic formulae for interface reconstruction in axisymmetric coordinates, similar to those proposed by Scardovelli and Zaleski (J. Comput. Phys. 2000) for Cartesian coordinates. We also propose modifications to the existing advection schemes in VOF for axisymmetric coordinates to obtain higher accuracy in mass conservation. | physics |
The large-scale overdensities of galaxies at z~2-7 known as protoclusters are believed to be the sites of cluster formation, and deep, wide survey projects such as the Large Synoptic Survey Telescope (LSST) and the Wide Field Infrared Survey Telescope (WFIRST) will deliver significant numbers of these interesting structures. Spectroscopic confirmation and interpretation of these targets, however, is still challenging, and will require wide-field multi-plexed spectroscopy on >20 m-class telescopes in the optical and near-infrared. In the coming decade, detailed studies of protoclusters will enable us, for the first time, to systematically connect these cluster progenitors in the early universe to their virialized counterparts at lower redshifts. This will allow us to address observationally the formation of brightest cluster galaxies and other cluster galaxy populations, the buildup of the intra-cluster light, the chemical enrichment history of the intra-cluster medium, and the formation and triggering of supermassive black holes in dense environments, all of which are currently almost exclusively approached either through the fossil record in clusters or through numerical simulations. Furthermore, at the highest redshifts (z~5-10), these large extended overdensities of star-forming galaxies are believed to have played an important role in the reionization of the universe, which needs to be tested by upcoming experiments. Theory and recent simulations also suggest important links between these overdensities and the formation of supermassive black holes, but observational evidence is still lacking. In this white paper we review our current understanding of this important phase of galaxy cluster history that will be explored by the next generation of large aperture ground-based telescopes GMT and TMT. | astrophysics |
Using deep neural networks to solve PDEs has attracted a lot of attention recently. However, the understanding of why the deep learning method works falls far behind its empirical success. In this paper, we provide a rigorous numerical analysis of the deep Ritz method (DRM) \cite{wan11} for second order elliptic equations with Neumann boundary conditions. We establish the first nonasymptotic convergence rate in the $H^1$ norm for DRM using deep networks with $\mathrm{ReLU}^2$ activation functions. In addition to providing a theoretical justification of DRM, our study also sheds light on how to set the hyper-parameters of depth and width to achieve the desired convergence rate in terms of the number of training samples. Technically, we derive bounds on the approximation error of deep $\mathrm{ReLU}^2$ networks in the $H^1$ norm and on the Rademacher complexity of the non-Lipschitz composition of the gradient norm and a $\mathrm{ReLU}^2$ network, both of which are of independent interest. | mathematics |
We study the effect of non-Abelian T-duality (NATD) on D-brane solutions of type II supergravity. Knowledge of the full interpolating brane solution allows us to track the brane charges and the corresponding brane configurations, thus providing justification for brane setups previously proposed in the literature and for the common lore that Dp brane solutions give rise to D(p+1)-D(p+3)-NS5 backgrounds under SU(2) NATD transverse to the brane. In brane solutions where spacetime is empty and flat at spatial infinity before the NATD, the spatial infinity of the NATD is universal, i.e. independent of the initial brane configuration. Furthermore, it gives enough information to determine the ranges of all coordinates after NATD. In the more complicated examples of the D2 branes considered here, where spacetime is not asymptotically flat before NATD, the interpretation of the dual solutions remains unclear. In the case of supersymmetric D2 branes arising from M2 reductions to IIA on Sasaki-Einstein seven-manifolds, we explicitly verify that the solution obeys the appropriate generalized spinor equations for a supersymmetric domain wall in four dimensions. We also investigate the existence of supersymmetric mass-deformed D2 brane solutions. | high energy physics theory |
We study proton decay in a six-dimensional orbifold GUT model with gauge group $SO(10)\times U(1)_A$. Magnetic $U(1)_A$ flux in the compact dimensions determines the multiplicity of quark-lepton generations, and it also breaks supersymmetry by giving universal GUT scale masses to scalar quarks and leptons. The model can successfully account for quark and lepton masses and mixings. Our analysis of proton decay leads to the conclusion that the proton lifetime must be close to the current experimental lower bound. Moreover, we find that the branching ratios for the decay channels $p \rightarrow e^+\pi^0$ and $p\rightarrow \mu^+\pi^0$ are of similar size, in fact the latter one can even be dominant. This is due to flavour non-diagonal couplings of heavy vector bosons together with large off-diagonal Higgs couplings, which appears to be a generic feature of flux compactifications. | high energy physics phenomenology |
The ice-rich dwarf planet Ceres is the largest object in the main asteroid belt and is thought to have a brine or mud layer at a depth of tens of kilometers. Furthermore, recent surface deposits of brine-sourced material imply shallow feeder structures such as sills or dikes. Inductive sounding of Ceres can be performed using the solar wind as a source, as was done for the Moon during Apollo. However, the magnetotelluric method -- measuring both electric and magnetic fields at the surface -- is not sensitive to plasma effects that were experienced for Apollo, which used an orbit-to-surface magnetic transfer function. The highly conductive brine targets are readily separable from the resistive ice and rock interior, such that the depth to deep and shallow brines can be assessed simultaneously. The instrumentation will be tested on the Moon in 2023 and is ready for implementation on a Ceres landed mission. | astrophysics |
Spectral modification of energetic magnetar flares by resonant cyclotron scattering (RCS) is considered. During energetic flares, photons emitted from the magnetically-trapped fireball near the stellar surface should resonantly interact with magnetospheric electrons or positrons. We show by a simple thought experiment that such scattering particles are expected to move at mildly relativistic speeds along closed magnetic field lines, which would slightly shift the incident photon energy due to the Doppler effect. We develop a toy model for the spectral modification by a single RCS that incorporates both a realistic seed photon spectrum from the trapped fireball and the velocity field of particles, which is unique to the flaring magnetosphere. We show that our spectral model can be effectively characterized by a single parameter; the effective temperature of the fireball, which enables us to fit observed spectra with low computational cost. We demonstrate that our single scattering model is in remarkable agreement with Swift/BAT data of intermediate flares from SGR 1900+14, corresponding to effective fireball temperatures of $T_{\rm eff}=6$-$7$ keV, whereas BeppoSAX/GRBM data of giant flares from the same source may need more elaborate models including the effect of multiple scatterings. Nevertheless, since there is no standard physically-motivated model for magnetar flare spectra, our model could be a useful tool to study magnetar bursts, shedding light on the hidden properties of the flaring magnetosphere. | astrophysics |
For semilinear stochastic evolution equations whose coefficients are more general than the classical globally Lipschitz ones, we present results on the strong convergence rates of numerical discretizations. Their proof provides a new approach to strong convergence analysis of numerical discretizations for a large family of second order parabolic stochastic partial differential equations driven by space-time white noises. We apply these results to the stochastic advection-diffusion-reaction equation with a gradient term and multiplicative white noise, and show that the strong convergence rate of a fully discrete scheme constructed by spectral Galerkin approximation and explicit exponential integrator is exactly $\frac12$ in space and $\frac14$ in time. Compared with the optimal regularity of the mild solution, it indicates that the spectral Galerkin approximation is superconvergent and the convergence rate of the exponential integrator is optimal. Numerical experiments support our theoretical analysis. | mathematics |
This work presents comprehensive results on detecting, at an early stage, pancreatic neuroendocrine tumors (PNETs), a group of endocrine tumors arising in the pancreas which are the second most common type of pancreatic cancer, by checking abdominal CT scans. To the best of our knowledge, this task has not been studied before as a computational task. To provide radiologists with tumor locations, we adopt a segmentation framework to classify CT volumes by checking whether a sufficient number of voxels is segmented as tumors. To quantitatively analyze our method, we collect and label, voxel by voxel, a new abdominal CT dataset containing $376$ cases with both arterial and venous phases available for each case, in which $228$ cases were diagnosed with PNETs while the remaining $148$ cases are normal; this is currently the largest dataset for PNETs to the best of our knowledge. In order to incorporate the rich knowledge of radiologists into our framework, we annotate the dilated pancreatic duct as well, which is regarded as a sign of high risk for pancreatic cancer. Quantitatively, our approach outperforms state-of-the-art segmentation networks and achieves a sensitivity of $89.47\%$ at a specificity of $81.08\%$, which indicates a potential direction to achieve a clinical impact related to cancer diagnosis by earlier tumor detection. | electrical engineering and systems science |
The COVID-19 pandemic has left its mark on the sports world, forcing a full stop of all sports-related activities in the first half of 2020. Football leagues were suddenly stopped and each country was hesitating between a relaunch of the competition and a premature ending. Some opted for the latter, taking as the final standing of the season the ranking at the moment the competition was interrupted. This decision has been perceived as unfair, especially by those teams who had remaining matches against easier opponents. In this paper, we introduce a tool to calculate in a fairer way the final standings of domestic leagues that have to stop prematurely: our Probabilistic Final Standing Calculator (PFSC). It is based on a stochastic model taking into account the results of the matches played and simulating the remaining matches, yielding the probabilities for the various possible final rankings. We have compared our PFSC with state-of-the-art prediction models, using previous seasons which we pretend to stop at different points in time. We illustrate our PFSC by showing how a probabilistic ranking of the French Ligue 1 in the stopped 2019-2020 season could have led to alternative, potentially fairer, decisions on the final standing. | statistics |
Quantum systems coupled to environments exhibit intricate dynamics. The master equation gives a Markov approximation of the dynamics, allowing for analytic and numerical treatments. It is ubiquitous in theoretical and applied quantum sciences. The accuracy of the master equation approximation has so far been proven only in the regime where time must not exceed an upper bound depending on the system-environment interaction strength (weak coupling regime). Here, we show that the Markov approximation is valid for fixed coupling strength and for all times. We also construct a new approximate Markovian dynamics -- a completely positive, trace-preserving semigroup -- which is asymptotically exact in time, to all orders in the coupling. | quantum physics |
Automatic airplane detection in aerial imagery has a variety of applications. Two of the significant challenges in this task are variations in the scale and direction of the airplanes. To address these challenges, we present a rotation- and scale-invariant airplane proposal generator. We call this generator symmetric line segments (SLS); it is developed based on the symmetric and regular boundaries of airplanes seen from the top view. Then, the generated proposals are used to train a deep convolutional neural network for removing non-airplane proposals. Since each airplane can have multiple SLS proposals, some of which are not aligned with the direction of the fuselage, we collect all proposals corresponding to one ground truth as a positive bag and the others as negative instances. To enable multiple-instance deep learning, we modify the loss function of the network to learn at least one instance from each positive bag as well as all negative instances. Finally, we employ non-maximum suppression to remove duplicate detections. Our experiments on the NWPU VHR-10 and DOTA datasets show that our method is a promising approach for automatic airplane detection in very high-resolution images. Moreover, as an extra achievement, we estimate the direction of the airplanes using box-level annotations. | computer science |
We show that Modified Newtonian Dynamics (MOND) predicts distinct galactic acceleration curve geometries in $g2$-space - the space of total observed centripetal accelerations $g_{\rm tot}$ vs the inferred Newtonian acceleration from baryonic matter $g_{\rm N}$ - and corresponding rotation speed curves: MOND modified gravity predicts cored geometries for isolated galaxies while MOND modified inertia yields neutral geometries, i.e. neither cuspy nor cored, based on a cusp-core classification of galaxy rotation curve geometry in $g2$-space - rather than on inferred DM density profiles. The classification can be applied both to DM and modified gravity models as well as to data and implies a {\it cusp-core} challenge for MOND from observations, for example of cuspy galaxies, which is different from the so-called cusp-core problem of dark matter (DM). We illustrate this challenge by a number of cuspy and also cored galaxies from the SPARC rotation curve database, which deviate significantly from the MOND modified gravity and MOND modified inertia predictions. | astrophysics |
We outline a framework for multiple imputation of nonignorable item nonresponse when the marginal distributions of some of the variables with missing values are known. In particular, our framework ensures that (i) the completed datasets result in design-based estimates of totals that are plausible, given the margins, and (ii) the completed datasets maintain associations across variables as posited in the imputation models. To do so, we propose an additive nonignorable model for nonresponse, coupled with a rejection sampling step. The rejection sampling step favors completed datasets that result in design-based estimates that are plausible given the known margins. We illustrate the framework using simulations with stratified sampling. | statistics |
Electronic nematicity is often found in unconventional superconductors, suggesting its relevance for electronic pairing. In the strongly hole-doped iron-based superconductors, the symmetry channel and strength of the nematic fluctuations, as well as the possible presence of long-range nematic order, remain controversial. Here, we address these questions using transport measurements under elastic strain. By decomposing the strain response into the appropriate symmetry channels, we demonstrate the emergence of a giant in-plane symmetric contribution, associated with the growth of both strong electronic correlations and the sensitivity of these correlations to strain. We find weakened remnants of the nematic fluctuations that are present at optimal doping, but no change in the symmetry channel of nematic fluctuations with hole doping. Furthermore, we find no evidence for a nematic-ordered state in the AFe$_2$As$_2$(A = K, Rb, Cs) superconductors. These results revise the current understanding of nematicity in hole-doped iron-based superconductors. | condensed matter |
We study the Standard Model (SM) in Weyl conformal geometry. This embedding is truly minimal, {\it with no new fields} beyond the SM spectrum and Weyl geometry. The action inherits a gauged scale symmetry $D(1)$ (known as Weyl gauge symmetry) from the underlying geometry. The associated Weyl quadratic gravity undergoes spontaneous breaking of $D(1)$ by a geometric Stueckelberg mechanism in which the Weyl gauge field ($\omega_\mu$) acquires mass by "absorbing" the spin-zero mode of the $\tilde R^2$ term in the action. This mode also generates the Planck scale. The Einstein-Hilbert action emerges in the broken phase. In the presence of the SM, this mechanism receives corrections (from the Higgs) and it can induce electroweak (EW) symmetry breaking. The Higgs field has direct couplings to the Weyl gauge field while the SM fermions only acquire such couplings following the kinetic mixing of the gauge fields of $D(1)\times U(1)_Y$. One consequence is that part of the mass of $Z$ boson is not due to the usual Higgs mechanism, but to its mixing with massive $\omega_\mu$. Precision measurements of $Z$ mass set lower bounds on the mass of $\omega_\mu$ which can be light (few TeV), depending on the mixing angle and Weyl gauge coupling. The Higgs mass and the EW scale are proportional to the vev of the Stueckelberg field. Inflation is driven by the Higgs field which in the early Universe can in principle have a geometric origin by Weyl vector fusion. The dependence of the tensor-to-scalar ratio $r$ on the spectral index $n_s$ is similar to that in Starobinsky inflation but mildly shifted to lower $r$ by the Higgs non-minimal coupling to Weyl geometry. | high energy physics phenomenology |
In this paper, we explore neural network-based strategies for performing symbol detection in a MIMO-OFDM system. Building on a reservoir computing (RC)-based approach towards symbol detection, we introduce a symmetric and decomposed binary decision neural network to take advantage of the structure knowledge inherent in the MIMO-OFDM system. To be specific, the binary decision neural network is added in the frequency domain utilizing the knowledge of the constellation. We show that the introduced symmetric neural network can decompose the original $M$-ary detection problem into a series of binary classification tasks, thus significantly reducing the neural network detector complexity while offering good generalization performance with limited training overhead. Numerical evaluations demonstrate that the introduced hybrid RC-binary decision detection framework performs close to maximum likelihood model-based symbol detection methods in terms of symbol error rate in the low SNR regime with imperfect channel state information (CSI). | electrical engineering and systems science |
The continuous quantum measurement within the probability representation of quantum mechanics is discussed. The partial classical propagator of the symplectic tomogram associated to a particular measurement outcome is introduced, for which the representation of a continuous measurement through the restricted path integral is applied. The classical propagator for the system undergoing a non-selective measurement is derived by summing these partial propagators over the entire outcome set. The elaborated approach is illustrated by considering non-selective position measurement of a quantum oscillator and a particle. | quantum physics |
In this work, a deep learning-based method for log-likelihood ratio (LLR) lossy compression and quantization is proposed, with emphasis on a single-input single-output uncorrelated fading communication setting. A deep autoencoder network is trained to compress, quantize and reconstruct the bit log-likelihood ratios corresponding to a single transmitted symbol. Specifically, the encoder maps to a latent space with dimension equal to the number of sufficient statistics required to recover the inputs - equal to three in this case - while the decoder aims to reconstruct a noisy version of the latent representation with the purpose of modeling quantization effects in a differentiable way. Simulation results show that, when applied to a standard rate-1/2 low-density parity-check (LDPC) code, a finite precision compression factor of nearly three times is achieved when storing an entire codeword, with an incurred loss of performance lower than 0.1 dB compared to straightforward scalar quantization of the log-likelihood ratios. | computer science |
The current generation of short baseline neutrino experiments is approaching intrinsic source limitations in the knowledge of flux, initial neutrino energy and flavor. A dedicated facility based on conventional accelerator techniques and existing infrastructures designed to overcome these impediments would have a remarkable impact on the entire field of neutrino oscillation physics. It would improve by about one order of magnitude the precision on $\nu_\mu$ and $\nu_e$ cross sections, enable the study of electroweak nuclear physics at the GeV scale with unprecedented resolution and advance searches for physics beyond the three-neutrino paradigm. In turn, these results would enhance the physics reach of the next generation long baseline experiments (DUNE and Hyper-Kamiokande) on CP violation and their sensitivity to new physics. In this document, we present the physics case and technology challenge of high precision neutrino beams based on the results achieved by the ENUBET Collaboration in 2016-2018. We also set the R&D milestones to enable the construction and running of this new generation of experiments well before the start of the DUNE and Hyper-Kamiokande data taking. We discuss the implementation of this new facility at three different levels of complexity: $\nu_\mu$ narrow band beams, $\nu_e$ monitored beams and tagged neutrino beams. We also consider a site-specific implementation based on the CERN-SPS proton driver providing a fully controlled neutrino source to the ProtoDUNE detectors at CERN. | physics |
We apply the ${\tt weirddetector}$, a nonparametric signal detection algorithm based on phase dispersion minimization, in a search for low duty-cycle periodic signals in the Transiting Exoplanet Survey Satellite (${\it TESS}$) photometry. Our approach, in contrast to commonly used model-based approaches specifically for flagging transits, eclipsing binaries, or other similarly periodic events, makes minimal assumptions about the shape of a periodic signal, with the goal of finding "weird" signals of unexpected or arbitrary shape. In total, 248,301 ${\it TESS}$ sources from the first-year Southern sky survey are run through the ${\tt weirddetector}$, of which we manually inspect the top 21,500 for periodicity. To minimize false-positives, we here only report on the upper decile in terms of signal score, a sample for which we obtain 97% recall of ${\it TESS}$ eclipsing binaries and 62% of the TOIs. In our sample, we find 377 previously unreported periodic signals, for which we make a first-pass assignment that 26 are ultra-short periods ($<0.3$ d), 313 are likely eclipsing binaries, 28 appear planet-like, and 10 are miscellaneous signals. | astrophysics |
Multi-messenger emissions from SN1987A and GW170817/GRB170817A suggest a Universe rife with multi-messenger transients associated with black holes and neutron stars. For LIGO-Virgo, soon to be joined by KAGRA, these observations promise unprecedented opportunities to probe the central engines of core-collapse supernovae (CC-SNe) and gamma-ray bursts. Compared to neutron stars, central engines powered by black hole-disk or torus systems may be of particular interest to multi-messenger observations due to the relatively large energy reservoir $E_J$ in angular momentum, up to 29\% of the total mass in the Kerr metric. These central engines are expected from relatively massive stellar progenitors and compact binary coalescence involving a neutron star. We review prospects of multi-messenger emission by catalytic conversion of $E_J$ by a non-axisymmetric disk or torus. Observational support for this radiation process is found in a recent identification of ${\cal E}\simeq (3.5\pm1)\%M_\odot c^2$ in Extended Emission to GW170817 at a significance of 4.2\,$\sigma$ concurrent with GRB170817A. The prospect of similar emissions from nearby CC-SNe justifies the need for all-sky blind searches for long-duration bursts by heterogeneous computing. | astrophysics |
Morphological reconstruction of dendritic spines from fluorescent microscopy is a critical open problem in neuro-image analysis. Existing segmentation tools are ill-equipped to handle thin spines with long, poorly illuminated neck membranes. We address this issue, and introduce an unsupervised path prediction technique based on a stochastic framework which seeks the optimal solution from a path-space of possible spine neck reconstructions. Our method is specifically designed to reduce bias due to outliers, and is adept at reconstructing challenging shapes from images plagued by noise and poor contrast. Experimental analyses on two photon microscopy data demonstrate the efficacy of our method, where an improvement of 12.5% is observed over the state-of-the-art in terms of mean absolute reconstruction error. | electrical engineering and systems science |
A spectrally positive additive L\'evy field (spaLf) is a multidimensional field obtained as the sum $\mathbf{X}_{\rm t}={\rm X}^{(1)}_{t_1}+{\rm X}^{(2)}_{t_2}+\dots+{\rm X}^{(d)}_{t_d}$, ${\rm t}=(t_1,\dots,t_d)\in\mathbb{R}_+^d$, where ${\rm X}^{(j)}={}^t (X^{1,j},\dots,X^{d,j})$, $j=1,\dots,d$, are $d$ independent $\mathbb{R}^d$-valued L\'evy processes issued from 0, such that $X^{i,j}$ is non decreasing for $i\neq j$ and $X^{j,j}$ is spectrally positive. It can also be expressed as $\mathbf{X}_{\rm t}=\mathbb{X}_{\rm t}{\bf 1}$, where ${\bf 1}={}^t(1,1,\dots,1)$ and $\mathbb{X}_{\rm t}=(X^{i,j}_{t_j})_{1\leq i,j\leq d}$. The main interest of spaLf's lies in the Lamperti representation of multitype continuous state branching processes. In this work, we study the law of the first passage times $\mathbf{T}_{\rm r}$ of such fields at levels $-{\rm r}$, where ${\rm r}\in\mathbb{R}_+^d$. We prove that the field $\{(\mathbf{T}_{\rm r},\mathbb{X}_{\mathbf{T}_{\rm r}}),{\rm r}\in\mathbb{R}_+^d\}$ has stationary and independent increments and we describe its law in terms of that of the spaLf $\mathbf{X}$. In particular, the Laplace exponent of $(\mathbf{T}_{\rm r},\mathbb{X}_{\mathbf{T}_{\rm r}})$ solves a functional equation driven by the Laplace exponent of $\mathbf{X}$. This equation extends to higher dimensions a classical fluctuation identity satisfied by the Laplace exponents of the ladder processes. Then we give an expression of the distribution of $\{(\mathbf{T}_{\rm r},\mathbb{X}_{\mathbf{T}_{\rm r}}),{\rm r}\in\mathbb{R}_+^d\}$ in terms of the distribution of $\{\mathbb{X}_{\rm t},{\rm t}\in\mathbb{R}_+^d\}$ by means of a Kemperman-type formula, well-known for spectrally positive L\'evy processes. | mathematics |
CO~Cam (TIC 160268882) is the second ``single-sided pulsator'' to be discovered. These are stars where one hemisphere pulsates with a significantly higher amplitude than the other side of the star. CO~Cam is a binary star comprised of an Am $\delta$~Sct primary star with $T_{\rm eff} = 7070 \pm 150$\,K, and a spectroscopically undetected G main-sequence secondary star. The dominant pulsating side of the primary star is centred on the L$_1$ point. We have modelled the spectral energy distribution combined with radial velocities, and independently the {\em TESS} light curve combined with radial velocities. Both of these give excellent agreement and robust system parameters for both stars. The $\delta$~Sct star is an oblique pulsator with at least four low radial overtone (probably) f~modes with the pulsation axis coinciding with the tidal axis of the star, the line of apsides. Preliminary theoretical modelling indicates that the modes must produce much larger flux perturbations near the L$_1$ point, although this is difficult to understand because the pulsating star does not come near to filling its Roche lobe. More detailed models of distorted pulsating stars should be developed. These newly discovered single-sided pulsators offer new opportunities for astrophysical inference from stars that are oblique pulsators in close binary stars. | astrophysics |
We homogeneously analyse $\sim 3.2\times 10^5$ photometric measurements for $\sim 1100$ transit lightcurves belonging to $17$ exoplanet hosts. The photometric data cover $16$ years (2004--2019) and include amateur and professional observations. Old archival lightcurves were reprocessed using up-to-date exoplanetary parameters and empirically debiased limb-darkening models. We also derive self-consistent transit and radial-velocity fits for $13$ targets. We confirm the nonlinear TTV trend in the WASP-12 data at a high significance, and with a consistent magnitude. However, Doppler data reveal hints of a radial acceleration of about $(-7.5\pm 2.2)$~m/s/yr, indicating the presence of unseen distant companions, and suggesting that roughly $10$ per cent of the observed TTV was induced via the light-travel (or Roemer) effect. For WASP-4, a similar TTV trend suspected after the recent TESS observations appears controversial and model-dependent. It is not supported by our homogeneous TTV sample, including $10$ ground-based EXPANSION lightcurves obtained in 2018 simultaneously with TESS. Even if the TTV trend itself does exist in WASP-4, its magnitude and tidal nature are uncertain. Doppler data cannot entirely rule out the Roemer effect induced by possible distant companions. | astrophysics |
We present a sample of luminous red-sequence galaxies to study the large-scale structure in the fourth data release of the Kilo-Degree Survey. The selected galaxies are defined by a red-sequence template, in the form of a data-driven model of the colour-magnitude relation conditioned on redshift. In this work, the red-sequence template is built using the broad-band optical+near infrared photometry of KiDS-VIKING and the overlapping spectroscopic data sets. The selection process involves estimating the red-sequence redshifts, assessing the purity of the sample, and estimating the underlying redshift distributions of redshift bins. After performing the selection, we mitigate the impact of survey properties on the observed number density of galaxies by assigning photometric weights to the galaxies. We measure the angular two-point correlation function of the red galaxies in four redshift bins, and constrain the large scale bias of our red-sequence sample assuming a fixed $\Lambda$CDM cosmology. We find consistent linear biases for two luminosity-threshold samples (dense and luminous). We find that our constraints are well characterized by the passive evolution model. | astrophysics |
We investigate the new contributions to the parameters $g_L$ and $g_R$ of the $Z b \bar b$ vertex in a multi-Higgs-doublet model (MHDM). We emphasize that those contributions generally worsen the fit of those parameters to the experimental data. We propose a solution to this problem, wherein $g_R$ has the opposite sign from the one predicted by the Standard Model; this solution, though, necessitates light scalars and large Yukawa couplings in the MHDM. | high energy physics phenomenology |
Charts are an essential part of both graphicacy (graphical literacy) and statistical literacy. As chart understanding has become increasingly relevant in data science, automating chart analysis by processing raster images of the charts has become a significant problem. Automated chart reading involves data extraction and contextual understanding of the data from chart images. In this paper, we perform the first step of determining the computational model of chart images for data extraction for selected chart types, namely, bar charts and scatter plots. We demonstrate the use of positive semidefinite second-order tensor fields as an effective model. We identify an appropriate tensor field as the model and propose a methodology for the use of its degenerate point extraction for data extraction from chart images. Our results show that tensor voting is effective for data extraction from bar charts, scatter plots, and histograms (the last being a special case of bar charts). | computer science |
Unmanned aerial vehicles (UAVs) can provide an effective solution for improving the coverage, capacity, and the overall performance of terrestrial wireless cellular networks. In particular, UAV-assisted cellular networks can meet the stringent performance requirements of the fifth generation new radio (5G NR) applications. In this paper, the problem of energy-efficient resource allocation in UAV-assisted cellular networks is studied under the reliability and latency constraints of 5G NR applications. The framework of ruin theory is employed to allow solar-powered UAVs to capture the dynamics of harvested and consumed energies. First, the surplus power of every UAV is modeled, and then it is used to compute the probability of ruin of the UAVs. The probability of ruin denotes the vulnerability of draining out the power of a UAV. Next, the probability of ruin is used for efficient user association with each UAV. Then, power allocation for 5G NR applications is performed to maximize the achievable network rate using the water-filling approach. Simulation results demonstrate that the proposed ruin-based scheme can enhance the flight duration by up to 61% and the number of served users in a UAV flight by up to 58%, compared to a baseline SINR-based scheme. | computer science |
This note contains discussions on the entanglement entropy and mutual information of a strongly coupled field theory with a critical point which has a holographic dual. We investigate analytically, in specific regimes of parameters, how these non-local operators behave near the critical point. Interestingly, we observe that although the mutual information is constant at the critical point, its slope shows a power-law divergence in the vicinity of the critical point. We show that the leading behavior of mutual information at and near the critical point could yield a set of critical exponents if we regard it as an order parameter. Our result for this set of static critical exponents is (1/2,1/2,1/2,2), which is identical to the one calculated via the thermodynamic quantities. Hence, it suggests that, besides the numerous merits of mutual information, this quantity also captures the critical behavior of the underlying field theory and could be used as a proper measure to probe the phase structure associated with strongly coupled systems. | high energy physics theory |
Interference of a discrete quantum state with a continuum of states gives rise to asymmetric line shapes that have been observed in measurements across nuclear, atomic, molecular as well as solid-state physics. Information about the interference is captured by some but not all measurable quantities. For example, for quantum resonances arising in single channel scattering, the signature of such interference may disappear due to the orthogonality of partial waves. Here, we show that probing the angular dependence of the cross section allows for unveiling the coherence between the partial waves which leads to the appearance of the characteristic asymmetric Fano profiles. We observe a shift of the resonance position with observation angle, in excellent agreement with theoretical predictions from full quantum scattering calculations. Using a model description for the interference between the resonant and background states, we extract the relative phase responsible for the characteristic Fano-like profile from our experimental measurements. | quantum physics |
We study Dyakonov surface waveguide modes in a waveguide represented by an interface of two anisotropic media confined between two air half-spaces. We analyze such modes in terms of perturbation theory in the approximation of weak anisotropy. We show that in contrast to conventional Dyakonov surface waves that decay monotonically with distance from the interface, Dyakonov waveguide modes can have local maxima of the field intensity away from the interface. We confirm our analytical results by comparing them with full-wave electromagnetic simulations. We believe that this work can bring new ideas in the research of Dyakonov surface waves. | physics |
In this paper, we consider enumeration problems for edge-distinct and vertex-distinct Eulerian trails. Here, two Eulerian trails are \emph{edge-distinct} if the edge sequences are not identical, and they are \emph{vertex-distinct} if the vertex sequences are not identical. As the main result, we propose optimal enumeration algorithms for both problems, that is, these algorithms run in $\mathcal{O}(N)$ total time, where $N$ is the number of solutions. Our algorithms are based on the reverse search technique introduced by [Avis and Fukuda, DAM 1996], and the push-out amortization technique introduced by [Uno, WADS 2015]. | computer science |
Many decisions involve choosing an uncertain course of actions in deep and wide decision trees, as when we plan to visit an exotic country for vacation. In these cases, exhaustive search for the best sequence of actions is not tractable due to the large number of possibilities and limited time or computational resources available to make the decision. Therefore, planning agents need to balance breadth (exploring many actions at each level of the tree) and depth (exploring many levels in the tree) to allocate optimally their finite search capacity. We provide efficient analytical solutions and numerical analysis to the problem of allocating finite sampling capacity in one shot to large decision trees. We find that in general the optimal policy is to allocate few samples per level so that deep levels can be reached, thus favoring depth over breadth search. In contrast, in poor environments and at low capacity, it is best to broadly sample branches at the cost of not sampling deeply, although this policy is marginally better than deep allocations. Our results provide a theoretical foundation for the optimality of deep imagination for planning and show that it is a generally valid heuristic that could have evolved from the finite constraints of cognitive systems. | statistics |
Deep reinforcement learning has been recognized as an efficient technique to design optimal strategies for different complex systems without prior knowledge of the control landscape. To achieve a fast and precise control for quantum systems, we propose a novel deep reinforcement learning approach by constructing a curriculum consisting of a set of intermediate tasks defined by a fidelity threshold. Tasks among a curriculum can be statically determined using empirical knowledge or adaptively generated with the learning process. By transferring knowledge between two successive tasks and sequencing tasks according to their difficulties, the proposed curriculum-based deep reinforcement learning (CDRL) method enables the agent to focus on easy tasks in the early stage, then move onto difficult tasks, and eventually approaches the final task. Numerical simulations on closed quantum systems and open quantum systems demonstrate that the proposed method exhibits improved control performance for quantum systems and also provides an efficient way to identify optimal strategies with fewer control pulses. | quantum physics |
A path-planning algorithm for connected and non-connected automated road vehicles on multilane motorways is derived from the opportune formulation of an optimal control problem. In this framework, the objective function to be minimized contains appropriate respective terms to reflect: the goals of vehicle advancement; passenger comfort; and avoidance of collisions with other vehicles, of road departures and of negative speeds. Connectivity implies that connected vehicles are able to exchange with each other (V2V) or the infrastructure (V2I), real-time information about their last generated path. For the numerical solution of the optimal control problem, an efficient feasible direction algorithm is used. To ensure high-quality local minima, a simplified Dynamic Programming algorithm is also conceived to deliver the initial guess trajectory for the feasible direction algorithm. Thanks to low computation times, the approach is readily executable within a model predictive control (MPC) framework. The proposed MPC-based approach is embedded within the Aimsun microsimulation platform, which enables the evaluation of a plethora of realistic vehicle driving and advancement scenarios. Results obtained on a multilane motorway stretch indicate higher efficiency of the optimally controlled vehicles in driving closer to their desired speed, compared to ordinary Aimsun vehicles. Increased penetration rates of automated vehicles are found to increase the efficiency of the overall traffic flow, benefiting manual vehicles as well. Moreover, connected controlled vehicles appear to be more efficient compared to the corresponding non-connected controlled vehicles, due to the improved real-time information and short-term prediction. | electrical engineering and systems science |
This chapter provides an overview of coded caching in the context of heterogeneous wireless networks. We begin by briefly describing the key idea behind coded caching and then discuss in detail the impact of various aspects such as non-uniform content popularity, multiple cache access, and interference. | computer science |
In the current study, model expressions for fifth-order velocity moments obtained from the truncated Gram-Charlier series expansions model for a turbulent flow field probability density function are validated using data from direct numerical simulation (DNS) of a planar turbulent flow in a strained channel. Simplicity of the model expressions, the lack of unknown coefficients, and their applicability to non-Gaussian turbulent flows make this approach attractive to use for closing turbulent models based on the Reynolds-averaged Navier-Stokes equations. The study confirms the validity of the model expressions. It also shows that the imposed flow deformation improves the agreement between the model and DNS profiles for the fifth-order moments in the flow buffer zone, including when the flow reverses its direction. The study reveals the sensitivity of the odd velocity moments, in particular, to the grid resolution. A new length scale is proposed as a criterion for the grid generation near the wall and in the other flow areas dominated by high mean velocity gradients when higher-order statistics have to be collected from DNS. | physics |
It is natural to ask whether solvsolitons are global maxima for the Ricci pinching functional F:=scal^2/|Ric|^2 on the set of all left-invariant metrics on a given solvable Lie group S, as it is to ask whether they are the only global maxima. A positive answer to both questions was given in a recent paper by the same authors when the Lie algebra s of S is either unimodular or has a codimension-one abelian ideal. In the present paper, we prove that this also holds in the following two more general cases: 1) s has a nilradical of codimension-one; 2) the nilradical n of s is abelian and the functional F is restricted to the set of metrics such that a is orthogonal to n, where a is the orthogonal complement of n with respect to the solvsoliton. | mathematics |
It has recently been suggested that the Standard Model Higgs boson could act as the inflaton while minimally coupled to gravity - given that the gravity sector is extended with an $\alpha R^2$ term and the underlying theory of gravity is of Palatini, rather than metric, type. In this paper, we revisit the idea and correct some shortcomings in earlier studies. We find that in this setup the Higgs can indeed act as the inflaton and that the tree-level predictions of the model for the spectral index and the tensor-to-scalar ratio are $n_s\simeq 0.941$, $r\simeq 0.3/(1+10^{-8}\alpha)$, respectively, for a typical number of e-folds, $N=50$, between horizon exit of the pivot scale $k=0.05\, {\rm Mpc}^{-1}$ and the end of inflation. Even though the tensor-to-scalar ratio is suppressed compared to the usual minimally coupled case and can be made compatible with data for large enough $\alpha$, the result for $n_s$ is in severe tension with the Planck results. We briefly discuss extensions of the model. | astrophysics |
The interaction between a quantum charge and a quantized source of a magnetic field is considered in the Aharonov-Bohm scenario. It is shown that, if the source has a relatively small uncertainty while the particle encircles it, an effective magnetic vector potential arises and the final state of the joint system is approximately a tensor product. Furthermore, if a post-selection of the source is considered, the effective vector potential is, in general, complex-valued. This leads to a new prediction in the Aharonov-Bohm scenario before the magnetic field is fully enclosed that has a parallel with Berry phases in open quantum systems. Also, new insights into the correspondence principle, which makes complex vector potentials relevant in the study of classical systems, are discussed. | quantum physics |
Photothermal effects can alter the response of an optical cavity, for example, by inducing self-locking behavior or unstable anomalies. The consequences of these effects are often regarded as parasitic and generally cause limited operational performance of the cavity. Despite their importance, however, photothermal parameters are usually hard to characterize precisely. In this work we use an optical cavity strongly coupled to photothermal effects to experimentally observe an optical back-action on the photothermal relaxation rate. This effect, reminiscent of the radiation-pressure-induced optical spring effect in cavity optomechanical systems, uses optical detuning as a fine control to change the photothermal relaxation process. The photothermal relaxation rate of the system can be accordingly modified by more than an order of magnitude. This approach offers an opportunity to obtain precise in-situ estimations of the parameters of the cavity, in a way that is compatible with a wide range of optical resonator platforms. Through this back-action effect we are able to determine the natural photothermal relaxation rate and the effective thermal conductivity of the cavity mirrors with unprecedented resolution. | physics |
We consider the Assouad dimension analogues of two important problems in geometric measure theory. These problems are tied together by the common theme of `passing to weak tangents'. First, we solve an analogue of Falconer's distance set problem for Assouad dimension in the plane: if a planar set has Assouad dimension strictly greater than 1, then its distance set has Assouad dimension 1. We also obtain partial results in higher dimensions. Second, we consider how Assouad dimension behaves under orthogonal projection. We extend the planar projection theorem of Fraser and Orponen to higher dimensions, provide estimates on the (Hausdorff) dimension of the exceptional set of projections, and provide a recipe for obtaining results about restricted families of projections. We provide several illustrative examples throughout. | mathematics |
We study the joint route assignment and charge scheduling problem of a transit system dispatcher operating a fleet of electric buses in order to maximize solar energy integration and reduce energy costs. Specifically, we consider a complex bus transit system with preexisting routes, limited charging infrastructure, limited number of electric buses, and time-varying electricity rates. We present a mixed integer linear program (MILP) that yields the minimal cost daily operation strategy for the fleet (i.e., route assignments and charging schedules using daily solar forecasts). We present numerical results from a real-world case study with Stanford University's Marguerite Shuttle (a large-scale electric bus fleet) to demonstrate the validity of our solution and highlight the significant cost savings compared to the status quo. | electrical engineering and systems science |
It is widely accepted that topological superconductors can only have an effective interpretation in terms of curved geometry rather than gauge fields due to their charge neutrality. This approach is commonly employed in order to investigate their properties, such as the behaviour of their energy currents. Nevertheless, it is not known how accurately curved geometry can describe actual microscopic models. Here, we demonstrate that the low-energy properties of the Kitaev honeycomb lattice model, a topological superconductor that supports localised Majorana zero modes at its vortex excitations, are faithfully described in terms of Riemann-Cartan geometry. In particular, we show analytically that the continuum limit of the model is given in terms of the Majorana version of the Dirac Hamiltonian coupled to both curvature and torsion. We numerically establish the accuracy of the geometric description for a wide variety of couplings of the microscopic model. Our work opens up the opportunity to accurately predict dynamical properties of the Kitaev model from its effective geometric description. | condensed matter |
Relational models generalize log-linear models to arbitrary discrete sample spaces by specifying effects associated with any subsets of their cells. A relational model may include an overall effect, pertaining to every cell after a reparameterization, and in this case, the properties of the maximum likelihood estimates (MLEs) are analogous to those computed under traditional log-linear models, and the goodness-of-fit tests are also the same. If an overall effect is not present in any reparameterization, the properties of the MLEs are considerably different, and the Poisson and multinomial MLEs are not equivalent. In the Poisson case, if the overall effect is not present, the observed total is not always preserved by the MLE, and thus, the likelihood ratio statistic is not identical with twice the Kullback-Leibler divergence. However, as demonstrated, its general form may be obtained from the Bregman divergence. The asymptotic equivalence of the Pearson chi-squared and likelihood ratio statistics holds, but the generality considered here requires extended proofs. | statistics |
Gate-based quantum computations represent an essential approach to realizing near-term quantum computer architectures. A gate-model quantum neural network (QNN) is a QNN implemented on a gate-model quantum computer, realized via a set of unitaries with associated gate parameters. Here, we define a training optimization procedure for gate-model QNNs. By deriving the environmental attributes of the gate-model quantum network, we prove the constraint-based learning models. We show that the optimal learning procedures are different if side information is available in different directions, and if side information is accessible about the previous running sequences of the gate-model QNN. The results are particularly convenient for gate-model quantum computer implementations. | quantum physics |
The coincident detection of GW170817 in gravitational waves and electromagnetic radiation spanning the radio to MeV gamma-ray bands provided the first direct evidence that short gamma-ray bursts (GRBs) can originate from binary neutron star (BNS) mergers. On the other hand, the properties of short GRBs in high-energy gamma rays are still poorly constrained, with only $\sim$20 events detected in the GeV band, and none in the TeV band. GRB~160821B is one of the nearest short GRBs known at $z=0.162$. Recent analyses of the multiwavelength observational data of its afterglow emission revealed an optical-infrared kilonova component, characteristic of heavy-element nucleosynthesis in a BNS merger. Aiming to better clarify the nature of short GRBs, this burst was automatically followed up with the MAGIC telescopes, starting from 24 seconds after the burst trigger. Evidence of a gamma-ray signal is found above $\sim$0.5 TeV at a significance of $\sim3\,\sigma$ during observations that lasted until 4 hours after the burst. Assuming that the observed excess events correspond to gamma-ray emission from GRB 160821B, in conjunction with data at other wavelengths, we investigate its origin in the framework of GRB afterglow models. The simplest interpretation with one-zone models of synchrotron-self-Compton emission from the external forward shock has difficulty accounting for the putative TeV flux. Alternative scenarios are discussed where the TeV emission can be relatively enhanced. The role of future GeV-TeV observations of short GRBs in advancing our understanding of BNS mergers and related topics is briefly addressed. | astrophysics |
Exploiting wavefront curvature enables localization with limited infrastructure and hardware complexity. With the introduction of reconfigurable intelligent surfaces (RISs), new opportunities arise, in particular when the RIS is functioning as a lens receiver. We investigate the localization of a transmitter using a RIS-based lens in close proximity to a single receive antenna element attached to a reception radio frequency chain. We perform a Fisher information analysis, evaluate the impact of different lens configurations, and propose a two-stage localization algorithm. Our results indicate that positional beamforming can lead to better performance when a priori location information is available, while random beamforming is preferred when a priori information is lacking. Our simulation results for a moderate-size lens operating at 28 GHz show that decimeter-level accuracy can be attained within 3 meters of the lens. | electrical engineering and systems science |
Gauge theories appear broadly in physics, ranging from the standard model of particle physics to long-wavelength descriptions of topological systems in condensed matter. However, systems with sign problems are largely inaccessible to classical computations and also beyond the current limitations of digital quantum hardware. In this work, we develop an analog approach to simulating gauge theories with an experimental setup that employs dipolar spins (molecules or Rydberg atoms). We consider molecules fixed in space and interacting through dipole-dipole interactions, avoiding the need for itinerant degrees of freedom. Each molecule represents either a site or gauge degree of freedom, and Gauss law is preserved by a direct and programmatic tuning of positions and internal state energies. This approach can be regarded as a form of analog systems programming and charts a path forward for near-term quantum simulation. As a first step, we numerically validate this scheme in a small-system study of U(1) quantum link models in (1+1) dimensions with link spin S = 1/2 and S = 1 and illustrate how dynamical phenomena such as string inversion and string breaking could be observed in near-term experiments. Our work brings together methods from atomic and molecular physics, condensed matter physics, high-energy physics, and quantum information science for the study of nonperturbative processes in gauge theories. | quantum physics |
Exoplanet discoveries have reached into the realm of terrestrial planets that are becoming the subject of atmospheric studies. One such discovery is LHS 3844b, a 1.3 Earth radius planet in a 0.46 day orbit around an M4.5-5 dwarf star. Follow-up observations indicate that the planet is largely devoid of substantial atmosphere. This lack of significant atmosphere places astrophysical and geophysical constraints on LHS 3844b, primarily the degree of volatile outgassing and the rate of atmosphere erosion. We estimate the age of the host star as $7.8\pm1.6$ Gyrs and find evidence of an active past comparable to Proxima Centauri. We use geodynamical models of volcanic outgassing and atmospheric erosion to show that the apparent lack of atmosphere is consistent with a volatile-poor mantle for LHS 3844b. We show the core is unlikely to host enough C to produce a sufficiently volatile-poor mantle, unless the bulk planet is volatile-poor relative to Earth. While we cannot rule out a giant impact stripping LHS 3844b's atmosphere, we show this mechanism would require significant mantle stripping, potentially leaving LHS 3844b as an Fe-rich "super-Mercury". Atmospheric erosion by smaller impacts is possible, but only if the planet has already begun degassing and is bombarded by $10^3$ impactors of radius 500-1000 km traveling at escape velocity. We discuss formation and migration scenarios that could account for a volatile poor origin, including the potential for an unobserved massive companion planet. A relatively volatile-poor composition of LHS 3844b suggests that the planet formed interior to the system snow-line. | astrophysics |
Deep learning has largely reduced the need for manual feature selection in image segmentation. Nevertheless, network architecture optimization and hyperparameter tuning are mostly manual and time consuming. Although there are increasing research efforts on network architecture search in computer vision, most works concentrate on image classification but not segmentation, and there are very limited efforts on medical image segmentation especially in 3D. To remedy this, here we propose a framework, SegNAS3D, for network architecture search of 3D image segmentation. In this framework, a network architecture comprises interconnected building blocks that consist of operations such as convolution and skip connection. By representing the block structure as a learnable directed acyclic graph, hyperparameters such as the number of feature channels and the option of using deep supervision can be learned together through derivative-free global optimization. Experiments on 43 3D brain magnetic resonance images with 19 structures achieved an average Dice coefficient of 82%. Each architecture search required less than three days on three GPUs and produced architectures that were much smaller than the state-of-the-art manually created architectures. | electrical engineering and systems science |
In the present short paper, we obtain a general lower bound for the $2$-adic valuation of the algebraic part of the central value of the complex $L$-series for the quadratic twists of any elliptic curve $E$ over $\mathbb{Q}$. We also prove the existence of an explicit infinite family of quadratic twists with analytic rank $0$ for a large family of elliptic curves. | mathematics |
Generative modelling has become a promising use case for near-term quantum computers. In particular, due to the fundamentally probabilistic nature of quantum mechanics, quantum computers naturally model and learn probability distributions, perhaps more efficiently than can be achieved classically. The Born machine is an example of such a model, easily implemented on near-term quantum computers. However, in its original form, the Born machine only naturally represents discrete distributions. Since probability distributions of a continuous nature are commonplace in the world, it is essential to have a model which can efficiently represent them. Some proposals have been made in the literature to supplement the discrete Born machine with extra features to more easily learn continuous distributions; however, all invariably increase the resources required to some extent. In this work, we present the continuous variable Born machine, built on the alternative architecture of continuous variable quantum computing, which is much more suitable for modelling such distributions in a resource-minimal way. We provide numerical results indicating the model's ability to learn both quantum and classical continuous distributions, including in the presence of noise. | quantum physics |
Data movement between the CPU and main memory is a first-order obstacle against improving performance, scalability, and energy efficiency in modern systems. Computer systems employ a range of techniques to reduce overheads tied to data movement, spanning from traditional mechanisms (e.g., deep multi-level cache hierarchies, aggressive hardware prefetchers) to emerging techniques such as Near-Data Processing (NDP), where some computation is moved close to memory. Our goal is to methodically identify potential sources of data movement over a broad set of applications and to comprehensively compare traditional compute-centric data movement mitigation techniques to more memory-centric techniques, thereby developing a rigorous understanding of the best techniques to mitigate each source of data movement. With this goal in mind, we perform the first large-scale characterization of a wide variety of applications, across a wide range of application domains, to identify fundamental program properties that lead to data movement to/from main memory. We develop the first systematic methodology to classify applications based on the sources contributing to data movement bottlenecks. From our large-scale characterization of 77K functions across 345 applications, we select 144 functions to form the first open-source benchmark suite (DAMOV) for main memory data movement studies. We select a diverse range of functions that (1) represent different types of data movement bottlenecks, and (2) come from a wide range of application domains. Using NDP as a case study, we identify new insights about the different data movement bottlenecks and use these insights to determine the most suitable data movement mitigation mechanism for a particular application. We open-source DAMOV and the complete source code for our new characterization methodology at https://github.com/CMU-SAFARI/DAMOV. | computer science |
We introduce a complete physical model for the single-particle electronic structure of twisted bilayer graphene (tBLG), which incorporates the crucial role of lattice relaxation. Our model, based on $k \cdot p$ perturbation theory, combines the accuracy of DFT calculations through effective tight-binding Hamiltonians with the computational efficiency and complete control of the twist angle offered by continuum models. The inclusion of relaxation significantly changes the bandstructure at the first magic-angle twist corresponding to flat bands near the Fermi level (the "low-energy" states), and eliminates the appearance of a second magic-angle twist. We show that minimal models for the low-energy states of tBLG can be easily modified to capture the changes in electronic states as a function of twist angle. | condensed matter |
Non-ideal MHD effects have been shown recently as a robust mechanism of averting the magnetic braking "catastrophe" and promoting protostellar disc formation. However, the magnetic diffusivities that determine the efficiency of non-ideal MHD effects are highly sensitive to microphysics. We carry out non-ideal MHD simulations to explore the role of microphysics on disc formation and the interplay between ambipolar diffusion (AD) and Hall effect during the protostellar collapse. We find that removing the smallest grain population ($\lesssim$10 nm) from the standard MRN size distribution is sufficient for enabling disc formation. Further varying the grain sizes can result in either a Hall-dominated or an AD-dominated collapse; both form discs of tens of AU in size regardless of the magnetic field polarity. The direction of disc rotation is bimodal in the Hall dominated collapse but unimodal in the AD-dominated collapse. We also find that AD and Hall effect can operate either with or against each other in both radial and azimuthal directions, yet the combined effect of AD and Hall is to move the magnetic field radially outward relative to the infalling envelope matter. In addition, microphysics and magnetic field polarity can leave profound imprints both on observables (e.g., outflow morphology, disc to stellar mass ratio) and on the magnetic field characteristics of protoplanetary discs. Including Hall effect relaxes the requirements on microphysics for disc formation, so that prestellar cores with cosmic-ray ionization rate of $\lesssim$2--3$\times10^{-16}$ s$^{-1}$ can still form small discs of $\lesssim$10 AU radius. We conclude that disc formation should be relatively common for typical prestellar core conditions, and that microphysics in the protostellar envelope is essential to not only disc formation, but also protoplanetary disc evolution. | astrophysics |
In this paper, we propose a deep learning-based beam tracking method for millimeter-wave (mmWave) communications. Beam tracking is employed for transmitting the known symbols using the sounding beams and tracking time-varying channels to maintain a reliable communication link. When the pose of a user equipment (UE) device varies rapidly, the mmWave channels also tend to vary fast, which hinders seamless communication. Thus, models that can capture the temporal behavior of mmWave channels caused by the motion of the device are required to cope with this problem. Accordingly, we employ a deep neural network to analyze the temporal structure and patterns underlying the time-varying channels and the signals acquired by inertial sensors. We propose a model based on long short-term memory (LSTM) that predicts the distribution of the future channel behavior based on a sequence of input signals available at the UE. This channel distribution is used to 1) control the sounding beams adaptively for the future channel state and 2) update the channel estimate through the measurement update step under a sequential Bayesian estimation framework. Our experimental results demonstrate that the proposed method achieves a significant performance gain over the conventional beam tracking methods under various mobility scenarios. | electrical engineering and systems science |
Inspired by the great success of machine learning (ML), researchers have applied ML techniques to visualizations to achieve a better design, development, and evaluation of visualizations. This branch of studies, known as ML4VIS, is gaining increasing research attention in recent years. To successfully adapt ML techniques for visualizations, a structured understanding of the integration of ML4VIS is needed. In this paper, we systematically survey \paperNum ML4VIS studies, aiming to answer two motivating questions: "what visualization processes can be assisted by ML?" and "how ML techniques can be used to solve visualization problems?" This survey reveals six main processes where the employment of ML techniques can benefit visualizations: VIS-driven Data Processing, Data Presentation, Insight Communication, Style Imitation, VIS Interaction, VIS Perception. The six processes are related to existing visualization theoretical models in an ML4VIS pipeline, aiming to illuminate the role of ML-assisted visualization in general visualizations. Meanwhile, the six processes are mapped into main learning tasks in ML to align the capabilities of ML with the needs in visualization. Current practices and future opportunities of ML4VIS are discussed in the context of the ML4VIS pipeline and the ML-VIS mapping. While more studies are still needed in the area of ML4VIS, we hope this paper can provide a stepping-stone for future exploration. A web-based interactive browser of this survey is available at https://ml4vis.github.io. | computer science |
A nearly free electron metal and a Mott insulating state can be thought of as opposite ends of possibilities for the motion of electrons in a solid. In the magnetic oxide metal PdCrO$_{2}$, these two coexist as alternating layers. Using angle resolved photoemission, we surprisingly find sharp band-like features in the one-electron removal spectral function of the correlated subsystem. We show that these arise because a hole created in the Mott layer moves to and propagates in the metallic layer while retaining memory of the Mott layer's magnetism. This picture is quantitatively supported by a strong coupling analysis capturing the physics of PdCrO$_{2}$ in terms of a Kondo lattice Hamiltonian. Our findings open new routes to use the non-magnetic probe of photoemission to gain insights into the spin-susceptibility of correlated electron systems. | condensed matter |
Heavy-tailed continuous shrinkage priors, such as the horseshoe prior, are widely used for sparse estimation problems. However, there is limited work extending these priors to predictors with grouping structures. Of particular interest in this article, is regression coefficient estimation where pockets of high collinearity in the covariate space are contained within known covariate groupings. To assuage variance inflation due to multicollinearity we propose the group inverse-gamma gamma (GIGG) prior, a heavy-tailed prior that can trade-off between local and group shrinkage in a data adaptive fashion. A special case of the GIGG prior is the group horseshoe prior, whose shrinkage profile is correlated within-group such that the regression coefficients marginally have exact horseshoe regularization. We show posterior consistency for regression coefficients in linear regression models and posterior concentration results for mean parameters in sparse normal means models. The full conditional distributions corresponding to GIGG regression can be derived in closed form, leading to straightforward posterior computation. We show that GIGG regression results in low mean-squared error across a wide range of correlation structures and within-group signal densities via simulation. We apply GIGG regression to data from the National Health and Nutrition Examination Survey for associating environmental exposures with liver functionality. | statistics |
The concordance of the $\Lambda$CDM cosmological model in light of current observations has been the subject of an intense debate in recent months. The 2018 Planck Cosmic Microwave Background (CMB) temperature anisotropy power spectrum measurements appear at face value to favour a spatially closed Universe with curvature parameter $\Omega_K<0$. This preference disappears if Baryon Acoustic Oscillation (BAO) measurements are combined with Planck data to break the geometrical degeneracy, although the reliability of this combination has been questioned due to the strong tension present between the two datasets when assuming a curved Universe. Here, we approach this issue from yet another point of view, using measurements of the full-shape (FS) galaxy power spectrum, $P(k)$, from the Baryon Oscillation Spectroscopic Survey DR12 CMASS sample. By combining Planck data with FS measurements, we break the geometrical degeneracy and find $\Omega_K=0.0023 \pm 0.0028$. This constrains the Universe to be spatially flat to sub-percent precision, in excellent agreement with results obtained using BAO measurements. However, as with BAO, the overall increase in the best-fit $\chi^2$ suggests a similar level of tension between Planck and $P(k)$ under the assumption of a curved Universe. While the debate on spatial curvature and the concordance between cosmological datasets remains open, our results provide new perspectives on the issue, highlighting the crucial role of FS measurements in the era of precision cosmology. | astrophysics |
Machine learning is seen as a promising application of quantum computation. For near-term noisy intermediate-scale quantum (NISQ) devices, parametrized quantum circuits (PQCs) have been proposed as machine learning models due to their robustness and ease of implementation. However, the cost function is normally calculated classically from repeated measurement outcomes, such that it is no longer encoded in a quantum state. This prevents the value from being directly manipulated by a quantum computer. To solve this problem, we give a routine to embed the cost function for machine learning into a quantum circuit, which accepts a training dataset encoded in superposition or an easily preparable mixed state. We also demonstrate the ability to evaluate the gradient of the encoded cost function in a quantum state. | quantum physics |
Auger-Meitner processes are electronic decay processes of energetically low-lying vacancies. In these processes, the vacancy is filled by an electron from an energetically higher-lying orbital, while another electron is simultaneously emitted to the continuum. For low-lying orbitals, relativistic effects cannot be neglected even for light elements. At the same time, lifetime calculations are computationally expensive. In this context, we investigate what effect spin-orbit coupling has on Auger-Meitner decay widths and aim for a rule of thumb for the relative decay widths of initial states split by spin-orbit coupling. We base this rule of thumb on Auger-Meitner decay widths of Sr $4p^{-1}$ and Ra $6p^{-1}$ obtained by relativistic FanoADC-Stieltjes calculations. | physics |
Radiative shock waves in the Cygnus Loop and other supernova remnants show different morphologies in [O III] and H$\alpha$ emission. We use HST spectra and narrowband images to study the development of turbulence in the cooling region behind a shock on the west limb of the Cygnus Loop. We refine our earlier estimates of shock parameters that were based upon ground-based spectra, including ram pressure, vorticity and magnetic field strength. We apply several techniques, including Fourier power spectra and the Rolling Hough Transform, to quantify the shape of the rippled shock front as viewed in different emission lines. We assess the relative importance of thermal instabilities, the thin shell instability, upstream density variations, and upstream magnetic field variations in producing the observed structure. | astrophysics |
The fractal statistics were applied to the daily new cases of COVID-19 in the USA. The Hurst parameter, which indicates the long-range correlations in the growth, was calculated using a simple R/S method based on the fluctuations of the daily growth for several US states. The values of Hurst parameters for different states were analyzed using two controlling parameters, stay-at-home order, and the population density. | physics |
We derive and test a novel holographic duality in the B-model topological string theory. The duality relates the B-model on certain Calabi-Yau three-folds to two-dimensional chiral algebras defined as gauged $\beta\gamma\,$ systems. The duality conjecturally captures a topological sector of more familiar $\mathrm{AdS}_5 / \mathrm{CFT}_4$ holographic dualities. | high energy physics theory |
Recently, a new type of second-order topological insulator has been theoretically proposed by introducing an in-plane Zeeman field into the Kane-Mele model in the two-dimensional honeycomb lattice. A pair of topological corner states arise at the corners with obtuse angles of an isolated diamond-shaped flake. To probe the corner states, we study their transport properties by attaching two leads to the system. Dressed by incoming electrons, the dynamic corner state is very different from its static counterpart. Resonant tunneling through the dressed corner state can occur by tuning the in-plane Zeeman field. At the resonance, the pair of spatially well separated and highly localized corner states can form a dimer state, whose wavefunction extends almost the entire bulk of the diamond-shaped flake. By varying the Zeeman field strength, multiple resonant tunneling events are mediated by the same dimer state. This re-entrance effect can be understood by a simple model. These findings extend our understanding of dynamic aspects of the second-order topological corner states. | condensed matter |
Despite growing attention in autonomy, there are still many open problems, including how autonomous vehicles will interact and communicate with other agents, such as human drivers and pedestrians. Unlike most approaches that focus on pedestrian detection and planning for collision avoidance, this paper considers modeling the interaction between human drivers and pedestrians and how it might influence map estimation, as a proxy for detection. We take a mapping inspired approach and incorporate people as sensors into mapping frameworks. By taking advantage of other agents' actions, we demonstrate how we can impute portions of the map that would otherwise be occluded. We evaluate our framework in human driving experiments and on real-world data, using occupancy grids and landmark-based mapping approaches. Our approach significantly improves overall environment awareness and out-performs standard mapping techniques. | computer science |
Von Neumann used four assumptions to derive the Hilbert space (HS) formulation of quantum mechanics (QM). Within this theory, dispersion-free ensembles do not exist. To accommodate a theory of quantum mechanics that allows dispersion-free ensembles, some of the assumptions need to be modified. An existing formulation of QM, the phase space (PS) formulation, allows dispersion-free ensembles and thus qualifies as a hidden-variable theory. Within the PS theory, we identify the violated assumption (dubbed I in the text) to be the one that requires that the value r for the quantity $\mathbb{R}$ implies the value f(r) for the quantity $f(\mathbb{R})$. We note that this violation arises from tracking, within the c-number hidden-variable theory, the operator ordering involved in the HS theory, as is required for a 1-1 correspondence between the theories. | quantum physics |
The pandemic of coronavirus disease 2019 (COVID-19) has led to a global public health crisis spreading across hundreds of countries. With the continuous growth of new infections, developing automated tools for COVID-19 identification with CT images is highly desired to assist the clinical diagnosis and reduce the tedious workload of image interpretation. To enlarge the datasets for developing machine learning methods, it is especially helpful to aggregate the cases from different medical systems for learning robust and generalizable models. This paper proposes a novel joint learning framework to perform accurate COVID-19 identification by effectively learning with heterogeneous datasets with distribution discrepancy. We build a powerful backbone by redesigning the recently proposed COVID-Net in terms of network architecture and learning strategy to improve the prediction accuracy and learning efficiency. On top of our improved backbone, we further explicitly tackle the cross-site domain shift by conducting separate feature normalization in latent space. Moreover, we propose to use a contrastive training objective to enhance the domain invariance of semantic embeddings for boosting the classification performance on each dataset. We develop and evaluate our method with two public large-scale COVID-19 diagnosis datasets made up of CT images. Extensive experiments show that our approach consistently improves the performance on both datasets, outperforming the original COVID-Net trained on each dataset by 12.16% and 14.23% in AUC, respectively, also exceeding existing state-of-the-art multi-site learning methods. | electrical engineering and systems science |