text | label |
---|---|
We show that for any $n$-dimensional lattice $\mathcal{L} \subseteq \mathbb{R}^n$, the torus $\mathbb{R}^n/\mathcal{L}$ can be embedded into Hilbert space with $O(\sqrt{n\log n})$ distortion. This improves the previously best known upper bound of $O(n\sqrt{\log n})$ shown by Haviv and Regev (APPROX 2010) and approaches the lower bound of $\Omega(\sqrt{n})$ due to Khot and Naor (FOCS 2005, Math. Annal. 2006). | mathematics |
The adoption of WebAssembly has rapidly increased in the last few years as it provides a fast and safe model for program execution. However, WebAssembly is not exempt from vulnerabilities that could be exploited by side-channel attacks. This class of vulnerabilities can be addressed by code diversification. In this paper, we present the first fully automated workflow for the diversification of WebAssembly binaries. We present CROW, an open-source tool implementing this workflow. We evaluate CROW's capabilities on 303 C programs and study its use on a real-life security-sensitive program: libsodium, a cryptographic library. Overall, CROW is able to generate diverse variants for 239 out of 303 (79%) small programs. Furthermore, our experiments show that our approach and tool are able to successfully diversify off-the-shelf cryptographic software (libsodium). | computer science |
A pin liquid anode DC discharge is generated in open air without any additional gas feeding to form self-organized patterns (SOPs) on various liquid interfaces. Axially resolved emission spectra of the whole discharge reveal that the self-organized patterns are formed below a dark region and are visible mainly due to the N2 transitions. The high-energy N2 (C) level is mainly excited by the impact of electrons heated by the locally increased electric field at the interface. For the first time, the effect of the liquid type on SOP formation is presented. With the other discharge conditions kept almost the same, the SOPs formed with HCl and H2SO4 liquid anodes are significantly different. This difference persists when the discharge current and gap distance are varied for both liquid anodes. The variations of SOP size and discretization as a function of discharge current and gap distance are discussed and confirm that the HCl liquid anode forms SOPs different from those of tap water or the H2SO4 liquid anode. A possible explanation for this dependence of the SOPs on the liquid type is proposed. | physics |
The large Land\'{e} g-factor, high spin-orbit coupling, and low effective mass of the two-dimensional electron gas in InSb quantum wells combined with proximal superconductivity may realize a scalable platform for topological quantum computation. Aluminum thin films directly deposited on top of InSb planar structures result in the formation of a reactive AlInSb layer at the interface. This interlayer progressively consumes the whole Al film, resulting in a disordered AlInSb layer after a few months at room temperature. We report on a heterostructure design that results in a significant increase of the durability of these hybrid Al-InSb heterostructures with the preservation of a pure Al film and sharp superconductor-semiconductor interface for more than one year. Two monolayers of epitaxial InAs at the superconductor-semiconductor interface prevent interfacial reactivity as evidenced by X-ray reflectivity and energy dispersive spectroscopy measurements. Structural characterizations of the Al films by transmission electron microscopy reveal the presence of tens-of-nanometers-wide grains predominantly oriented with Al(110) parallel to InSb(001). | condensed matter |
We recently found the globular cluster (GC) EXT8 in M31 to have an extremely low metallicity of [Fe/H]=-2.91+/-0.04 using high-resolution spectroscopy. Here we present a colour-magnitude diagram (CMD) for EXT8, obtained with the Wide Field Camera 3 on board the Hubble Space Telescope. Compared with the CMDs of metal-poor Galactic GCs, we find that the upper red giant branch (RGB) of EXT8 is about 0.03 mag bluer in F606W-F814W and slightly steeper, as expected from the low spectroscopic metallicity. The observed colour spread on the upper RGB is consistent with being caused entirely by the measurement uncertainties, and we place an upper limit of sigma(F606W-F814W)=0.015 mag on any intrinsic colour spread. The corresponding metallicity spread can be up to sigma([Fe/H])=0.2 dex or >0.7 dex, depending on the isochrone library adopted. The horizontal branch (HB) is located mostly on the blue side of the instability strip and has a tail extending to at least M(F606W)=+3, as in the Galactic GC M15. We identify two candidate RR Lyrae variables and several UV-luminous post-HB/post-AGB star candidates, including one very bright (M(F300X)=-3.2) source near the centre of EXT8. The surface brightness of EXT8 out to a radius of 25 arcsec is well fitted by a Wilson-type profile with an ellipticity of epsilon=0.20, a semi-major axis core radius of 0.25", and a central surface brightness of 15.2 mag per square arcsec in the F606W band, with no evidence of extra-tidal structure. Overall, EXT8 has properties consistent with it being a "normal", but very metal-poor GC, and its combination of relatively high mass and very low metallicity thus remains challenging to explain in the context of GC formation theories operating within the hierarchical galaxy assembly paradigm. | astrophysics |
In the Humanities and Social Sciences, there is increasing interest in approaches to information extraction, prediction, intelligent linkage, and dimension reduction applicable to large text corpora. With approaches in these fields being grounded in traditional statistical techniques, the need arises for frameworks whereby advanced NLP techniques such as topic modelling may be incorporated within classical methodologies. This paper provides a classical, supervised, statistical learning framework for prediction from text, using topic models as a data reduction method and the topics themselves as predictors, alongside typical statistical tools for predictive modelling. We apply this framework in a Social Sciences context (applied animal behaviour) as well as a Humanities context (narrative analysis) as examples. The results show that topic regression models perform comparably to their much less efficient equivalents that use individual words as predictors. | statistics |
It has been pointed out that the mixing of an eV-scale sterile neutrino with active flavors can lead to loss of sensitivity to the $\theta_{23}$ octant (sign of $\sin^2\theta_{23}-1/2$) in long baseline experiments, because the main oscillation probability $P_0=4\sin^2\theta_{23}\sin^2\theta_{13}\sin^2\Delta_{31}$ can be degenerate with the sum of the interferences with the solar oscillation amplitude and an active-sterile oscillation amplitude in both neutrino and antineutrino oscillations, depending on CP phases. In this paper, we show that the above degeneracy is resolved by measuring the same beam at different baseline lengths. We demonstrate that the Tokai-to-Hyper-Kamiokande-to-Korea (T2HKK) experiment (one 187~kton fiducial volume water Cerenkov detector is placed at Kamioka, $L=295$~km, and another detector is put in Korea, $L\sim1000$~km) exhibits a better sensitivity to the $\theta_{23}$ octant in those parameter regions where the experiment with two detectors at Kamioka is insensitive to it. Therefore, if a hint of sterile-active mixing is discovered in short baseline experiments, T2HKK is a better option than the plan of placing two detectors at Kamioka. We also consider an alternative case where one detector is placed at Kamioka and a different detector is at Oki Islands, $L=653$~km, and show that this configuration also leads to a better sensitivity to the $\theta_{23}$ octant. | high energy physics phenomenology |
Additive manufacturing has revolutionized the building of materials direct from design, allowing high-resolution rapid prototyping of complex 3D designs with many materials. 3D printing has enabled high-strength damage-tolerant structures, bioprinted artificial organs and tissues, ultralight metals, and applications in medicine, education, prosthetics, architecture, and consumer electronics, and it serves as a prototyping tool for engineers and hobbyists alike. In recent years, 3D printing has emerged as a useful tool for complex electrode and material assembly in batteries and supercapacitors. The field initially grew from extrusion-based methods such as fused deposition modelling, and evolved to photopolymerization printing of intricate composites, while supercapacitor technologies less sensitive to solvents more often involved material jetting processes. Underpinning every part of a 3D printable battery and many other devices is the printing method and the nature of the feed material. Material purity, printing fidelity, accuracy, complexity, and the ability to form conductive, ceramic, glassy, or solvent-stable plastics rely on the nature of the feed material or composite to such an extent that the future of 3D printable batteries and electrochemical energy storage devices will depend on materials and printing methods that are co-operatively informed by the requirements of the device and how it is fabricated. In this Perspective, we address the materials and methods requirements of 3D printable batteries and supercapacitors and outline requirements for the future of the field by linking existing performance limitations to the requirements of printable energy storage materials, casing materials and the direct printing of electrodes and electrolytes. We also look to the future by taking inspiration from additive manufacturing to posit links between materials and printing methods that allow new form factor cells. | physics |
The gauge symmetry $SU(5) \times U(1)_\chi$ is the unique maximal subgroup of SO(10) which retains manifest unification at $M_{GUT}$ of the Standard Model gauge couplings, especially if low scale supersymmetry is present. The spontaneous breaking of $U(1)_\chi$ at some intermediate scale leaves unbroken a $Z_2$ symmetry which is precisely `matter' parity. This yields a stable supersymmetric dark matter particle as well as topologically stable cosmic strings. Motivated by the weak gravity conjecture, we impose unification of $SU(5)$ and $U(1)_\chi$ at an ultraviolet cutoff $\Lambda \sim \alpha_\Lambda^{1/2} M_{P} \approx 5 \times 10^{17}$ GeV, where $\alpha_\Lambda$ denotes the $SU(5)$ gauge coupling at $\Lambda$ and $M_P \approx 2.4 \times 10^{18}$ GeV is the reduced Planck scale. The impact of dimension-five operators suppressed by $\Lambda$ on gauge coupling unification, proton lifetime estimates and $b-\tau$ Yukawa unification is discussed. In particular, the gauge boson mediated proton decay into $e^+\pi^0$ can lie within the $2-\sigma$ sensitivity of HyperKamiokande. We also discuss how the intermediate scale strings may survive inflation while the $SU(5)$ monopoles are inflated away. The unbroken $Z_2$ symmetry provides an intriguing link between dark matter, black holes carrying `quantum hair' and cosmic strings. | high energy physics phenomenology |
We present an efficient electrocatalytic material based on MoO3 nanoparticles anchored on reduced graphene oxide (RGO) nanosheets. After preparation of graphene oxide (GO), MoO3 nanoparticles were anchored on the GO nanosheets using the arc-discharge method. X-ray diffraction patterns show that the MoO3 nanoparticles are well crystallized on RGO in the orthorhombic crystalline phase with a crystallite size of 83 nm. In addition, FT-IR and Raman spectroscopy results show that during the arc-discharge process the GO nanosheets have been reduced, and the resulting RGO nanosheets are decorated with MoO3 nanoparticles which form a porous structure. The surface energy of the prepared electrode was measured as 44.56 mJ/m2, which shows the desirable spreading ability of the electrolyte on the electrode. Finally, the electrochemical performance was measured in a symmetrical dummy cell by impedance spectroscopy and cyclic voltammetry, and the photochemical performance was measured in a dye-sensitized solar cell by current density measurements. Our results show that the electrochemical performance of the RGM electrode is better than that of the RGO electrode and is comparable with the Platinum electrode; moreover, the efficiency of the RGM electrode used as a counter electrode in a dye-sensitized solar cell is 5.55%, close to the Platinum electrode performance. | physics |
Phononic wire waveguides of subwavelength cross-section support two orthogonal polarization modes: the out-of-plane motion dominated Rayleigh-like and the in-plane motion dominated Love-like modes, analogous to transverse-electric and transverse-magnetic modes in photonic waveguides. Due to the anisotropic elasticity of the substrate material, the polarization states of phonons propagating along certain crystallographic orientations can strongly hybridize. Here we experimentally investigate the orientation-dependent mode hybridization in phononic wire waveguides patterned from GaN-on-sapphire thin films. Such mode hybridization allows efficient actuation of piezoelectrically inactive Love-like modes using common interdigital electrodes designed for Rayleigh-like modes, and further enables on-chip polarization conversion between guided transverse modes. Both are important for on-chip implementation of complex phononic circuits. | physics |
This paper establishes a basis of theoretical contribution for a new PhD thesis in the area of Clinical Decision Support Systems (CDSS) acceptance. Over the past three years, we conducted qualitative research in three distinct phases to develop an extended Task-Technology Fit (TTF) framework. These phases serve to generate the requirements of the framework, to discover the factors of the framework through perspectives, and to evaluate the newly proposed framework. The new setting is a developing country, where sectors such as healthcare receive particular attention. We offer a new perspective on decision-support technology and its usefulness in this sector, integrating our framework with others to assess its value, its use, and how it can be better accepted in the context of healthcare professionals. | computer science |
6D spacetime with $SO(3,3)$ symmetry is utilized to efficiently encode three generations of matter. A graviGUT model is generalized to a class of models that all contain three generations and Higgs boson candidates. Pati-Salam, $SU(5)$, and $SO(10)$ grand unified theories are found when a single generation is isolated. For example, an $SO(4,2)$ spacetime group may be used for conformal symmetry, $AdS_5\rightarrow dS_4$, or simply broken to $SO(3,1)$ of Minkowski space. Another class of models finds $SO(2,2)$ and can give $AdS_3$. | physics |
We use scanning tunneling microscopy to investigate Bi2Sr2Ca2Cu3O10+$\delta$ trilayer cuprates from the optimally doped to overdoped regime. We find that the two distinct superconducting gaps from the inner and outer CuO2 planes both decrease rapidly with doping, in sharp contrast to the nearly constant Tc. Spectroscopic imaging reveals the absence of quasiparticle interference in the antinodal region of overdoped samples, showing an opposite trend to that in single- and double-layer compounds. We propose that the existence of two types of inequivalent CuO2 planes and the intricate interaction between them are responsible for these highly anomalous observations in trilayer cuprates. | condensed matter |
The success probability of a search for $M$ targets in a database of size $N$ using Grover's search algorithm depends critically on the number of iterations of the composite operation of the oracle followed by Grover's diffusion operation. Although the required number of iterations scales as $\mathcal{O}(\sqrt{N})$ for large $N$, the asymptote is not a good indicator of the optimal number of iterations when $\sqrt{M/N}$ is not small. A scheme for the determination of the exact number of iterations, subject to a threshold set for the success probability of the search (the probability of detecting the target state(s)), is crucial for the efficacy of the algorithm. In this work, a general scheme for the construction of the $n$-qubit Grover's search algorithm with $1 \leq M \leq N$ target states is presented, along with the procedure to find the optimal number of iterations for a successful search. It is also shown that for given $N$ and $M$, there is an upper bound on the success probability of the algorithm. | quantum physics |
In this paper, we introduce the citation data of the Czech apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). This dataset was automatically extracted from the corpus of texts of Czech court decisions - CzCDC 1.0. We obtained the citation data by building a natural language processing pipeline for the extraction of court decision identifiers. The pipeline included (i) a document segmentation model and (ii) a reference recognition model. Furthermore, the dataset was manually processed to achieve high-quality citation data as a base for subsequent qualitative and quantitative analyses. The dataset will be made available to the general public. | computer science |
(1) If $R$ is an affine algebra of dimension $d\geq 4$ over $\overline{\mathbb{F}}_{p}$ with $p>3$, then the group structure on ${\rm Um}_d(R)/{\rm E}_d(R)$ is nice. (2) If $R$ is a commutative noetherian ring of dimension $d\geq 2$ such that ${\rm E}_{d+1}(R)$ acts transitively on ${\rm Um}_{d+1}(R),$ then the group structure on ${\rm Um}_{d+1}(R[X])/{\rm E}_{d+1}(R[X])$ is nice. | mathematics |
We investigate phases of 3d ${\cal N}=2$ Chern-Simons-matter theories, extending to three dimensions the celebrated correspondence between 2d gauged Wess-Zumino-Witten (GWZW) models and non-linear sigma models (NLSMs) with geometric targets. We find that although the correspondences in 3d and 2d are closely related by circle compactification, an important subtlety arises in this process, changing the phase structure of the 3d theory. Namely, the effective theory obtained from the circle compactification of a phase of a 3d ${\cal N}=2$ gauge theory is, in general, different from the phase of the 3d ${\cal N}=2$ theory on $\mathbb{R}^2\times S^{1}$, which means that taking phases of a 3d gauge theory does not necessarily commute with compactification. We compute the Witten index of each effective theory to check this observation. Furthermore, when the matter fields have the same non-minimal charges, the 3d ${\cal N}=2$ Chern-Simons-matter theory with a proper Chern-Simons level will decompose into several identical 2d gauged linear sigma models (GLSMs) for the same target upon reduction to 2d. To illustrate this phenomenon, we investigate how vacua of the 3d gauge theory for a weighted projective space $W\mathbb{P}_{[l,\cdots,l]}$ move on the field space when we change the radius of $S^{1}$. | high energy physics theory |
Purification is a tool that allows one to represent mixed quantum states as pure states on enlarged Hilbert spaces. A purification of a given state is not unique, and its entanglement strongly depends on the particular choice made. Moreover, in one-dimensional systems, the amount of entanglement is linked to how efficiently the purified state can be represented using matrix-product states (MPS). We introduce an MPS-based method that allows one to find the minimally entangled representation by iteratively minimizing the second Renyi entropy. First, we consider the thermofield double purification and show that its entanglement can be strongly reduced, especially at low temperatures. Second, we show that a slowdown of the entanglement growth following a quench of an infinite temperature state is possible. | condensed matter |
Fluctuations of the human heart beat constitute a complex system that has been studied mostly under resting conditions using conventional time series analysis methods. During physical exercise, the variability of the fluctuations is reduced, and the time series of beat-to-beat RR intervals (RRIs) become highly non-stationary. Here we develop a dynamical approach to analyze the time evolution of RRI correlations in running across various training and racing events under real-world conditions. In particular, we introduce dynamical detrended fluctuation analysis and dynamical partial autocorrelation functions, which are able to detect real-time changes in the scaling and correlations of the RRIs as functions of the scale and the lag. We relate these changes to the exercise intensity quantified by the heart rate (HR). Beyond subject-specific HR thresholds the RRIs show multiscale anticorrelations with both universal and individual scale-dependent structure that is potentially affected by the stride frequency. These preliminary results are encouraging for future applications of the dynamical statistical analysis in exercise physiology and cardiology, and the presented methodology is also applicable across various disciplines. | physics |
Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval. As a result, there is increasing social and legal pressure to provide explanations that help the affected individuals not only to understand why a prediction was output, but also how to act to obtain a desired outcome. To this end, several works have proposed optimization-based methods to generate nearest counterfactual explanations. However, these methods are often restricted to a particular subset of models (e.g., decision trees or linear models) and differentiable distance functions. In contrast, we build on standard theory and tools from formal verification and propose a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae. As shown by our experiments on real-world data, our algorithm is: i) model-agnostic ({non-}linear, {non-}differentiable, {non-}convex); ii) data-type-agnostic (heterogeneous features); iii) distance-agnostic ($\ell_0, \ell_1, \ell_\infty$, and combinations thereof); iv) able to generate plausible and diverse counterfactuals for any sample (i.e., 100% coverage); and v) at provably optimal distances. | computer science |
Spatial light modulators can typically only modulate the phase or the amplitude of an incident wavefront, with only a limited number of discrete values available. This is often accounted for in computer-generated holography algorithms by setting hologram pixel values to the nearest achievable value during what is known as quantisation. Sympathetic quantisation is an alternative to this nearest-neighbour approach that takes into account the underlying diffraction relationships in order to obtain a significantly improved post-quantisation performance. The concept of sympathetic quantisation is introduced in this paper and a simple implementation, soft sympathetic quantisation, is presented which is shown to improve mean squared error and structural similarity index error metrics by 50% for the considered case of single-transform algorithms. | electrical engineering and systems science |
We investigate a model with two real scalar fields that minimally generates exponentially different scales in an analog of the Coleman-Weinberg mechanism. The classical scale invariance -- the absence of dimensionful parameters in the tree-level action, required in such a scale generation -- can naturally be understood as a special case of the multipoint criticality principle. This two-scalar model can couple to the Standard Model Higgs field to realize a maximum multiplicity of criticality for field values around the electroweak scale, providing a generalization of the classical scale invariance to a wider class of criticality. As a bonus, one of the two scalars can be identified as Higgs-portal dark matter. We find that this model can be consistent with the constraints from dark matter relic abundance, its direct detection experiments, and the latest LHC data, while keeping the perturbativity up to the Planck scale. We then present successful benchmark points satisfying all these constraints: The mass of dark matter is a few TeV, and its scattering cross section with nuclei is of the order of $10^{-9}$ pb, reachable in near-future experiments. The mass of the extra Higgs boson $H$ is smaller than or of the order of 100 GeV, and the cross section of $e^+e^- \to ZH$ can be at the fb level for a collision energy of 250 GeV, targeted at future lepton colliders. | high energy physics phenomenology |
Millimeter wave (mmWave) and massive MIMO systems are intrinsic components of 5G and beyond. These systems rely on using beamforming codebooks for both initial access and data transmission. Current beam codebooks, however, generally consist of a large number of narrow beams that scan all possible directions, even if these directions are never used. This leads to very large training overhead. Further, these codebooks do not normally account for the hardware impairments or the possible non-uniform array geometries, and their calibration is an expensive process. To overcome these limitations, this paper develops an efficient online machine learning framework that learns how to adapt the codebook beam patterns to the specific deployment, surrounding environment, user distribution, and hardware characteristics. This is done by designing a novel complex-valued neural network architecture in which the neuron weights directly model the beamforming weights of the analog phase shifters, accounting for the key hardware constraints such as the constant-modulus and quantized-angles. This model learns the codebook beams through online and self-supervised training avoiding the need for explicit channel state information. This respects the practical situations where the channel is either unavailable, imperfect, or hard to obtain, especially in the presence of hardware impairments. Simulation results highlight the capability of the proposed solution in learning environment and hardware aware beam codebooks, which can significantly reduce the training overhead, enhance the achievable data rates, and improve the robustness against possible hardware impairments. | electrical engineering and systems science |
Energy consumption of computing has found increasing prominence but the area still suffers from the lack of a consolidated formal theory. In this paper, a theory for the energy consumption of computing is structured as an axiomatic system. The work is purely theoretical, involving theorem proving and mathematical reasoning. It is also interdisciplinary, so that while it targets computing, it involves theoretical physics (thermodynamics and statistical mechanics) and information theory. The theory does not contradict existing theories in theoretical physics and conforms to them as indeed it adopts its axioms from them. Nevertheless, the theory leads to interesting and important conclusions that have not been discussed in previous work. Some of them are: (i) Landauer's principle is shown to be a provable theorem provided that a precondition, named macroscopic determinism, holds. (ii) It is proved that real randomness (not pseudo randomness) can be used in computing in conjunction with or as an alternative to reversibility to achieve more energy saving. (iii) The theory propounds the concept that computers that use real randomness may apparently challenge the second law of thermodynamics. These are computational counterparts to Maxwell's daemon in thermodynamics and hence are named daemon computers. (iv) It is proved that if we do not accept the existence of daemon computers (to conform to the second law of thermodynamics), another type of computers, named clairvoyant computers, must exist that can gain information about other physical systems through real randomness. This theorem probably provides a theoretical explanation for strange observations about real randomness made in the global consciousness project at Princeton University. | condensed matter |
The NIKA2 polarization channel at 260 GHz (1.15 mm) has been proposed primarily to observe galactic star-forming regions and probe the critical scales between 0.01 and 0.05 pc at which magnetic field lines may channel the matter of interstellar filaments into growing dense cores. The NIKA2 polarimeter consists of a room-temperature, continuously rotating multi-mesh half-wave plate (HWP) and a cold polarizer that separates the two orthogonal polarizations onto two 260 GHz KIDs arrays. We describe in this paper the preliminary results obtained during the most recent commissioning campaign performed in December 2018. We concentrate here on the analysis of the extended sources, while the observation of compact sources is presented in a companion paper [12]. We present preliminary NIKA2 polarization maps of the Crab nebula. We find that the integrated polarization intensity flux measured by NIKA2 is consistent with expectations. In terms of polarization angle, we are still limited by systematic uncertainties that will be further investigated in the forthcoming commissioning campaigns. | astrophysics |
A robust density fitting method for calculating Coulomb matrix elements over Bloch functions based on calculation of two- and three-center matrix elements of the Ewald potential is described and implemented in a Gaussian orbital basis in the Exciton code. The method is tested by comparing Coulomb and exchange energies from density fitting to corresponding energies from SCF HF calculations for diamond, magnesium oxide and bulk Ne. Density fitting coefficients from the robust method are compared to coefficients from a variational method applied to wave function orbital products in bulk Ne. Four-center Coulomb matrix elements from density fitting are applied to time-dependent Hartree-Fock (TDHF) calculations in diamond, magnesium oxide and anatase and rutile polytypes of titanium dioxide. Shifting virtual states downwards uniformly relative to occupied states and scaling the electron-hole attraction term in the TDHF Hamiltonian by 0.4 yields good agreement with experiment and/or Bethe-Salpeter equation calculations. This approach mirrors similar 'scissors' adjustments of occupied and virtual states and introduction of a scaled electron-hole attraction term in some time-dependent DFT calculations. | condensed matter |
This paper proposes a methodology for enhancing power systems resiliency by proactively splitting an interconnected grid into small self-sustaining islands in preparation for extreme events. The idea is to posture the system so that cascading outages can be bound within affected areas, preventing the propagation of disturbances to the rest of the system. This mitigation strategy will prove especially useful when advance notification of a threat is available but its nature not well understood. In our method, islands are determined using a constrained hierarchical spectral clustering technique. We further check the viability of the resultant islands using steady-state AC power flow. The performance of the approach is illustrated using a detailed PSS/E model of the heavily meshed transmission network operated by PJM Interconnection in the eastern USA. Representative cases from different seasons show that variations in power flow patterns influence island configuration. | electrical engineering and systems science |
Let $A$ be an algebraically simple, separable, nuclear, $\mathcal{Z}$-stable $C^*$-algebra for which the trace space $T(A)$ is a Bauer simplex and the extremal boundary $\partial_e T(A)$ has finite covering dimension. We prove that each automorphism $\alpha$ on $A$ is cocycle conjugate to its tensor product with the trivial automorphism on the Jiang-Su algebra. At least for single automorphisms this generalizes a recent result by Gardella-Hirshberg. If $\alpha$ is strongly outer as an action of $\mathbb{Z}$, we prove it has finite Rokhlin dimension with commuting towers. As a consequence it tensorially absorbs any automorphism on the Jiang-Su algebra. | mathematics |
Quantum LDPC codes are a promising direction for low overhead quantum computing. In this paper, we propose a generalization of the Union-Find decoder as a decoder for quantum LDPC codes. We prove that this decoder corrects all errors with weight up to $A n^{\alpha}$ for some $A, \alpha > 0$ for different classes of quantum LDPC codes such as toric codes and hyperbolic codes in any dimension $D \geq 3$ and quantum expander codes. To prove this result, we introduce a notion of covering radius which measures the spread of an error from its syndrome. We believe this notion could find application beyond the decoding problem. We also perform numerical simulations, which show that our Union-Find decoder outperforms the belief propagation decoder in the low error rate regime in the case of a quantum LDPC code with length 3600. | quantum physics |
The Generalised Differential Image Motion Monitor (GDIMM) was proposed a few years ago as a new-generation instrument for turbulence monitoring. It measures integrated parameters of the optical turbulence, i.e. the seeing, isoplanatic angle, scintillation index, coherence time and wavefront coherence outer scale. GDIMM is based on a fully automatic small telescope (28 cm diameter), equipped with a 3-hole mask at its entrance pupil. The instrument is installed at the Calern observatory (France) and performs continuous night-time monitoring of turbulence parameters. In this communication we present long-term and seasonal statistics obtained at Calern, and combine GDIMM data to provide quantities such as the equivalent turbulence altitude and the effective wind speed. | astrophysics |
Majorana zero-modes (MZMs) are spatially-localized zero-energy fractional quasiparticles with non-Abelian braiding statistics that hold a great promise for topological quantum computing. Due to its particle-antiparticle equivalence, an MZM exhibits robust resonant Andreev reflection and 2e^2/h quantized conductance at low temperature. By utilizing variable-tunnel-coupled scanning tunneling spectroscopy, we study tunneling conductance of vortex bound states on FeTe0.55Se0.45 superconductors. We report observations of conductance plateaus as a function of tunnel coupling for zero-energy vortex bound states with values close to or even reaching the 2e^2/h quantum conductance. In contrast, no such plateau behaviors were observed on either finite energy Caroli-de Genne-Matricon bound states or in the continuum of electronic states outside the superconducting gap. This unique behavior of the zero-mode conductance reaching a plateau strongly supports the existence of MZMs in this iron-based superconductor, which serves as a promising single-material platform for Majorana braiding at a relatively high temperature. | condensed matter |
By employing magnetization and small angle neutron scattering (SANS) measurements, we have investigated the behavior of the skyrmion lattice (SKL) and the helical order in MnSi0.992Ga0.008. Our results indicate that the order of the SKL is sensitive to the orientation of an applied magnetic field with respect to the crystal lattice and small variations in the sequence of temperature and applied magnetic field changes. The disorder caused by the substitution of the heavier element Ga for Si is sufficient to reduce the pinning of the SKL to the underlying crystalline lattice. This reduces the propensity for the SKL to be aligned with the crystal lattice. This tendency is most evident when the applied field is not well oriented with respect to the high symmetry axes of the crystal resulting in disorder in the long range SKL while maintaining sharp radial order. We have also investigated the effect of substituting heavier elements into MnSi on the reorientation process of the helical domains with field cycling in MnSi0.992Ga0.008 and Mn0.985Ir0.015Si. A comparison of the reorientation process in these materials with field reduction indicates that the substitution of heavier elements on either Mn or Si sites creates a higher energy barrier for the reorientation of the helical order and for the formation of domains. | condensed matter |
We propose a scheme to prepare a macroscopic mechanical oscillator in a catlike state, close to a coherent state superposition. The mechanical oscillator, coupled by radiation-pressure interaction to a field in an optical cavity, is first prepared close to a squeezed vacuum state using a reservoir engineering technique. The system is then probed using a short optical pulse tuned to the lower motional sideband of the cavity resonance, realizing a photon-phonon swap interaction. A photon number measurement of the photons emerging from the cavity then conditions a phonon-subtracted catlike state with a negative Wigner distribution exhibiting separated peaks and multiple interference fringes. We show that this scheme is feasible using state-of-the-art photonic crystal optomechanical systems. | quantum physics |
We derive bounds analogous to the Froissart bound for the absorptive part of CFT$_d$ Mellin amplitudes. Invoking the AdS/CFT correspondence, these amplitudes correspond to scattering in AdS$_{d+1}$. We can take a flat space limit of the corresponding bound. We find the standard Froissart-Martin bound, with the coefficient in front for $d+1=4$ being $\pi/\mu^2$, where $\mu$ is the mass of the lightest exchange. For $d>4$, the form is different. We show that while for CFT$_{d\leq 6}$ the number of subtractions needed to write a dispersion relation for the Mellin amplitude is equal to 2, for CFT$_{d>6}$ the number of subtractions needed is greater than 2 and goes to infinity as $d$ goes to infinity. | high energy physics theory |
This paper presents a Gaussian process (GP) model for estimating piecewise continuous regression functions. In scientific and engineering applications of regression analysis, the underlying regression functions are piecewise continuous in that data follow different continuous regression models for different regions of the data with possible discontinuities between the regions. However, many conventional GP regression approaches are not designed for piecewise regression analysis. We propose a new GP modeling approach for estimating an unknown piecewise continuous regression function. The new GP model seeks a local GP estimate of an unknown regression function at each test location, using local data neighboring the test location. To accommodate the possibility of the local data coming from different regions, the local data is partitioned into two sides by a local linear boundary, and only the local data belonging to the same side as the test location is used for the regression estimate. This local split works very well when the input regions are bounded by smooth boundaries, so that the local linear approximation of those boundaries is accurate. We estimate the local linear boundary jointly with the other hyperparameters of the GP model, using the maximum likelihood approach. Its computation time is as low as that of a local GP. The superior numerical performance of the proposed approach over the conventional GP modeling approaches is shown using various simulated piecewise regression functions. | statistics |
Direct numerical simulations, performed with a high-order spectral-element method, are used to study coherent structures in turbulent pipe flow at friction Reynolds numbers $Re_{\tau} = 180$ and $550$. The database was analysed using spectral proper orthogonal decomposition (SPOD) to identify energetically dominant coherent structures, most of which turn out to be streaks and quasi-streamwise vortices. To understand how such structures can be modelled, the linear flow responses to harmonic forcing were computed using the singular value decomposition of the resolvent operator, using the mean field as a base flow. The SPOD and resolvent analyses were computed for several combinations of frequencies and wavenumbers, allowing us to map out the similarities between SPOD modes and optimal responses for a wide range of relevant scales in turbulent pipe flows. In order to explore the physical reasons behind the agreement between the two methods, an indicator of the lift-up mechanism in the resolvent analysis was introduced, activated when the optimal forcing represents quasi-streamwise vortices and the associated response corresponds to streaks. Good agreement between leading SPOD and resolvent modes is observed in a large region of parameter space. In this region, a significant gain separation is found in resolvent analysis, which may be attributed to the strong amplification associated with the lift-up mechanism. For both Reynolds numbers, the observed concordances were generally for structures with large energy in the buffer layer. The results highlight resolvent analysis as a pertinent reduced-order model for coherent structures in wall-bounded turbulence, particularly for streamwise elongated structures corresponding to near-wall streamwise vortices and streaks. | physics |
From virtual reality and telepresence, to augmented reality, holoportation, and remotely controlled robotics, these future network applications promise an unprecedented development for society, economics and culture by revolutionizing the way we live, learn, work and play. In order to deploy such futuristic applications and to cater to their performance requirements, recent trends stressed the need for the Tactile Internet, an Internet that, according to the International Telecommunication Union, combines ultra low latency with extremely high availability, reliability and security. Unfortunately, today's Internet falls short when it comes to providing such stringent requirements due to several fundamental limitations in the design of the current network architecture and communication protocols. This brings the need to rethink the network architecture and protocols, and efficiently harness recent technological advances in terms of virtualization and network softwarization to design the Tactile Internet of the future. In this paper, we start by analyzing the characteristics and requirements of future networking applications. We then highlight the limitations of the traditional network architecture and protocols and their inability to cater to these requirements. Afterward, we put forward a novel network architecture adapted to the Tactile Internet called FlexNGIA, a Flexible Next-Generation Internet Architecture. We then describe some use-cases where we discuss the potential mechanisms and control loops that could be offered by FlexNGIA in order to ensure the required performance and reliability guarantees for future applications. Finally, we identify the key research challenges to further develop FlexNGIA towards a full-fledged architecture for the future Tactile Internet. | computer science |
Higher shares of fluctuating generation from renewable energy sources in the power system lead to an increase in grid balancing demand. One approach for avoiding curtailment of renewable energies is to use excess electricity feed-in for heating applications. To assess in which regions power-to-heat technologies can contribute to renewable energy integration, detailed data on the spatial distribution of the heat demand are needed. We determine the overall heat load in the residential building sector and the share covered by electric heating technologies for each administrative district in Germany, with a temporal resolution of 15 minutes. Using a special evaluation of German census data, we defined 729 building categories and assigned individual heat demand values. Furthermore, heating types and different classes of installed heating capacity were defined. Our analysis showed that the share of small-scale single-storey heating and large-scale central heating is higher in cities, whereas there is more medium-scale central heating in rural areas. This results from the different shares of single and multi-family houses in the respective regions. To determine the electrically-covered heat demand, we took into account heat pumps and resistive heating technologies. All results, as well as the developed code, are published under open source licenses and can thus also be used by other researchers for the assessment of power-to-heat for renewable energy integration. | physics |
In this paper we initiate the study of broadcast dimension, a variant of metric dimension. Let $G$ be a graph with vertex set $V(G)$, and let $d(u,w)$ denote the length of a $u-w$ geodesic in $G$. For $k \ge 1$, let $d_k(x,y)=\min \{d(x,y), k+1\}$. A function $f: V(G) \rightarrow \mathbb{Z}^+ \cup \{0\}$ is called a resolving broadcast of $G$ if, for any distinct $x,y \in V(G)$, there exists a vertex $z \in V(G)$ such that $f(z)=i>0$ and $d_{i}(x,z) \neq d_{i}(y,z)$. The broadcast dimension, $bdim(G)$, of $G$ is the minimum of $c_f(G)=\sum_{v \in V(G)} f(v)$ over all resolving broadcasts of $G$, where $c_f(G)$ can be viewed as the total cost of the transmitters (of various strength) used in resolving the entire network described by the graph $G$. Note that $bdim(G)$ reduces to $adim(G)$ (the adjacency dimension of $G$, introduced by Jannesari and Omoomi in 2012) if the codomain of resolving broadcasts is restricted to $\{0,1\}$. We determine its value for cycles, paths, and other families of graphs. We prove that $bdim(G) = \Omega(\log{n})$ for all graphs $G$ of order $n$, and that the result is sharp up to a constant factor. We show that $\frac{adim(G)}{bdim(G)}$ and $\frac{bdim(G)}{dim(G)}$ can both be arbitrarily large, where $dim(G)$ denotes the metric dimension of $G$. We also examine the effect of vertex deletion on the adjacency dimension and the broadcast dimension of graphs. | mathematics |
We describe a new convolutional framework for waveform evaluation, WEnets, and build a Narrowband Audio Waveform Evaluation Network, or NAWEnet, using this framework. NAWEnet is single-ended (or no-reference) and was trained three separate times in order to emulate PESQ, POLQA, or STOI with testing correlations of 0.95, 0.92, and 0.95, respectively, when training on only 50% of the available data and testing on 40%. Stacks of 1-D convolutional layers and non-linear downsampling learn which features are important for quality or intelligibility estimation. This straightforward architecture simplifies the interpretation of its inner workings and paves the way for future investigations into higher sample rates and accurate no-reference subjective speech quality predictions. | electrical engineering and systems science |
Inter-scanner and inter-protocol discrepancies in MRI datasets are known to lead to significant quantification variability. Hence, image-to-image or scanner-to-scanner translation is a crucial frontier in medical image analysis with many potential applications. Nonetheless, a significant percentage of existing algorithms cannot explicitly exploit and preserve texture details from target scanners, and instead offer individual solutions through specialized task-specific architectures. In this paper, we design a multi-scale texture transfer scheme to enrich the reconstructed images with more details. Specifically, after calculating textural similarity, the multi-scale texture transfer can adaptively transfer the texture information from target images or reference images to restored images. Different from the pixel-wise matching space used by previous algorithms, we match texture features in a multi-scale scheme implemented in the neural space. The matching mechanism can exploit multi-scale neural transfer that encourages the model to grasp more semantic-related and lesion-related priors from the target or reference images. We evaluate our multi-scale texture GAN on three different tasks without any task-specific modifications: cross-protocol super-resolution of diffusion MRI, T1-Flair, and Flair-T2 modality translation. Our multi-texture GAN recovers more high-resolution structures (i.e., edges and anatomy), texture (i.e., contrast and pixel intensities), and lesion information (i.e., tumor). Extensive quantitative and qualitative experiments demonstrate that our method achieves superior results in inter-protocol or inter-scanner translation over state-of-the-art methods. | electrical engineering and systems science |
This paper introduces a concept of approximate spectral gap to analyze the mixing time of Markov Chain Monte Carlo (MCMC) algorithms for which the usual spectral gap is degenerate or almost degenerate. We use the idea to analyze a class of MCMC algorithms to sample from mixtures of densities. As an application we study the mixing time of a Gibbs sampler for variable selection in linear regression models. Under some regularity conditions on the signal and the design matrix of the regression problem, we show that for well-chosen initial distributions the mixing time of the Gibbs sampler is polynomial in the dimension of the space. | statistics |
Generalizing wavelets by adding desired redundancy and flexibility, framelets are of interest and importance in many applications such as image processing and numerical algorithms. Several key properties of framelets are high vanishing moments for sparse multiscale representation, fast framelet transforms for numerical efficiency, and redundancy for robustness. However, it is a challenging problem to study and construct multivariate nonseparable framelets, mainly due to their intrinsic connections to factorization and syzygy modules of multivariate polynomial matrices. In this paper, we circumvent the above difficulties through the approach of quasi-tight framelets, which behave almost identically to tight framelets. Employing the popular oblique extension principle (OEP), from an arbitrary compactly supported $M$-refinable vector function $\phi$ with multiplicity greater than one, we prove that we can always derive from $\phi$ a compactly supported multivariate quasi-tight framelet such that (i) all the framelet generators have the highest possible order of vanishing moments; (ii) its associated fast framelet transform is compact with the highest balancing order. For a refinable scalar function $\phi$, the above item (ii) often cannot be achieved intrinsically, but we show that we can always construct a compactly supported OEP-based multivariate quasi-tight framelet derived from $\phi$ satisfying item (i). This paper provides a comprehensive investigation of OEP-based multivariate quasi-tight multiframelets and their associated framelet transforms with high balancing orders. This deepens our theoretical understanding of multivariate quasi-tight multiframelets and their associated fast multiframelet transforms. | mathematics |
Ultraviolet (UV) exposure significantly contributes to non-melanoma skin cancer. In the context of health, UV exposure is the product of time and the UV Index (UVI), a weighted sum of the irradiance I(lambda) over all wavelengths from lambda = 250 to 400 nm. In our analysis of the United States Environmental Protection Agency's UV-Net database of over four hundred thousand spectral irradiance measurements taken over several years, we found that the UVI is well estimated by UVI = 77 I(310 nm). To better understand this result, we applied an optical atmospheric model of the terrestrial irradiance spectra and found that it applies across a wide range of conditions. | physics |
We report here an extension of a previous work in which we have shown that matrix models provide a tool to compute the intersection numbers of p-spin curves. We discuss further an extension to half-integer p, in more detail for p=1/2 and p=3/2. In those new cases one finds contributions from the Ramond sector, which were not present for positive integer p. The existence of Virasoro constraints, in particular a string equation, is considered also for half-integral spins. The contribution of the boundary of a Riemann surface is investigated through a logarithmic matrix model. The supersymmetric random matrices provide extensions to mixed positive and negative p punctures. | high energy physics theory |
We study infinite ``$+$'' or ``$-$'' clusters for an Ising model on a connected, transitive, non-amenable, planar, one-ended graph $G$ with finite vertex degree. If the critical percolation probability $p_c^{site}$ for the i.i.d.~Bernoulli site percolation on $G$ is less than $\frac{1}{2}$, we find an explicit region for the coupling constant of the Ising model such that there are infinitely many infinite ``$+$''-clusters and infinitely many infinite ``$-$''-clusters, while the random cluster representation of the Ising model has no infinite 1-clusters. If $p_c^{site}>\frac{1}{2}$, we obtain a lower bound for the critical probability in the random cluster representation of the Ising model in terms of $p_c^{site}$. | mathematics |
What makes two images similar? We propose new approaches to generate model-agnostic explanations for image similarity, search, and retrieval. In particular, we extend Class Activation Maps (CAMs), Additive Shapley Explanations (SHAP), and Locally Interpretable Model-Agnostic Explanations (LIME) to the domain of image retrieval and search. These approaches enable black- and grey-box model introspection and can help diagnose errors and understand the rationale behind a model's similarity judgments. Furthermore, we extend these approaches to extract a full pairwise correspondence between the query and retrieved image pixels, an approach we call "joint interpretations". Formally, we show joint search interpretations arise from projecting Harsanyi dividends, and that this approach generalizes Shapley Values and the Shapley-Taylor indices. We introduce a fast kernel-based method for estimating Shapley-Taylor indices and empirically show that these game-theoretic measures yield more consistent explanations for image similarity architectures. | computer science |
We present a facility for direct measurements at low and very low energies typical for nuclear astrophysics (NA). The facility consists of a small and robust tandem accelerator where irradiations are made, and an ultra-low background laboratory located in a salt mine where very low radioactivities can be measured. Both belong to the Horia Hulubei National Institute for Physics and Nuclear Engineering (IFIN-HH) but are situated 120 km apart. Their performance is demonstrated with a few use cases. We argue that this facility is competitive for the study of nuclear reactions induced by alpha particles and by light ions at energies close to or down into the Gamow windows. A good case study was the 13C+12C fusion reaction, where the proton evaporation channel leads to an activity with T1/2 = 15 h, appropriate for sample transfer to the salt mine. Measurements were done using the thick-target method down into the Gamow window for energies from Ecm=2.2 MeV, which is the lowest energy ever reached for this reaction, up to 5.3 MeV, using 13C beams from the 3 MV tandetron. The activation method allowed us to determine a cross section of the order of 100 pb. Reactions induced by alphas were also measured. Proton-induced resonant reactions were used to calibrate the accelerator terminal voltage. Some results of the experiments characterizing the assembly are shown and discussed. | physics |
The continuous monitoring of a quantum system strongly influences the emergence of chaotic dynamics near the transition from the quantum regime to the classical regime. Here we present a feedback control scheme that uses adaptive measurement techniques to control the degree of chaos in the driven-damped quantum Duffing oscillator. This control relies purely on the measurement backaction on the system, making it a uniquely quantum control, and is only possible due to the sensitivity of chaos to measurement. We quantify the effectiveness of our control by numerically computing the quantum Lyapunov exponent over a wide range of parameters. We demonstrate that adaptive measurement techniques can control the onset of chaos in the system, pushing the quantum-classical boundary further into the quantum regime. | quantum physics |
We present two different techniques for achieving low resistance ($<$20 n$\rm \Omega$) contacts between copper and aluminium at cryogenic temperatures. The best method is based on gold plating of the surfaces in an e-beam evaporator immediately after Ar plasma etching in the same apparatus, yielding resistances as low as 3 n$\rm \Omega$ that are stable over time. The second approach involves inserting indium in the Al/Cu joint. For both methods, we believe key elements are surface polishing, total removal of the aluminum oxide surface layer, and temporary application of large (typ. 11 kN) compression forces. We believe the values for gold plated contacts are the lowest ever reported for a Cu/Al joint of a few $\rm cm^{2}$. This technology could simplify the construction of thermal links for advanced cryogenics applications, in particular that of extremely low resistance heat switches for nuclear demagnetization refrigerators. | physics |
For the first time, a gravitational calculation was recently shown to yield the Page curve for the entropy of Hawking radiation, consistent with unitary evolution. However, the calculation takes as essential input Hawking's result that the radiation entropy becomes large at late times. We call this apparent contradiction the state paradox. We exhibit its manifestations in standard and doubly-holographic settings, with and without an external bath. We clarify which version(s) of the Ryu-Takayanagi prescription apply in each setting. We show that the two possible homology rules in the presence of a braneworld generate a bulk dual of the state paradox. The paradox is resolved if the gravitational path integral computes averaged quantities in a suitable ensemble of unitary theories, a possibility supported independently by several recent developments. | high energy physics theory |
The SIGIR 2019 Workshop on eCommerce (ECOM19) was a full-day workshop that took place on Thursday, July 25, 2019 in Paris, France. The purpose of the workshop was to serve as a platform for publication and discussion of Information Retrieval and NLP research and their applications in the domain of eCommerce. The workshop program was designed to bring together practitioners and researchers from academia and industry to discuss the challenges and approaches to product search and recommendation in the eCommerce domain. A second goal was to run a data challenge on real-world eCommerce data. The workshop drew contributions from both industry and academia: in total, it received 38 submissions and accepted 24 (63%). There were two keynotes by invited speakers, a poster session where all the accepted submissions were presented, a panel discussion, and three short talks by invited speakers. | computer science |
We consider in this paper the optimal approximations of convex univariate functions with feed-forward ReLU neural networks. We are interested in the following question: what is the minimal approximation error given the number of approximating linear pieces? We establish the necessary and sufficient conditions and uniqueness of optimal approximations, and give lower and upper bounds on the optimal approximation errors. ReLU neural network architectures are then presented to generate these optimal approximations. Finally, we propose an algorithm to find the optimal approximations, as well as prove its convergence and validate it with experimental results. | computer science |
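As a rough illustration of the question posed in this abstract, the sketch below measures the sup-norm error of a piecewise-linear approximation of a convex function as the number of pieces grows. It uses uniform knots, a simple and generally sub-optimal placement (the paper characterizes the truly optimal approximations); note that any continuous piecewise-linear function with k pieces can be realized exactly by a small ReLU network, so this error is also a ReLU-network approximation error.

```python
import numpy as np

def interp_error(f, a, b, k, grid=10_000):
    """Sup-norm error of the piecewise-linear interpolant of f on [a, b]
    built from k pieces with uniform knots."""
    knots = np.linspace(a, b, k + 1)
    x = np.linspace(a, b, grid)
    approx = np.interp(x, knots, f(knots))   # piecewise-linear interpolation
    return np.max(np.abs(f(x) - approx))

f = lambda x: x ** 2                         # convex test function
for k in (2, 4, 8, 16):
    print(k, interp_error(f, 0.0, 1.0, k))   # error decays like 1/k^2
```

For f(x) = x^2 the per-piece interpolation error is h^2/8 with h = 1/k, so the printed values halve four-fold as k doubles, matching the expected 1/k^2 rate.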
Based on the rate of resolved stellar origin black hole and neutron star mergers measured by LIGO and Virgo, it is expected that these detectors will also observe an unresolved Stochastic Gravitational Wave Background (SGWB) by the time they reach design sensitivity. Since the same binaries observed by LIGO and Virgo pass through the LISA mHz frequency band at an earlier stage of their orbital evolution, it is foreseen that their SGWB will also be observable by LISA with Signal to Noise Ratio (SNR) $\sim 53$. Unlike the stochastic signal from Galactic white dwarf binaries, for which a subtraction is expected to be possible by exploiting its yearly modulation (induced by the motion of the LISA constellation), the background from unresolved stellar origin black hole and neutron star binaries acts as a foreground for other stochastic signals of cosmological or astrophysical origin, which may also be present in the LISA band. Here, we employ a principal component analysis to model and extract an additional hypothetical SGWB in the LISA band, without making any a priori assumptions on its spectral shape. At the same time, we account for the presence of the foreground from stellar origin black holes and neutron stars, as well as for possible uncertainties in the LISA noise calibration. We find that our technique leads to a linear problem and is therefore suitable for fast and reliable extraction of SGWBs with SNR up to ten times weaker than the LIGO/Virgo foreground, quite independently of their spectral shape. | astrophysics |
We discuss how four-lepton decays of the Z-boson probe currently unconstrained flat directions in the parameter space of the Standard Model Effective Field Theory (SMEFT). We derive the constraints from these decays on four-lepton operators in the SMEFT and show how the LHC data for this process complements probes from neutrino-trident production. Future differential measurements with high-luminosity data can strongly constrain four-lepton operators and remove all flat directions in the four-muon sector of the SMEFT. We comment briefly on the possibility of using rare Z-decays to tau-leptons to probe untested directions in the SMEFT parameter space. | high energy physics phenomenology |
Autoregressive models capture stochastic processes in which past realizations determine the generative distribution of new data; they arise naturally in a variety of industrial, biomedical, and financial settings. A key challenge when working with such data is to determine when the underlying generative model has changed, as this can offer insights into distinct operating regimes of the underlying system. This paper describes a novel dynamic programming approach to localizing changes in high-dimensional autoregressive processes and associated error rates that improve upon the prior state of the art. When the model parameters are piecewise constant over time and the corresponding process is piecewise stable, the proposed dynamic programming algorithm consistently localizes change points even as the dimensionality, the sparsity of the coefficient matrices, the temporal spacing between two consecutive change points, and the magnitude of the difference of two consecutive coefficient matrices are allowed to vary with the sample size. Furthermore, the accuracy of initial, coarse change point localization estimates can be boosted via a computationally efficient refinement algorithm that provably improves the localization error rate. Finally, comprehensive simulation experiments and a real data analysis are provided to show the numerical superiority of our proposed methods. | mathematics |
In the framework of the instanton liquid model (ILM), we consider thermal modifications of the gluon properties in different scenarios of the temperature $T$ dependence of the average instanton size $\bar{\rho}(T)$ and the instanton density $n(T)$ known from the literature. Due to interactions with instantons, the gluons acquire a dynamical temperature-dependent "electric" gluon mass $M_{el}(q,T)$. We found that at small momenta and zero temperature $M_{el}(0,0)\approx362\,{\rm MeV}$ at the phenomenological values of $\bar{\rho}(0)=1/3\,{\rm fm}$ and $n(0)=1\,{\rm fm}^{-4}$; however, the $T$-dependence of the mass is very sensitive to the temperature dependence of the instanton vacuum parameters $\bar{\rho}(T),\,n(T)$: it is very mild in the case of the lattice-motivated dependence and decreases steeply over the whole range for the theoretical parametrization. We see that in the region $0<T<T_{c}$ the ILM is able to reproduce lattice results for the dynamical "electric" gluon mass. | high energy physics phenomenology |
We present constraints on the tensor-to-scalar ratio r using Planck data. We use the latest release of Planck maps (PR4), processed with the NPIPE code, which produces calibrated frequency maps in temperature and polarization for all Planck channels from 30 GHz to 857 GHz using the same pipeline. We computed constraints on r using the BB angular power spectrum, and we also discuss constraints coming from the TT spectrum. Given Planck's noise level, the TT spectrum gives constraints on r that are cosmic-variance limited (with $\sigma$(r)=0.093), but we show that the marginalized posterior peaks towards negative values of r at about the 1.2$\sigma$ level. We derived Planck constraints using the BB power spectrum at both large angular scales (the 'reionization bump') and intermediate angular scales (the 'recombination bump') from $\ell$=2 to 150, and find a stronger constraint than that from TT, with $\sigma$(r)=0.069. The Planck BB spectrum shows no systematic bias, and is compatible with zero, given both the statistical noise and the systematic uncertainties. The likelihood analysis using B modes yields the constraint r<0.158 at 95% confidence using more than 50% of the sky. This upper limit tightens to r<0.069 when Planck EE, BB, and EB power spectra are combined consistently, and it tightens further to r<0.056 when the Planck TT power spectrum is included in the combination. Finally, combining Planck with BICEP2/Keck 2015 data yields an upper limit of r<0.044. | astrophysics |
The relationship between the macroscopic response of the slope and the macrostructure of the force chain network under the action of a metal plate was studied by the particle discrete element method and persistent homology. The particle accumulation model was used to simulate the instability process of the slope under the continuous downward action of the metal plate by the particle discrete element method. Macroscopic responses such as the total velocity vector of the two-dimensional slope deposit, the angle of the slip cracking surface when the slope becomes unstable, and the average velocity in the y-direction of the slope were studied. Then, the normal force chain undirected network model of the natural accumulation of slope stacking particles was constructed. The topological characteristics of the particle contact force chain network of the slope top were then analyzed by the persistent homology method to obtain the barcode. Finally, the relationship between the instability evolution and the characteristics of persistent homology was established. This research provides a new method for the study of slope instability topology identification, so that the instability destruction of a slope can be predicted effectively. | condensed matter |
Recent breakthroughs in object detection and image classification using Convolutional Neural Networks (CNNs) are revolutionizing the state of the art in medical imaging, and microscopy in particular presents abundant opportunities for computer vision algorithms to assist medical professionals in diagnosis of diseases ranging from malaria to cancer. High resolution scans of microscopy slides called Whole Slide Images (WSIs) offer enough information for a cancer pathologist to come to a conclusion regarding cancer presence, subtype, and severity based on measurements of features within the slide image at multiple scales and resolutions. WSIs' extremely high resolutions and feature scales ranging from gross anatomical structures down to cell nuclei preclude the use of standard CNN models for object detection and classification, which have typically been designed for images with dimensions in the hundreds of pixels and with objects on the order of the size of the image itself. We explore parallel approaches based on Reinforcement Learning and Beam Search to learn to progressively zoom into the WSI to detect Regions of Interest (ROIs) in liver pathology slides containing one of two types of liver cancer, namely Hepatocellular Carcinoma (HCC) and Cholangiocarcinoma (CC). These ROIs can then be presented directly to the pathologist to aid in measurement and diagnosis or be used for automated classification of tumor subtype. | electrical engineering and systems science |
We introduce an electro-optic hardware platform for nonlinear activation functions in optical neural networks. The optical-to-optical nonlinearity operates by converting a small portion of the input optical signal into an analog electric signal, which is used to intensity-modulate the original optical signal with no reduction in processing speed. Our scheme allows for complete nonlinear on-off contrast in transmission at relatively low optical power thresholds and eliminates the requirement of having additional optical sources between each layer of the network. Moreover, the activation function is reconfigurable via electrical bias, allowing it to be programmed or trained to synthesize a variety of nonlinear responses. Using numerical simulations, we demonstrate that this activation function significantly improves the expressiveness of optical neural networks, allowing them to perform well on two benchmark machine learning tasks: learning a multi-input exclusive-OR (XOR) logic function and classification of images of handwritten numbers from the MNIST dataset. The addition of the nonlinear activation function improves test accuracy on the MNIST task from 85% to 94%. | electrical engineering and systems science |
We show that thermal radiation from a topological insulator carries a nonzero average spin angular momentum. | physics |
As modern scientific simulations grow ever more in size and complexity, even their analysis and post-processing becomes increasingly demanding, calling for the use of HPC resources and methods. yt is a parallel, open source post-processing Python package for numerical simulations in astrophysics, made popular by its cross-format compatibility, its active community of developers and its integration with several other professional Python instruments. The Intel Distribution for Python enhances yt's performance and parallel scalability through the optimization of the lower-level libraries NumPy and SciPy, which make use of the optimized Intel Math Kernel Library (Intel MKL) and the Intel MPI library for distributed computing. The yt package is used for several analysis tasks, including integration of derived quantities, volumetric rendering, 2D phase plots, cosmological halo analysis and production of synthetic X-ray observations. In this paper, we provide a brief tutorial for the installation of yt and the Intel Distribution for Python, and the execution of each analysis task. Compared to the Anaconda Python distribution, the provided solution achieves net speedups of up to 4.6x on Intel Xeon Scalable processors (codename Skylake). | astrophysics |
Representation learning for knowledge graphs (KGs) has focused on the problem of answering simple link prediction queries. In this work we address the more ambitious challenge of predicting the answers of conjunctive queries with multiple missing entities. We propose Bi-Directional Query Embedding (BIQE), a method that embeds conjunctive queries with models based on bi-directional attention mechanisms. Contrary to prior work, bidirectional self-attention can capture interactions among all the elements of a query graph. We introduce a new dataset for predicting the answers of conjunctive queries and conduct experiments that show BIQE significantly outperforming state-of-the-art baselines. | computer science |
Previous work on visual storytelling mainly focused on exploring the image sequence as evidence for storytelling and neglected textual evidence for guiding story generation. Motivated by the human storytelling process, which recalls stories for familiar images, we exploit textual evidence from similar images to help generate coherent and meaningful stories. To pick the images which may provide textual experience, we propose a two-step ranking method based on image object recognition techniques. To utilize textual information, we design an extended Seq2Seq model with a two-channel encoder and attention. Experiments on the VIST dataset show that our method outperforms state-of-the-art baseline models without heavy engineering. | computer science |
Tasks such as classification of data and determining the ground state of a Hamiltonian cannot be carried out through purely unitary quantum evolution. Instead, the inherent non-unitarity of the measurement process must be harnessed. Post-selection and its extensions provide a way to do this. However, they make inefficient use of time resources -- a typical computation might require $O(2^m)$ measurements over $m$ qubits to reach a desired accuracy. We propose a method inspired by the eigenstate thermalisation hypothesis, that harnesses the induced non-linearity of measurement on a subsystem. Post-selection on $m$ ancillae qubits is replaced with tracing out $O(\log\epsilon / \log(1-p))$ qubits (where $p$ is the probability of a successful measurement) to attain the same accuracy as the post-selection circuit. We demonstrate this scheme on the quantum perceptron and the phase estimation algorithm. This method is particularly advantageous on current quantum computers involving superconducting circuits. | quantum physics |
We aim to understand the properties at the locations of supernova (SN) explosions in their host galaxies and compare them with the global properties of the host galaxies. We use the integral field spectrograph (IFS) of Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) to get 2D maps of the parameter properties for eleven SN host galaxies. The sample galaxies are analyzed one by one in detail regarding their velocity field, star formation rate, oxygen abundance, stellar mass, etc. This sample of SN host galaxies has redshifts around $z$ $\sim$ 0.03, which is higher than those of previous related works. The higher redshift distribution allows us to obtain the properties of more distant SN host galaxies. Metallicity (gas-phase oxygen abundance) estimated from integrated spectra can represent the local metallicity at SN explosion sites with small bias. All the host galaxies in our sample are metal-rich galaxies (12+log(O/H) $>$ 8.5) except for NGC 6387, which suggests that supernovae (SNe) may be more inclined to explode in metal-rich galaxies. There is a positive relation between the global gas-phase oxygen abundance and the stellar mass of the host galaxies. We also compare the differences between the host galaxies of SNe Ia and SNe II. In our sample, both SNe Ia and SNe II can explode in normal galaxies, while SNe II can also explode in an interacting or merging system with star formation in the galaxy. | astrophysics |
Here we present new joint reconstruction and regularization techniques inspired by ideas in microlocal analysis and lambda tomography, for the simultaneous reconstruction of the attenuation coefficient and electron density from X-ray transmission (i.e., X-ray CT) and backscattered data (assumed to be primarily Compton scattered). To demonstrate our theory and reconstruction methods, we consider the "parallel line segment" acquisition geometry of Webber and Miller ("Compton scattering tomography in translational geometries." Inverse Problems 36, no. 2 (2020): 025007), which is motivated by system architectures currently under development for airport security screening. We first present a novel microlocal analysis of the parallel line geometry which explains the nature of image artefacts when the attenuation coefficient and electron density are reconstructed separately. We next introduce a new joint reconstruction scheme for low effective $Z$ (atomic number) imaging ($Z<20$) characterized by a regularization strategy whose structure is derived from lambda tomography principles and motivated directly by the microlocal analytic results. Finally we show the effectiveness of our method in combating noise and image artefacts on simulated phantoms. | electrical engineering and systems science |
Catalysts are substances that assist transformation of other resourceful objects without being consumed in the process. However, the fact that their `catalytic power' is limited and can be depleted is often overlooked, especially in the recently developing theories on catalysis of quantum randomness utilizing building correlation with catalyst. In this work, we establish a resource theory of one-shot catalytic randomness in which uncorrelatedness is consumed in catalysis of randomness. We do so by completely characterizing bipartite unitary operators that can be used to implement catalysis of randomness using partial transpose. By doing so, we find that every catalytic channel is factorizable, and therefore there exists a unital channel that is not catalytic. We define a family of catalytic entropies that quantifies catalytically extractable entropy within a quantum state and show how much degeneracy of quantum state can boost the catalytic entropy beyond its ordinary entropy. Based on this, we demonstrate that a randomness source can be actually exhausted after a certain amount of randomness is extracted. We apply this theory to systems under conservation law that forbids superposition of certain quantum states and find that non-maximally mixed states can yield the maximal catalytic entropy. We discuss implications of this theory to various topics including catalytic randomness absorption, the no-secret theorem and the possibility of multi-party infinite catalysis. | quantum physics |
Quantum descriptions of polarization show the rich degrees of freedom underlying classical light. While changes in polarization of light are well-described classically, a full quantum description of polarimetry, which characterizes substances by their effects on incident light's polarization, is lacking. We provide sets of quantum channels that underlie classical polarimetry and thus correspond to arbitrary Mueller matrices. This allows us to inspect how the quantum properties of light change in a classical polarimetry experiment, and to investigate the extra flexibility that quantum states have during such transformations. Moreover, our quantum channels imply a new method for discriminating between depolarizing and nondepolarizing Mueller matrices, which has been the subject of much research. This theory can now be taken advantage of to improve estimation strategies in classical polarimetry and to further explore the boundaries between classical and quantum polarization. | quantum physics |
In this paper we present new theoretical results for the Dantzig and Lasso estimators of the drift in a high-dimensional Ornstein-Uhlenbeck model under sparsity constraints. Our focus is on oracle inequalities for both estimators and error bounds with respect to several norms. In the context of the Lasso estimator our paper is strongly related to [11], who investigated the same problem under row sparsity. We improve their rates and also prove the restricted eigenvalue property solely under an ergodicity assumption on the model. Finally, we present a numerical analysis to uncover the finite sample performance of the Dantzig and Lasso estimators. | mathematics |
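For intuition, here is a minimal, discretized sketch of the Lasso idea in this setting: simulate a sparse Ornstein-Uhlenbeck process by Euler-Maruyama and recover the drift matrix row by row with an l1-penalized regression of increments on states. The dimensions, step size, and penalty level are illustrative placeholders; the paper's estimators are defined in continuous time with theoretically tuned penalties, and the Lasso penalty below visibly shrinks the recovered entries.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
d, T, dt = 10, 2000, 0.05
A = -np.eye(d)                               # sparse, stable drift matrix
A[0, 1] = 0.5

X = np.zeros((T, d))
for t in range(T - 1):                       # Euler-Maruyama simulation
    X[t + 1] = X[t] + (A @ X[t]) * dt + np.sqrt(dt) * rng.standard_normal(d)

dX = (X[1:] - X[:-1]) / dt                   # regress increments on states
A_hat = np.vstack([Lasso(alpha=0.05, fit_intercept=False)
                   .fit(X[:-1], dX[:, j]).coef_ for j in range(d)])
print(np.round(A_hat[:2, :3], 2))            # shrunken estimate of A's top-left block
```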
The properties of the recently discovered Higgs boson together with the absence of new physics at collider experiments allows us to speculate about consistently extending the Standard Model of particle physics all the way up to the Planck scale. In this context, the Standard Model Higgs non-minimally coupled to gravity could be responsible for the symmetry properties of the Universe at large scales and for the generation of the primordial spectrum of curvature perturbations seeding structure formation. We overview the minimalistic Higgs inflation scenario, its predictions, open issues and extensions and discuss its interplay with the possible metastability of the Standard Model vacuum. | high energy physics phenomenology |
We investigate the most general form of the one-dimensional Dirac Hamiltonian $H_D$ in the presence of scalar and pseudoscalar potentials. To seek the embedding of supersymmetry (SUSY) in it, as an alternative procedure to directly employing the intertwining relations, we construct a quasi-Hamiltonian $\mathcal{K}$, defined as the square of $H_D$, to explore the consequences. We show that the diagonal elements of $\mathcal{K}$ under a suitable approximation reflect the presence of a superpotential, thus providing a useful guide in unveiling the role of SUSY. For illustrative purposes we apply our scheme to the transformed one-dimensional version of the planar electron Hamiltonian under the influence of a magnetic field. We generate spectral solutions for a class of isochronous potentials. | quantum physics |
In this paper, we propose an efficient and effective framework to fuse hyperspectral and Light Detection And Ranging (LiDAR) data using two coupled convolutional neural networks (CNNs). One CNN is designed to learn spectral-spatial features from hyperspectral data, and the other one is used to capture the elevation information from LiDAR data. Both of them consist of three convolutional layers, and the last two convolutional layers are coupled together via a parameter sharing strategy. In the fusion phase, feature-level and decision-level fusion methods are simultaneously used to integrate these heterogeneous features sufficiently. For the feature-level fusion, three different fusion strategies are evaluated, including the concatenation strategy, the maximization strategy, and the summation strategy. For the decision-level fusion, a weighted summation strategy is adopted, where the weights are determined by the classification accuracy of each output. The proposed model is evaluated on an urban data set acquired over Houston, USA, and a rural one captured over Trento, Italy. On the Houston data, our model can achieve a new record overall accuracy of 96.03%. On the Trento data, it achieves an overall accuracy of 99.12%. These results sufficiently certify the effectiveness of our proposed model. | computer science |
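A minimal PyTorch sketch of the parameter-sharing idea described in this abstract: two input branches (one per modality) feed the same final convolutional layers, and the resulting features are fused by concatenation. Channel counts, patch size, and class count are invented placeholders, and only the concatenation strategy is shown, not the maximization, summation, or decision-level fusion variants.

```python
import torch
import torch.nn as nn

class CoupledCNNs(nn.Module):
    """Two three-layer CNN branches (HSI and LiDAR); the last two conv
    layers are the *same* modules, i.e. their parameters are shared."""
    def __init__(self, hsi_bands=100, lidar_bands=1, n_classes=7):
        super().__init__()
        self.hsi_in = nn.Conv2d(hsi_bands, 32, 3, padding=1)
        self.lidar_in = nn.Conv2d(lidar_bands, 32, 3, padding=1)
        self.shared = nn.Sequential(                 # coupled layers
            nn.ReLU(), nn.Conv2d(32, 64, 3, padding=1),
            nn.ReLU(), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(2 * 64, n_classes)     # feature-level fusion

    def forward(self, hsi, lidar):
        h = self.shared(self.hsi_in(hsi))
        l = self.shared(self.lidar_in(lidar))
        return self.head(torch.cat([h, l], dim=1))   # concatenation strategy

net = CoupledCNNs()
out = net(torch.randn(2, 100, 11, 11), torch.randn(2, 1, 11, 11))
print(out.shape)   # torch.Size([2, 7])
```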
According to quantum electrodynamics, in a strong magnetic field that is constant and spatially uniform, the vacuum becomes polarized with a refractive index greater than unity. As a result, ultra-relativistic charged particles travelling in such media can emit Cherenkov radiation with a power spectrum directly proportional to the photon frequency $\omega$. Therefore, by extrapolating $\omega$ beyond the critical synchrotron frequency $\omega_{c}$, the Cherenkov radiation would eventually dominate over its synchrotron counterpart. However, such an extrapolation is not possible. We show that in the framework of effective field theory, the maximal attainable photon frequency $\omega_{\tiny{\mbox{max}}}$ is about four orders of magnitude less than $\omega_{c}$. At $\omega=\omega_{\tiny{\mbox{max}}}$, given the $\gamma_{e}$-factor of an electron travelling normal to a constant and spatially uniform magnetic field $\mathbf{B}$, the spectrum of Cherenkov radiation becomes dominant when $\gamma_{e}(|\mathbf{B}|/\mbox{Gauss})\gtrsim 4.32\times 10^{19}$. Nevertheless, detecting the Cherenkov radiation in astrophysical environments remains challenging since its spectral flux density is about three orders of magnitude less than the synchrotron radiation. | high energy physics phenomenology |
Organometallic halide perovskites (OMHPs) have undergone remarkable developments as highly efficient optoelectronic materials for a variety of applications. Several studies have indicated the critical role of defects in the performance of OMHP devices. Yet, the parameters of defects and their interplay with free charge carriers remain unclear. In this study we explore the dynamics of free holes in methylammonium lead tribromide (MAPbBr3) single crystals using time-of-flight (ToF) current spectroscopy. By combining current waveform (CWF) ToF spectroscopy and Monte Carlo (MC) simulation, three energy states were detected in the band gap of MAPbBr3. Additionally, we found trapping and detrapping rates of free holes ranging from a few µs to hundreds of µs and, contrary to previous studies, a strong detrapping activity was revealed. It was shown that these traps have a significant impact on the transport properties of MAPbBr3 single crystal devices, including the drift mobility and the mobility-lifetime product. To demonstrate the impact of traps on the delay of free carriers, we developed a new model of the effective mobility valid for the case of multiple traps in a semiconductor. Our results provide a new insight into the charge transport properties of OMHP semiconductors, which is required for further development of this class of optoelectronic devices. | condensed matter |
Large-scale randomized experiments, sometimes called A/B tests, are increasingly prevalent in many industries. Though such experiments are often analyzed via frequentist $t$-tests, arguably such analyses are deficient: $p$-values are hard to interpret and not easily incorporated into decision-making. As an alternative, we propose an empirical Bayes approach, which assumes that the treatment effects are realized from a "true prior". This requires inferring the prior from previous experiments. Following Robbins, we estimate a family of marginal densities of empirical effects, indexed by the noise scale. We show that this family is characterized by the heat equation. We develop a spectral maximum likelihood estimate based on a Fourier series representation, which can be efficiently computed via convex optimization. In order to select hyperparameters and compare models, we describe two model selection criteria. We demonstrate our method on simulated and real data, and compare posterior inference to that under a Gaussian mixture model of the prior. | statistics |
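The structural fact this abstract relies on -- that the marginal density of an observed effect is the prior convolved with Gaussian noise, and that this family evolves by the heat equation as the noise scale grows -- can be checked numerically in a few lines. The toy prior below is purely illustrative; the paper goes the other way, inferring the prior from such marginals via a spectral maximum likelihood estimate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Grid and a toy "true prior" on treatment effects: a spike near 0 plus
# a diffuse component (shapes and weights are invented for illustration).
z = np.linspace(-6, 6, 2001); dz = z[1] - z[0]
prior = 0.8 * np.exp(-(z / 0.1) ** 2) + 0.2 * np.exp(-(z / 2.0) ** 2)
prior /= prior.sum() * dz

def marginal(s):
    """Density of observed effect = prior convolved with N(0, s^2),
    i.e. heat-equation evolution of the prior with time t = s^2 / 2."""
    return gaussian_filter1d(prior, sigma=s / dz, mode="constant")

f1, f2 = marginal(0.5), marginal(0.51)
lhs = (f2 - f1) / (0.51 ** 2 / 2 - 0.5 ** 2 / 2)   # d f / d t, t = s^2 / 2
rhs = np.gradient(np.gradient(f1, dz), dz)          # d^2 f / d z^2
print(np.max(np.abs(lhs - rhs)))                    # small: heat equation holds
```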
Although pretrained language models (PTLMs) have been shown to contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after using specialized training techniques to reduce inconsistency. As a result, it can be hard to identify what the model actually "believes" about the world. Our goal is to reduce this problem, so systems are more globally consistent and accurate in their answers. Our approach is to add a memory component - a BeliefBank - that records a model's answers, and two mechanisms that use it to improve consistency among beliefs. First, a reasoning component - a weighted SAT solver - improves consistency by flipping answers that significantly clash with others. Second, a feedback component re-queries the model but using known beliefs as context. We show that, in a controlled experimental setting, these two mechanisms improve both accuracy and consistency. This is significant as it is a first step towards endowing models with an evolving memory, allowing them to construct a more coherent picture of the world. | computer science |
In this paper we study structural properties of residuated lattices that are idempotent as monoids. We provide descriptions of the totally ordered members of this class and obtain counting theorems for the number of finite algebras in various subclasses. We also establish the finite embeddability property for certain varieties generated by classes of residuated lattices that are conservative in the sense that monoid multiplication always yields one of its arguments. We then make use of a more symmetric version of Raftery's characterization theorem for totally ordered commutative idempotent residuated lattices to prove that the variety generated by this class has the amalgamation property. Finally, we address an open problem in the literature by giving an example of a noncommutative variety of idempotent residuated lattices that has the amalgamation property. | mathematics |
The simplest extension of the Standard Model by only one real singlet scalar can explain the observed dark matter relic density while simultaneously giving a strongly first-order electroweak phase transition in the early universe. However, after imposing the invisible Higgs decay constraint from the LHC, the parameter space of the single scalar model shrinks to regions with only a few percent of the DM relic abundance, and when adding the direct detection bound, e.g. from XENON100, it gets excluded completely. In this paper, we extend the Standard Model with two real gauge singlet scalars, here $s$ and $s'$, and show that the electroweak symmetry breaking may occur via different channels. Despite the very restrictive first-order phase transition conditions for the two-scalar model in comparison to the single scalar model, there is a viable space of parameters in different phase transition channels that simultaneously explains a fraction or the whole of the dark matter relic density and a strongly first-order electroweak phase transition, while evading the direct detection bounds from the latest LUX/XENON experiments and respecting the invisible Higgs decay width constraint from the LHC. | high energy physics phenomenology |
This work shows that the Einstein-like gravity associated with the Klein-Gordon field exhibits the spontaneous emergence of the cosmological pressure tensor density (CPTD), which in the classical limit leads to the cosmological constant (CC). Even if the classical cosmological constant is set to zero, the model shows that there exists a residual theory-derived quantum CPTD. The work shows that the cosmological constant can be considered as a second-order quantum-mechanical correction to Newtonian gravity. The outputs of the theory show that the expectation value of the CPTD is independent of the zero-point vacuum energy density and that it takes contributions only from the space where the mass is localized (and the space-time is curvilinear), while tending to zero as the space-time approaches the flat vacuum. A developed model of a scalar matter universe shows an overall cosmological effect of the CPTD on the motion of the galaxies that agrees with the astronomical observations. | physics |
Metal oxyfluorides constitute a broad group of chemical compounds with a rich spectrum of crystal structures and properties. Here we predict, based on an evolutionary algorithm approach, the crystal structure and selected properties of Ag$_2$OF$_2$. This system may be considered as the 1:1 adduct of AgF$_2$ (i.e. an antiferromagnetic charge transfer positive-U insulator) and AgO (i.e. a disproportionated negative-U insulator). We analyze the oxidation states of silver in each structure, possible magnetic interactions, as well as energetic stability. A prospect is outlined for the synthesis of polytypes of interest using diverse synthetic approaches. | condensed matter |
Space-time (ST) wave packets are propagation-invariant pulsed optical beams whose group velocity can be tuned in free space by tailoring their spatio-temporal spectral structure. To date, efforts on synthesizing ST wave packets have striven to maintain their propagation invariance. Here, we demonstrate that one degree of freedom of a ST wave packet -- its on-axis spectrum -- can be isolated and purposefully controlled independently of the others. Judicious spatio-temporal spectral amplitude and phase modulation yields ST wave packets with programmable spectral changes along the propagation axis; including red-shifting or blue-shifting spectra, or more sophisticated axial spectral encoding including bidirectional spectral shifts and accelerating spectra. In all cases, the spectral shift can be larger than the initial on-axis bandwidth, while preserving the propagation-invariance of the other degrees of freedom, including the wave packet spatio-temporal profile. These results may be useful in range-finding in microscopy or remote sensing via spectral stamping. | physics |
We discuss two possible scenarios, namely the curvaton mechanism and the dark matter density modulation, where non-Gaussianity signals of superheavy dark matter produced by gravity can be enhanced and observed. In both scenarios, superheavy dark matter couples to an additional light field as a mediator. In the case of derivative coupling, the resulting non-Gaussianities induced by the light field can be large, which can provide inflationary evidences for these superheavy dark matter scenarios. | high energy physics phenomenology |
The geometric and topological properties of the Berry phase are often illustrated in the obscure and abstract language of fiber bundles. In this article, we demonstrate these properties with a lucid and concrete system whose parameter space is a torus. The instantaneous eigenstate is regarded as a basis, and the corresponding connection and curvature are calculated respectively. Furthermore, we find that the magnitude of the curvature is exactly the Gaussian curvature, which shows its local character. The topological property is reflected by the integral over the torus, due to the Gauss-Bonnet theorem. When we study the parallel transport of a vector around a loop, we conclude that the Berry phase is just the angle between the final and initial vectors. We also illuminate the geometric meaning of the gauge transformation, which is just a rotation of the basis. | quantum physics |
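The discrete, gauge-invariant way to compute a Berry phase around a closed loop -- accumulate the phases of overlaps between neighboring instantaneous eigenstates -- is easy to demonstrate numerically. The sketch below uses the standard two-level system on a sphere rather than the torus discussed in the abstract (a simpler but analogous parameter space, chosen because the answer, half the enclosed solid angle, is known in closed form).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi):
    """Lower eigenstate of H = n(theta, phi) . sigma, a two-level system
    whose parameter space is the unit sphere."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    _, vecs = np.linalg.eigh(n[0] * sx + n[1] * sy + n[2] * sz)
    return vecs[:, 0]                        # eigh sorts eigenvalues ascending

theta0, N = np.pi / 3, 400                   # loop at fixed polar angle
states = [ground_state(theta0, phi)
          for phi in np.linspace(0, 2 * np.pi, N, endpoint=False)]
loop = np.prod([states[i].conj() @ states[(i + 1) % N] for i in range(N)])
berry = -np.angle(loop)                      # gauge-invariant discrete Berry phase
print(berry, np.pi * (1 - np.cos(theta0)))   # both ~ pi/2: half the solid angle
```

Because each eigenstate appears once as a bra and once as a ket, the arbitrary phases returned by the eigensolver cancel around the closed loop, which is exactly the gauge invariance the abstract discusses.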
In this paper, the dynamics of quantum Fisher information of a qubit interacting with a squeezed thermal environment are studied. The optimal initial state of the qubit, the temperature of the environment, and the interaction time that maximize quantum Fisher information are obtained. Based on the ohmicity of the environment, we compare the dynamics of quantum Fisher information in the ohmic, sub-ohmic, and super-ohmic regimes of the environment. Moreover, it is shown that the precise estimation of parameters is robust against squeezing. | quantum physics |
The debate surrounding the hot hand in the NBA has been ongoing for many years. However, many of the previous works on this theme have focused on only the very next sequential shot attempt, often for a very select set of players. This work examines in more detail the effect of a made or missed shot on the next series of shots over a two-year span, with time between shots shown to be a critical factor in the analysis. Multi-year streakiness is also analyzed, and all indications are that players cannot really sustain their good (or bad) fortune from year to year. | statistics |
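A bare-bones version of the kind of conditional calculation involved, with the time-between-shots filter this work emphasizes: compare make rates after a make versus after a miss, counting only attempts that follow quickly. The simulated data and the threshold value are invented for illustration; a real analysis would use play-by-play logs and handle the known selection subtleties of streak statistics.

```python
import numpy as np

def conditional_pct(makes, gaps, max_gap):
    """P(make | previous make) vs P(make | previous miss), counting only
    consecutive attempts separated by at most `max_gap` seconds."""
    makes, gaps = np.asarray(makes, bool), np.asarray(gaps, float)
    prev, nxt = makes[:-1], makes[1:]
    close = gaps[1:] <= max_gap              # time since the previous shot
    return nxt[close & prev].mean(), nxt[close & ~prev].mean()

rng = np.random.default_rng(0)
makes = rng.random(5000) < 0.46              # i.i.d. 46% shooter (null model)
gaps = rng.exponential(120, 5000)            # seconds between attempts (toy)
print(conditional_pct(makes, gaps, max_gap=60))   # ~ (0.46, 0.46) under the null
```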
Failure in disordered solids is accompanied by intermittent fluctuations extending over a broad range of scales. The implied scaling has been previously associated with either spinodal or critical points. We use an analytically transparent mean-field model to show that both analogies are relevant near the brittle-to-ductile transition. Our study indicates that in addition to the strength of quenched disorder, an appropriately chosen global measure of rigidity (connectivity) can be also used to tune the system to criticality. By interpreting rigidity as a timelike variable we reveal an intriguing parallel between earthquake-type critical failure and Burgers turbulence. | condensed matter |
Coherent, optically dressed media composed of two-level molecular systems without inversion symmetry are considered as all-optically tunable sources of coherent radiation in the microwave domain. A theoretical model and a numerical toolbox are developed to confirm the main finding: the generation of a low-frequency radiation, and the buildup and propagation dynamics of such low-frequency signals in a medium of polar molecules in a gas phase. The physical mechanism of the signal generation relies on the permanent dipole moment characterizing systems without inversion symmetry. The molecules are polarized with a DC electric field yielding a permanent electric dipole moment in the laboratory frame; the direction and magnitude of the moment depend on the molecular state. As the system is resonantly driven, the dipole moment oscillates at the Rabi frequency and, hence, generates microwave radiation. We demonstrate the tuning capability of the output signal frequency with the drive amplitude and detuning. We find that even though decoherence mechanisms such as spontaneous emission may damp the output field, a scenario based on pulsed illumination yields a coherent, pulsed output of tunable temporal width. Finally, we discuss experimental scenarios exploiting rotational levels of gaseous ensembles of heteronuclear diatomic molecules. | quantum physics |
The Fifth Generation (5G) New Radio (NR) does not support data transmission during the random access (RA) procedures, which results in unnecessary control signalling overhead and power consumption, especially for small data transmission. Motivated by this, we propose two new RA schemes based on the existing grant-based (4-step) and grant-free (2-step B) RA schemes, which are NR Early Data Transmission (NR EDT) and 2-step A RA schemes, with the aim to enable data transmission during RA procedures in Radio Resource Control (RRC) Inactive state. To compare our proposed schemes with the benchmark schemes, we provide a spatio-temporal analytical framework to evaluate the RA schemes, which jointly models the preamble detection, Physical Uplink Shared Channel (PUSCH) decoding, and data transmission procedures. Based on this analytical model, we derive the analytical expressions for the overall packet transmission success probability of four RA schemes in each time slot. We also derive the throughput and the average energy consumption for a successful packet transmission of each scheme. Our results show that the 2-step A and 2-step B RA schemes provide the highest overall packet transmission success probability, the 2-step A RA scheme provides the lowest average energy consumption in low device intensity scenario, and 2-step B RA provides the lowest average energy consumption in high device intensity scenario. | electrical engineering and systems science |
Polyatomic polar molecules are promising systems for future experiments that search for violation of time-reversal and parity symmetries due to their advantageous electronic and vibrational structure, which allows laser cooling, full polarisation of the molecule, and reduction of systematic effects [I. Kozyryev and N.R. Hutzler, Phys. Rev. Lett. {\bf 119}, 133002 (2017)]. In this work we investigate the enhancement factor of the electric dipole moment of the electron ($E_\text{eff}$) in the triatomic monohydroxide molecules BaOH and YbOH within the high-accuracy relativistic coupled cluster method. The recommended $E_\text{eff}$ values of the two systems are 6.65 $\pm$ 0.15 GV/cm and 23.4 $\pm$ 1.0 GV/cm, respectively. We compare our results with similar calculations for the isoelectronic diatomic molecules BaF and YbF, which are currently used in experimental search for $P,T$-odd effects in molecules. The $E_\text{eff}$ values prove to be very close, within about 1.5 $\%$ difference in magnitude between the diatomic and the triatomic compounds. Thus, BaOH and YbOH have a similar enhancement of the electron electric dipole moment, while benefiting from experimental advantages, and can serve as excellent candidates for next-generation experiments. | physics |
The Dirichlet-multinomial (DMN) distribution is commonly used to model over-dispersion in count data. Precise and fast numerical computation of the DMN log-likelihood function is important for performing statistical inference using this distribution, and remains a challenge. To address this, we use mathematical properties of the gamma function to derive a closed form expression for the DMN log-likelihood function. Compared to existing methods, calculation of the closed form has a lower computational complexity, hence is much faster without compromising computational accuracy. | statistics |
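For reference, the standard way to evaluate this log-likelihood is through log-gamma functions, as in the sketch below (the data-dependent multinomial coefficient is dropped since it does not involve the parameters). This gammaln-based form is presumably the baseline the paper's closed form speeds up; the paper's own closed-form expression is not reproduced here.

```python
import numpy as np
from scipy.special import gammaln

def dmn_loglik(x, alpha):
    """Log-likelihood of one count vector x under a Dirichlet-multinomial
    with concentration parameters alpha, up to the multinomial coefficient
    (a constant in alpha)."""
    x, alpha = np.asarray(x, float), np.asarray(alpha, float)
    n, a0 = x.sum(), alpha.sum()
    return (gammaln(a0) - gammaln(n + a0)
            + np.sum(gammaln(x + alpha) - gammaln(alpha)))

print(dmn_loglik([3, 1, 0], [0.5, 0.5, 0.5]))
```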
Could we hear the pop of a wave-function collapse, and if so, what would it sound like? There exist reconstructions or modifications of quantum mechanics (collapse models) where this archetypal signature of randomness exists and can in principle be witnessed. But, perhaps surprisingly, the resulting sound is disappointingly banal, indistinguishable from any other click. The problem of finding the right description of the world between two completely different classes of models -- where wave functions jump and where they do not -- is empirically undecidable. Behind this seemingly trivial observation lie deep lessons about the rigidity of quantum mechanics, the difficulty to blame unpredictability on intrinsic randomness, and more generally the physical limitations to our knowledge of reality. | quantum physics |
Recently, Kim, Shiu and Vafa proposed general consistency conditions for six-dimensional supergravity theories with minimal supersymmetry coming from couplings to strings. We test them in explicit perturbative orientifold models in order to unravel the microscopic origin of these constraints. Based on the perturbative data, we conjecture the existence of null charges $Q \cdot Q=0 $ for any six-dimensional theory with at least one tensor multiplet, coupling to string defects of charge $Q$. We then include the new constraint to exclude some six-dimensional supersymmetric anomaly-free examples that currently have no string or F-theory realization. We also investigate the constraints from the couplings to string defects in cases where supersymmetry is broken in tachyon-free vacua containing non-BPS configurations of the brane supersymmetry breaking type, where the breaking is localized on antibranes. In this case, some conditions naturally have to be changed or relaxed whenever the string defects experience supersymmetry breaking, whereas the constraints remain valid if they are geometrically separated from the supersymmetry breaking source. | high energy physics theory |
A/B testing is an important decision-making tool in product development for evaluating user engagement or satisfaction from a new service, feature or product. The goal of A/B testing is to estimate the average treatment effects (ATE) of a new change, which becomes complicated when users are interacting. When the important assumption of A/B testing, the Stable Unit Treatment Value Assumption (SUTVA), which states that each individual's response is affected by their own treatment only, is not valid, the classical estimate of the ATE usually leads to a wrong conclusion. In this paper, we propose a cluster-adaptive network A/B testing procedure, which involves a sequential cluster-adaptive randomization and a cluster-adjusted estimator. The cluster-adaptive randomization is employed to minimize the cluster-level Mahalanobis distance within the two treatment groups, so that the variance of the estimate of the ATE can be reduced. In addition, the cluster-adjusted estimator is used to eliminate the bias caused by network interference, resulting in a consistent estimation for the ATE. Numerical studies suggest our cluster-adaptive network A/B testing achieves consistent estimation with higher efficiency. An empirical study is conducted based on a real world network to illustrate how our method can benefit decision-making in application. | statistics |
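The randomization step of this procedure lends itself to a short sketch: assign clusters sequentially, nudging each assignment toward whichever arm keeps the Mahalanobis distance between the two arms' cluster-level covariate means small. The biased-coin probability and covariate dimensions below are illustrative placeholders, not the paper's exact design.

```python
import numpy as np

def mahalanobis_gap(X, assign):
    """Mahalanobis distance between covariate means of the two arms."""
    t, c = X[assign == 1], X[assign == 0]
    if len(t) == 0 or len(c) == 0:
        return np.inf
    diff = t.mean(0) - c.mean(0)
    cov = np.cov(X.T) + 1e-8 * np.eye(X.shape[1])
    return float(diff @ np.linalg.solve(cov, diff))

def sequential_assign(X, rng=np.random.default_rng(0)):
    """Assign clusters one at a time, biasing each assignment toward the
    arm that keeps the two arms' cluster covariates balanced."""
    n = len(X)
    assign = -np.ones(n, int)                 # -1 marks "not yet assigned"
    for i in rng.permutation(n):
        best = None
        for arm in (0, 1):
            assign[i] = arm
            mask = assign >= 0
            d = mahalanobis_gap(X[mask], assign[mask])
            if best is None or d < best[0]:
                best = (d, arm)
        assign[i] = best[1] if rng.random() < 0.75 else 1 - best[1]
    return assign

X = np.random.default_rng(1).normal(size=(40, 3))   # cluster-level covariates
a = sequential_assign(X)
print(mahalanobis_gap(X, a))                        # small gap: balanced arms
```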
We have designed and implemented a straightforward method to deterministically measure the temperature of the selected segment of a cold atom ensemble, and we have also developed an upgrade in the form of nondestructive thermometry. The essence is to monitor the thermal expansion of the targeted cold atoms after labeling them through manipulating the internal states, and the nondestructive property relies upon the nearly lossless detection via driving a cycling transition. For cold atoms subject to isotropic laser cooling, this method has the unique capability of addressing only the atoms on the optical detection axis within the enclosure, which is exactly the part we care about in major applications such as atomic clock or quantum sensing. Furthermore, our results confirm the sub-Doppler cooling features in isotropic laser cooling, and we have investigated the relevant cooling properties. Meanwhile, we have applied the recently developed optical configuration with the cooling laser injection in the form of hollow beams, which helps to enhance the cooling performance and accumulate more cold atoms in the central regions. | physics |
It would be a natural expectation that only major peaks, not all of them, make an important contribution to the characterization of an XRD pattern. We developed a scheme that can identify which peaks are relevant, and to what extent, by using an auto-encoder technique to construct a feature space for the XRD peak patterns. Individual XRD patterns are projected onto a single point in the two-dimensional feature space constructed using the method. If the point is significantly shifted when a peak of interest is masked, then we can say the peak is relevant for the characterization represented by the point in the space. In this way, we can formulate the relevancy quantitatively. By using this scheme, we actually found a peak with significant intensity but low relevancy in the characterization of the structure. The peak is not easily explained from a physical viewpoint, such as higher-order peaks from the same plane index, making it a heuristic finding enabled by the power of machine learning. | physics |
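A toy version of the masking experiment, using PCA as a linear stand-in for the auto-encoder and synthetic Gaussian-peak patterns (both are stand-ins; the paper trains a real auto-encoder on measured patterns): mask one peak window at a time and measure how far the pattern's 2-D feature point moves.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
two_theta = np.linspace(10, 80, 700)

def pattern(centers, widths, heights):
    """Toy XRD pattern: a sum of Gaussian peaks on a 2-theta grid."""
    return sum(h * np.exp(-((two_theta - c) / w) ** 2)
               for c, w, h in zip(centers, widths, heights))

# Synthetic dataset: patterns whose peak positions jitter by varying amounts.
X = np.array([pattern(np.array([20.0, 35.0, 55.0]) + rng.normal(0, s, 3),
                      [0.4, 0.4, 0.4], [1.0, 0.6, 0.3])
              for s in rng.uniform(0.1, 1.0, 200)])

embed = PCA(n_components=2).fit(X)       # linear stand-in for the auto-encoder

def relevancy(x, lo, hi):
    """Shift of the 2-D feature point when the peak in [lo, hi] is masked."""
    masked = x.copy()
    masked[(two_theta >= lo) & (two_theta <= hi)] = 0.0
    return np.linalg.norm(embed.transform([x]) - embed.transform([masked]))

x = X[0]
for lo, hi in [(18, 22), (33, 37), (53, 57)]:
    print((lo, hi), relevancy(x, lo, hi))  # larger shift = more relevant peak
```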
We study sampling from a target distribution ${\nu_* = e^{-f}}$ using the unadjusted Langevin Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like ${\|x\|^\alpha}$ for ${\alpha \in [1,2]}$, and has $\beta$-H\"older continuous gradient, we prove that ${\widetilde{\mathcal{O}} \Big(d^{\frac{1}{\beta}+\frac{1+\beta}{\beta}(\frac{2}{\alpha} - \boldsymbol{1}_{\{\alpha \neq 1\}})} \epsilon^{-\frac{1}{\beta}}\Big)}$ steps are sufficient to reach the $\epsilon $-neighborhood of a $d$-dimensional target distribution $\nu_*$ in KL-divergence. This convergence rate, in terms of $\epsilon$ dependency, is not directly influenced by the tail growth rate $\alpha$ of the potential function as long as its growth is at least linear, and it only relies on the order of smoothness $\beta$. One notable consequence of this result is that for potentials with Lipschitz gradient, i.e. $\beta=1$, our rate recovers the best known rate ${\widetilde{\mathcal{O}}(d\epsilon^{-1})}$ which was established for strongly convex potentials in terms of $\epsilon$ dependency, but we show that the same rate is achievable for a wider class of potentials that are degenerately convex at infinity. The growth rate $\alpha$ starts to have an effect on the established rate in high dimensions where $d$ is large; furthermore, it recovers the best-known dimension dependency when the tail growth of the potential is quadratic, i.e. ${\alpha = 2}$, in the current setup. Our framework allows for finite perturbations, and any order of smoothness ${\beta\in(0,1]}$; consequently, our results are applicable to a wide class of non-convex potentials that are weakly smooth and exhibit at least linear tail growth. | statistics |
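The algorithm studied in this abstract has a one-line update. The sketch below implements unadjusted LMC for a potential with tail growth alpha = 1.5, inside the class analyzed above; the small smoothing constant in the gradient is an ad hoc regularization to keep the example well defined at the origin, and the step size and iteration counts are illustrative.

```python
import numpy as np

def lmc_sample(grad_f, x0, step, n_steps, rng=np.random.default_rng(0)):
    """Unadjusted Langevin Monte Carlo: x_{k+1} = x_k - step * grad_f(x_k)
    + sqrt(2 * step) * xi, with xi ~ N(0, I). Targets nu_* ∝ exp(-f)."""
    x = np.array(x0, float)
    for _ in range(n_steps):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Potential f(x) = ||x||^1.5 (tail growth alpha = 1.5), with a smoothed
# gradient: grad f = 1.5 * x * (||x||^2)^(-1/4).
grad = lambda x: 1.5 * x * (x @ x + 1e-8) ** -0.25
samples = np.array([lmc_sample(grad, np.zeros(2), 0.01, 2000) for _ in range(200)])
print(samples.mean(axis=0), samples.std(axis=0))   # roughly centered draws
```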
We study the axion strings with the electroweak gauge flux in the DFSZ axion model and show that these strings, called the electroweak axion strings, can exhibit superconductivity without fermionic zero modes. We construct three types of electroweak axion string solutions. Among them, the string with $W$-flux can be lightest in some parameter space, which leads to a stable superconducting cosmic string. We also show that a large electric current can flow along the string due to the Peccei-Quinn scale much higher than the electroweak scale. This large current induces a net attractive force between the axion strings with the same topological charge, which opens a novel possibility that the axion strings form Y-junctions in the early universe. | high energy physics phenomenology |