We describe the spaces of stability conditions on certain triangulated categories associated to Dynkin diagrams. These categories can be defined either algebraically via module categories of preprojective algebras, or geometrically via coherent sheaves on resolutions of Kleinian singularities. The resulting spaces are related to regular subsets of the corresponding complexified Cartan algebras.
mathematics
Several lines of evidence indicate that Lyman Break Galaxies (LBGs) in the Epoch of Reionization (redshift $z>6$) might host massive black holes (MBHs). We address this question by using a merger-tree model combined with tight constraints from the 7 Ms Chandra survey and the known high-$z$ super-MBH population. We find that a typical LBG with $M_{\rm UV}=-22$ residing in a $M_h\approx 10^{12} M_\odot$ halo at $z=6$ hosts an MBH with mass $M_\bullet \approx 2\times 10^8 M_\odot$. Depending on the fraction, $f_{\rm seed}$, of early halos planted with a direct collapse black hole seed ($M_{\rm seed}=10^5 M_\odot$), the model suggests two possible scenarios: (a) if $f_{\rm seed}=1$, MBHs in LBGs mostly grow by merging, and must accrete at a low ($\lambda_E\simeq 10^{-3}$) Eddington ratio so as not to exceed the experimental X-ray luminosity upper bound $L_X^* = 10^{42.5} {\rm erg\, s}^{-1}$; (b) if $f_{\rm seed}=0.05$, accretion dominates ($\lambda_E\simeq 0.22$), and MBH emission in LBGs must be heavily obscured. In both scenarios the UV luminosity function is largely dominated by stellar emission up to very bright magnitudes, $M_{\rm UV} > -23$, with BH emission playing a subdominant role. Scenario (a) poses extremely challenging, and possibly unphysical, requirements on DCBH formation. Scenario (b) entails testable implications for the physical properties of LBGs involving the FIR luminosity, emission lines, and the presence of outflows.
astrophysics
Originating in the field of biomechanics, Fung's model of quasi-linear viscoelasticity (QLV) is one of the most popular constitutive theories employed to compute the time-dependent relationship between stress and deformation in soft solids. It is one of the simplest models of nonlinear viscoelasticity, based on a time-domain integral formulation. In the present study, we consider the QLV model incorporating a single scalar relaxation function. We provide natural internal variables of state, as well as a consistent expression of the free energy to illustrate the thermodynamic consistency of this version of the QLV model. The thermodynamic formulation highlights striking similarities between QLV and the internal-variable models introduced by Holzapfel and Simo. Finally, the dissipative features of compressible QLV materials are illustrated in simple tension.
condensed matter
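A minimal numerical sketch of the QLV hereditary integral described in the abstract above, assuming a one-term Prony-series relaxation function and a toy instantaneous elastic response in simple tension; both choices are illustrative assumptions, not the paper's specific model.

```python
import numpy as np

# One-term Prony relaxation function G(t) and a toy instantaneous
# elastic response sigma_e(stretch); both are illustrative assumptions.
g_inf, g1, tau1 = 0.6, 0.4, 0.5          # G(0) = g_inf + g1 = 1
mu = 1.0                                  # toy shear modulus

def G(t):
    return g_inf + g1 * np.exp(-t / tau1)

def sigma_e(lam):
    # instantaneous elastic stress in simple tension (toy form)
    return mu * (lam**2 - 1.0 / lam)

# Ramp-and-hold stretch history
t = np.linspace(0.0, 5.0, 2001)
lam = np.where(t < 1.0, 1.0 + 0.2 * t, 1.2)

# QLV hereditary integral: sigma(t) = int_0^t G(t - s) d(sigma_e)/ds ds,
# evaluated with a simple trapezoidal discretization.
dse = np.gradient(sigma_e(lam), t)
dt = t[1] - t[0]
sigma = np.array([np.trapz(G(ti - t[:i + 1]) * dse[:i + 1], dx=dt)
                  for i, ti in enumerate(t)])

print(f"peak stress {sigma.max():.3f}, relaxed stress {sigma[-1]:.3f}")
```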
We study the target space theory on a torus for the states with $N_L+N_R=2$ through Double Field Theory. The spin-two Fierz-Pauli fields are not allowed when all spatial dimensions are non-compact. The massive states provide both non-vanishing momentum and winding numbers in the target space theory. To derive the cubic action, we provide the unique constraint for $N_L\neq N_R$ compatible with integration by parts. We first establish a correspondence between massive and massless fields. The quadratic action is made gauge invariant by introducing the mass term. We then proceed to the cubic order. The cubic action is likewise made gauge invariant by introducing a coupling between the one-form field and the other fields. The massive states do not admit a consistent truncation, so a self-consistent theory should only be expected after summing over the infinite tower of modes; hence the naive expectation already fails at cubic order. In the end, we show that the momentum and winding modes cannot both appear for only one compact doubled space.
high energy physics theory
Following the recently published approach [Phys. Rev. Lett. 119, 073901 (2017); New J. Phys., 123014 (2017)], we refine and complete the general scheme for the unified description of the momentum and angular momentum in complex media. The equations for the canonical (orbital) and spin linear momenta, and for the orbital and spin angular momenta, in a lossless inhomogeneous dispersive medium are presented in a compact form analogous to Brillouin's relation for the energy. The results are applied to the surface plasmon-polariton (SPP) field, and the microscopic calculations support the phenomenological expectations. The refined general scheme correctly describes the unusual SPP properties (transverse spin, magnetization momentum) and additionally predicts a singular momentum contribution sharply localized at the metal-dielectric interface, which is confirmed by the microscopic analysis. The results can be useful in optical systems employing structured light, especially in microoptics, plasmophotonics, optical sorting, and micromanipulation.
physics
In this note, we show that the $\mu$-oscillator does not lead to the Fibonacci sequence as claimed in \cite{Marinho:2019zny}, since $[n]^{\mu}=n$ in the limit $\mu \rightarrow 0$; we thus obtain the sequence $0, 1, 2,\ldots$. We only obtain the Fibonacci sequence when the $q$-deformation is associated with the $\mu$-deformation via the basic number $$\lim_{\mu\rightarrow 0} [n]^{\mu}_{q_1, q_2} = \frac{q_1^{2n}-q_2^{2n}}{q_1^2-q_2^2}.$$
quantum physics
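As a quick check of the limiting basic number in the note above: choosing $q_1^2$ and $q_2^2$ to be the golden ratio and its conjugate (an illustrative choice) turns the expression into Binet's formula, which reproduces the Fibonacci numbers exactly.

```python
from sympy import sqrt, simplify, fibonacci

# Illustrative choice (an assumption here): q1^2 and q2^2 set to the
# golden ratio and its conjugate, so the basic number becomes Binet's formula.
phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2   # = -1/phi

def basic_number(n):
    # [n]_{q1,q2} = (q1^(2n) - q2^(2n)) / (q1^2 - q2^2) with q1^2=phi, q2^2=psi
    return simplify((phi**n - psi**n) / (phi - psi))

for n in range(8):
    assert basic_number(n) == fibonacci(n)
print([basic_number(n) for n in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]
```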
A neutron radiography testing station has been developed exploiting the neutron beam of CERN's n\_TOF Experimental Area 2, located at the shortest distance to the neutron-producing target. The characteristics of the n\_TOF neutron beam for the imaging setup are presented in this paper, together with the experimental results obtained. Possible developments of the neutron imaging capabilities of the n\_TOF facility, in terms of detection systems and beam-line upgrades, are outlined as well.
physics
Previous observations of a large number of galaxies show differences between the photometry of spiral galaxies with clockwise spin patterns and spiral galaxies with counterclockwise spin patterns. In this study the mean magnitude of a large number of clockwise galaxies is compared to the mean magnitude of a large number of counterclockwise galaxies. The observed difference between clockwise and counterclockwise spiral galaxies imaged by the space-based COSMOS survey is compared to the differences between clockwise and counterclockwise galaxies imaged by the Earth-based SDSS and Pan-STARRS around the same field. The annotation of clockwise and counterclockwise galaxies is a fully automatic process that does not involve human intervention, and in all experiments both clockwise and counterclockwise galaxies are separated from the same fields. The comparison shows that the same asymmetry was identified by all three telescopes, providing strong evidence that the rotation direction of a spiral galaxy is linked to its luminosity as measured from Earth. Analysis of the luminosity difference using a large number of galaxies from different parts of the sky shows that the difference between clockwise and counterclockwise galaxies changes with the direction of observation and is oriented around an axis.
astrophysics
We describe three different methods for exploring the hydrogen reionization epoch using fast radio bursts (FRBs) and provide arguments for the existence of FRBs at high redshift ($z$). The simplest way, observationally, is to determine the maximum dispersion measure (DM$_{\rm max}$) of FRBs for an ensemble that includes bursts during the reionization. The DM$_{\rm max}$ provides information regarding reionization much like the optical depth of the CMB to Thomson scattering does, and it has the potential to be more accurate than the constraints from Planck, provided DM$_{\rm max}$ can be measured to a precision better than 500 $\mbox{pc cm}^{-3}$. Another method is to measure the redshifts of about 40 FRBs between $z$ of 6 and 10 with $\sim10\%$ accuracy to obtain the average electron density in 4 different $z$-bins with $\sim4\%$ accuracy. These two methods do not require knowledge of the FRB luminosity function and its possible redshift evolution. Finally, we show that the reionization history is reflected in the number of FRBs per unit DM, given a fluence-limited survey of FRBs that includes bursts from the reionization epoch; we show using FIRE simulations that the contributions to the DM from the FRB host galaxy and CGM during the reionization era are a small fraction of the observed DM. This third method requires no redshift information but does require knowledge of the FRB luminosity function.
astrophysics
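A sketch of the dispersion-measure integral underlying the first method above, using the standard cosmological DM relation with a toy tanh reionization history; the cosmological parameters, mean electron density, and reionization midpoint are placeholder assumptions.

```python
import numpy as np
from scipy.integrate import quad

# Placeholder cosmology and a toy tanh reionization history x_e(z);
# all numerical choices here are illustrative assumptions.
H0 = 67.4 * 1.0e5 / 3.086e24        # Hubble constant in s^-1
Om, OL = 0.315, 0.685
c = 2.998e10                         # cm/s
n_e0 = 2.2e-7                        # present-day mean electron density, cm^-3

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + OL)

def x_e(z, z_re=7.0, dz=0.5):
    # toy ionized fraction: ~1 at low z, ~0 well before reionization
    return 0.5 * (1 + np.tanh((z_re - z) / dz))

def DM(z_s):
    # DM(z_s) = c * n_e0 * int_0^z_s x_e(z) (1+z) / H(z) dz   [pc cm^-3]
    integrand = lambda z: x_e(z) * (1 + z) / H(z)
    val_cm, _ = quad(integrand, 0.0, z_s)
    return c * n_e0 * val_cm / 3.086e18   # convert cm to pc

for z in (6, 8, 10, 15):
    print(f"z = {z:2d}: DM ~ {DM(z):7.0f} pc cm^-3")
```

With this toy history the DM saturates above the reionization redshift, which is why DM$_{\rm max}$ of an FRB ensemble carries the reionization information.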
Using the framework of higher-form global symmetries, we examine the regime of validity of force-free electrodynamics by evaluating the lifetime of the electric field operator, which is non-conserved due to screening effects. We focus on a holographic model which has the same global symmetry as that of a low energy plasma and obtain the lifetime of the (non-conserved) electric flux in the strong magnetic field regime. The lifetime is inversely correlated with the magnetic field strength and thus suppressed in the strong field regime.
high energy physics theory
We analyze Lindblad-Gorini-Kossakowski-Sudarshan-type generators for selected periodically driven open quantum systems. All these generators can be obtained by temporal coarse-graining procedures, and we compare different coarse-graining schemes. As for undriven systems, we find that a dynamically adapted coarse-graining time, effectively yielding non-Markovian dynamics by interpolating through a series of different but individually Markovian solutions, yields the best results among the different coarse-graining schemes, albeit at the highest computational cost.
quantum physics
In this paper, we study the quantisation of Dirac field theory in the $\kappa$-deformed space-time. We adopt a quantisation method that uses only the equations of motion for quantising the field. Starting from the $\kappa$-deformed Dirac equation, valid up to first order in the deformation parameter $a$, we derive a deformed unequal-time anti-commutation relation between the deformed field and its adjoint, leading to an undeformed oscillator algebra. Exploiting the freedom of imposing a deformed unequal-time anti-commutation relation between the $\kappa$-deformed spinor and its adjoint, we also derive a deformed oscillator algebra. We show that the deformed number operator is the conserved charge corresponding to the global phase transformation symmetry. We also construct the $\kappa$-deformed conserved currents, valid up to first order in $a$, corresponding to the parity and time-reversal symmetries of the $\kappa$-deformed Dirac equation. We show that these conserved currents and charges have a mass-dependent correction, valid up to first order in $a$. This novel feature is expected to have experimental significance in particle physics. We also show that it is not possible to construct a conserved current associated with charge conjugation, showing that the Dirac particle and its anti-particle satisfy different equations in $\kappa$-space-time.
high energy physics theory
We present 3D DEM simulations of jammed bidisperse granular packings to investigate their jamming density, $\phi_J$, and bulk modulus, $K$, as functions of the size ratio, $\delta$, and the concentration of small particles, $X_{\mathrm S}$. We determine the partial and total bulk modulus for each packing and obtain a transition behavior at specific densities that depends on the compression level, thus marking the first and second transitions of the system. The highest bulk modulus is found at $X^{*}_{\mathrm S}(\delta = 0.15) \approx 0.21$, consistent with the maximum jamming density, where both particle species mix most efficiently. At the extreme size ratio $\delta = 0.15$, $X^{*}_{\mathrm S}$ separates two structural scenarios for $K$ that depend on whether or not the small particles are jammed jointly with the large ones. We find that along the second transition line, $K$ rises by $20\%$ compared to the values found at the first transition; these values are nevertheless still low compared to the one at $X^{*}_{\mathrm S}$. This clearly indicates that the jamming of small particles indeed impacts the internal resistance of the system at low $\delta$ and low $X_{\mathrm S}$. This new result will allow tuning the packing bulk modulus and other properties, such as the wave speed, when small particles of a specific size and concentration contribute to the jammed structure.
condensed matter
We report on our efforts to optimize the geometry of neutron moderators and converters for the TRIUMF UltraCold Advanced Neutron (TUCAN) source using MCNP simulations. It will use an existing spallation neutron source driven by a 19.3 kW proton beam delivered by TRIUMF's 520 MeV cyclotron. Spallation neutrons will be moderated in heavy water at room temperature and in liquid deuterium at 20 K, and then superthermally converted to ultracold neutrons in superfluid, isotopically purified $^4$He. The helium will be cooled by a $^3$He fridge through a $^3$He-$^4$He heat exchanger. The optimization took into account a range of engineering and safety requirements and guided the detailed design of the source. The predicted ultracold-neutron density delivered to a typical experiment is maximized for a production volume of 27 L, achieving a production rate of $1.4 \cdot 10^7$ s$^{-1}$ to $1.6 \cdot 10^7$ s$^{-1}$ with a heat load of 8.1 W. At that heat load, the fridge can cool the superfluid helium to 1.1 K, resulting in a storage lifetime for ultracold neutrons in the source of about 30 s. The most critical performance parameters are the choice of cold moderator and the volume, thickness, and material of the vessel containing the superfluid helium. The source is scheduled to be installed in 2021 and will enable the TUCAN collaboration to measure the electric dipole moment of the neutron with a sensitivity of $10^{-27}$ e cm.
physics
We examine the proposal that the dimensional reduction of the effective action of perturbative string theory on a circle should be invariant under T-duality transformations. The T-duality transformations are the standard Buscher rules plus higher covariant-derivative corrections. By explicit calculations at order $\alpha'$ for the metric, dilaton and B-field, we show that the T-duality constraint can fix both the effective action and the higher-derivative corrections to the Buscher rules up to an overall factor. The corrections depend on the scheme that one uses for the effective action. We have found the effective action and its corresponding T-duality transformations in an arbitrary scheme.
high energy physics theory
The competition between kinetic energy and Coulomb interactions in electronic systems can lead to complex many-body ground states with competing superconducting, charge density wave, and magnetic orders. Here we study the low temperature phases of a strongly interacting zinc-oxide-based high mobility two dimensional electron system that displays a tunable metal-insulator transition. Through a comprehensive analysis of the dependence of electronic transport on temperature, carrier density, in-plane and perpendicular magnetic fields, and voltage bias, we provide evidence for the existence of competing correlated metallic and insulating states with varying degrees of spin polarization. Our system features an unprecedented level of agreement with the state-of-the-art Quantum Monte Carlo phase diagram of the ideal jellium model, including a Wigner crystallization transition at a value of the interaction parameter $r_s\sim 30$ and the absence of a pure Stoner transition. In-plane field dependence of transport reveals a new low temperature state with partial spin polarization separating the spin unpolarized metal and the Wigner crystal, which we examine against possible theoretical scenarios such as an anti-ferromagnetic crystal, Coulomb induced micro-emulsions, and disorder driven puddle formation.
condensed matter
We present Graph Attention Collaborative Similarity Embedding (GACSE), a new recommendation framework that exploits collaborative information in the user-item bipartite graph for representation learning. Our framework consists of two parts: the first learns explicit graph collaborative-filtering information, such as user-item association, through embedding propagation with an attention mechanism, and the second learns implicit graph collaborative information, such as user-user and item-item similarities, through an auxiliary loss. We design a new loss function that combines BPR loss with an adaptive margin and a similarity loss for similarity learning. Extensive experiments on three benchmarks show that our model consistently outperforms the latest state-of-the-art models.
computer science
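A minimal sketch of a pairwise BPR-style loss with a margin term plus an embedding-similarity auxiliary loss, in the spirit of the combined loss described above; the adaptive-margin rule, the weighting, and the similarity supervision are placeholder assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def bpr_margin_loss(pos_scores, neg_scores, margin):
    # BPR-style pairwise loss with a margin; the margin may be set adaptively
    # per example (here it is simply a tensor supplied by the caller).
    return -F.logsigmoid(pos_scores - neg_scores - margin).mean()

def similarity_loss(emb_a, emb_b, sim_targets):
    # Auxiliary loss pushing embedding cosine similarities of user-user or
    # item-item pairs toward precomputed graph similarities (an assumption
    # about how the "implicit collaborative information" is supervised).
    cos = F.cosine_similarity(emb_a, emb_b, dim=-1)
    return F.mse_loss(cos, sim_targets)

# Toy batch: scores for positive/negative items and embedding pairs.
torch.manual_seed(0)
pos, neg = torch.randn(32), torch.randn(32)
margin = 0.1 * (pos - neg).detach().sigmoid()     # placeholder adaptive rule
ua, ub = torch.randn(32, 64), torch.randn(32, 64)
sim = torch.rand(32)

loss = bpr_margin_loss(pos, neg, margin) + 0.5 * similarity_loss(ua, ub, sim)
print(float(loss))
```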
The possible detection of a compact object in the remnant of SN 1987A presents an unprecedented opportunity to follow its early evolution. The suspected detection stems from an excess of infrared emission from a dust blob near the compact object's predicted position. The infrared excess could be due to the decay of isotopes like $^{44}$Ti, accretion luminosity from a neutron star or black hole, magnetospheric emission or a wind originating from the spindown of a pulsar, or thermal emission from an embedded, cooling neutron star (NS 1987A). It is shown that the last possibility is the most plausible, as the other explanations are disfavored by other observations and/or require fine-tuning of parameters. Not only are there indications that the dust blob overlaps the predicted location of a kicked compact remnant, but its excess luminosity also matches the expected thermal power of a 30-year-old neutron star. Furthermore, models of cooling neutron stars within the Minimal Cooling paradigm readily fit both NS 1987A and Cas A, the next-youngest known neutron star. If correct, a long heat-transport timescale in the crust and a large effective stellar temperature are favored, implying relatively limited crustal neutron $^1S_0$ superfluidity and an envelope with a thick layer of light elements, respectively. If the locations do not overlap, then pulsar spindown or accretion might be more likely, but the pulsar's period and magnetic field or the accretion rate must be rather finely tuned. In this case, NS 1987A may have enhanced cooling and/or a heavy-element envelope.
astrophysics
The effect of the quark anomalous magnetic moment (AMM) on the chiral restoration and deconfinement phase transitions under magnetic fields is investigated in a Pauli-Villars regularized PNJL model. A linear-in-$B$ term for the quark AMM is introduced into the Lagrangian density of our model, and it plays an inverse-catalysis role in the phase transitions. At fixed magnetic field, the critical temperature decreases with the quark AMM. At fixed quark AMM, the critical temperature increases with the magnetic field for a small quark AMM, but decreases with the magnetic field for a large quark AMM. The critical temperature of the chiral restoration and deconfinement phase transitions is thus determined by two competing factors: the catalysis effect of the magnetic field and the inverse catalysis of the quark AMM.
high energy physics phenomenology
CCAT-prime is a new 6 m crossed Dragone telescope designed to characterize the Cosmic Microwave Background (CMB) polarization and foregrounds, measure the Sunyaev-Zel'dovich effects of galaxy clusters, map the [CII] emission intensity from the Epoch of Reionization (EoR), and monitor the accretion luminosity of hundreds of protostars in the Milky Way over multi-year timescales. CCAT-prime will make observations from a 5,600 m altitude site on Cerro Chajnantor in the Atacama Desert of northern Chile. The novel optical design of the telescope combined with high-surface-accuracy ($<$10 $\mu$m) mirrors and the exceptional atmospheric conditions of the site will enable sensitive broadband, polarimetric, and spectroscopic surveys at sub-mm to mm wavelengths. Prime-Cam, the first-light instrument for CCAT-prime, consists of a 1.8 m diameter cryostat that can house seven individual instrument modules. Each instrument module, optimized for a specific science goal, will use state-of-the-art kinetic inductance detector (KID) arrays operated at $\sim$100 mK, and Fabry-Perot interferometers (FPIs) for the EoR science. Prime-Cam will be commissioned with staged deployments to populate the seven instrument modules. The full instrument will consist of 60,000 polarimetric KIDs at a combination of 220/280/350/410 GHz, 31,000 KIDs at 250/360 GHz coupled with FPIs, and 21,000 polarimetric KIDs at 850 GHz. Prime-Cam is currently being built, and the CCAT-prime telescope is designed and under construction by Vertex Antennentechnik GmbH to achieve first light in 2021. CCAT-prime is also a potential telescope platform for the future CMB Stage-IV observations.
astrophysics
In high-quality solid-state systems at low temperatures, the hydrodynamic or ballistic regimes of heat and charge transport are realized in the electron and phonon systems. In these regimes, the thermal and electric conductances of a sample can reach abnormally large magnitudes. In this paper, we study the Hall effect in a system of interacting two-dimensional charged particles in the ballistic regime. We demonstrate that the Hall electric field is caused by a change in the densities of particles due to the effect of external fields on their free motion between the sample edges. In one-component (electron or hole) systems the Hall coefficient turns out to be one half of its value in conventional disordered Ohmic samples. This result is consistent with a recent experiment measuring the Hall resistance in ultra-high-mobility GaAs quantum wells. In two-component electron-hole systems the Hall electric field depends linearly on the difference between the concentrations of electrons and holes near the charge neutrality point (where the equilibrium electron and hole densities coincide) and saturates to the Hall field of a one-component system far from the charge neutrality point. We also study the corrections to the magnetoresistance and the Hall electric field due to inter-particle scattering, which is a precursor of the formation of a viscous flow. For samples shorter than the inter-particle scattering length, the obtained corrections govern the dependencies of the magnetoresistance and the Hall field on temperature.
condensed matter
We develop an iterative method for constructing four-dimensional generalized unitarity cuts in $\mathcal{N} = 2$ supersymmetric Yang-Mills (SYM) theory coupled to fundamental matter hypermultiplets ($\mathcal{N} = 2$ SQCD). For iterated two-particle cuts, specifically those involving only four-point amplitudes, this implies simple diagrammatic rules for assembling the cuts to any loop order, reminiscent of the rung rule in $\mathcal{N} = 4$ SYM. By identifying physical poles, the construction simplifies the task of extracting complete integrands. In combination with the duality between color and kinematics, we construct all four-point massless MHV-sector scattering amplitudes up to two loops in $\mathcal{N} = 2$ SQCD, including those with matter on external legs. Our results reveal chiral infrared-finite integrands closely related to those found using loop-level BCFW recursion. The integrands are valid in $D\leq 6$ dimensions with external states in a four-dimensional subspace; the upper bound is dictated by our use of six-dimensional chiral $\mathcal{N} = (1,0)$ SYM as a means of dimensionally regulating loop integrals.
high energy physics theory
In (1+1) space-time dimensions, we can have two particles that are Weyl and Majorana particles at the same time---1D Weyl-Majorana particles. That is, the right-chiral and left-chiral parts of the two-component Dirac wave function that satisfies the Majorana condition, in the Weyl representation, describe these particles, and each satisfies their own Majorana condition. Naturally, the nonzero component of each of these two two-component wave functions satisfies a Weyl equation. We investigate and discuss this issue and demonstrate that for a 1D Weyl-Majorana particle in a box, the nonzero components, and therefore the chiral wave functions, only admit the periodic and antiperiodic boundary conditions. From the latter two boundary conditions, we can only construct four boundary conditions for the entire Dirac wave function. Then, we demonstrate that these four boundary conditions are also included within the most general set of self-adjoint boundary conditions for a 1D Majorana particle in a box.
quantum physics
In this paper, we propose linear maps over the space of all polynomials $f(x)$ in $\mathbb{F}_q[x]$ that map $0$ to itself, defined through their evaluation map. Properties of these linear maps throw up interesting connections with permutation polynomials. We study certain properties of these linear maps, and we propose to classify permutation polynomials by identifying the generalized eigenspaces of these maps, where the permutation polynomials reside. As it turns out, several classes of permutation polynomials studied in the literature fall neatly into classes defined using these linear maps. We characterize the shapes of permutation polynomials that appear in the various generalized eigenspaces of these linear maps. For the case of $\mathbb{F}_p$, these generalized eigenspaces provide a degree-wise distribution of polynomials (and therefore permutation polynomials) over $\mathbb{F}_p$. We show that for $\mathbb{F}_q$, it is sufficient to consider only a few of these linear maps. The intersection of the generalized eigenspaces of these linear maps contains (permutation) polynomials of certain shapes. In this context, we study a class of permutation polynomials over $\mathbb{F}_{p^2}$. We show that the permutation polynomials in this class are closed under compositional inverses. We also enumerate permutation polynomials of certain shapes.
mathematics
We introduce monoidal categories whose monoidal products of any positive number of factors are lax coherent and whose nullary products are oplax coherent. We call them $\mathsf{Lax}^+\mathsf{Oplax}^0$-monoidal. Dually, we consider $\mathsf{Lax}_0\mathsf{Oplax}_+$-monoidal categories which are oplax coherent for positive numbers of factors and lax coherent for nullary monoidal products. We define $\mathsf{Lax}^+_0\mathsf{Oplax}^0_+$-duoidal categories with compatible $\mathsf{Lax}^+\mathsf{Oplax}^0$- and $\mathsf{Lax}_0\mathsf{Oplax}_+$-monoidal structures. We introduce comonoids in $\mathsf{Lax}^+\mathsf{Oplax}^0$-monoidal categories, monoids in $\mathsf{Lax}_0\mathsf{Oplax}_+$-monoidal categories and bimonoids in $\mathsf{Lax}^+_0\mathsf{Oplax}^0_+$-duoidal categories. Motivation for these notions comes from a generalization of a construction due to Caenepeel and Goyvaerts. This assigns a $\mathsf{Lax}^+_0\mathsf{Oplax}^0_+$-duoidal category $\mathsf D$ to any symmetric monoidal category $\mathsf V$. The unital $\mathsf{BiHom}$-monoids, counital $\mathsf{BiHom}$-comonoids, and unital and counital $\mathsf{BiHom}$-bimonoids in $\mathsf V$ are identified with the monoids, comonoids and bimonoids in $\mathsf D$, respectively.
mathematics
This paper presents a regression-based method for estimating voltages and voltage sensitivities for volt-var control on distribution circuits with limited data. The estimator uses power-flow results for representative load and PV output scenarios as training data. Using linear regressions on the power-flow results, the voltages at critical nodes are calculated online from power measurements at the feeder head and at each PV plant. The voltage sensitivity to changes in reactive power injection by each PV plant is also found online using regressions on the power-flow results. The estimator thus provides the estimated critical voltages and their sensitivities to each possible control action. The estimator is tested in conjunction with a volt-var optimization on a real, unbalanced rural distribution feeder. The optimal control actions and voltage results using the estimator are compared to the optimal results assuming full visibility of the distribution system. Results show that the estimator can estimate voltages and sensitivities with adequate accuracy for successful centralized volt-var control.
electrical engineering and systems science
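A sketch of the regression step described above: fit linear maps from feeder-head and PV power measurements to critical-node voltages on power-flow training data, then evaluate online. The feature layout, dimensions, and synthetic training data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data from offline power-flow runs (synthetic placeholders):
# features = [P, Q at feeder head, P at each PV plant], targets = voltages
# at critical nodes. Shapes are illustrative assumptions.
n_scen, n_feat, n_nodes = 500, 6, 4
X_train = rng.uniform(0.0, 1.0, (n_scen, n_feat))
true_A = rng.normal(0.0, 0.02, (n_feat, n_nodes))
V_train = 1.0 + X_train @ true_A + rng.normal(0, 1e-4, (n_scen, n_nodes))

# Least-squares fit with an intercept column.
X1 = np.hstack([X_train, np.ones((n_scen, 1))])
coef, *_ = np.linalg.lstsq(X1, V_train, rcond=None)

def estimate_voltages(meas):
    # Online voltage estimate at the critical nodes from live measurements.
    return np.append(meas, 1.0) @ coef

# In this linear model the voltage sensitivity to a given injection is just
# the corresponding coefficient row, which a volt-var controller can use to
# rank candidate reactive-power set-points.
meas = rng.uniform(0.0, 1.0, n_feat)
print("V estimate:", estimate_voltages(meas))
print("sensitivity row:", coef[1])   # row 1 assumed to be a Q-type feature
```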
We present an Expectation-Maximization (EM) Regularized Deep Learning (EMReDL) model for weakly supervised tumor segmentation. The proposed framework is tailored to glioblastoma, a type of malignant tumor characterized by its diffuse infiltration into the surrounding brain tissue, which poses a significant challenge to treatment targeting and tumor burden estimation based on conventional structural MRI. Although physiological MRI can provide more specific information regarding tumor infiltration, its relatively low resolution hinders precise full annotation. This motivated us to develop a weakly supervised deep learning solution that exploits partially labelled tumor regions. EMReDL contains two components: a physiological prior prediction model and an EM-regularized segmentation model. The physiological prior prediction model exploits the physiological MRI by training a classifier to generate a physiological prior map, which is passed to the segmentation model for regularization using the EM algorithm. We evaluated the model on a glioblastoma dataset with available pre-operative multiparametric MRI and recurrence MRI. EMReDL was shown to effectively segment the infiltrated tumor from the partially labelled region of potential infiltration. The segmented core and infiltrated tumor showed high consistency with the tumor burden labelled by experts, and the performance comparison showed that EMReDL achieved higher accuracy than published state-of-the-art models. On MR spectroscopy, the segmented region showed more aggressive features than the remaining partially labelled region. The proposed model can be generalized to other segmentation tasks with partial labels, and the CNN architecture within the framework is flexible.
electrical engineering and systems science
We introduce a natural definition of the renormalized volume of a 4-dimensional Ricci-flat ALE space. We then prove that the renormalized volume is always less than or equal to zero, with equality if and only if the ALE space is isometric to its asymptotic cone. Currently the only known examples of 4-dimensional Ricci-flat ALE spaces are Kronheimer's gravitational instantons and their quotients, which are also known to be the only possible examples with special holonomy. We calculate the renormalized volume of these spaces in terms of Kronheimer's period map.
mathematics
Most work on musical score models (a.k.a. musical language models) for music transcription has focused on describing the local sequential dependence of notes in musical scores and has failed to capture their global repetitive structure, which can be a useful guide for transcribing music. Focusing on rhythm, we formulate several classes of Bayesian Markov models of musical scores that describe repetitions indirectly via the sparse transition probabilities of notes or note patterns. This enables us to construct piece-specific models for unseen scores with an unfixed repetitive structure and to derive tractable inference algorithms. Moreover, to describe approximate repetitions, we explicitly incorporate a process for modifying the repeated notes/note patterns. We apply these models as prior musical score models for rhythm transcription, where piece-specific score models are inferred from performed MIDI data by Bayesian learning, in contrast to the conventional supervised construction of score models. Evaluations using the vocal melodies of popular music showed that the Bayesian models improved the transcription accuracy for most of the tested model types, indicating the universal efficacy of the proposed approach. Moreover, we found an effective data representation for modelling rhythms that maximizes the transcription accuracy and computational efficiency.
computer science
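A small sketch of the core modelling idea above: a Markov chain over note values with a sparsity-inducing Dirichlet prior whose transition probabilities are inferred from a single performed piece (piece-specific Bayesian learning). The rhythm vocabulary, the looped pattern, and the hyperparameter are placeholder assumptions.

```python
import numpy as np

note_values = ["16th", "8th", "quarter", "half"]   # toy rhythm vocabulary
K = len(note_values)

# A repetitive "score": the same rhythmic pattern looped, as in popular music.
pattern = [1, 1, 2, 0, 0, 1, 2, 3]
score = pattern * 12

# Bayesian learning of piece-specific transitions with a sparse (alpha < 1)
# Dirichlet prior: posterior mean = (counts + alpha) / (row_total + K*alpha).
alpha = 0.1
counts = np.zeros((K, K))
for a, b in zip(score[:-1], score[1:]):
    counts[a, b] += 1
post_mean = (counts + alpha) / (counts.sum(axis=1, keepdims=True) + K * alpha)

np.set_printoptions(precision=2, suppress=True)
print(post_mean)   # near-deterministic rows encode the repetitive structure
```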
A subalgebra $\mathcal{A}$ of a $C^*$-algebra $\mathcal{M}$ is logmodular (resp. has factorization) if the set $\{a^*a; a\text{ is invertible with }a,a^{-1}\in\mathcal{A}\}$ is dense in (resp. equal to) the set of all positive and invertible elements of $\mathcal{M}$. There are large classes of well studied algebras, both in commutative and non-commutative settings, which are known to be logmodular. In this paper, we show that the lattice of projections in a von Neumann algebra $\mathcal{M}$ whose ranges are invariant under a logmodular algebra in $\mathcal{M}$, is a commutative subspace lattice. Further, if $\mathcal{M}$ is a factor then this lattice is a nest. As a special case, it follows that all reflexive (in particular, completely distributive CSL) logmodular subalgebras of type I factors are nest algebras, thus answering a question of Paulsen and Raghupathi [Trans. Amer. Math. Soc., 363 (2011) 2627-2640]. We also discuss some sufficient criteria under which an algebra having factorization is automatically reflexive and is a nest algebra.
mathematics
The photo-response non-uniformity (PRNU) is a distinctive image sensor characteristic, and an imaging device inadvertently introduces its sensor's PRNU into all media it captures. The PRNU can therefore be regarded as a camera fingerprint and used for source attribution. The imaging pipeline in a camera, however, involves various processing steps that are detrimental to PRNU estimation. In the context of photographic images, these challenges have been successfully addressed and the method for estimating a sensor's PRNU pattern is well established. However, various additional challenges related to the generation of videos remain largely untackled. With this perspective, this work introduces methods to mitigate the disruptive effects of the widely deployed H.264 and H.265 video compression standards on PRNU estimation. Our approach involves an intervention in the decoding process to eliminate a filtering procedure applied at the decoder to reduce blockiness. It also utilizes decoding parameters to develop a weighting scheme that adjusts the contribution of video frames to the PRNU estimation process at the macroblock level. Results obtained on videos captured by 28 cameras show that our approach increases the PRNU matching metric by up to more than a factor of five over the conventional estimation method tailored for photos.
electrical engineering and systems science
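For reference, a sketch of the standard maximum-likelihood-style PRNU estimator that per-frame weighting schemes like the one above build on; the Gaussian denoiser stands in for a proper wavelet denoiser, and the per-frame weights are simplified assumptions supplied by the caller.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prnu_estimate(frames, weights=None):
    """PRNU estimate K = sum(w * W * I) / sum(w * I^2), where
    W = I - denoise(I) is the noise residual of each frame. A Gaussian
    filter stands in for a proper wavelet denoiser, and the per-frame
    weights (e.g., derived from macroblock coding parameters) are
    supplied by the caller."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    if weights is None:
        weights = np.ones(len(frames))
    for img, w in zip(frames, weights):
        img = img.astype(np.float64)
        residual = img - gaussian_filter(img, sigma=1.0)
        num += w * residual * img
        den += w * img * img
    return num / np.maximum(den, 1e-12)

# Toy usage: frames sharing a common multiplicative fingerprint pattern.
rng = np.random.default_rng(1)
k_true = 1.0 + 0.01 * rng.standard_normal((64, 64))
frames = [120.0 * k_true + rng.normal(0, 2, (64, 64)) for _ in range(50)]
k_hat = prnu_estimate(frames)
print("corr:", np.corrcoef(k_hat.ravel(), k_true.ravel())[0, 1])
```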
Within the Type-I seesaw mechanism, the Higgs mass can be dynamically generated via quantum effects of the right-handed (RH) neutrinos, assuming the potential is nearly conformal in the ultraviolet. This scenario, named the "Neutrino Option", allows an RH neutrino mass scale up to $M \lesssim 10^7$ GeV to be consistent with the light neutrino masses and mixing and with the Higgs mass. It is therefore not consistent with standard hierarchical thermal leptogenesis, and the parameter space for thermal resonant leptogenesis is highly constrained in this model. We point out that non-thermal pair production of RH neutrinos from inflaton decay corresponds in general to a mild degree of resonance in the CP asymmetry parameter and allows the RH mass scale to be more than an order of magnitude smaller than in the thermal strong-resonance case. Within a similar parameter space to that of thermal leptogenesis, RH neutrinos can also be produced from inflaton decay along with a Dark Matter candidate of mass $M_{\rm DM}\lesssim$ 320 MeV. The main constraint in the latter scenario comes from the Ly$\alpha$ constraints on Dark Matter free streaming. We show in addition that the Neutrino Option introduces a 'phantom window' for the RH mass scale, in which, contrary to the usual scenarios, the CP asymmetry parameter for leptogenesis decreases as the RH mass scale increases, and minimally fine-tuned seesaw models naturally exhibit this 'phantom window'.
high energy physics phenomenology
In molecular photochemistry, the non-equilibrium character and subsequent ultrafast relaxation dynamics of photoexcitations near the Franck-Condon region limit the control of their chemical reactivity. We address how to harness strong light-matter coupling in optical microcavities to isolate and preferentially select specific reaction pathways out of the myriad of possibilities present in large-scale complex systems. Using Fermi's Golden Rule and realistic molecular parameters, we estimate the extent to which molecular configurations can be "locked" into non-equilibrium excited-state configurations for timescales well beyond their natural relaxation times. For upper polaritons, which are largely excitonic in character, molecular systems can be locked into their ground-state geometries for tens to thousands of picoseconds, with the locking time varying with the strength of the exciton/phonon coupling (Huang-Rhys parameter). On the other hand, relaxed lower-polariton (LP) lifetimes are nearly uniformly distributed between 2.1-2.4\,ps and are nearly independent of the Huang-Rhys parameter.
physics
In this second part of our two-part paper, we provide a detailed, frequentist framework for propagating uncertainties within our multivariate linear least squares model. This permits us to quantify the impact of uncertainties in thermodynamic measurements---arising from calibrations and the data acquisition system---and the correlations therein, along with uncertainties in probe positions. We show that the former has a much larger relative effect than uncertainties in probe placement. We use this non-deterministic framework to demonstrate why the well-worn metric for assessing spatial sampling uncertainty falls short of accurately characterizing the effect of a few spatial measurements; in other words, it does not accurately describe the uncertainty associated with sampling a non-uniform pattern with a few circumferentially scattered rakes. To this end, we argue that our data-centric framework can offer a more rigorous characterization of this uncertainty. Our paper proposes two new uncertainty metrics: one for characterizing spatial sampling uncertainty and another for capturing the impact of measurement imprecision in individual probes. These metrics are rigorously derived, and their ease of computation permits them to be widely adopted by the turbomachinery community for carrying out uncertainty assessments.
statistics
Dipolar excitons offer a rich playground both for the design of novel optoelectronic devices and for fundamental many-body physics. Wide GaN/(AlGa)N quantum wells host a new and promising realization of dipolar excitons. We demonstrate the in-plane confinement and cooling of these excitons when they are trapped in the electrostatic potential created by semitransparent electrodes of various shapes deposited on the sample surface. This result is a prerequisite for the electrical control of exciton densities and fluxes, as well as for studies of the complex phase diagram of these dipolar bosons at low temperature.
physics
Data from long-lived and high-profile projects are valuable for research on successful software engineering in the wild. Having a dataset that links the different software repositories of such projects enables deeper investigations. This paper presents 20-MAD, a dataset linking the commit and issue data of Mozilla and Apache projects. It includes over 20 years of information about 765 projects, 3.4M commits, 2.3M issues, and 17.3M issue comments, and its compressed size is over 6 GB. The data contain all the typical information about source code commits (e.g., lines added and removed, message, and commit time) and issues (status, severity, votes, and summary). The issue comments have been pre-processed for natural language processing and sentiment analysis; this includes emoticons as well as valence and arousal scores. Linking code repository and issue tracker information allows studying individuals across the two types of repositories and provides more accurate time zone information for issue trackers as well. To our knowledge, this is the largest linked dataset, in size and in project lifetime, that is not based on GitHub.
computer science
In broadband millimeter-wave (mm-Wave) systems, it is desirable to design hybrid beamformers with a common analog beamformer for the entire band while employing different baseband beamformers in different frequency sub-bands. Furthermore, the performance mostly relies on the accuracy of the channel information. In this paper, we propose a deep learning (DL) framework for hybrid beamformer design in broadband mm-Wave massive MIMO systems. We design a convolutional neural network (CNN) that accepts the channel matrix of all subcarriers as input and yields the hybrid beamformers at its output. The proposed CNN architecture is trained with imperfect channel matrices in order to provide robust performance against deviations in the channel data. Hence, the proposed precoding scheme can handle the imperfect- or limited-feedback scenario where full and exact knowledge of the channel is not available. We show that the proposed DL framework is more robust and computationally less complex than conventional optimization- and phase-extraction-based approaches.
electrical engineering and systems science
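A bare-bones sketch of the CNN mapping described above, from a multi-subcarrier channel tensor to analog phases shared across the band plus per-subcarrier baseband weights. The array sizes, layer shapes, and real/imaginary channel encoding are all placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HybridBeamformerCNN(nn.Module):
    # Input:  channel tensor (batch, 2, n_sub, n_rx*n_tx), real/imag stacked.
    # Output: analog phases shared across the band + per-subcarrier baseband
    # weights (toy shapes; all dimensions are illustrative assumptions).
    def __init__(self, n_sub=16, n_ant=64, n_rf=4, n_stream=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.analog_head = nn.Linear(32 * 16, n_ant * n_rf)       # phases only
        self.baseband_head = nn.Linear(32 * 16, n_sub * 2 * n_rf * n_stream)
        self.shape = (n_sub, n_rf, n_stream)

    def forward(self, h):
        z = self.features(h)
        phases = torch.pi * torch.tanh(self.analog_head(z))       # unit-modulus F_RF
        f_rf = torch.exp(1j * phases)
        n_sub, n_rf, n_stream = self.shape
        bb = self.baseband_head(z).view(-1, n_sub, 2, n_rf, n_stream)
        f_bb = torch.complex(bb[:, :, 0], bb[:, :, 1])            # per-subcarrier F_BB
        return f_rf, f_bb

net = HybridBeamformerCNN()
h = torch.randn(8, 2, 16, 64 * 4)   # imperfect channel estimates as training input
f_rf, f_bb = net(h)
print(f_rf.shape, f_bb.shape)       # torch.Size([8, 256]) torch.Size([8, 16, 4, 2])
```

Training on channel matrices perturbed with synthetic noise, as the abstract describes, is what gives the learned precoder its robustness to imperfect feedback.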
Brownian motion is a complex object shared by different communities: it was first observed by the botanist Robert Brown in 1827, then theorised by physicists in the 1900s, and eventually modelled by mathematicians from the 1920s. Consequently, the term now ambiguously refers both to the natural phenomenon and to the theories accounting for it. There is no published work telling its entire history from its discovery until today, but rather partial histories: either from 1827 to Perrin's experiments in the late 1900s, from a physicist's point of view; or from the 1920s onward, from a mathematician's point of view. In this article, we tackle a period straddling the two `half-histories' just mentioned, in order to highlight its continuity, to question the relationship between physics and mathematics, and to remove the ambiguities mentioned above. We study the works of Einstein, Smoluchowski, Langevin, Wiener, Ornstein and Uhlenbeck from 1905 to 1934, as well as experimental results, through the concept of Brownian velocity. We show how Brownian motion became a research topic for the mathematician Wiener in the 1920s, why his model was an idealization of physical reality, what Ornstein and Uhlenbeck added to Einstein's results, and how Wiener, Ornstein and Uhlenbeck developed in parallel contradictory theories concerning Brownian velocity.
physics
In this work, based on spin-2 vectors and traceless spin-2 tensors, an effective Hamiltonian with a linearly dispersive five-fold degenerate point is constructed via spin-2 vector-momentum couplings. For the model without spin-2 vector-tensor coupling, the topological Chern numbers of the five bands are calculated to be 4, 2, 0, -2, -4. After including the spin-2 vector-tensor coupling, distinct sets of topological Chern numbers are obtained for different couplings. A cubic lattice of atoms with five internal states is designed to realize two five-fold degenerate points, and the Chern numbers of the bands can be changed by tuning the coupling coefficient. In this work we thus propose a theoretical design for obtaining spin-2 quasi-particles.
condensed matter
Quantum-dot light-emitting diodes (QLEDs) promise high-performance and cost-effective electroluminescent devices. However, the shelf stability of QLEDs, a must for practical applications, is currently overlooked. Here we reveal that state-of-the-art QLEDs exhibit abnormal shelf-ageing behaviours, i.e., improvements in performance (positive ageing) followed by deterioration of performance (negative ageing). Mechanistic studies show that the organic acids in the encapsulation acrylic resin induce in-situ reactions, which simultaneously improve electrical conductance and minimize interfacial exciton quenching at the positive-ageing stage. Progression of the in-situ reactions results in negative ageing. Inspired by these findings, we design an electron-transporting bi-layer structure, which delivers both improved electrical conductivity and suppressed interfacial exciton quenching. This design enables shelf-stable QLEDs with high operational performance, i.e., negligible changes of external quantum efficiency (>20.0%) and ultra-long operational lifetime (T$_{95}$: 5,500 hours at 1,000 cd m$^{-2}$) after storage for 180 days. Our work paves the way towards the commercialization of QLEDs.
physics
The extended Schur functions form a basis of quasisymmetric functions that contains the Schur functions. We provide a representation-theoretic interpretation of this basis by constructing $0$-Hecke modules whose quasisymmetric characteristics are the extended Schur functions. We further prove these modules are indecomposable.
mathematics
Recently, there has been increasing interest in using deep learning techniques for various seismic interpretation tasks. However, unlike shallow machine learning models, deep learning models are often far more complex and can have hundreds of millions of free parameters. This not only means that large amounts of computational resources are needed to train these models, but, more critically, they require vast amounts of labeled training data as well. In this work, we show how automatically generated weak labels can be effectively used to overcome this problem and to train powerful deep learning models for labeling seismic structures in large seismic volumes. To achieve this, we automatically generate thousands of weak labels and use them to train a deconvolutional network for labeling fault, salt dome, and chaotic regions within the Netherlands F3 block. Furthermore, we show how modifying the loss function to take the weak training labels into account helps reduce false positives in the labeling results. The benefit of this work is that it enables the effective training and deployment of deep learning models for various seismic interpretation tasks without requiring any manual labeling effort. We show excellent results on the Netherlands F3 block, where our model outperforms other baseline models.
physics
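One way to realize the loss modification mentioned above is a per-pixel confidence weighting of the cross-entropy that down-weights unreliable auto-generated weak labels; the weighting rule below is an illustrative assumption, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def weighted_seg_loss(logits, weak_labels, confidence):
    """Cross-entropy over seismic classes (e.g., fault / salt dome /
    chaotic / other), weighted per pixel by the confidence of the
    auto-generated weak label, so unreliable label pixels contribute
    less and produce fewer false positives. `confidence` in [0, 1] is
    assumed to be provided by the automatic labeler."""
    per_pixel = F.cross_entropy(logits, weak_labels, reduction="none")
    return (confidence * per_pixel).mean()

# Toy batch: 4 classes on 64x64 patches.
torch.manual_seed(0)
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
labels = torch.randint(0, 4, (2, 64, 64))
conf = torch.rand(2, 64, 64)
loss = weighted_seg_loss(logits, labels, conf)
loss.backward()
print(float(loss))
```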
An entanglement witness is an effective method to detect entanglement in unknown states without doing full tomography. One of the most widespread schemes for witnessing entanglement is measuring the fidelity of a state with respect to a pure entangled state. Recently, a large class of states whose entanglement cannot be detected with such fidelity witnesses was discovered in Phys. Rev. Lett. \textbf{124}, 200502 (2020); they are called unfaithful states. In this paper we propose a new way to detect entanglement by calculating a lower bound on the entanglement from measurement results. Numerical simulation shows that our method can detect entanglement in unfaithful states with a small number of measurements. Moreover, we generalize our scheme to multipartite states and show that it can tolerate higher noise than previous entanglement witness operators with the same number of measurement settings.
quantum physics
Here we aim to separate the two main contributions to the solar wind, slow and fast, as they appear at 1 AU. A bi-Gaussian function is proposed as the probability distribution function of the two main components of the solar wind, with the position of the peak of each simple Gaussian curve associated with the typical values of the corresponding contribution. We used the entire data set from the Advanced Composition Explorer (ACE) mission, analysing it both as a whole and as yearly series. The solar cycle dependence is considered in order to provide more accurate results for the typical values of the different parameters. The distribution of the solar wind at 1 AU is clearly bimodal, not only in velocity, but also in proton density, temperature, and magnetic field. New typical values for the main parameters of the slow and fast components of the solar wind at 1 AU are proposed.
astrophysics
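A compact sketch of the bi-Gaussian decomposition described above, fitting the sum of two Gaussians to a speed histogram; the synthetic data and initial guesses are illustrative assumptions standing in for the ACE measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_gaussian(v, a1, mu1, s1, a2, mu2, s2):
    # Sum of two Gaussians: slow-wind and fast-wind contributions.
    return (a1 * np.exp(-0.5 * ((v - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - mu2) / s2) ** 2))

# Synthetic stand-in for a solar-wind speed histogram (km/s).
rng = np.random.default_rng(42)
speeds = np.concatenate([rng.normal(400, 50, 70000),    # slow component
                         rng.normal(600, 80, 30000)])   # fast component
counts, edges = np.histogram(speeds, bins=120, range=(200, 900))
centers = 0.5 * (edges[:-1] + edges[1:])

# Initial guesses for the two peaks are rough assumptions.
p0 = [counts.max(), 400, 50, counts.max() / 3, 600, 80]
popt, _ = curve_fit(bi_gaussian, centers, counts, p0=p0)
print(f"slow peak ~ {popt[1]:.0f} km/s, fast peak ~ {popt[4]:.0f} km/s")
```

The fitted peak positions play the role of the "typical values" of the slow and fast components; the same decomposition applies to density, temperature, and field-strength histograms.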
Measuring stellar rotational velocities is a powerful way to probe the many astrophysical phenomena that drive, or are driven by, the evolution of stellar angular momentum. In this paper, we present a novel data-driven approach to measuring the projected rotational velocity, $v\sin i$. Rather than directly measuring the broadening of spectral lines, we leverage the large information content of high-resolution spectral data to empirically estimate $v\sin i$. We adapt the framework laid down by The Cannon (Ness et al. 2015), which trains a generative model of the stellar flux as a function of wavelength using high-fidelity reference data, and can then produce estimates of stellar parameters and abundances for other stars directly from their spectra. Instead of modeling the flux as a function of wavelength, however, we model the first derivative of the spectra, as we expect the slopes of spectral lines to change as a function of $v\sin i$. This technique is computationally efficient and provides a means of rapidly estimating $v\sin i$ for large numbers of stars in spectroscopic survey data. We analyze SDSS APOGEE spectra, constructing a model informed by high-fidelity stellar parameter estimates derived from high-resolution California Kepler Survey spectra of the same stars. We use the model to estimate $v\sin i$ up to $15\,km\,s^{-1}$ for $27,000$ APOGEE spectra, in fractions of a second per spectrum. Our estimates agree with the APOGEE $v\sin i$ estimates to within $1.2\,km\,s^{-1}$.
astrophysics
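A toy version of the data-driven idea above: fit, pixel by pixel, a low-order polynomial model of the spectrum's first derivative as a function of $v\sin i$ on labeled training spectra, then estimate $v\sin i$ for a new spectrum by a grid search. The broadening model, dimensions, and polynomial order are simplified assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_train = 300, 200

# Toy training set: spectra whose line widths grow with vsini (a crude
# stand-in for rotational broadening), plus noise.
wave = np.linspace(-5, 5, n_pix)
vsini_train = rng.uniform(1.0, 15.0, n_train)            # km/s labels

def toy_spectrum(vsini):
    width = 0.3 + 0.05 * vsini
    return 1.0 - 0.5 * np.exp(-0.5 * (wave / width) ** 2)

flux = np.array([toy_spectrum(v) for v in vsini_train])
flux += rng.normal(0, 0.005, flux.shape)

# Model the first derivative of the spectrum, pixel by pixel, as a
# quadratic in vsini (Cannon-style generative model on d(flux)/d(lambda)).
dflux = np.gradient(flux, wave, axis=1)
design = np.vander(vsini_train, 3)                        # columns [v^2, v, 1]
coeffs, *_ = np.linalg.lstsq(design, dflux, rcond=None)   # shape (3, n_pix)

def estimate_vsini(spectrum):
    # Grid search for the label whose predicted derivative best matches.
    d = np.gradient(spectrum, wave)
    grid = np.linspace(1.0, 15.0, 300)
    preds = np.vander(grid, 3) @ coeffs
    return grid[np.argmin(((preds - d) ** 2).sum(axis=1))]

print("estimated vsini:", estimate_vsini(toy_spectrum(7.5)))
```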
We study nonlinear vacuum electrodynamics in the first-order formulation proposed by Pleba\'nski. We analyze in detail the equations of motion, and identify conditions for which a singularity can occur for the time derivative of one of the field components. The resulting degenerate behavior can give rise to a shock wave with a reduction of the local number of degrees of freedom. We use an example model to illustrate the occurrence of superluminal propagation for field values approaching the singularity.
high energy physics phenomenology
The evolution of images with physics-based dynamics is often spatially localized and nonlinear. A switching linear dynamic system (SLDS) is a natural model under which to pose such problems when the system's evolution randomly switches over the observation interval. Because of the high parameter space dimensionality, efficient and accurate recovery of the underlying state is challenging. The work presented in this paper focuses on the common cases where the dynamic evolution may be adequately modeled as a collection of decoupled, locally concentrated dynamic operators. Patch-based hybrid estimators are proposed for real-time reconstruction of images from noisy measurements given perfect or partial information about the underlying system dynamics. Numerical results demonstrate the effectiveness of the proposed approach for denoising in a realistic data-driven simulation of remotely sensed cloud dynamics.
electrical engineering and systems science
How should scholars evaluate the statistically estimated causal effect of a policy intervention? I point out three limitations of the conventional practice. First, relying on statistical significance misses the fact that uncertainty is a continuous scale. Second, focusing on a standard point estimate overlooks the variation in plausible effect sizes. Third, the criterion of substantive significance is rarely explained or justified. To address these issues, I propose an original Bayesian decision-theoretic model for binary outcomes. I incorporate the posterior distribution of a causal effect that reduces the likelihood of an undesirable event into a loss function over the cost of a policy to realize the effect and the cost of the event. The model can use an effect size of interest other than the standard point estimate, with the probability of this effect serving as a continuous measure of uncertainty. It then presents, approximately, up to what ratio between the two costs the expected loss remains smaller if the policy is implemented than if it is not. I exemplify my model through three applications and provide an R package for easy implementation.
statistics
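A toy numerical version of the decision rule described above, under loudly flagged assumptions: the posterior draws of the effect are simulated rather than estimated from data, and the loss is the simple two-cost form sketched in the abstract (policy cost plus expected event cost).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated posterior draws of a causal effect: the reduction (in
# probability points) of an undesirable binary event. In a real analysis
# these would come from a fitted Bayesian model, not from rng.normal.
theta = rng.normal(loc=0.05, scale=0.02, size=20000)

def expected_losses(cost_ratio, baseline_p=0.30):
    # Losses in units of the event cost, so the policy cost is
    # cost_ratio = C_policy / C_event (toy two-cost loss function).
    with_policy = cost_ratio + (baseline_p - theta).clip(0, 1).mean()
    without_policy = baseline_p
    return with_policy, without_policy

# Continuous uncertainty summaries instead of a significance test:
print("P(effect > 0)    :", (theta > 0).mean())
print("P(effect > 0.03) :", (theta > 0.03).mean())

# Break-even cost ratio: implement the policy while E[loss] is smaller.
ratios = np.linspace(0.0, 0.10, 1001)
ok = [r for r in ratios if expected_losses(r)[0] < expected_losses(r)[1]]
print("policy preferred up to C_policy/C_event ~", max(ok) if ok else None)
```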
The gravitational dual of $c$-extremization for a class of $(0,2)$ two-dimensional theories obtained by twisted compactifications of D3-brane gauge theories living at a toric Calabi-Yau three-fold has been recently proposed. The equivalence of this construction with $c$-extremization has been checked in various examples and holds also off-shell. In this note we prove that this equivalence holds for an arbitrary toric Calabi-Yau. We do so by generalizing the proof of the equivalence between $a$-maximization and volume minimization for four-dimensional toric quivers. By an explicit parameterization of the R-charges we map the trial right-moving central charge $c_r$ into the off-shell functional to be extremized in gravity. We also observe that the analogous construction for M2-branes on $\mathbb{C}^4$ is equivalent to the $\mathcal{I}$-extremization principle that leads to the microscopic counting of the entropy of magnetically charged black holes in AdS$_4\times S^7$. This equivalence also holds off-shell.
high energy physics theory
Conventional metals, in general, do not exhibit strong photoluminescence. 2H-TaSe$_2$ is a layered transition metal dichalcogenide that possesses metallic property with charge density wave characteristics. Here we show that 2H-TaSe$_2$ exhibits a surprisingly strong optical absorption and photoluminescence resulting from inter-band transitions. We use this perfect combination of electrical and optical properties in several optoelectronic applications. We show a seven-fold enhancement in the photoluminescence intensity of otherwise weakly luminescent multi-layer MoS$_2$ through non-radiative resonant energy transfer from TaSe$_2$ transition dipoles. Using a combination of scanning photocurrent and time-resolved photoluminescence measurements, we also show that the hot electrons generated by light absorption in TaSe$_2$ have a rather long lifetime unlike conventional metals, making TaSe$_2$ an excellent hot electron injector. Finally, we show a vertical TaSe$_2$/MoS$_2$/graphene photodetector demonstrating a responsivity of $>10$ AW$^{-1}$ at $0.1$ MHz -- one of the fastest reported photodetectors using MoS$_2$.
condensed matter
The very compelling deviations of the recently observed lepton non-universality observables $\big (R_{D^{(*)}}, R_{K^{(*)}}, R_{J/\psi} \big )$ of semileptonic $B$ meson decays from their Standard Model predictions hint towards the presence of some kind of new physics beyond it. In this regard, we investigate the effect of new physics on the semileptonic $\bar B_{d(s)}^* \to P \ell \bar{\nu}_\ell$ decay processes, where $P=D,\pi$ ($D_s,K$), in a model-independent way. We consider the presence of additional vector- and scalar-type interactions and constrain the corresponding new couplings by fitting the ${\rm Br(B_{u}^+ \to \tau^+ \nu_\tau)}$, ${\rm Br(B \to \pi \tau \bar \nu_\tau)}$, ${\rm Br(B_{c}^+ \to \tau^+ \nu_\tau)}$, $R_\pi^l$, $R_{D^{(*)}}$ and $R_{J/\psi}$ data. Using the constrained new parameters, we estimate the branching ratios, forward-backward asymmetry, lepton-spin asymmetry and lepton non-universality observables of the $\bar B_{d,s}^{*} \to P \tau \bar \nu_\tau$ processes. We find that the branching ratios of these decay modes are sizeable and deviate significantly (for vector-type couplings) from their corresponding Standard Model values, and they are expected to be within the reach of Run III of the Large Hadron Collider experiment.
high energy physics phenomenology
Long before the recent fascination with two-dimensional materials, the critical behaviour and universality scaling of phase transitions in low-dimensional systems has been a topic of great interest. Particularly intriguing is the case of long-range magnetic order in two dimensions, once considered to be excluded in systems with continuous symmetry by the Hohenberg-Mermin-Wagner theorem. While an out-of-plane anisotropy has been shown to stabilize 2D magnetic order, this proof has remained elusive for a 2D magnet with in-plane rotational symmetry. Here, we construct a nearly ideal easy-plane system, a CrCl$_3$ monolayer grown on graphene/6H-SiC(0001), and unambiguously demonstrate robust in-plane ferromagnetic ordering with a critical scaling behaviour characteristic of a 2D-XY system. These observations suggest the first realization of a finite-size Berezinskii-Kosterlitz-Thouless (BKT) phase transition in a large-area, quasi-freestanding, van der Waals monolayer magnet with an XY universality class; and further constitute an ideal platform to study exotic phenomena like superfluid spin transport or 2D topological in-plane spin textures -- such as merons.
condensed matter
We propose a novel consensus protocol based on a hybrid approach that combines a directed acyclic graph (DAG) and a classical chain of blocks. This architecture allows us to enforce collective block construction, minimising the monopolistic power of the round leader. In this way, we reduce the possibility of collusion between senders and miners, as well as among miners themselves, allowing the use of more incentive-compatible and fair pricing strategies. We investigate these possibilities alongside the ability to use the DAG structure to minimise the risk of transaction censoring. We conclude by providing preliminary benchmarks of our protocol and by exploring further research directions.
computer science
The Internet of Things (IoT) envisions the integration of physical objects into software systems for automating crucial aspects of our lives, such as healthcare, security, agriculture, and city management. The vision is promising, but with the rapid advancement of hardware and communication technologies, IoT systems are becoming increasingly dynamic, large, and complex, to the extent that manual management becomes infeasible. Thus, it is of paramount importance to provide software engineering foundations for constructing autonomic IoT systems. In this paper, we introduce a novel paradigm referred to as self-organizing software models, in which IoT software systems are not explicitly programmed but instead emerge in a decentralized manner during system operation, with minimal or no human intervention. We present an overview of these models, including their definition, motivation, research challenges, and potential directions.
computer science
M-theory on local $G_2$-manifolds engineers 4d minimally supersymmetric gauge theories. We consider ALE-fibered $G_2$-manifolds and study the 4d physics from the viewpoint of a partially twisted 7d supersymmetric Yang-Mills theory and its Higgs bundle. Euclidean M2-brane instantons descend to non-perturbative effects of the 7d supersymmetric Yang-Mills theory, which are found to be in one-to-one correspondence with the instantons of a colored supersymmetric quantum mechanics. We compute the contributions of M2-brane instantons to the 4d superpotential in the effective 7d description via localization in the colored quantum mechanics. Further, we consider non-split Higgs bundles and analyze their 4d spectrum.
high energy physics theory
Here we discuss advances in UV technology over the last decade, with an emphasis on photon-counting, low-noise, high-efficiency detectors in sub-orbital programs. We focus on the use of innovative UV detectors in a NASA astrophysics balloon telescope, FIREBall-2, which successfully flew in the Fall of 2018. The FIREBall-2 telescope is designed to make observations of distant galaxies to understand more about how they evolve by looking for diffuse hydrogen in the galactic halo. The payload utilizes a 1.0-meter class telescope with an ultraviolet multi-object spectrograph and is a joint collaboration between Caltech, JPL, LAM, CNES, Columbia, the University of Arizona, and NASA. The improved detector technology that was tested on FIREBall-2 can be applied to any UV mission. We discuss the results of the flight and the detector performance. We also discuss the utility of sub-orbital platforms (both balloon payloads and rockets) for testing new technologies and proof-of-concept scientific ideas.
astrophysics
We study the loop-induced decays $h^0 \to \gamma \, \gamma$ and $h^0 \to g \, g$ in the Minimal Supersymmetric Standard Model (MSSM) with quark flavour violation (QFV), identifying $h^0$ with the Higgs boson with a mass of 125 GeV, where $\gamma$ and $g$ are photon and gluon, respectively. We perform an MSSM parameter scan and a detailed analysis around a fixed reference point respecting theoretical constraints from vacuum stability conditions and experimental constraints, such as those from B meson data and electroweak precision data, as well as recent limits on supersymmetric (SUSY) particle masses from LHC experiments. We find that (i) the relative deviation of the decay width $\Gamma(h^0 \to g \, g)$ from the Standard Model value, $DEV(g)$, can be large and negative, $\lesssim - 15\%$, (ii) the analogous deviation of $\Gamma(h^0 \to \gamma \, \gamma)$ is strongly correlated, $DEV(\gamma) \simeq -1/4\,DEV(g)$ for $DEV(g) \lesssim - 4\%$, (iii) the relative deviation of the width ratio $\Gamma(h^0 \to \gamma \, \gamma)/\Gamma(h^0 \to g \, g)$ from the SM value, $DEV(\gamma/g)$, can be large (up to $\sim$ 20\%), (iv) the deviations can be large due to the up-type squark loop contributions, (v) the SUSY QFV parameters can have a significant effect on these deviations. Such large deviations can be observed at a future $e^+e^-$ collider such as the ILC. Observation of the deviation patterns as shown in this study would favour the MSSM with flavour-violating squark mixings and motivate further studies of this model.
high energy physics phenomenology
A fundamental limit of current radiative cooling systems is that only the top surface facing deep-space can provide the radiative cooling effect, while the bottom surface cannot. Here, we propose and experimentally demonstrate a concept of "concentrated radiative cooling" by nesting a radiative cooling system in a mid-infrared reflective trough, so that the lower surface, which does not contribute to radiative cooling in previous systems, can radiate heat to deep-space via the reflective trough. Field experiments show that the temperature drop of a radiative cooling pipe with the trough is more than double that of the standalone radiative cooling pipe. Furthermore, by integrating the concentrated radiative cooling system as a preconditioner in an air conditioning system, we predict electricity savings of $>75\%$ in Phoenix, AZ, and $>80\%$ in Reno, NV, for a single-story commercial building.
physics
We propose a Digital Neuron, a hardware inference accelerator for convolutional deep neural networks with integer inputs and integer weights for embedded systems. The main idea for reducing circuit area and power consumption is to compute the dot products between input feature and weight vectors using barrel shifters and parallel adders. The reduced area allows more computational engines to be mounted on an inference accelerator, resulting in high throughput compared to prior hardware accelerators. We verified that the multiplication of integer numbers with 3 partial sub-integers does not cause significant loss of inference accuracy compared to 32-bit floating point calculation. The proposed digital neuron can perform 800 MAC operations in one clock cycle for convolutional as well as fully-connected layers. This paper provides a scheme that reuses the inputs, weights, and outputs of all layers to reduce DRAM accesses. In addition, this paper proposes a configurable architecture that can provide inference for adaptable features of convolutional neural networks. The digital neuron achieves a throughput per watt of 754.7 GMACs/W.
electrical engineering and systems science
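As a companion to the Digital Neuron abstract above, here is a minimal Python sketch of a shift-and-add dot product in which each integer weight is approximated by at most three signed power-of-two terms, so every multiply reduces to barrel shifts and additions. The greedy decomposition below is an illustrative assumption, not the paper's exact datapath.

```python
# Hedged sketch: integer multiply via at most three shifted partial terms,
# as a barrel-shifter/parallel-adder datapath might realize it.
def top3_power_terms(w, n_terms=3):
    """Greedily decompose integer w into signed powers of two (hypothetical scheme)."""
    terms, r = [], w
    for _ in range(n_terms):
        if r == 0:
            break
        k = abs(r).bit_length() - 1            # nearest lower power of two
        t = (1 << k) if r > 0 else -(1 << k)
        terms.append(t)
        r -= t
    return terms

def shift_add_dot(xs, ws):
    """Dot product using only shifts and adds per weight."""
    acc = 0
    for x, w in zip(xs, ws):
        for t in top3_power_terms(w):
            k = abs(t).bit_length() - 1
            acc += (x << k) if t > 0 else -(x << k)
    return acc

xs, ws = [3, -5, 7], [9, 6, -11]
print(shift_add_dot(xs, ws), sum(x * w for x, w in zip(xs, ws)))  # -80 -80
```

For weights whose binary expansion has at most three signed digits the result is exact; otherwise it is an approximation, consistent with the small accuracy loss the abstract reports.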
Computed tomography (CT) can provide a 3D view of the patient's internal organs, facilitating disease diagnosis, but it delivers a higher radiation dose to the patient, and a CT scanner is far more expensive than an X-ray machine. Traditional CT reconstruction methods require hundreds of X-ray projections through a full rotational scan of the body, which cannot be performed on a typical X-ray machine. In this work, we propose to reconstruct CT from two orthogonal X-rays using the generative adversarial network (GAN) framework. A specially designed generator network is exploited to increase the data dimension from 2D (X-rays) to 3D (CT), which is not addressed in previous research on GANs. A novel feature fusion method is proposed to combine information from the two X-rays. The mean squared error (MSE) loss and adversarial loss are combined to train the generator, resulting in a high-quality CT volume both visually and quantitatively. Extensive experiments on a publicly available chest CT dataset demonstrate the effectiveness of the proposed method. It could be a valuable enhancement to a low-cost X-ray machine, providing physicians with a CT-like 3D volume in several niche applications.
electrical engineering and systems science
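A minimal PyTorch-style sketch of the combined objective described in the abstract above: an MSE reconstruction term plus an adversarial term for a generator mapping two X-ray views to a CT volume. The tensor shapes, the weight `lam`, and the biplanar input format are assumptions; the paper's specific architecture and feature fusion are not reproduced here.

```python
# Hedged sketch (PyTorch): MSE + adversarial loss for a 2D -> 3D generator.
import torch
import torch.nn as nn

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(G, D, xrays, ct_gt, lam=10.0):
    ct_pred = G(xrays)                           # e.g. (B, 1, D, H, W) volume from two views
    rec = mse(ct_pred, ct_gt)                    # voxel-wise reconstruction term
    logits = D(ct_pred)                          # discriminator score on the fake volume
    adv = bce(logits, torch.ones_like(logits))   # generator tries to fool D
    return lam * rec + adv                       # lam balances the two terms (assumed value)
```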
A method to derive convenient representations for many two-photon amplitudes is suggested. It is based on the use of the gauge in which the photon propagator has only space components. The amplitudes obtained are free of strong numerical cancellations and are therefore very convenient for numerical evaluation. Our approach is illustrated by the consideration of the processes $e^+e^-\to e^+e^-e^+e^-$, $e^+e^-\rightarrow e^+e^-\mu^+\mu^-$, and $e^+e^-\to \mu^+\mu^-\pi^+\pi^-$. The method is extended to the case of polarized particles. The amplitudes obtained in this approach have been employed to extend the event generator developed by F.A. Berends, P.H. Daverveldt, and R. Kleiss.
high energy physics phenomenology
In other FICO Technical Papers, I have shown how to fit Generalized Additive Models (GAM) with shape constraints using quadratic programming applied to B-Spline component functions. In this paper, I extend the method to Robust Least Squares Regression.
statistics
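A minimal sketch of the underlying idea in the abstract above: a shape-constrained (here, monotone) B-spline fit posed as a quadratic program, with monotonicity enforced through nondecreasing spline coefficients. SLSQP stands in for a dedicated QP solver, and the knot placement and data are illustrative choices.

```python
# Hedged sketch: monotone B-spline least squares as a quadratic program.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.log1p(5 * x) + rng.normal(0, 0.1, x.size)      # noisy increasing signal

k, inner = 3, np.linspace(0, 1, 8)
t = np.r_[[0] * k, inner, [1] * k]                    # clamped knot vector
n = len(t) - k - 1
B = BSpline.design_matrix(x, t, k).toarray()          # basis evaluated at x

obj = lambda c: 0.5 * np.sum((B @ c - y) ** 2)        # quadratic objective
grad = lambda c: B.T @ (B @ c - y)
cons = {"type": "ineq", "fun": lambda c: np.diff(c)}  # c_{i+1} >= c_i => nondecreasing spline
res = minimize(obj, np.zeros(n), jac=grad, constraints=[cons], method="SLSQP")
fit = BSpline(t, res.x, k)                            # monotone fitted component function
```

Nondecreasing coefficients suffice because the derivative of a B-spline is a spline whose coefficients are positive multiples of the differences $c_{i+1}-c_i$.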
Supersymmetric $\,\textrm{AdS}_{4}\,$, $\,\textrm{AdS}_{2} \times \Sigma_{2}\,$ and asymptotically AdS$_{4}$ black hole solutions are studied in the context of non-minimal $\,\mathcal{N}=2\,$ supergravity models involving three vector multiplets (STU-model) and Abelian gaugings of the universal hypermultiplet moduli space. Such models correspond to consistent subsectors of the $\,\textrm{SO}(p,q)\,$ and $\,\textrm{ISO}(p,q)\,$ gauged maximal supergravities that arise from the reduction of 11D and massive IIA supergravity on $\,\textrm{H}^{(p,q)}\,$ spaces down to four dimensions. A unified description of all the models is provided in terms of a square-root prepotential and the gauging of a duality-hidden symmetry pair of the universal hypermultiplet. Some aspects of M-theory and massive IIA holography are mentioned in passing.
high energy physics theory
The cost of drawing object bounding boxes (i.e. labeling) for millions of images is prohibitively high. For instance, labeling pedestrians in a regular urban image could take 35 seconds on average. Active learning aims to reduce the cost of labeling by selecting only those images that are informative to improve the detection network accuracy. In this paper, we propose a method to perform active learning of object detectors based on convolutional neural networks. We propose a new image-level scoring process to rank unlabeled images for their automatic selection, which clearly outperforms classical scores. The proposed method can be applied to videos and sets of still images. In the former case, temporal selection rules can complement our scoring process. As a relevant use case, we extensively study the performance of our method on the task of pedestrian detection. Overall, the experiments show that the proposed method performs better than random selection. Our codes are publicly available at www.gitlab.com/haghdam/deep_active_learning.
computer science
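A toy sketch of image-level scoring for active learning in detection, as in the abstract above: per-detection confidences are aggregated into one score per image, and the most ambiguous images are selected for labeling. The $1-|2p-1|$ score and max-aggregation are illustrative assumptions, not the paper's scoring function.

```python
# Hedged sketch: rank unlabeled images by aggregated detection ambiguity.
import numpy as np

def image_score(det_confidences):
    """High when some detection is ambiguous (confidence near 0.5)."""
    p = np.asarray(det_confidences)
    if p.size == 0:
        return 0.0
    return float(np.max(1.0 - np.abs(2.0 * p - 1.0)))

def select_for_labeling(unlabeled, budget):
    scored = sorted(unlabeled.items(),
                    key=lambda kv: image_score(kv[1]), reverse=True)
    return [name for name, _ in scored[:budget]]

pool = {"img_a": [0.51, 0.9], "img_b": [0.97], "img_c": [0.45, 0.6, 0.8]}
print(select_for_labeling(pool, budget=2))   # most ambiguous images first
```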
Generic ontologies were introduced as an extension (Generic DOL) of the Distributed Ontology, Modeling and Specification Language, DOL, with the aim to provide a language for Generic Ontology Design Patterns. In this paper we present a number of new language constructs that increase the expressivity and the generality of Generic DOL, among them sequential and optional parameters, list parameters with recursion, and local sub-patterns. These are illustrated with non-trivial patterns: generic value sets and (nested) qualitatively graded relations, demonstrated as definitional building blocks in an application domain.
computer science
The Bell inequalities in three and four correlations are re-derived in general forms showing that three and four data sets, respectively, identically satisfy them regardless of whether they are random, deterministic, measured, predicted, or some combination of these. The Bell inequality applicable to data is thus a purely mathematical result independent of experimental test. Correlations of simultaneously cross-correlated variable pairs do not in general all have the same form, and vary with the physical system considered and its experimental configuration. It is the form of correlations of associated data sets that may be tested, and not whether they satisfy the Bell inequality. In the case of pairs of spins or photons, a third measured or predicted value requires a different experimental setup or predictive computation than is used to obtain data from pairs alone. This is due to the quantum non-commutation of spin and photon measurements when there is more than one per particle of a pair. The Wigner inequality for probabilities, with different probabilities for different variable pairs, may be obtained from the four-variable Bell inequality under a simple symmetry condition. Neither the probability nor the correlation inequality is violated by correlations computed from quantum probabilities based on non-commutation.
quantum physics
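A worked numerical check of the claim above that four data sets of $\pm 1$ values satisfy the four-correlation (CHSH-form) Bell inequality identically, however the data were generated: pointwise, $a(b+b') + a'(b-b') = \pm 2$, so the averaged combination can never exceed 2 in magnitude.

```python
# Worked check: four arbitrary +/-1 data sets satisfy |S| <= 2 identically.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
a, a2, b, b2 = (rng.choice([-1, 1], N) for _ in range(4))

S = np.mean(a * b) + np.mean(a * b2) + np.mean(a2 * b) - np.mean(a2 * b2)
# Pointwise, a*(b + b2) + a2*(b - b2) is always +2 or -2, hence the bound.
assert abs(S) <= 2.0
print(S)
```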
The forward BFKL equation is discretised in virtuality space and it is shown that the diffusion into infrared and ultraviolet momenta can be understood in terms of a semi-infinite matrix. The square truncation of this matrix can be exponentiated leading to asymptotic eigenstates sharing many features with the BFKL gluon Green's function in the limit of large matrix size. This truncation is closely related to a representation of the XXX Heisenberg spin chain with spin $s = -\frac{1}{2}$ and SL(2) invariance, where the Hamiltonian acts on a symmetric double copy of the harmonic oscillator. A simple modification of the BFKL matrix suppressing the infrared modes generates evolution with energy compatible with unitarity.
high energy physics theory
Energy waste is currently one of the most important issues faced by power plants all over the world. In this study, two mathematical programming models are presented to formulate solar power plants pertinent to the design of a lean manufacturing system. We propose an approach consisting of a combination of mathematical and economic models for the distribution systems of electricity transmission networks. The diverse costs and the rate of energy wasted during transmission are considered as two comparison criteria. The main approach is to employ a simulation method for both models and to compare the simulation results with those of the optimization models run in Lingo software. Finally, analysis of the test results leads to a remarkable cost reduction in the country's energy demand within the distribution system of electricity transmission networks.
electrical engineering and systems science
We consider the Abelian-Higgs model with two complex scalar fields and arbitrary positive integer charges with the addition of a higher-order generalization of the Josephson term. The theory possesses vortices of both local and global variants. The only finite-energy configurations are shown to be the local vortices for which a certain combination of vortex numbers and electric charges -- called the global vortex number -- vanishes. The local vortices have rational fractional magnetic flux, as opposed to the global counterparts that can have an arbitrary fractional flux. The global vortices have angular domain walls, for which we find good approximate analytic solutions. Finally, we find a full classification of the minimal local vortices, as well as a few nonminimal networks of vortices, using numerical methods.
high energy physics theory
Variational approximation has recently been widely used in large-scale Bayesian inference, the simplest kind of which involves imposing a mean field assumption to approximate complicated latent structures. Despite the computational scalability of mean field, theoretical studies of its loss function surface and of the convergence behavior of iterative updates for optimizing the loss are far from complete. In this paper, we focus on the problem of community detection for a simple two-class Stochastic Blockmodel (SBM) with equal class sizes. Using batch coordinate ascent variational inference (BCAVI) for the updates, we show different convergence behavior with respect to different initializations. When the parameters are known, or estimated within a reasonable range and held fixed, we characterize conditions under which an initialization can converge to the ground truth. On the other hand, when the parameters need to be estimated iteratively, a random initialization will converge to an uninformative local optimum.
mathematics
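A minimal sketch of BCAVI-style batch updates for the two-class SBM with known connection probabilities $p > q$ held fixed, illustrating convergence from a slightly perturbed initialization; the graph generation, initialization, and iteration count are illustrative assumptions.

```python
# Hedged sketch: batch mean-field updates for a balanced two-class SBM.
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 200, 0.25, 0.05
z = np.repeat([0, 1], n // 2)                       # ground-truth communities
P = np.where(z[:, None] == z[None, :], p, q)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                      # undirected, no self-loops

# Log-likelihood-ratio matrix: edge -> log(p/q), non-edge -> log((1-p)/(1-q)).
M = A * np.log(p / q) + (1 - A) * np.log((1 - p) / (1 - q))
np.fill_diagonal(M, 0.0)                            # exclude self-terms

tau = 0.5 + 0.01 * rng.standard_normal(n)           # near-uninformative start
for _ in range(50):                                 # batch (synchronous) updates
    tau = 1.0 / (1.0 + np.exp(-((2 * tau - 1) @ M)))

pred = (tau > 0.5).astype(int)
acc = max(np.mean(pred == z), np.mean(pred != z))   # accuracy up to label swap
print(f"agreement with ground truth: {acc:.2f}")
```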
We present a scenario for the strange metal phase in the cuprates, where diffusive, charge-two, finite-momentum bosons are present in a vast region of the phase diagram. The presence of these bosons emerging from pairs of high-energy electrons can account for a regime of linear-in-$T$ resistivity. Diffusive bosons are incoherent, and as such, they do not contribute to the Hall conductivity. Surprisingly, these incoherent bosons contribute to the longitudinal Drude conductivity with the corresponding transport time given by $\tau\sim \hbar/(k_{B}T)$, reminiscent of the Planckian dissipators associated with a putative quantum critical point in the strange metal phase. We also obtain a linear-in-$H$ magnetoresistance when the diffusive bosons originate from spin-triplet electron pairs. The presence of such bosons in the strange metal phase of the cuprates can shed light on recent transport measurements in overdoped compounds~[J. Ayres et al., unpublished (2020)].
condensed matter
Deep learning (DL) has had unprecedented success and is now entering scientific computing with full force. However, current DL methods typically suffer from instability, even when universal approximation properties guarantee the existence of stable neural networks (NNs). We address this paradox by demonstrating basic well-conditioned problems in scientific computing where one can prove the existence of NNs with great approximation qualities; however, there does not exist any algorithm, even a randomised one, that can train (or compute) such an NN. For any positive integers $K > 2$ and $L$, there are cases where simultaneously: (a) no randomised training algorithm can compute a NN correct to $K$ digits with probability greater than $1/2$, (b) there exists a deterministic training algorithm that computes a NN with $K-1$ correct digits, but any such (even randomised) algorithm needs arbitrarily many training data, (c) there exists a deterministic training algorithm that computes a NN with $K-2$ correct digits using no more than $L$ training samples. These results imply a classification theory describing conditions under which (stable) NNs with a given accuracy can be computed by an algorithm. We begin this theory by establishing sufficient conditions for the existence of algorithms that compute stable NNs in inverse problems. We introduce Fast Iterative REstarted NETworks (FIRENETs), which we both prove and numerically verify are stable. Moreover, we prove that only $\mathcal{O}(|\log(\epsilon)|)$ layers are needed for an $\epsilon$-accurate solution to the inverse problem.
computer science
Many basic properties in Tutte's flow theory for unsigned graphs do not have their counterparts for signed graphs. However, signed graphs without long barbells in many ways behave like unsigned graphs from the point of view of flows. In this paper, we study whether some basic properties in Tutte's flow theory remain valid for this family of signed graphs. Specifically, let $(G,\sigma)$ be a flow-admissible signed graph without long barbells. We show that it admits a nowhere-zero $6$-flow and that it admits a nowhere-zero modulo $k$-flow if and only if it admits a nowhere-zero integer $k$-flow for each integer $k\geq 3$ and $k \not = 4$. We also show that each nowhere-zero positive integer $k$-flow of $(G,\sigma)$ can be expressed as the sum of some $2$-flows. For general graphs, we show that every nowhere-zero $\frac{p}{q}$-flow can be normalized in such a way that each flow value is a multiple of $\frac{1}{2q}$. As a consequence, we prove the equality of the integer flow number and the ceiling of the circular flow number for flow-admissible signed graphs without long barbells.
mathematics
In this paper, we present a novel approach to geostatistical filtering which tackles two challenges encountered when applying this method to complex spatial datasets: modeling the non-stationarity of the data while still being able to work with large datasets. The approach is based on a finite element approximation of Gaussian random fields expressed as an expansion in the eigenfunctions of a Laplace--Beltrami operator defined to account for local anisotropies. The numerical approximation of the resulting random fields using a finite element approach is then leveraged to solve the scalability issue through a matrix-free approach. Finally, two applications of this approach, to simulated and to real seismic data, are presented.
statistics
Social networks like Facebook and WhatsApp have enabled users to share images with other users around the world, and along with this has come the rapid spread of misinformation. One step towards verifying the authenticity of an image is understanding its origin, including its distribution history through social media. In this paper, we present a method for tracing the posting history of an image across different social networks. To do this, we propose a two-stage deep-learning-based approach which takes advantage of cascaded fingerprints left in images by social networks during uploading. Our proposed system is not reliant upon metadata or similar easily falsifiable information. Through a series of experiments, we show that we are able to outperform existing social media source identification algorithms and identify chains of social networks up to length two with over 84% accuracy.
electrical engineering and systems science
Highly intense electric field pulses can move the electronic momentum occupation in correlated metals over large portions of the Brillouin zone, leading to phenomena such as dynamic Bloch oscillations. Using the non-equilibrium fluctuation-exchange approximation for the two-dimensional Hubbard model, we study how such non-thermal electron-distributions drive collective spin and charge fluctuations. Suitable pulses can induce a highly anisotropic modification of the occupied momenta, and the corresponding spin dynamics results in a transient change from antiferromagnetic to anisotropic ferromagnetic correlations. To good approximation this behavior is understood in terms of an instantaneous response of the spin fluctuations to the single-particle properties, opposite to the conventional time-scale separation between spin and electron dynamics.
condensed matter
In this paper we propose a novel approach to aperiodic order in optical science and technology that leverages the intrinsic structural complexity of certain non-polynomial (hard) problems in number theory and cryptography for the engineering of optical media with novel transport and wave localization properties. In particular, we address structure-property relationships in a large number (900) of light scattering systems that physically manifest the distinctive aperiodic order of elliptic curves and the associated discrete logarithm problem over finite fields. Besides defining an extremely rich subject with profound connections to diverse mathematical areas, elliptic curves offer unprecedented opportunities to engineer light scattering phenomena in aperiodic environments beyond the limitations of traditional random media. Our theoretical analysis combines the interdisciplinary methods of the spatial statistics of point patterns with the rigorous Green's matrix solution of the multiple wave scattering problem for electric and magnetic dipoles, and provides access to the spectral and light scattering properties of novel deterministic aperiodic structures with enhanced light-matter coupling for nanophotonics and metamaterials applications to imaging and spectroscopy.
condensed matter
In this paper, the variation of a signal in Schwarzschild spacetime is studied and a general equation for the frequency shift parameter (FSP) is presented. It shows that the FSP depends on the gravitationally-modified Doppler effects and on the gravitational effects of the observers. In addition, the rates of time of a transmitter and a receiver may differ. When the FSP is a function of the time of the receiver, the FSP contributed by the gravitational effect (GFSP) or the gravitationally-modified Doppler effect (GMDFSP) may turn a bandlimited signal into a non-bandlimited one. Based on this equation, the FSP as a function of the time of the receiver is calculated for three scenarios: a) a spaceship moves away from a star with a constant velocity while communicating with a transmitter at a fixed position; b) a spaceship moves around a star along different conic trajectories while communicating with a transmitter at a fixed position; c) a signal is transmitted from a fixed position in a star system to a receiver moving along an elliptical trajectory in another star system. The studied stars are a Sun-like star, a white dwarf, and a neutron star; numerical examples are presented to illustrate the theory.
electrical engineering and systems science
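A small numerical sketch in the spirit of scenario (a) above: composing the standard Schwarzschild gravitational redshift with the radial relativistic Doppler factor for a receding receiver, evaluated for the three star types mentioned. Defining the FSP as $f_\mathrm{received}/f_\mathrm{emitted} - 1$ is an assumption here; the paper's general equation may differ.

```python
# Hedged sketch: gravitational redshift x radial Doppler for a receding receiver.
import math

G, c = 6.674e-11, 2.998e8

def freq_shift(M, r_emit, r_recv, beta):
    rs = 2 * G * M / c**2                          # Schwarzschild radius
    grav = math.sqrt((1 - rs / r_emit) / (1 - rs / r_recv))
    doppler = math.sqrt((1 - beta) / (1 + beta))   # radial recession at speed beta*c
    return grav * doppler - 1.0                    # assumed FSP definition

M_sun = 1.989e30
for name, M, R in [("sun-like star", M_sun, 6.96e8),
                   ("white dwarf", 0.6 * M_sun, 7e6),
                   ("neutron star", 1.4 * M_sun, 1.2e4)]:
    # transmitter on the stellar surface, distant receiver receding slowly
    print(name, freq_shift(M, r_emit=R, r_recv=1e12, beta=1e-4))
```

The neutron star case shows the gravitational term dominating the Doppler term by orders of magnitude, which is why the three star types give qualitatively different FSP curves.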
Measurements of thermal diffusivity in several insulators have been shown to reach a Planckian bound on thermal transport that can be thought of as the limit of validity of semiclassical phonon scattering. Beyond this regime, heat transport must be understood in terms of the incoherent motion of atoms under strongly anharmonic interactions. In this work, we propose a model for heat transport in a strongly anharmonic system where the thermal diffusivity can be lower than the Planckian thermal diffusivity bound. Similar to the materials which exhibit thermal diffusivity close to this bound, our scenario involves a complex unit cell with incoherent intra-cell dynamics. We derive a general formalism to compute thermal conductivity in such cases with anharmonic intra-cell dynamics coupled to nearly harmonic inter-cell coupling. Through direct numerical simulation of the non-linear unit cell motion, we explicitly show that our model allows sub-Planckian thermal diffusivity. We find that the propagator of the acoustic phonons becomes incoherent throughout most of the Brillouin zone in this limit. We expect these features to apply to more realistic models of complex insulators showing sub-Planckian thermal diffusivity, suggesting a multi-species generalization of the thermal diffusivity bound that is similar to the viscosity bound in fluids.
condensed matter
We reanalyse the ratio $\varepsilon'/\varepsilon$ in the Standard Model (SM) using most recent hadronic matrix elements from the RBC-UKQCD collaboration in combination with most important NNLO QCD corrections to electroweak penguin contributions and the isospin-breaking corrections. We illustrate the importance of the latter by using their latest estimate from chiral perturbation theory (ChPT) based on the $octet$ approximation for lowest-lying mesons and a very recent estimate in the $nonet$ scheme that takes into account the contribution of $\eta_0$. We find $(\varepsilon'/\varepsilon)^{(8)}_\text{SM} = (17.4 \pm 6.1) \times 10^{-4}$ and $(\varepsilon'/\varepsilon)^{(9)}_\text{SM} = (13.9 \pm 5.2) \times 10^{-4}$, respectively. Despite a very good agreement with the measured value $(\varepsilon'/\varepsilon)_\text{exp} = (16.6 \pm 2.3) \times 10^{-4}$, the large error in $(\varepsilon'/\varepsilon)_\text{SM}$ still leaves room for significant new physics (BSM) contributions to this ratio. We update the 2018 master formula for $(\varepsilon'/\varepsilon)_\text{BSM}$ valid in any extension beyond the SM without additional light degrees of freedom. We provide new values of the penguin parameters $B_6^{(1/2)}(\mu)$ and $B_8^{(3/2)}(\mu)$ at the $\mu$-scales used by the RBC-UKQCD collaboration and at lower scales $\mathcal{O}(1\,\text{GeV})$ used by ChPT and DQCD. We present semi-analytic formulae for $(\varepsilon'/\varepsilon)_\text{SM}$ in terms of these parameters and $\hat{\Omega}_\text{eff}$ that summarizes isospin-breaking corrections to this ratio. We stress the importance of lattice calculations of the $\mathcal{O}(\alpha_\text{em})$ contributions to the hadronic matrix elements necessary for the removal of renormalization scheme dependence at $\mathcal{O}(\alpha_\text{em})$ in the present analyses of $\varepsilon'/\varepsilon$.
high energy physics phenomenology
The Complete Calibration of the Color-Redshift Relation (C3R2) survey is a multi-institution, multi-instrument survey that aims to map the empirical relation of galaxy color to redshift to i~24.5 (AB), thereby providing a firm foundation for weak lensing cosmology with the Stage IV dark energy missions Euclid and WFIRST. Here we present 3171 new spectroscopic redshifts obtained in the 2016B and 2017A semesters with a combination of DEIMOS, LRIS, and MOSFIRE on the Keck telescopes. The observations come from all of the Keck partners: Caltech, NASA, the University of Hawaii, and the University of California. Combined with the 1283 redshifts published in DR1, the C3R2 survey has now obtained and published 4454 high quality galaxy redshifts. We discuss updates to the survey design and provide a catalog of photometric and spectroscopic data. Initial tests of the calibration method performance are given, indicating that the sample, once completed and combined with extensive data collected by other spectroscopic surveys, should allow us to meet the cosmology requirements for Euclid, and make significant headway toward solving the problem for WFIRST. We use the full spectroscopic sample to demonstrate that galaxy brightness is weakly correlated with redshift once a galaxy is localized in the Euclid or WFIRST color space, with potentially important implications for the spectroscopy needed to calibrate redshifts for faint WFIRST and LSST sources.
astrophysics
Networked Control Systems (NCSs) have attracted much interest over the past decade as part of a move towards more decentralised control applications and the rise of cyber-physical system applications. Many practical NCSs face the challenges of limited communication bandwidth resources, reliability, and lack of knowledge of network dynamics, particularly when wireless networks are involved. Machine learning (ML) combined with event-triggered control (ETC) has the potential to ease some of these challenges. For example, ML can be used to overcome the problem of a lack of network models by learning system behaviour or to adapt to dynamically changing models by continually learning model dynamics. ETC can help to conserve bandwidth resources by communicating only when needed or when resources are available. Here, we present a review of the literature on work that combines ML and ETC. The literature on supervised, semi-supervised, unsupervised and reinforcement-learning-based approaches, such as deep reinforcement learning and statistical learning in combination with ETC, is explored. Furthermore, the differences between the application of these learning algorithms to model-based and model-free systems are discussed. Following the analysis of the literature, we highlight open research questions and challenges related to ML-based ETC and propose approaches to possible solutions to these challenges.
electrical engineering and systems science
The car-sharing problem, proposed by Luo, Erlebach and Xu in 2018, mainly focuses on an online model in which there are two locations, 0 and 1, and $k$ total cars. Each request, which specifies its pick-up time and pick-up location (one of 0 and 1; the other is the drop-off location), is released in each stage a fixed amount of time before its specified start (i.e. pick-up) time. The time between the booking (i.e. release) time and the start time is enough to move empty cars between 0 and 1 for relocation if they are not used in that stage. The model, called $k$S2L-F, assumes that requests in each stage arrive sequentially regardless of sharing the same booking time, and the decision (accept or reject) must be made immediately. The goal is to accept as many requests as possible. Despite there being only two locations, the analysis is not straightforward, and the (tight) competitive ratio (CR) was previously known only to be 2.0 for $k=2$ and 1.5 for restricted $k$, i.e., multiples of three. In this paper, we close all remaining gaps in the known CRs; namely, we prove that the CR is $\frac{2k}{k + \lfloor k/3 \rfloor}$ for all $k\geq 2$. Furthermore, if the algorithm can delay its decision until all requests have arrived in each stage, the CR improves to roughly 4/3. We can push this advantage even further: we achieve a CR of $\frac{2+R}{3}$ if the number of requests in each stage is at most $Rk$, $1 \le R \le 2$, where we do not need to know the value of $R$ in advance. Finally, we demonstrate that randomization also helps to obtain (slightly) better CRs.
computer science
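A worked check of the competitive ratio formula quoted above, recovering the previously known values 2.0 at $k=2$ and 1.5 at multiples of three:

```python
# Worked check of CR(k) = 2k / (k + floor(k/3)) for kS2L-F.
from fractions import Fraction

def cr(k):
    return Fraction(2 * k, k + k // 3)

for k in range(2, 10):
    print(k, cr(k), float(cr(k)))   # k=2 -> 2.0; k=3,6,9 -> 1.5; k=4 -> 1.6, etc.
```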
We consider the ordering dynamics of the Ising model on a square lattice where an additional fixed number of bonds connect any two sites chosen randomly. The total number of shortcuts added is controlled by two parameters $p$ and $\alpha$. The structural properties of the network are investigated, showing that small-world behaviour is obtained along the line $\alpha=\frac{\ln (N/2p)}{\ln N}$, which separates regions with ultra-small-world-like behaviour and short-ranged lattice-like behaviour. We obtain a rich phase diagram in the $p-\alpha$ plane showing the existence of different types of active and absorbing states to which the Ising model evolves, and their boundaries.
condensed matter
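For concreteness, the quoted crossover line can be evaluated directly; a trivial sketch (the parameter values are illustrative):

```python
# Hedged sketch: the small-world crossover line alpha = ln(N/2p) / ln(N).
import math

def alpha_crossover(N, p):
    return math.log(N / (2 * p)) / math.log(N)

for N in (10**3, 10**4, 10**5):
    print(N, alpha_crossover(N, p=1.0))   # approaches 1 slowly as N grows
```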
The majority of scientific papers are distributed in PDF, which pose challenges for accessibility, especially for blind and low vision (BLV) readers. We characterize the scope of this problem by assessing the accessibility of 11,397 PDFs published 2010--2019 sampled across various fields of study, finding that only 2.4% of these PDFs satisfy all of our defined accessibility criteria. We introduce the SciA11y system to offset some of the issues around inaccessibility. SciA11y incorporates several machine learning models to extract the content of scientific PDFs and render this content as accessible HTML, with added novel navigational features to support screen reader users. An intrinsic evaluation of extraction quality indicates that the majority of HTML renders (87%) produced by our system have no or only some readability issues. We perform a qualitative user study to understand the needs of BLV researchers when reading papers, and to assess whether the SciA11y system could address these needs. We summarize our user study findings into a set of five design recommendations for accessible scientific reader systems. User response to SciA11y was positive, with all users saying they would be likely to use the system in the future, and some stating that the system, if available, would become their primary workflow. We successfully produce HTML renders for over 12M papers, of which an open access subset of 1.5M are available for browsing at https://scia11y.org/
computer science
The force exerted by electromagnetic fields is of fundamental importance in a broad range of sciences and applications, but its exact formulation inside materials is still controversial and unclear. The textbook-accepted formulation of the electromagnetic force was proposed by Lorentz in the 19th century, but its validity has been challenged due to incompatibility with special relativity and momentum conservation. The Einstein-Laub formulation, which can reconcile those conflicts, was suggested as an alternative to the Lorentz formulation. However, intense debates on the exact force inside materials are still ongoing due to a lack of experimental evidence. Here, we report the first experimental investigation of the topological charge of the optical force inside a solid dielectric material, aiming to distinguish the two formulations. The experiments show that the optical force exerted by a Gaussian beam has components with topological charge of both 2 and 0, which cannot be supported solely by either the Lorentz or the Einstein-Laub formulation. Instead, we find that a modified Helmholtz theory can explain our experimental results. The unraveled topological charge of the optical force will not only contribute to the ultimate determination of the correct force formulation inside materials, but also update the fundamental working principles of many science and engineering branches involving electromagnetic forces.
physics
Kriging is the predominant method used for spatial prediction, but it relies on the assumption that predictions are linear combinations of the observations. Kriging often also relies on additional assumptions such as normality and stationarity. We propose a more flexible spatial prediction method based on the Nearest-Neighbor Neural Network (4N) process that embeds deep learning into a geostatistical model. We show that the 4N process is a valid stochastic process and propose a series of new ways to construct features, based on neighboring information, to be used as inputs to the deep learning model. Our model framework outperforms some existing state-of-the-art geostatistical modelling methods for simulated non-Gaussian data and is applied to a massive forestry dataset.
statistics
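A minimal sketch of the neighbor-based feature construction suggested by the 4N idea above: for each prediction site, the distances to and the values at its $k$ nearest observed neighbors form the input vector for a downstream regression network. The exact feature set used by the 4N process is an assumption here.

```python
# Hedged sketch: building k-nearest-neighbor features for spatial prediction.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
obs_xy = rng.uniform(0, 1, (500, 2))                 # observed locations
obs_z = np.sin(4 * obs_xy[:, 0]) + rng.normal(0, 0.1, 500)
targets = rng.uniform(0, 1, (50, 2))                 # prediction sites

k = 8
nn = NearestNeighbors(n_neighbors=k).fit(obs_xy)
dist, idx = nn.kneighbors(targets)
features = np.hstack([dist, obs_z[idx]])             # (50, 2k) design matrix
# `features` can now be fed to any regression network mapping R^{2k} -> R.
```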
Models of magnetohydrodynamic (MHD) equilibria that for computational convenience assume the existence of a system of nested magnetic flux surfaces tend to exhibit singular current sheets. These sheets are located on resonant flux surfaces that are associated with rational values of the rotational transform. We study the possibility of eliminating these singularities by suitable modifications of the plasma boundary, which we prescribe in a fixed boundary setting. We find that relatively straightforward iterative procedures can be used to eliminate weak current sheets that are generated at resonant flux surfaces by the nonlinear interactions of resonating wall harmonics. These types of procedures may prove useful in the design of fusion devices with configurations that enjoy improved stability and transport properties.
physics
The late stellar evolutionary phases of low- and intermediate-mass stars are strongly constrained by their mass-loss rates. The wind surrounding cool evolved stars frequently shows non-spherical features, thought to be due to an unseen companion orbiting the donor star. We study the morphology of the circumbinary envelope, in particular around oxygen-rich asymptotic giant branch (AGB) stars. We run a grid of 70 3D hydrodynamics simulations of a progressively accelerating wind propagating in the Roche potential formed by a mass-losing evolved star in orbit with a main-sequence companion. We resolve the flow structure both in the immediate vicinity of the secondary, where bow shocks, outflows and wind-captured disks form, and up to 40 orbital separations, where spiral arms, arcs and equatorial density enhancements develop. When the companion is deeply engulfed in the wind, the lower terminal wind speeds and more progressive wind acceleration around oxygen-rich AGB stars make them more prone than carbon-rich AGB stars to display disturbed outflows, a disk-like structure around the companion, and a wind concentrated in the orbital plane. In these configurations, a large fraction of the wind is captured by the companion, which leads to a significant shrinking of the orbit over the mass-loss timescale if the donor star is at least a few times more massive than its companion. Provided the companion has a mass of at least a tenth of the mass of the donor star, it can compress the wind in the orbital plane up to large distances. Our grid of models covers a wide scope of configurations as a function of the dust chemical content, the terminal wind speed relative to the orbital speed, the extension of the dust condensation region around the cool evolved star, and the mass ratio. It provides a frame of reference to interpret high-resolution maps of the outflows surrounding cool evolved stars.
astrophysics
We study the Cardy-like limit of the superconformal index of generic $\mathcal{N}=1$ SCFTs with ABCD gauge algebra, providing strong evidence for a universal formula that captures the behavior of the index at finite order in the rank and in the fugacities associated to angular momenta. The formula extends previous results valid at lowest order, and generalizes them to generic SCFTs. We corroborate the validity of our proposal by studying several examples, beyond the well-understood toric class. We compute the index also for models without a weakly-coupled gravity dual, whose gravitational anomaly is not of order one.
high energy physics theory
For a prime number $p$, we give a new restriction on pro-$p$ groups $G$ which are realizable as the maximal pro-$p$ Galois group $G_F(p)$ for a field $F$ containing a root of unity of order $p$. This restriction arises from Kummer Theory and the structure of the maximal $p$-radical extension of $F$. We study it in the abstract context of pro-$p$ groups $G$ with a continuous homomorphism $\theta\colon G\to1+p\mathbb{Z}_p$, and characterize it cohomologically, and in terms of 1-cocycles on $G$. This is used to produce new examples of pro-$p$ groups which do not occur as maximal pro-$p$ Galois groups of fields as above.
mathematics
It was recently argued that string theory on ${\rm AdS}_3\times {\rm S}^3\times \mathbb{T}^4$ with one unit ($k=1$) of NS-NS flux is exactly dual to the symmetric orbifold CFT ${\rm Sym}^N(\mathbb{T}^4)$. In this paper we show how to directly relate the $n$-point correlators of the two sides to one another. In particular, we argue that the correlators of the world-sheet theory are delta-function-localised in string moduli space to those configurations that allow for a holomorphic covering map of the $\text{S}^2$-boundary of $\text{AdS}_3$ by the world-sheet. This striking feature can be seen both from a careful Ward identity analysis, as well as from semi-classically exact AdS$_3$ solutions that are pinned to the boundary. The world-sheet correlators therefore have exactly the same structure as in the Lunin-Mathur construction of symmetric orbifold CFT correlators in terms of a covering surface -- which now gets identified with the world-sheet. Together with the results of arXiv:1803.04423 and arXiv:1812.01007 this essentially demonstrates how the $k=1$ $\text{AdS}_3$ string theory becomes equivalent to the spacetime orbifold CFT in the genus expansion.
high energy physics theory
We discuss the possibility of realising a two-component dark matter (DM) scenario where the two DM candidates differ from each other by virtue of their production mechanism in the early universe. One of the DM candidates is thermally generated in a way similar to the weakly interacting massive particle (WIMP) paradigm where the DM abundance is governed by its freeze-out while the other candidate is produced only from non-thermal contributions similar to freeze-in mechanism. We discuss this in a minimal extension of the standard model where light neutrino masses arise radiatively in a way similar to the scotogenic models with DM particles going inside the loop. The lepton asymmetry is generated at the same time from WIMP DM annihilations as well as partially from the mother particle for non-thermal DM. This can be achieved while satisfying the relevant experimental bounds, and keeping the scale of leptogenesis or the thermal DM mass as low as 3 TeV, well within present experimental reach. In contrast to the TeV scale thermal DM mass, the non-thermal DM can be as low as a few keV, giving rise to the possibility of a sub-dominant warm dark matter (WDM) component that can have interesting consequences on structure formation. The model also has tantalizing prospects of being detected at ongoing direct detection experiments as well as the ones looking for charged lepton flavour violating process like $\mu \rightarrow e \gamma$.
high energy physics phenomenology
Visual objects are composed of a recursive hierarchy of perceptual wholes and parts, whose properties, such as shape, reflectance, and color, constitute a hierarchy of intrinsic causal factors of object appearance. However, object appearance is the compositional consequence of both an object's intrinsic and extrinsic causal factors, where the extrinsic causal factors are related to illumination and imaging conditions. Therefore, this paper proposes a unified tensor model of wholes and parts, and introduces a compositional hierarchical tensor factorization that disentangles the hierarchical causal structure of object image formation, and subsumes multilinear block tensor decomposition as a special case. The resulting object representation is an interpretable combinatorial choice of wholes' and parts' representations that renders object recognition robust to occlusion and reduces training data requirements. We demonstrate our approach in the context of face recognition by training on an extremely reduced dataset of synthetic images, and report encouraging face verification results on two datasets - the Freiburg dataset, and the Labeled Faces in the Wild (LFW) dataset consisting of real-world images - thus substantiating the suitability of our approach for data-starved domains.
computer science
In this paper, we investigate the model reference adaptive control approach for uncertain piecewise affine systems with performance guarantees. The proposed approach ensures that the error metric, defined as the weighted Euclidean norm of the state tracking error, is confined within a user-defined time-varying performance bound. We introduce an auxiliary performance function to construct a barrier Lyapunov function. This auxiliary performance signal is reset at each switching instant, which prevents the transgression of the barriers caused by the jumps of the error metric at switching instants. The dwell time constraints are derived based on the parameters of the user-defined performance bound and the auxiliary performance function. We also prove that the Lyapunov function is non-increasing even at the switching instants and thus does not impose extra dwell time constraints. Furthermore, we propose a robust modification of the adaptive controller for uncertain piecewise affine systems subject to unmatched disturbances. A numerical example validates the correctness of the proposed approach.
electrical engineering and systems science
Models of discrete space and space-time that exhibit continuum-like behavior at large lengths could have profound implications for physics. They may tame the infinities that arise from quantizing gravity, and dispense with the machinery of the real numbers, which has no direct observational support. Yet despite sophisticated attempts at formulating discrete space, researchers have failed to construct even the simplest geometries. We investigate graphs as the most elementary discrete models of two-dimensional space. We show that if space is discrete, it must be disordered, by proving that all planar lattice graphs exhibit the same taxicab metric as square grids. We give an explicit recipe for growing disordered discrete space by sampling a Boltzmann distribution of graphs at low temperature. We then propose three conditions which any discrete model of Euclidean space must meet: have a Hausdorff dimension of two, support unique straight lines and obey Pythagoras' theorem. Our model satisfies all three, making it the first discrete model in which continuum-like behavior is recovered at large lengths.
condensed matter
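A worked check of the square-grid case of the result above: the shortest-path (hop) distance on a square grid equals the taxicab metric $|dx| + |dy|$, the property the paper proves for all planar lattice graphs.

```python
# Worked check: BFS hop distance on a square grid equals |dx| + |dy|.
from collections import deque

def grid_bfs_dist(src, dst, n=20):
    q, seen = deque([(src, 0)]), {src}
    while q:
        (x, y), d = q.popleft()
        if (x, y) == dst:
            return d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in seen:
                seen.add((nx, ny))
                q.append(((nx, ny), d + 1))

assert grid_bfs_dist((2, 3), (11, 7)) == abs(11 - 2) + abs(7 - 3)   # 13 hops
```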
We consider the problem of estimating the mean of a distribution supported by the $k$-dimensional probability simplex in the setting where an $\varepsilon$ fraction of observations are subject to adversarial corruption. A simple particular example is the problem of estimating the distribution of a discrete random variable. Assuming that the discrete variable takes $k$ values, the unknown parameter $\boldsymbol \theta$ is a $k$-dimensional vector belonging to the probability simplex. We first describe various settings of contamination and discuss the relation between these settings. We then establish minimax rates when the quality of estimation is measured by the total-variation distance, the Hellinger distance, or the $\mathbb L^2$-distance between two probability measures. We also provide confidence regions for the unknown mean that shrink at the minimax rate. Our analysis reveals that the minimax rates associated to these three distances are all different, but they are all attained by the sample average. Furthermore, we show that the latter is adaptive to the possible sparsity of the unknown vector. Some numerical experiments illustrating our theoretical findings are reported.
mathematics
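A small simulation sketch of the setting above: estimating a discrete distribution (a point on the simplex) by empirical frequencies, i.e. the sample average, under $\varepsilon$-contamination; the corruption model below, in which the adversary dumps all corrupted mass on one symbol, is one illustrative choice.

```python
# Hedged sketch: sample-average estimation on the simplex under contamination.
import numpy as np

rng = np.random.default_rng(4)
k, n, eps = 10, 5000, 0.05
theta = rng.dirichlet(np.ones(k))                   # true mean on the simplex
clean = rng.choice(k, size=n, p=theta)
m = int(eps * n)
corrupted = clean.copy()
corrupted[:m] = 0                                   # adversarial outliers on symbol 0

theta_hat = np.bincount(corrupted, minlength=k) / n # the sample average
tv = 0.5 * np.abs(theta_hat - theta).sum()          # total-variation error
print(f"TV error: {tv:.3f} (eps = {eps})")
```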
Modern product design in the engineering domain is increasingly driven by computational analysis including finite-element based simulation, computational optimization, and modern data analysis techniques such as machine learning. To apply these methods, suitable data representations for components under development as well as for related design criteria have to be found. While a component's geometry is typically represented by a polygon surface mesh, it is often not clear how to parametrize critical design properties in order to enable efficient computational analysis. In the present work, we propose a novel methodology to obtain a parameterization of a component's plastic deformation behavior under stress, which is an important design criterion in many application domains, for example, when optimizing the crash behavior in the automotive context. Existing parameterizations limit computational analysis to relatively simple deformations and typically require extensive input by an expert, making the design process time intensive and costly. Hence, we propose a way to derive a compact descriptor of deformation behavior that is based on spectral mesh processing and enables a low-dimensional representation of even complex deformations. We demonstrate the descriptor's ability to represent relevant deformation behavior by applying it in a nearest-neighbor search to identify similar simulation results in a filtering task. The proposed descriptor provides a novel approach to the parametrization of geometric deformation behavior and enables the use of state-of-the-art data analysis techniques, such as machine learning, for engineering tasks concerned with plastic deformation behavior.
computer science
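A minimal sketch of a spectral deformation descriptor in the spirit of the abstract above: per-vertex displacements are projected onto the low-frequency eigenvectors of the mesh graph Laplacian, yielding a compact, comparison-friendly representation. The Laplacian variant, descriptor size, and toy mesh are illustrative assumptions.

```python
# Hedged sketch: low-frequency spectral coefficients of a deformation field.
import numpy as np

def spectral_descriptor(edges, disp, n_modes=16):
    """edges: (m, 2) vertex index pairs; disp: (n, 3) per-vertex displacements."""
    n = disp.shape[0]
    L = np.zeros((n, n))
    for i, j in edges:                        # combinatorial graph Laplacian
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    w, U = np.linalg.eigh(L)                  # eigenvectors, ascending frequency
    return U[:, :n_modes].T @ disp            # (n_modes, 3) spectral coefficients

edges = [(0, 1), (1, 2), (2, 3)]              # toy 4-vertex "mesh"
disp = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.3, 0, 0], [0.6, 0, 0]])
print(spectral_descriptor(edges, disp, n_modes=2))
```

Similar deformations give nearby descriptors, which is what enables the nearest-neighbor retrieval of comparable simulation results described above.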
Optimal parameter setting for application problems embedded into hardware graphs is key to practical quantum annealers (QA). Embedding chains typically crop up as harmful Griffiths phases, but, as we show here, they can be used as a resource: to balance out singularities in the logical problem by changing its universality class. A smart choice of embedding parameters reduces the annealing time for the random Ising chain from $O(\exp[c\sqrt{N}])$ to $O(N^2)$. This dramatic reduction in time-to-solution for QA is confirmed by numerics, for which we developed a custom integrator to overcome convergence issues.
quantum physics
Plasma-based accelerators (PBAs), having demonstrated the production of GeV electron beams on only centimetre scales, offer a path towards a new generation of highly compact and cost-effective particle accelerators. However, achieving the required beam quality, particularly the energy spread needed for applications such as free-electron lasers, remains a challenge. Here we investigate fundamental sources of energy spread and bunch length in PBAs which arise from the betatron motion of beam electrons. We present an analytical theory, validated against particle-in-cell simulations, which accurately describes these phenomena. A significant impact on the beam quality is predicted for certain configurations, explaining previously observed limitations on the achievable bunch length and energy spread. Guidelines for mitigating these contributions towards high-quality beams are deduced.
physics