text (stringlengths 11 to 9.77k) | label (stringlengths 2 to 104) |
---|---|
Visual localization, i.e., camera pose estimation in a known scene, is a core component of technologies such as autonomous driving and augmented reality. State-of-the-art localization approaches often rely on image retrieval techniques for one of two tasks: (1) provide an approximate pose estimate or (2) determine which parts of the scene are potentially visible in a given query image. It is common practice to use state-of-the-art image retrieval algorithms for these tasks. These algorithms are often trained for the goal of retrieving the same landmark under a large range of viewpoint changes. However, robustness to viewpoint changes is not necessarily desirable in the context of visual localization. This paper focuses on understanding the role of image retrieval for multiple visual localization tasks. We introduce a benchmark setup and compare state-of-the-art retrieval representations on multiple datasets. We show that retrieval performance on classical landmark retrieval/recognition tasks correlates with localization performance only for some but not all tasks. This indicates a need for retrieval approaches specifically designed for localization tasks. Our benchmark and evaluation protocols are available at https://github.com/naver/kapture-localization. | computer science |
We compute $\epsilon$-factorized differential equations for all dimensionally-regularized integrals of the nonplanar hexa-box topology, which contribute for instance to 2-loop 5-point QCD amplitudes. A full set of pure integrals is presented. For 5-point planar topologies, Gram determinants which vanish in $4$ dimensions are used to build compact expressions for pure integrals. Using unitarity cuts and computational algebraic geometry, we obtain a compact IBP system which can be solved in 8 hours on a single CPU core, overcoming a major bottleneck for deriving the differential equations. Alternatively, assuming prior knowledge of the alphabet of the nonplanar hexa-box, we reconstruct analytic differential equations from 30 numerical phase-space points, making the computation almost trivial with current techniques. We solve the differential equations to obtain the values of the master integrals at the symbol level. Full results for the differential equations and solutions are included as supplementary material. | high energy physics theory |
The longstanding Alperin weight conjecture and its blockwise version have been reduced to simple groups recently by Navarro, Tiep, Spaeth and Koshitani. Thus, to prove this conjecture, it suffices to verify the corresponding inductive condition for all finite simple groups. The first step is to establish an equivariant bijection between irreducible Brauer characters and weights for the universal covering groups of simple groups. Assume q is a power of some odd prime p. We first prove the blockwise Alperin weight conjecture for Sp2n(q) and odd non-defining characteristics. If the decomposition matrix of Sp2n(q) is unitriangular with respect to an Aut(Sp2n(q))-stable basic set (this assumption holds for linear primes), we can establish an equivariant bijection between the irreducible Brauer characters and weights. | mathematics |
About 80\% of the mass of the present Universe is made up of the unknown (dark matter), while the rest is made up of ordinary matter. It is a very intriguing question why the {\it mass} densities of dark matter and ordinary matter (mainly baryons) are close to each other. It may be hinting at the identity of dark matter and, furthermore, at the structure of a dark sector. A mirrored world provides a natural explanation for this puzzle. On the other hand, if the mirror-symmetry breaking scale is low, it tends to cause cosmological problems. In this letter, we propose a mirrored unification framework, which breaks mirror-symmetry at the grand unified scale, but still addresses the puzzle. The dark matter mass is strongly related to the dynamical scale of QCD, which explains the closeness of the dark matter and baryon masses. Intermediate-energy portal interactions share the generated asymmetry between the visible and dark sectors. Furthermore, our framework is safe from cosmological issues by providing low-energy portal interactions to release the superfluous entropy of the dark sector into the visible sector. | high energy physics phenomenology |
The recent puzzling results of the XENON1T collaboration at few-keV electronic recoils could be due to the scattering of solar neutrinos endowed with finite Majorana transition magnetic moments (TMMs). Within such a general formalism, we find that the observed excess in the XENON1T data agrees well with this interpretation. The required TMM strengths lie within the limits set by current experiments, such as Borexino, especially when one takes into account a possible tritium contamination. | high energy physics phenomenology |
In this work, we quantify field emission properties of cathodes made from carbon nanotube (CNT) fibers. The cathodes were arranged in different configurations to determine the effect of cathode geometry on the emission properties. Various geometries were investigated, including: 1) flat cut fiber tip, 2) folded fiber, 3) looped fiber, and 4) fibers wound around a cylinder. We employ a custom field emission microscope to quantify I-V characteristics in combination with laterally-resolved field-dependent electron emission area. Additionally, we look at the very early emission stages, starting when a CNT fiber is turned on for the first time, followed by multiple ramp-up/down cycles. Upon the first turn on, all fibers demonstrated limited and discrete emission area. During ramping runs, all CNT fibers underwent multiple (minor and/or major) breakdowns which improved the emission properties: the turn-on field decreased, while the field enhancement factor and emission area both increased. It is proposed that breakdowns are responsible for removing initially undesirable emission sites caused by stray fibers that are higher than average. This initial breakdown process gives way to a larger emission area that is created when the CNT fiber sub-components unfold and align with the electric field. Our results form the basis for careful evaluation of CNT fiber cathodes for dc or low frequency pulsed power systems in which large uniform area emission is required, or for narrow beam high frequency applications in which high brightness is a must. | physics |
In the Hayden-Preskill thought experiment, the Hawking radiation emitted before a quantum state is thrown into the black hole is used along with the radiation collected later for the purpose of decoding the quantum state. A natural question is how the recoverability is affected if the stored early radiation is damaged or subject to decoherence, and/or the decoding protocol is imperfectly performed. We study the recoverability in the thought experiment in the presence of decoherence or noise in the storage of early radiation. | quantum physics |
We show that the uniform motion of a homogeneous distribution of electric charge can be stable or unstable depending on its geometry. When the electrodynamic body is perturbed from a state of rest, it starts to perform fast oscillations, irrespective of the frequency of the perturbation. This nonlinear oscillation is the result of the feedback interaction between Coulombian and radiative fields. The resulting spontaneous symmetry breaking of the Lorentz group implies that the principle of inertia only holds on average and suggests that the default state of matter is not necessarily uniform motion, but self-oscillation as well. We propose that the excitability of electrodynamic bodies under external perturbations, which leads to limit cycle oscillations, is at the basis of the wave particle duality and its related quantum effects. | physics |
The paper is motivated by the analysis of the relationship between ratings and teacher practices and beliefs, which are measured via a set of binary and ordinal items collected by a specific survey with nearly half of the respondents missing. The analysis, which is based on a two-level random effect model, must face two issues concerning the items measuring teacher practices and beliefs: (i) these items are level 2 predictors severely affected by missingness; (ii) there is redundancy in the number of items and the number of categories of their measurement scale. We tackle the first issue by considering a multiple imputation strategy based on information at both level 1 and level 2. For the second issue, we consider regularization techniques for ordinal predictors, also accounting for the multilevel data structure. The proposed solution combines existing methods in an original way to solve the specific problem at hand, but it is generally applicable to settings requiring the selection of predictors affected by missing values. The results obtained with the final model point out that some teacher practices and beliefs are significantly related to ratings about the teacher's ability to motivate students. | statistics |
We apply machine learning to the problem of finding numerical Calabi-Yau metrics. Building on Donaldson's algorithm for calculating balanced metrics on K\"ahler manifolds, we combine conventional curve fitting and machine-learning techniques to numerically approximate Ricci-flat metrics. We show that machine learning is able to predict the Calabi-Yau metric and quantities associated with it, such as its determinant, having seen only a small sample of training data. Using this in conjunction with a straightforward curve fitting routine, we demonstrate that it is possible to find highly accurate numerical metrics much more quickly than by using Donaldson's algorithm alone, with our new machine-learning algorithm decreasing the time required by between one and two orders of magnitude. | high energy physics theory |
String theory on AdS3 backgrounds arises as an IR limit of Little String Theory on NS5-branes. A wide variety of holographic RG flows from the fivebrane theory in the UV to (orbifolds of) AdS3 in the IR is amenable to exact treatment in worldsheet string theory as a class of null-gauged WZW models. The condensate of stringy winding operators which resolves the near-source structure of fivebranes on the Coulomb branch plays a crucial role in AdS3, revealing stringy structure invisible to the supergravity approximation. The D-brane sector contains precursors of the long strings which dominate black hole entropy in the dual spacetime CFT. | high energy physics theory |
We consider Hilbert modular varieties in characteristic p with Iwahori level at p and construct a geometric Jacquet-Langlands relation showing that the irreducible components are isomorphic to products of projective bundles over quaternionic Shimura varieties of level prime to p. We use this to establish a relation between mod p Hilbert and quaternionic modular forms that reflects the representation theory of GL_2 in characteristic p and generalizes a result of Serre for classical modular forms. Finally we study the fibres of the degeneracy map to level prime to p and prove a cohomological vanishing result that is used to associate Galois representations to mod p Hilbert modular forms. | mathematics |
Diffusive transport is characterized by a diffusivity tensor which may, in general, contain both a symmetric and an antisymmetric component. Although the latter is often neglected, we derive Green-Kubo relations showing it to be a general characteristic of random motion breaking time-reversal and parity symmetries, as encountered in chiral active matter. In analogy with the so-called odd viscosity appearing in chiral active fluids, we term this component the odd diffusivity. We show how odd diffusivity emerges in a chiral random walk model, and demonstrate the applicability of the Green-Kubo relations through molecular dynamics simulations of a passive tracer particle diffusing in a chiral active bath. | condensed matter |
The state space structure of tripartite quantum systems is analyzed. In particular, it has been shown that the set of states separable across all the three bipartitions [say $\mathcal{B}^{int}(ABC)$] is a strict subset of the set of states having positive partial transposition (PPT) across the three bipartite cuts [say $\mathcal{P}^{int}(ABC)$] for all the tripartite Hilbert spaces $\mathbb{C}_A^{d_1}\otimes\mathbb{C}_B^{d_2}\otimes\mathbb{C}_C^{d_3}$ with $\min\{d_1,d_2,d_3\}\ge2$. The claim is proved by constructing a state belonging to the set $\mathcal{P}^{int}(ABC)$ but not belonging to $\mathcal{B}^{int}(ABC)$. For $(\mathbb{C}^{d})^{\otimes3}$ with $d\ge3$, the construction follows from a specific type of multipartite unextendible product bases. However, such a construction is not possible for $(\mathbb{C}^{2})^{\otimes3}$ since for any $n$ the bipartite system $\mathbb{C}^2\otimes\mathbb{C}^n$ cannot have any unextendible product bases [Phys. Rev. Lett. 82, 5385 (1999)]. For the $3$-qubit system we, therefore, come up with a different construction. | quantum physics |
Retinal image segmentation plays an important role in automatic disease diagnosis. This task is very challenging because the complex structure and texture information are mixed in a retinal image, and distinguishing the information is difficult. Existing methods handle texture and structure jointly, which may lead to models biased toward recognizing textures and thus result in inferior segmentation performance. To address this, we propose a segmentation strategy that seeks to separate structure and texture components and significantly improve performance. To this end, we design a structure-texture demixing network (STD-Net) that can process structures and textures differently and better. Extensive experiments on two retinal image segmentation tasks (i.e., blood vessel segmentation, optic disc and cup segmentation) demonstrate the effectiveness of the proposed method. | electrical engineering and systems science |
The partial lunar eclipse of July 16, 2019, left the lower part of the Moon illuminated at its maximum phase in Padova (Italy). By occulting it behind distant buildings, it was possible to compare the light of Jupiter and Saturn, de-focused to the same diameter as the Moon, with the light from the umbra. The luminosity of the eclipsed Moon as well as the Danjon index have been estimated and compared with ephemerides. Data from the total lunar eclipses of January 21, 2019, July 27, 2018 and September 28, 2015 are also published. | physics |
Many materials that are out of equilibrium can "learn" one or more inputs that are repeatedly applied. Yet, a common framework for understanding such memories is lacking. Here we construct minimal representations of cyclic memory behaviors as directed graphs, and we construct simple physically-motivated models that produce the same graph structures. We show how a model of worn grass between park benches can produce multiple transient memories---a behavior previously observed in dilute suspensions of particles and charge-density-wave conductors---and the Mullins effect. Isolating these behaviors in our simple model allows us to assess the necessary ingredients for these kinds of memory, and to quantify memory capacity. We contrast these behaviors with a simple Preisach model that produces return-point memory. Our analysis provides a unified method for comparing and diagnosing cyclic memory behaviors across different materials. | condensed matter |
Cui et al. describe the fabrication and characterization of planar pn-junction solar cells based on lead-halide perovskites. The doping densities measured using Hall effect measurements vary from $N_D = 10^{12} cm^{-3}$ to $8\times 10^{12} cm^{-3}$ for the solution-processed n-type layer and $N_A = 8\times 10^9 cm^{-3}$ for the evaporated p-type layer. While these devices outperform their supposedly un-doped counterparts, the results raise three important questions: (i) Are the reported doping densities high enough to change the electrostatic potential distribution in the device from that of the un-doped ones, (ii) are the doping densities high enough for the pn-junction to remain intact under typical photovoltaic operation conditions and (iii) is a pn-junction beneficial for photovoltaic performance given the typical properties of lead-halide perovskites? | physics |
Prediction of user traffic in cellular networks has attracted profound attention for improving resource utilization. In this paper, we study the problem of network traffic prediction and classification by employing standard machine learning and statistical learning time series prediction methods, including long short-term memory (LSTM) and autoregressive integrated moving average (ARIMA), respectively. We present an extensive experimental evaluation of the designed tools over a real network traffic dataset. Within this analysis, we explore the impact of different parameters on the effectiveness of the predictions. We further extend our analysis to the problem of network traffic classification and prediction of traffic bursts. The results, on the one hand, demonstrate superior performance of LSTM over ARIMA in general, especially when the length of the training time series is high enough, and it is augmented by a wisely-selected set of features. On the other hand, the results shed light on the circumstances in which ARIMA performs close to the optimal with lower complexity. | computer science |
We investigate the light-cone-like spread of electronic correlations in a laser-driven quantum chain. Using the time-dependent density matrix renormalization group, we show that high-frequency driving leads to a Floquet-engineered spread velocity that determines the enhancement of density-density correlations when the ratio of potential and kinetic energies is effectively increased by either a continuous or a pulsed drive. For large times we numerically show the existence of a Floquet steady state at not too long distances on the lattice with minimal heating. Intriguingly, we find a discontinuity of dynamically scaled correlations at the edge of the light cone, akin to the discontinuity known to exist for quantum quenches in Luttinger liquids. Our work demonstrates the potential of pump-probe experiments for investigating light-induced correlations in low-dimensional materials and puts quantitative speed limits on the manipulation of long-ranged correlations through Floquet engineering. | condensed matter |
In this contribution we deal with the problem of learning an undirected graph which encodes the conditional dependence relationship between variables of a complex system, given a set of observations of this system. This is a central problem of modern data analysis and it arises every time we want to investigate a deeper relationship between random variables, which is different from the classical dependence usually measured by the covariance. In particular, in this contribution we deal with the case of Gaussian Graphical Models (GGMs), for which the system of variables has a multivariate Gaussian distribution. We review the existing techniques for this problem and propose a smart implementation of the symmetric parallel regression technique, which turns out to be very competitive for learning sparse GGMs in the high-dimensional data regime. | statistics |
Linear response theory (LRT) is one of the main approaches to the dynamics of quantum many-body systems. However, this approach has limitations and requires, e.g., that the initial state is (i) mixed and (ii) close to equilibrium. In this paper, we discuss these limitations and study the nonequilibrium dynamics for a certain class of properly prepared initial states. Specifically, we consider thermal states of the quantum system in the presence of an additional static force which, however, become nonequilibrium states when this static force is eventually removed. While for weak forces the relaxation dynamics is well captured by LRT, much less is known in the case of strong forces, i.e., initial states far away from equilibrium. Summarizing our main results, we unveil that, for high temperatures, the nonequilibrium dynamics of so-called binary operators is always generated by an equilibrium correlation function. In particular, this statement holds true for states in the far-from-equilibrium limit, i.e., outside the linear response regime. In addition, we confirm our analytical results by numerically studying the dynamics of local fermionic occupation numbers and local energy densities in the spin-1/2 Heisenberg chain. Remarkably, these simulations also provide evidence that our results qualitatively apply in a more general setting, e.g., in the anisotropic XXZ model where the local energy is a non-binary operator, as well as for a wider range of temperature. Furthermore, exploiting the concept of quantum typicality, all of our findings are not restricted to mixed states, but are valid for pure initial states as well. | condensed matter |
In this work, we experimentally demonstrate an integrated circuit (IC) of 30 relaxation oscillators with reconfigurable capacitive coupling to solve the NP-Hard Maximum Cut (Max-Cut) problem. We show that under the influence of an external second-harmonic injection signal, the oscillator phases exhibit a bi-partition which can be used to calculate a high-quality approximate Max-Cut solution. Leveraging the all-to-all reconfigurable coupling architecture, we experimentally evaluate the computational properties of the oscillators using randomly generated graph instances of varying size and edge density. Further, comparing the Max-Cut solutions with the optimal values, we show that the oscillators (after simple post-processing) produce a Max-Cut that is within 99% of the optimal value in 28 of the 36 measured graphs; importantly, the oscillators are particularly effective in dense graphs with the Max-Cut being optimal in seven out of nine measured graphs with edge density 0.8. Our work marks a step towards creating an efficient, room-temperature-compatible non-Boolean hardware-based solver for hard combinatorial optimization problems. | physics |
In this paper, we use a reconfigurable intelligent surface (RIS) to enhance the radar sensing and communication capabilities of a mmWave dual function radar communication system. To simultaneously localize the target and to serve the user, we propose to adaptively partition the RIS by reserving separate RIS elements for sensing and communication. We design a multi-stage hierarchical codebook to localize the target while ensuring a strong communication link to the user. We also present a method to choose the number of times to transmit the same beam in each stage to achieve a desired target localization probability of error. The proposed algorithm typically requires fewer transmissions than an exhaustive search scheme to achieve a desired target localization probability of error. Furthermore, the average spectral efficiency of the user with the proposed algorithm is found to be comparable to that of a RIS-assisted MIMO communication system without sensing capabilities and is much better than that of traditional MIMO systems without RIS. | electrical engineering and systems science |
This paper considers an angle-domain intelligent reflecting surface (IRS) system. We derive maximum likelihood (ML) estimators for the effective angles from the base station (BS) to the user and the effective angles of propagation from the IRS to the user. It is demonstrated that the accuracy of the estimated angles improves with the number of BS antennas. Also, deploying the IRS closer to the BS increases the accuracy of the estimated angle from the IRS to the user. Then, based on the estimated angles, we propose a joint optimization of BS beamforming and IRS beamforming, which achieves similar performance to two benchmark algorithms based on full CSI and the multiple signal classification (MUSIC) method respectively. Simulation results show that the optimized BS beam becomes more focused towards the IRS direction as the number of reflecting elements increases. Furthermore, we derive a closed-form approximation, upper bound and lower bound for the achievable rate. The analytical findings indicate that the achievable rate can be improved by increasing the number of BS antennas or reflecting elements. Specifically, the BS-user link and the BS-IRS-user link can obtain power gains of order $N$ and $NM^2$, respectively, where $N$ is the antenna number and $M$ is the number of reflecting elements. | computer science |
Novel leptophilic neutral currents can be tested at upcoming neutrino oscillation experiments using two complementary processes, neutrino trident production and neutrino-electron ($\nu-e$) elastic scattering. Considering generic anomaly-free $U(1)$ extensions of the Standard Model, we discuss the characteristics of $\nu-e$ scattering as well as $e^+e^-$ and $\mu^+\mu^-$ trident production at the DUNE near detector in the presence of such BSM scenarios. We then determine the sensitivity of DUNE in constraining the well-known $L_e - L_\mu$ and $L_\mu - L_\tau$ models. We conclude that DUNE will be able to probe these leptophilic models with unprecedented sensitivity, covering unprobed explanations of the $(g-2)_\mu$ discrepancy. | high energy physics phenomenology |
We study string-inspired two-field models of large-field inflation based on axion monodromy in the presence of an interacting heavier modulus. This class of models has enough structure to approximate at least part of the backreaction effects known in full string theory, such as kinetic mixing with the axion, and flattening of the scalar potential. Yet, it is simple enough to fully describe the structure of higher-point curvature perturbation interactions driven by the adjusting modulus backreaction dynamics. We find that the presence of the heavy modulus can be described via two equivalent effective field theories, both of which can incorporate reductions of the speed of sound. Hence, the presence of heavier moduli in axion monodromy inflation constructions will necessarily generate some amount of non-Gaussianity accompanied by changes to $n_s$ and $r$ beyond what results just from the well-known adiabatic flattening backreaction. | high energy physics theory |
The present study experimentally and numerically investigates the evaporation and resultant patterns of dried deposits of aqueous colloidal sessile droplets, when the droplets are initially elevated to a high temperature before being placed on a substrate held at ambient temperature. The system is then released for natural evaporation without applying any external perturbation. Infrared thermography and optical profilometry were used as essential tools for interfacial temperature measurements and quantification of the coffee-ring dimensions, respectively. Initially, a significant temperature gradient exists along the liquid-gas interface as soon as the droplet is deposited on the substrate which triggers a Marangoni stress-induced recirculation flow directed from the top of the droplet towards the contact line along the liquid-gas interface. Thus, the flow is in the reverse direction to that seen in the conventional substrate heating case. Interestingly, this temperature gradient decays rapidly -- within the first 10% of the total evaporation time and the droplet-substrate system reaches thermal equilibrium with ambient thereafter. Despite fast decay of the temperature gradient, the coffee-ring dimensions significantly diminish, leading to an inner deposit. This suppression of the coffee-ring effect is attributed to the fact that the initial Marangoni stress-induced recirculation flow continues until the last stage of the evaporation, even after the interfacial temperature gradient vanishes. This is essentially a consequence of liquid inertia. Overall, together with a new experimental condition, the present investigation discloses a distinct nature of Marangoni stress-induced flow in the drying droplet and its role in influencing the associated colloidal deposits, which was not explored previously. | physics |
Confining dark sectors at the GeV scale can lead to novel collider signatures including those termed emerging jets with large numbers of displaced vertices. The triggers at the LHC experiments were not designed with this type of new physics in mind, and triggering can be challenging, especially if the mediator is relatively light and/or has quantum numbers such that additional jets are not automatically produced in each event. We show that the efficiency and the total event rate at current triggers can be significantly improved by considering initial state radiation of the events, with the largest increase in rate coming from simulation of two additional jets. We also explore possible new triggers that employ hit counts in different tracker layers as input into a machine learning algorithm. We show that these new triggers can have reasonably low background rates, and that they are sensitive to a wide range of new physics parameters even when trained on a single model. | high energy physics phenomenology |
The use of state estimation techniques offers a means of inferring the rotor-effective wind speed based solely upon standard measurements of the turbine. For ease of design and computational concerns, such estimators are typically built based upon simplified turbine models that characterise the turbine with rigid blades. Large model mismatch, particularly in the power coefficient, could lead to degradation in estimation performance. Therefore, in order to effectively reduce the adverse impact of parameter uncertainties in the estimator model, this paper develops a wind speed estimator based on the concept of interacting multiple-model adaptive estimation. The proposed estimator is composed of a bank of extended Kalman filters and each filter model is developed based on a different power coefficient mapping to match the operating turbine parameter. Subsequently, the algorithm combines the wind speed estimates provided by each filter based on their statistical properties. In addition, the proposed estimator not only can infer the rotor-effective wind speed, but also the uncertain system parameters, namely, the power coefficient. Simulation results demonstrate that the proposed estimator achieves a clear improvement in estimating the rotor-effective wind speed and power coefficient compared to the standard Kalman filter approach. | electrical engineering and systems science |
Collective excitations in liquids are important for understanding liquid dynamical and thermodynamic properties. Gapped momentum states (GMS) are a notable feature of liquid dynamics predicted to operate in the transverse sector of collective excitations. Here, we combine inelastic neutron scattering experiments, theory and molecular dynamics modelling to study collective excitations and GMS in liquid Ga in a wide range of temperature and $k$-points. We find that all three lines of enquiry agree for the longitudinal sector of liquid dynamics. In the transverse sector, the experiments agree with theory, modelling as well as earlier X-ray experiments at larger $k$, whereas theory and modelling agree in a wide range of temperature and $k$-points. We observe the emergence and development of the $k$-gap in the transverse sector which increases with temperature and inverse of relaxation time as predicted theoretically. | condensed matter |
How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as $n^{-\beta}$ where $n$ is the number of training examples and $\beta$ an exponent that depends on both data and algorithm. In this work we measure $\beta$ when applying kernel methods to real datasets. For MNIST we find $\beta\approx 0.4$ and for CIFAR10 $\beta\approx 0.1$, for both regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we study the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. With a simplifying assumption -- namely that the data are sampled from a regular lattice -- we derive analytically $\beta$ for translation invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, $\beta$ depends only on the smoothness and dimension of the training data. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, the test error is found to be controlled by the magnitude of the projection of the true function on the kernel eigenvectors whose rank is larger than $n$. Using this idea, we relate the exponent $\beta$ to an exponent $a$ describing how the coefficients of the true function in the eigenbasis of the kernel decay with rank. We extract $a$ from real data by performing kernel PCA, leading to $\beta\approx0.36$ for MNIST and $\beta\approx0.07$ for CIFAR10, in good agreement with observations. We argue that these rather large exponents are possible due to the small effective dimension of the data. | statistics |
Without relevant human priors, neural networks may learn uninterpretable features. We propose Dynamics of Attention for Focus Transition (DAFT) as a human prior for machine reasoning. DAFT is a novel method that regularizes attention-based reasoning by modelling it as a continuous dynamical system using neural ordinary differential equations. As a proof of concept, we augment a state-of-the-art visual reasoning model with DAFT. Our experiments reveal that applying DAFT yields similar performance to the original model while using fewer reasoning steps, showing that it implicitly learns to skip unnecessary steps. We also propose a new metric, Total Length of Transition (TLT), which represents the effective reasoning step size by quantifying how much a given model's focus drifts while reasoning about a question. We show that adding DAFT results in lower TLT, demonstrating that our method indeed obeys the human prior towards shorter reasoning paths in addition to producing more interpretable attention maps. Our code is available at https://github.com/kakao/DAFT. | statistics |
Quantum-simulator hardware promises new insights into problems from particle and nuclear physics. A major challenge is to reproduce gauge invariance, as violations of this quintessential property of lattice gauge theories can have dramatic consequences, e.g., the generation of a photon mass in quantum electrodynamics. Here, we introduce an experimentally friendly method to protect gauge invariance in $\mathrm{U}(1)$ lattice gauge theories against coherent errors in a controllable way. Our method employs only single-body energy-penalty terms, thus enabling practical implementations. As we derive analytically, some sets of penalty coefficients render undesired gauge sectors inaccessible by unitary dynamics for exponentially long times, and, for few-body error terms, with resources independent of system size. These findings constitute an exponential improvement over previously known results from energy-gap protection or perturbative treatments. In our method, the gauge-invariant subspace is protected by an emergent global symmetry, meaning it can be immediately applied to other symmetries. In our numerical benchmarks for continuous-time and digital quantum simulations, gauge protection holds for all calculated evolution times (up to $t>10^{10}/J$ for continuous time, with $J$ the relevant energy scale). Crucially, our gauge-protection technique is simpler to realize than the associated ideal gauge theory, and can thus be readily implemented in current ultracold-atom analog simulators as well as digital noisy intermediate scale quantum (NISQ) devices. | quantum physics |
We study the sample complexity of optimizing "hill-climbing friendly" functions defined on a graph under noisy observations. We define a notion of convexity, and we show that a variant of best-arm identification can find a near-optimal solution after a small number of queries that is independent of the size of the graph. For functions that have local minima and are nearly convex, we establish a sample complexity bound for classical simulated annealing under noisy observations. We show the effectiveness of the greedy algorithm with restarts and of simulated annealing on problems of graph-based nearest neighbor classification as well as a web document re-ranking application. | computer science |
Android devices are shipped in several flavors by more than 100 manufacturer partners, which extend the Android "vanilla" OS with new system services, and modify the existing ones. These proprietary extensions expose Android devices to reliability and security issues. In this paper, we propose a coverage-guided fuzzing platform (Chizpurfle) based on evolutionary algorithms to test proprietary Android system services. A key feature of this platform is the ability to profile coverage on the actual, unmodified Android device, by taking advantage of dynamic binary re-writing techniques. We applied this solution on three high-end commercial Android smartphones. The results confirmed that evolutionary fuzzing is able to test Android OS system services more efficiently than blind fuzzing. Furthermore, we evaluate the impact of different choices for the fitness function and selection algorithm. | computer science |
New gauge symmetries often appear in theories beyond the Standard Model. Here we study a model where lepton number is promoted to a gauge symmetry. Anomaly cancellation requires the introduction of additional leptons, the lightest of which is a natural leptophilic dark matter candidate. We perform a comprehensive study of both collider and dark matter phenomenology. Furthermore we find that the model exhibits a first order lepton number breaking phase transition in large regions of parameter space. The corresponding gravitational wave signal is computed, and its detectability at LISA and other future GW detectors assessed. Finally we comment on the complementarity of dark matter, collider and gravitational wave observables, and on the potential reach of future colliders. | high energy physics phenomenology |
This paper is devoted to giving a complete unified study of several weak forms of the $\partial\bar{\partial}$-Lemma on compact complex manifolds. | mathematics |
Weight pruning has been widely acknowledged as a straightforward and effective method to eliminate redundancy in Deep Neural Networks (DNN), thereby achieving acceleration on various platforms. However, most pruning techniques are essentially trade-offs between model accuracy and regularity, which lead to impaired inference accuracy and limited on-device acceleration performance. To solve the problem, we introduce a new sparsity dimension, namely pattern-based sparsity, which comprises pattern and connectivity sparsity and is both highly accurate and hardware friendly. With carefully designed patterns, the proposed pruning unprecedentedly and consistently achieves accuracy enhancement and better feature extraction ability on different DNN structures and datasets, and our pattern-aware pruning framework also achieves pattern library extraction, pattern selection, pattern and connectivity pruning and weight training simultaneously. Our approach on the new pattern-based sparsity naturally fits into compiler optimization for highly efficient DNN execution on mobile platforms. To the best of our knowledge, it is the first time that mobile devices achieve real-time inference for large-scale DNN models thanks to the unique spatial property of pattern-based sparsity and the help of the code generation capability of compilers. | computer science |
Community structure is one of the most relevant features encountered in numerous real-world applications of networked systems. Despite the tremendous effort of scientists working on this subject over the past few decades to characterize, model, and analyze communities, more investigations are needed to better understand the impact of community structure and its dynamics on networked systems. Here, we first focus on generative models of communities in complex networks and their role in developing strong foundation for community detection algorithms. We discuss modularity and the use of modularity maximization as the basis for community detection. Then, we overview the Stochastic Block Model, its different variants, and inference of community structures from such models. Next, we focus on time evolving networks, where existing nodes and links can disappear and/or new nodes and links may be introduced. The extraction of communities under such circumstances poses an interesting and non-trivial problem that has gained considerable interest over the last decade. We briefly discuss considerable advances made in this field recently. Finally, we focus on immunization strategies essential for targeting the influential spreaders of epidemics in modular networks. Their main goal is to select and immunize a small proportion of individuals from the whole network to control the diffusion process. Various strategies have emerged over the years suggesting different ways to immunize nodes in networks with overlapping and non-overlapping community structure. We first discuss stochastic strategies that require little or no information about the network topology at the expense of their performance. Then, we introduce deterministic strategies that have proven to be very efficient in controlling the epidemic outbreaks, but require complete knowledge of the network. | physics |
How can we collect and use a video dataset to further improve spatiotemporal 3D Convolutional Neural Networks (3D CNNs)? In order to positively answer this open question in video recognition, we have conducted an exploration study using a couple of large-scale video datasets and 3D CNNs. In the early era of deep neural networks, 2D CNNs have been better than 3D CNNs in the context of video recognition. Recent studies revealed that 3D CNNs can outperform 2D CNNs trained on a large-scale video dataset. However, we heavily rely on architecture exploration instead of dataset consideration. Therefore, in the present paper, we conduct exploration study in order to improve spatiotemporal 3D CNNs as follows: (i) Recently proposed large-scale video datasets help improve spatiotemporal 3D CNNs in terms of video classification accuracy. We reveal that a carefully annotated dataset (e.g., Kinetics-700) effectively pre-trains a video representation for a video classification task. (ii) We confirm the relationships between #category/#instance and video classification accuracy. The results show that #category should initially be fixed, and then #instance is increased on a video dataset in case of dataset construction. (iii) In order to practically extend a video dataset, we simply concatenate publicly available datasets, such as Kinetics-700 and Moments in Time (MiT) datasets. Compared with Kinetics-700 pre-training, we further enhance spatiotemporal 3D CNNs with the merged dataset, e.g., +0.9, +3.4, and +1.1 on UCF-101, HMDB-51, and ActivityNet datasets, respectively, in terms of fine-tuning. (iv) In terms of recognition architecture, the Kinetics-700 and merged dataset pre-trained models increase the recognition performance to 200 layers with the Residual Network (ResNet), while the Kinetics-400 pre-trained model cannot successfully optimize the 200-layer architecture. | computer science |
We present LM-Reloc -- a novel approach for visual relocalization based on direct image alignment. In contrast to prior works that tackle the problem with a feature-based formulation, the proposed method does not rely on feature matching and RANSAC. Hence, the method can utilize not only corners but any region of the image with gradients. In particular, we propose a loss formulation inspired by the classical Levenberg-Marquardt algorithm to train LM-Net. The learned features significantly improve the robustness of direct image alignment, especially for relocalization across different conditions. To further improve the robustness of LM-Net against large image baselines, we propose a pose estimation network, CorrPoseNet, which regresses the relative pose to bootstrap the direct image alignment. Evaluations on the CARLA and Oxford RobotCar relocalization tracking benchmark show that our approach delivers more accurate results than previous state-of-the-art methods while being comparable in terms of robustness. | computer science |
The prompt $J/\psi$ photoproduction within the non-relativistic QCD (NRQCD) framework at the future Circular Electron Positron Collider (CEPC) is studied, including the contributions from both direct and resolved photons. Employing different sets of long-distance matrix elements, we find that the total cross section is dominated by the color-octet channel. We present different kinematic distributions of $J/\psi$ production and the results show that there will be about 50 $J/\psi$ events when the transverse momentum of $J/\psi$ is up to 20 GeV. This indicates that $J/\psi$ photoproduction at the CEPC is a good laboratory for testing NRQCD and further clarifying the universality problem in NRQCD between electron-positron and hadron colliders. | high energy physics phenomenology |
Non-Fermi liquids (NFL) are a class of strongly interacting gapless fermionic systems without long-lived quasiparticle excitations. An important group of NFL models features itinerant fermions coupled to soft bosonic fluctuations near a quantum-critical point (QCP), and is widely believed to capture the essential physics of many unconventional superconductors. However, the direct numerical observation of canonical NFL behavior in such systems, characterized by a power-law form of the Green's function, has been elusive. Here we consider a Sachdev-Ye-Kitaev (SYK)-like model with random Yukawa interaction between critical bosons and fermions (dubbed Yukawa-SYK model). We show it is immune to the minus-sign problem and hence can be solved exactly via large-scale quantum Monte Carlo simulation beyond the large-$N$ limit accessible to analytical approaches. Our simulation demonstrates that the Yukawa-SYK model features "self-tuned quantum criticality", namely the system is critical independent of the bosonic bare mass. We put these results to the test at finite $N$, and our unbiased numerics reveal clear evidence of these exotic quantum-critical NFL properties -- the power-law behavior in the Green's functions of fermions and bosons -- which propels the theoretical understanding of critical Planckian metals and unconventional superconductors. | condensed matter |
We consider the deformations of a supersymmetric quantum field theory by adding spacetime-dependent terms to the action. We propose to describe the renormalization of such deformations in terms of some cohomological invariants, a class of solutions of a Maurer-Cartan equation. We consider the strongly coupled limit of $N=4$ supersymmetric Yang-Mills theory. In the context of the AdS/CFT correspondence, we explain what corresponds to our invariants in classical supergravity. There is a leg amputation procedure, which constructs a solution of the Maurer-Cartan equation from tree diagrams of SUGRA. We consider a particular example of the beta-deformation. It is known that the leading term of the beta-function is cubic in the parameter of the beta-deformation. We give a cohomological interpretation of this leading term. We conjecture that it is actually encoded in some simpler cohomology class, which is quadratic in the parameter of the beta-deformation. | high energy physics theory |
We consider a renormalizable theory, which successfully explains the number of Standard Model (SM) fermion families and whose non-SM scalar sector includes an axion dark matter candidate as well as a field responsible for cosmological inflation. In such a theory, the axion gets its mass via radiative corrections at one-loop level mediated by the virtual top quark, right handed Majorana neutrinos and SM gauge bosons. Its mass is obtained in the range $4$ keV$\div$ $40$ keV, consistent with the one predicted by the XENON1T experiment, when the right handed Majorana neutrino mass is varied from $100$ GeV up to $350$ GeV, thus implying that the light active neutrino masses are generated from a low scale type I seesaw mechanism. Furthermore, the theory under consideration can also successfully accommodate the XENON1T excess provided that the PQ symmetry is spontaneously broken at the $10^{10}$ GeV scale. | high energy physics phenomenology |
Moment-based sufficient dimension reduction methods such as sliced inverse regression may not work well in the presence of heteroscedasticity. We propose to first estimate the expectiles through kernel expectile regression, and then carry out dimension reduction based on random projections of the regression expectiles. Several popular inverse regression methods in the literature are extended under this general framework. The proposed expectile-assisted methods outperform existing moment-based dimension reduction methods in both numerical studies and an analysis of the Big Mac data. | statistics |
At the onset of an interaction between two initially independent systems, each system tends to experience an increase in its n-Renyi entropies, such as its von Neumann entropy (n = 1) and its mixedness (n = 2). We here ask which properties of a system determine how quickly its Renyi entropies increase and, therefore, how sensitive the system is to becoming entangled. We find that the rate at which the n-Renyi entropy increases in an interaction is determined by a quantity which we term the n-fragility of the system. The 2-fragility is closely related to the notion of 2-norm coherence, in that it too quantifies the extent to which a density matrix is off-diagonal with respect to the eigenbasis of a reference operator. Nevertheless, the 2-fragility is not a coherence monotone in the resource theoretic sense since it depends also on the eigenvalues of the reference operator. It is this additional sensitivity to the eigenvalues of the reference operator, here the interaction Hamiltonian, which enables the 2-fragility to quantify the rate of entropy production in interactions. We give an example using the light-matter interaction and we anticipate applications to the study of the rates at which two systems exchange classical and quantum information when starting to interact. | quantum physics |
Reproducible research in Machine Learning has seen a salutary abundance of progress lately: workflows, transparency, and statistical analysis of validation and test performance. We build on these efforts and take them further. We offer a principled experimental design methodology, based on linear mixed models, to study and separate the effects of multiple factors of variation in machine learning experiments. This approach allows us to account for the effects of architecture, optimizer, hyper-parameters, intentional randomization, as well as unintended lack of determinism across reruns. We illustrate that methodology by analyzing Matching Networks, Prototypical Networks and TADAM on the miniImagenet dataset. | statistics |
A connected graph is called a bi-block graph if each of its blocks is a complete bipartite graph. Let $\mathcal{B}(\mathbf{k}, \alpha)$ be the class of bi-block graphs on $\mathbf{k}$ vertices with given independence number $\alpha$. It is easy to see that every bi-block graph is a bipartite graph. For a bipartite graph $G$ on $\mathbf{k}$ vertices, the independence number $\alpha(G)$ satisfies $\left\lceil\frac{\mathbf{k}}{2}\right\rceil \leq \alpha(G) \leq \mathbf{k}-1$. In this article, we prove that the maximum spectral radius $\rho(G)$ among all graphs $G$ in $\mathcal{B}(\mathbf{k}, \alpha)$ is uniquely attained for the complete bipartite graph $K_{\alpha, \mathbf{k}-\alpha}$. | mathematics |
We report NuSTAR and Chandra observations of two X-ray transients, SWIFT J174540.7$-$290015 (T15) and SWIFT J174540.2$-$290037 (T37), which were discovered by the Neil Gehrels Swift Observatory in 2016 within $r\sim1$ pc of Sgr A*. NuSTAR detected bright X-ray outbursts from T15 and T37, likely in the soft and hard states, with 3-79~keV luminosities of $8\times10^{36}$ and $3\times10^{37}$ erg/s, respectively. No X-ray outbursts have previously been detected from the two transients and our Chandra ACIS analysis puts an upper limit of $L_X \lesssim 2 \times10^{31}$ erg/s on their quiescent 2-8 keV luminosities. No pulsations, significant QPOs, or type I X-ray bursts were detected in the NuSTAR data. While T15 exhibited no significant red noise, the T37 power density spectra are well characterized by three Lorentzian components. The declining variability of T37 above $\nu \sim 10$ Hz is typical of black hole (BH) transients in the hard state. NuSTAR spectra of both transients exhibit a thermal disk blackbody, X-ray reflection with broadened Fe atomic features, and a continuum component well described by Comptonization models. Their X-ray reflection spectra are most consistent with high BH spin ($a_{*} \gtrsim 0.9$) and large disk density ($n_e\sim10^{21}$ cm$^{-3}$). Based on the best-fit ionization parameters and disk densities, we found that X-ray reflection occurred near the inner disk radius, which was derived from the relativistic broadening and thermal disk component. These X-ray characteristics suggest the outbursting BH-LMXB scenario for both transients and yield the first BH spin measurements from X-ray transients in the central 100 pc region. | astrophysics |
AC optimal power flow (AC-OPF) problems need to be solved more frequently in the future to maintain stable and economic operation. To tackle this challenge, a deep neural network-based voltage-constrained approach (DeepOPF-V) is proposed to find feasible solutions with high computational efficiency. It predicts the voltages of all buses and then uses them to obtain all remaining variables. A fast post-processing method is developed to enforce generation constraints. The effectiveness of DeepOPF-V is validated by case studies of several IEEE test systems. Compared with existing approaches, DeepOPF-V achieves a state-of-the-art computational speedup of up to three orders of magnitude and has better performance in preserving the feasibility of the solution. | electrical engineering and systems science |
We give bounds on the primes of geometric bad reduction for curves of genus three of primitive CM type in terms of the CM orders. In the case of genus one, there are no primes of geometric bad reduction because CM elliptic curves are CM abelian varieties, which have potential good reduction everywhere. However, for genus at least two, the curve can have bad reduction at a prime although the Jacobian has good reduction. Goren and Lauter gave the first bound in the case of genus two. In the cases of hyperelliptic and Picard curves, our results imply bounds on primes appearing in the denominators of invariants and class polynomials, which are important for algorithmic construction of curves with given characteristic polynomials over finite fields. | mathematics |
Causal nonseparability refers to processes where events take place in a coherent superposition of different causal orders. These may be the key resource for experimental violations of causal inequalities and have been recently identified as resources for concrete information-theoretic tasks. Here, we take a step forward by deriving a complete operational framework for causal nonseparability as a resource. Our first contribution is a formal definition of quantum control of causal orders, a stronger form of causal nonseparability (with the celebrated quantum switch as best-known example) where the causal orders of events for a target system are coherently controlled by a control system. We then build a resource theory -- for both generic causal nonseparability and quantum control of causal orders -- with a physically-motivated class of free operations, based on process-matrix concatenations. We present the framework explicitly for the setting with a control register. However, our machinery is versatile, being applicable also to scenarios with a target register alone. Moreover, an important subclass of our operations not only is free with respect to causal nonseparability and quantum control of causal orders but also preserves the very causal structure of causal processes. Hence, our treatment contains, as a built-in feature, the basis of a resource theory of quantum causal networks too. As applications, first, we establish a sufficient condition for pure-process free convertibility. This imposes a hierarchy of quantum control of causal orders with the quantum switch at the top. Second, we prove that causal-nonseparability distillation exists, i.e. we show how to convert multiple copies of a process with arbitrarily little causal nonseparability into fewer copies of a quantum switch. Our findings reveal conceptually new, unexpected phenomena, with both fundamental and practical implications. | quantum physics |
Ginzburg-Landau theory of continuous phase transitions implicitly assumes that microscopic changes are negligible in determining the thermodynamic properties of the system. In this work we provide an example that clearly contrasts with this assumption. We show that topological frustration can change the nature of a second order quantum phase transition separating two different ordered phases. Even more remarkably, frustration is triggered simply by a suitable choice of boundary conditions in a 1D chain. While with every other boundary condition each of the two phases is characterized by its own local order parameter, with frustration no local order can survive. We construct string order parameters to distinguish the two phases, but, having proved that topological frustration is capable of altering the nature of a system's phase transition, our results pose a clear challenge to the current understanding of phase transitions in complex quantum systems. | condensed matter
The odd diagram of a permutation is a subset of the classical diagram with additional parity conditions. In this paper, we study classes of permutations with the same odd diagram, which we call odd diagram classes. First, we prove a conjecture relating odd diagram classes and 213- and 312-avoiding permutations. Secondly, we show that each odd diagram class is a Bruhat interval. Instrumental to our proofs is an explicit description of the Bruhat edges that link permutations in a class. | mathematics |
Case-control designs are an important tool in contrasting the effects of well-defined treatments. In this paper, we reconsider classical concepts, assumptions and principles and explore when the results of case-control studies can be endowed with a causal interpretation. Our focus is on identification of target causal quantities, or estimands. We cover various estimands relating to intention-to-treat or per-protocol effects for popular sampling schemes (case-base, survivor, and risk-set sampling), each with and without matching. Our approach may inform future research on different estimands, other variations of the case-control design or settings with additional complexities. | statistics
The technological prototype of the CALICE highly granular silicon-tungsten electromagnetic calorimeter (SiW-ECAL) was tested in a beam at DESY in 2017. The setup comprised seven layers of silicon sensors. Each layer comprised four sensors, with each sensor containing an array of 256 $5.5\times5.5$ mm$^2$ silicon PIN diodes. The four sensors covered a total area of $18\times18$ cm$^2$, and comprised a total of 1024 channels. The readout was split into a trigger line and a charge signal line. Key performance results for signal over noise for the two output lines are presented, together with a study of the uniformity of the detector response. Measurements of the response to electrons for the tungsten loaded version of the detector are also presented. | physics |
The key challenge in cross-modal retrieval is to find similarities between objects represented with different modalities, such as image and text. However, the embeddings of each modality stem from unrelated feature spaces, which causes the notorious 'heterogeneity gap'. Currently, many cross-modal systems try to bridge the gap with self-attention. However, self-attention has been widely criticized for its quadratic complexity, which prevents its use in many real-life applications. In response to this, we propose T-EMDE - a neural density estimator inspired by the recently introduced Efficient Manifold Density Estimator (EMDE) from the area of recommender systems. EMDE operates on sketches - representations especially suitable for multimodal operations. However, EMDE is non-differentiable and ingests precomputed, static embeddings. With T-EMDE we introduce a trainable version of EMDE which allows full end-to-end training. In contrast to self-attention, the complexity of our solution is linear in the number of tokens/segments. As such, T-EMDE is a drop-in replacement for the self-attention module, with beneficial influence on both speed and metric performance in cross-modal settings. It facilitates communication between modalities, as each global text/image representation is expressed with a standardized sketch histogram which represents the same manifold structures irrespective of the underlying modality. We evaluate T-EMDE by introducing it into two recent cross-modal SOTA models and achieving new state-of-the-art results on multiple datasets and decreasing model latency by up to 20%. | statistics
Coating metal nanotips with a negative electron affinity material like hydrogen-terminated diamond bears promise for a high brightness photocathode. We report a recipe on the fabrication of diamond coated tungsten tips. A tungsten wire is etched electrochemically to a nanometer sharp tip, dip-seeded in diamond suspension and subsequently overgrown with a diamond film by plasma-enhanced chemical vapor deposition. With dip-seeding only, the seeding density declines towards the tip apex due to seed migration during solvent evaporation. The migration of seeds can be counteracted by nitrogen gas flow towards the apex, which makes coating of the apex with nanometer-thin diamond possible. At moderate gas flow, diamond grows homogeneously at shaft and apex whereas at high flow diamond grows in the apex region only. With this technique, we achieve a thickness of a few tens of nanometers of diamond coating within less than 1 $\mu$m away from the apex. Conventional transmission electron microscopy (TEM), electron diffraction and electron energy loss spectroscopy confirm that the coating is composed of dense nanocrystalline diamond with a typical grain size of 20 nm. High resolution TEM reveals graphitic paths between the diamond grains. | condensed matter |
To study how mental object representations are related to behavior, we estimated sparse, non-negative representations of objects using human behavioral judgments on images representative of 1,854 object categories. These representations predicted a latent similarity structure between objects, which captured most of the explainable variance in human behavioral judgments. Individual dimensions in the low-dimensional embedding were found to be highly reproducible and interpretable as conveying degrees of taxonomic membership, functionality, and perceptual attributes. We further demonstrated the predictive power of the embeddings for explaining other forms of human behavior, including categorization, typicality judgments, and feature ratings, suggesting that the dimensions reflect human conceptual representations of objects beyond the specific task. | statistics |
Previous research has found that voices can provide reliable information for gender classification with a high level of accuracy. In social psychology, perceived vocal masculinity and femininity have often been considered important features influencing social behaviours. While previous studies have characterised acoustic features that contributed to perceivers' judgements of speakers' vocal masculinity or femininity, there is limited research on building an objective masculinity/femininity scoring model and characterizing the independent acoustic factors that contribute to the judgements of speakers' vocal masculinity or femininity. In this work, we first propose an objective masculinity/femininity scoring system based on the Extreme Random Forest and then characterize the independent and meaningful acoustic factors contributing to perceivers' judgements by using a correlation matrix based hierarchical clustering method. The results show that the objective masculinity/femininity ratings correlate strongly with the perceived masculinity/femininity ratings when we used an optimal speech duration of 7 seconds, with a correlation coefficient of up to .63 for females and .77 for males. Nine independent clusters of acoustic measures were generated from our modelling of femininity judgements for female voices and eight clusters were found for masculinity judgements for male voices. The results revealed that, for both sexes, the F0 mean is the most critical acoustic measure affecting the judgement of vocal masculinity and femininity. The F3 mean, F4 mean and VTL estimators are found to be highly inter-correlated and appear in the same cluster, forming the second most significant factor. Next, F1 mean, F2 mean and F0 standard deviation are independent factors that share similar importance. The voice perturbation measures, including HNR, jitter and shimmer, are of lesser importance. | computer science
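The two ingredients described above can be sketched with off-the-shelf tools. Reading "Extreme Random Forest" as scikit-learn's extremely randomized trees (ExtraTreesRegressor) is my assumption, and the feature names and random data below are placeholders; the paper's actual features, ratings, and clustering thresholds differ.

```python
# Illustrative sketch only: an extremely randomized trees regressor for
# masculinity/femininity ratings plus correlation-based hierarchical clustering
# of acoustic measures.  Feature names and the synthetic data are placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
features = ["F0_mean", "F0_sd", "F1_mean", "F2_mean", "F3_mean", "F4_mean",
            "VTL_est", "HNR", "jitter", "shimmer"]
X = rng.normal(size=(200, len(features)))               # acoustic measures per speaker
y = -0.8 * X[:, 0] + rng.normal(scale=0.3, size=200)    # fake perceived ratings

model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X, y)
print(dict(zip(features, model.feature_importances_.round(3))))

# Group features whose pairwise correlations are high (distance = 1 - |r|).
corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
print(dict(zip(features, fcluster(Z, t=0.7, criterion="distance"))))
```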
We investigate optimal subsampling for quantile regression. We derive the asymptotic distribution of a general subsampling estimator and then derive two versions of optimal subsampling probabilities. One version minimizes the trace of the asymptotic variance-covariance matrix for a linearly transformed parameter estimator and the other minimizes that of the original parameter estimator. The former does not depend on the densities of the responses given covariates and is easy to implement. Algorithms based on optimal subsampling probabilities are proposed and asymptotic distributions and asymptotic optimality of the resulting estimators are established. Furthermore, we propose an iterative subsampling procedure based on the optimal subsampling probabilities in the linearly transformed parameter estimation, which scales well with the available computational resources. In addition, this procedure yields standard errors for parameter estimators without estimating the densities of the responses given the covariates. We provide numerical examples based on both simulated and real data to illustrate the proposed method. | statistics
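A generic two-step subsampling scheme of the kind discussed above might look like the sketch below. The probability formula used here (proportional to $|\tau - \mathbf{1}\{y_i \le x_i^\top\hat\beta_{\text{pilot}}\}|\,\|x_i\|$) is only an illustrative stand-in common in the optimal-subsampling literature, not necessarily the paper's derived weights, and a full estimator would additionally reweight each sampled point by its inverse probability, which is omitted here.

```python
# Sketch of a generic two-step subsampling scheme for quantile regression.
# The probability formula is an illustrative stand-in, not the paper's result.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, d, tau = 100_000, 5, 0.5
X = rng.normal(size=(n, d))
y = X @ np.ones(d) + rng.standard_t(df=3, size=n)

# Step 1: pilot estimate from a small uniform subsample.
idx0 = rng.choice(n, 1_000, replace=False)
beta0 = np.asarray(sm.QuantReg(y[idx0], X[idx0]).fit(q=tau).params)

# Step 2: informative subsampling probabilities, then refit on the subsample.
indicator = (y <= X @ beta0).astype(float)
pi = np.abs(tau - indicator) * np.linalg.norm(X, axis=1)
pi /= pi.sum()
idx1 = rng.choice(n, 5_000, replace=True, p=pi)
beta1 = np.asarray(sm.QuantReg(y[idx1], X[idx1]).fit(q=tau).params)
print(beta1)   # unweighted refit; an inverse-probability-weighted loss is typical
```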
In these lectures we discuss N-extended conformal supergravity and its spectrum in four dimensions. These theories can be considered as the massless limit of Einstein-Weyl supergravity and by taking into account their enhanced gauge symmetries, we derive their massless spectrum, which in general contains a dipole-ghost graviton multiplet and an N-fold tripole-ghost gravitino multiplet. | high energy physics theory |
The rapid expansion of social networks provides a suitable platform for users to deliver messages. Through a social network, we can harvest resources and share messages in a very short time. The development of social networks has brought us tremendous convenience. However, the nodes that make up a network differ in their spreading capability, which is constrained by many factors, with the topological structure of the network being the principal one. In order to calculate the importance of nodes in a network more accurately, this paper defines the improved H-index centrality (IH) according to the diversity of neighboring nodes, then uses the cumulative centrality (MC) to take all neighboring nodes into consideration, and proposes the extended mixing H-index centrality (EMH). We evaluate the proposed method using the Susceptible-Infected-Recovered (SIR) model and monotonicity, which are used to assess the accuracy and resolution of the method, respectively. Experimental results indicate that the proposed method is superior to existing measures for identifying nodes in different networks. | computer science
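For readers unfamiliar with H-index-style centralities, the simplified sketch below shows the classical building block (a node's h-index over its neighbours' degrees) and a naive cumulative variant. The exact IH, MC, and EMH definitions are the paper's own; the function names, the karate-club toy graph, and the aggregation rule here are my illustrative assumptions.

```python
# Simplified illustration (not the paper's exact EMH definition): the classical
# H-index centrality of a node is the largest h such that at least h of its
# neighbours have degree >= h; a "cumulative" variant sums this over neighbours.
import networkx as nx

def h_index(values):
    vals = sorted(values, reverse=True)
    return max((h for h, v in enumerate(vals, start=1) if v >= h), default=0)

def h_centrality(G):
    deg = dict(G.degree())
    return {u: h_index(deg[v] for v in G[u]) for u in G}

def cumulative_h_centrality(G):
    h = h_centrality(G)
    return {u: h[u] + sum(h[v] for v in G[u]) for u in G}

G = nx.karate_club_graph()
print(sorted(cumulative_h_centrality(G).items(), key=lambda kv: -kv[1])[:5])
```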
I discuss fluid flow at the interface between solids with anisotropic roughness. I show that for randomly rough surfaces with anisotropic roughness, the contact area percolates at the same relative contact area as for isotropic roughness, and that the Bruggeman effective medium theory and the critical junction theory give nearly the same results for the fluid flow conductivity. This shows that, in most cases, the surface roughness observed at high magnification is irrelevant for fluid flow problems such as the leakage of static seals and fluid squeeze-out. | condensed matter
We expand the theory of log canonical $3$-fold complements. More precisely, fix a set $\Lambda \subset \mathbb{Q}$ satisfying the descending chain condition with $\overline{\Lambda} \subset \mathbb{Q}$, and let $(X,B+B')$ be a log canonical 3-fold with $\mathrm{coeff}(B) \in \Lambda$ and $K_X + B$ $\mathbb{Q}$-Cartier. Then, there exists a natural number $n$, only depending on $\Lambda$, such that the following holds. Given a contraction $f \colon X \rightarrow T$ and $t \in T$ with $K_X + B + B' \sim_\mathbb{Q} 0$ over $t$, there exists $\Gamma \geq 0$ such that $\Gamma \sim -n(K_X + B)$ over $t \in T$, and $(X,B+\Gamma/n)$ is log canonical. | mathematics |
In order to determine whether or not an effect is absent based on a statistical test, the recommended frequentist tool is the equivalence test. Typically, it is expected that an appropriate equivalence margin has been specified before any data are observed. Unfortunately, this can be a difficult task. If the margin is too small, then the test's power will be substantially reduced. If the margin is too large, any claims of equivalence will be meaningless. Moreover, it remains unclear how defining the margin afterwards will bias one's results. In this short article, we consider a series of hypothetical scenarios in which the margin is defined post-hoc or is otherwise considered controversial. We also review a number of relevant, potentially problematic actual studies from clinical trials research, with the aim of motivating a critical discussion as to what is acceptable and desirable in the reporting and interpretation of equivalence tests. | statistics |
One of the puzzles of the SM is the large hierarchy between the Yukawa couplings of different flavours. Yukawa couplings of the first and the second generation are constrained only very weakly so far. However, one can obtain large deviations in the Yukawa couplings in several New Physics (NP) models, such as models with new vector-like quarks, or new Higgs bosons that couple naturally to individual fermion families. In this work, we investigate the potential bounds on the NP Higgs Yukawa coupling modifications $ \kappa_f$ for light quarks from double-Higgs production at the LHC, starting from a model-independent formalism. We have looked at the two-Higgs-boson final state $ b \bar b \gamma \gamma $ with the relevant experimental cuts to reduce backgrounds, and estimated the potential exclusion bounds for $ \kappa_f$. We have considered both linear and non-linear effective field theory for the Higgs-light-quark coupling modifications. | high energy physics phenomenology
Tensor B-spline methods are a high-performance alternative for solving partial differential equations (PDEs). This paper gives an overview of the principles of the Tensor B-spline methodology, shows its use, analyzes its performance in application examples, and discusses its merits. Tensors preserve the dimensional structure of a discretized PDE, which makes it possible to develop highly efficient computational solvers. B-splines provide high-quality approximations, lead to a sparse structure of the system operator represented by shift-invariant separable kernels in the domain, and are mesh-free by construction. Further, high-order bases can easily be constructed from B-splines. In order to demonstrate the advantageous numerical performance of tensor B-spline methods, we studied the solution of a large-scale heat-equation problem (consisting of roughly 0.8 billion nodes!) on a heterogeneous workstation consisting of a multi-core CPU and GPUs. Our experimental results nicely confirm the excellent numerical approximation properties of tensor B-splines, and their unique combination of high computational efficiency and low memory consumption, thereby showing huge improvements over standard finite-element methods (FEM). | computer science
A previous article shows that any linearly height-bounded normal proof of a tautology $\alpha$ in Natural Deduction for Minimal implicational logic $M_{\supset}$ is as huge as it is redundant. More precisely, any proof in a family of super-polynomially sized and linearly height-bounded proofs has a sub-derivation that occurs super-polynomially many times in it. In this article, we show that by collapsing all the repeated sub-derivations we obtain a smaller structure, a rooted Directed Acyclic Graph (r-DAG), whose size is polynomially bounded in the size of $\alpha$ and which is a certificate, verifiable in polynomial time, that $\alpha$ is a tautology. In other words, for every huge proof of a tautology in $M_{\supset}$, we obtain a succinct certificate for its validity. Moreover, we show an algorithm able to check this validity in time polynomial in the certificate's size. Comments on how the results in this article are related to a proof of the conjecture $NP=CoNP$ appear in the conclusion. | computer science
We consider the inverse problem of parameter estimation in a diffuse interface model for tumour growth. The model consists of a fourth-order Cahn-Hilliard system and contains three phenomenological parameters: the tumour proliferation rate, the nutrient consumption rate, and the chemotactic sensitivity. We study the inverse problem within the Bayesian framework and construct the likelihood and noise for two typical observation settings. One setting involves an infinite-dimensional data space where we observe the full tumour. In the second setting we observe only the tumour volume, hence the data space is finite-dimensional. We show the well-posedness of the posterior measure for both settings, building upon and improving the analytical results in [C. Kahle and K.F. Lam, Appl. Math. Optim. (2018)]. A numerical example involving synthetic data is presented in which the posterior measure is numerically approximated by the sequential Monte Carlo approach with tempering. | mathematics |
A two-hop energy harvesting communication network is considered, in which measurement updates are transmitted by a source to a destination through an intermediate relay. Updates are to be sent in a timely fashion that minimizes the age of information, defined as the time elapsed since the most recent update at the destination was generated at the source. The source and the relay communicate using energy harvested from nature, which is stored in infinite-sized batteries. Both nodes use fixed transmission rates, and hence updates incur fixed delays (service times). Two problems are formulated: an offline problem, in which the energy arrival information is known a priori, and an online problem, in which such information is revealed causally over time. In both problems, it is shown that it is optimal to transmit updates from the source just in time as the relay is ready to forward them to the destination, making the source and the relay act as one combined node. A recurring theme in the optimal policy is that updates should be as uniformly spread out over time as possible, subject to energy causality and service time constraints. This is perfectly achieved in the offline setting, and is achieved almost surely in the online setting by a best effort policy. | computer science |
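The "spread updates as uniformly as energy allows" idea from the abstract above can be illustrated with a toy schedule. The scheduling rule, the one-energy-unit-per-update assumption, and all numbers below are mine, not the paper's exact offline or online policies.

```python
# Toy illustration (not the paper's policy): spread update instants as uniformly
# as possible over a horizon T, delaying whenever energy causality or the fixed
# per-update service time d would otherwise be violated.
import numpy as np

rng = np.random.default_rng(2)
T, N, d = 100.0, 10, 1.5
energy_arrivals = np.sort(rng.uniform(0, T, N))   # time the i-th energy unit arrives

send_times = []
prev = -np.inf
for i in range(N):
    target = (i + 1) * T / (N + 1)                 # equally spaced "ideal" instants
    t = max(target, energy_arrivals[i], prev + d)  # energy causality + service time
    send_times.append(t)
    prev = t
print(np.round(send_times, 2))
```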
Rotational misalignment or twisting of two mono-layers of graphene strongly influences its electronic properties. Structurally, twisting leads to large periodic supercell structures, which in turn can support intriguing strongly correlated behaviour. Here, we propose a highly tunable scheme to synthetically emulate twisted bilayer systems with ultracold atoms trapped in an optical lattice. In our scheme, neither a physical bilayer nor twist is directly realized. Instead, two synthetic layers are produced exploiting coherently-coupled internal atomic states, and a supercell structure is generated \emph{via} a spatially-dependent Raman coupling. To illustrate this concept, we focus on a synthetic square bilayer lattice and show that it leads to tunable quasi-flatbands and Dirac cone spectra under certain magic supercell periodicities. The appearance of these features is explained using a perturbative analysis. Our proposal can be implemented using available state-of-the-art experimental techniques, and opens the route towards the controlled study of strongly correlated flat-band physics accompanied by hybridization, akin to magic-angle bilayer graphene, in cold-atom quantum simulators. | condensed matter
We perform numerical experiments on one-dimensional singularly perturbed problems of reaction-convection-diffusion type, using isogeometric analysis. In particular, we use a Galerkin formulation with B-splines as basis functions. The question we address is: how should the knots be chosen in order to get uniform, exponential convergence in the maximum norm? We provide specific guidelines on how to achieve precisely this, for three different singularly perturbed problems. | mathematics |
This paper presents the conceptual design of a low-cost simple printer head for Braille embossers. Such a device consists of a set of three rotary cam-follower mechanisms that, upon actuation, produce deformations on the paper. The set of cam-followers is actuated by a single servomotor whose rotation determines which cam-follower strikes the paper. Braille characters are quickly embossed column by column using the proposed system. The aim of this research is to provide new actuation ideas for making Braille embossers more affordable. | computer science
We report the creation of ultracold bosonic dipolar $^{23}\textrm{Na}^{39}\textrm{K}$ molecules in their absolute rovibrational ground state. Starting from weakly bound molecules immersed in an ultracold atomic mixture, we coherently transfer the dimers to the rovibrational ground state using an adiabatic Raman passage. We analyze the two-body decay in a pure molecular sample and in molecule-atom mixtures and find an unexpectedly low two-body decay coefficient for collisions between molecules and $^{39}\textrm{K}$ atoms in a selected hyperfine state. The preparation of bosonic $^{23}\textrm{Na}^{39}\textrm{K}$ molecules opens the way for future comparisons between fermionic and bosonic ultracold ground-state molecules of the same chemical species. | condensed matter |
We present the analytic formula for the Energy-Energy Correlation (EEC) in electron-positron annihilation computed in perturbative QCD to next-to-next-to-next-to-leading order (N$^3$LO) in the back-to-back limit. In particular, we consider the EEC arising from the annihilation of an electron-positron pair into a virtual photon as well as a Higgs boson and their subsequent inclusive decay into hadrons. Our computation is based on a factorization theorem of the EEC formulated within Soft-Collinear Effective Theory (SCET) for the back-to-back limit. We obtain the last missing ingredient for our computation - the jet function - from a recent calculation of the transverse-momentum dependent fragmentation function (TMDFF) at N$^3$LO. We combine the newly obtained N$^3$LO jet function with the well known hard and soft function to predict the EEC in the back-to-back limit. The leading transcendental contribution of our analytic formula agrees with previously obtained results in $\mathcal{N} = 4$ supersymmetric Yang-Mills theory. We obtain the $N=2$ Mellin moment of the bulk region of the EEC using momentum sum rules. Finally, we obtain the first resummation of the EEC in the back-to-back limit at N$^3$LL$^\prime$ accuracy, resulting in a factor of $\sim 4$ reduction of uncertainties in the peak region compared to N$^3$LL predictions. | high energy physics phenomenology |
The electroweak (EW) sector of the Minimal Supersymmetric Standard Model (MSSM), with the lightest neutralino as the Dark Matter (DM) candidate, can account for a variety of experimental data. This includes the DM content of the universe, DM direct detection limits, EW SUSY searches at the LHC and in particular the so far persistent $3-4\,\sigma$ discrepancy between the experimental result for the anomalous magnetic moment of the muon, $(g-2)_\mu$, and its Standard Model (SM) prediction. The recently published ``MUON G-2'' result agrees within $0.8\,\sigma$ with the older BNL result on $(g-2)_\mu$. The combination of the two results was given as $a_\mu^{\rm exp} = (11\,659\,206.1 \pm 4.1) \times 10^{-10}$, yielding a new deviation from the SM prediction of $\Delta a_\mu = (25.1 \pm 5.9) \times 10^{-10}$, corresponding to $4.2\,\sigma$. Using this improved bound we update the results presented in [1] and set new upper limits on the allowed parameter space of the EW sector of the MSSM. We find that with the new $(g-2)_\mu$ result the upper limits on the (next-to-) lightest SUSY particle are in the same ballpark as previously found, yielding updated upper limits on these masses of $\sim 600$ GeV. In this way, a clear target is confirmed for future (HL-)LHC EW searches, as well as for future high-energy $e^+e^-$ colliders, such as the ILC or CLIC. | high energy physics phenomenology
The theory in which the extra-space component of the gauge field is identified with the Standard Model Higgs boson is called the gauge-Higgs unification (GHU) scenario. We examine how the small neutrino masses are naturally generated in the GHU framework. We identify two model classes in which the following matter multiplets are introduced: 1. adjoint rep. lepton $\Psi_{A}$, 2. fundamental rep. lepton $\Psi_{F}$ and scalar $\Sigma_{F}$. We present a concrete model in each class. In the model of class 1, the neutrino masses are generated by an admixture of the type-I and type-III seesaw mechanisms. In the model of class 2, the masses are generated by the inverse seesaw mechanism. | high energy physics phenomenology
Granular flow out of a silo is studied experimentally and numerically. The time evolution of the discharge rate as well as the normal force (apparent weight) at the bottom of the container is monitored. We show that particle stiffness has a strong effect on the qualitative features of silo discharge. For deformable grains with a Young's modulus of about $Y_m\approx 40$ kPa in a silo with basal pressure of the order of 4 kPa, lowering the friction coefficient leads to a gradual change in the discharge curve: the flow rate becomes filling-height dependent and decreases during the discharge process. For hard grains with a Young's modulus of about $Y_m\approx 500$ MPa the flow rate is much less sensitive to the value of the friction coefficient. Using DEM data combined with a coarse-graining methodology allows us to compute all the relevant macroscopic fields, namely, linear momentum, density and stress tensors. The observed difference in the discharge in the low friction limit is connected to a strong difference in the pressure field: while for hard grains Janssen-screening is effective, leading to high vertical stress near the silo wall and small pressure above the orifice region, for deformable grains the pressure above the orifice is larger and gradually decreases during the discharge process. We have analyzed the momentum balance in the region of the orifice (near the location of the outlet) for the case of soft particles with low friction coefficient, and proposed a phenomenological formulation that predicts the linear decrease of the flow rate with decreasing filling height. | condensed matter
Probably the most dramatic historical challenge to scientific realism concerns Arnold Sommerfeld's 1916 derivation of the fine structure energy levels of hydrogen. Not only were his predictions good, he derived exactly the same formula that would later drop out of Dirac's 1928 treatment, something not possible using 1925 Schroedinger-Heisenberg quantum mechanics. And yet the most central elements of Sommerfeld's theory were not even approximately true: his derivation leans heavily on a classical approach to elliptical orbits, including the necessary adjustments to these orbits demanded by relativity. Even physicists call Sommerfeld's success a 'miracle', which rather makes a joke of the so-called 'no miracles argument'. However, this can all be turned around. Here I argue that the realist has a story to tell vis-a-vis the discontinuities between the old and the new theory, leading to a realist defence based on sufficient continuity of relevant structure. | physics |
Core-collapse supernovae span a wide range of energies, from much less than to much greater than the binding energy of the progenitor star. As a result, the shock wave generated from a supernova explosion can have a wide range of Mach numbers. In this paper, we investigate the propagation of shocks with arbitrary initial strengths in polytropic stellar envelopes using a suite of spherically symmetric hydrodynamic simulations. We interpret these results using the three known self-similar solutions for this problem: the Sedov-Taylor blastwave describes an infinitely strong shock and the self-similar solutions from Coughlin et al. (2018b) (Paper I) and Coughlin et al. (2019) (Paper II) describe a weak and infinitely weak shock (the latter being a rarefaction wave). We find that shocks, no matter their initial strengths, evolve toward either the infinitely strong or infinitely weak self-similar solutions at sufficiently late times. For a given density profile, a single function characterizes the long-term evolution of a shock's radius and strength. However, shocks with strengths near the self-similar solution for a weak shock (from Paper I) evolve extremely slowly with time. Therefore, the self-similar solutions for infinitely strong and infinitely weak shocks are not likely to be realized in low-energy stellar explosions, which will instead retain memory of the shock strength initiated in the stellar interior. | astrophysics |
Bayesian Optimization using Gaussian Processes is a popular approach to deal with the optimization of expensive black-box functions. However, because classic Gaussian Processes assume a priori a stationary covariance function, this method may not be well suited to non-stationary functions involved in the optimization problem. To overcome this issue, a new Bayesian Optimization approach is proposed. It is based on Deep Gaussian Processes as surrogate models instead of classic Gaussian Processes. This modeling technique increases the representational power available to capture non-stationarity by simply considering a functional composition of stationary Gaussian Processes, providing a multiple layer structure. This paper proposes a new algorithm for Global Optimization by coupling Deep Gaussian Processes and Bayesian Optimization. The specificities of this optimization method are discussed and highlighted with academic test cases. The performance of the proposed algorithm is assessed on analytical test cases and an aerospace design optimization problem and compared to the state-of-the-art stationary and non-stationary Bayesian Optimization approaches. | statistics
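For context, the sketch below is the standard stationary-GP Bayesian optimization loop with an expected-improvement acquisition, i.e. the baseline that the Deep-GP surrogate described above is meant to improve on; it is not the paper's method. The toy objective, kernel choice, and scikit-learn components are my assumptions.

```python
# Baseline for comparison, not the paper's algorithm: Bayesian optimization with
# a stationary GP surrogate and expected improvement (minimization convention).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                        # toy non-stationary objective
    return np.sin(30 * (x - 0.9) ** 4) * np.cos(2 * (x - 0.9)) + (x - 0.9) / 2

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (5, 1)); y = f(X).ravel()
grid = np.linspace(0, 1, 500).reshape(-1, 1)

for _ in range(20):
    gp = GaussianProcessRegressor(Matern(nu=2.5), alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next]); y = np.append(y, f(x_next).ravel())

print("best x, f(x):", X[y.argmin()].item(), y.min())
```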
Quantum Mechanics (QM) predicts the correlation between measurements performed in remote regions of a spatially spread entangled state to be higher than allowed by the intuitive concepts of Locality and Realism (LR). This high correlation forbids the introduction of nonlinear operators of evolution in QM (which would be desirable for several reasons), for it would lead to faster-than-light signaling. As a way out of this situation, it has been hypothesized that the high quantum correlation can be observed only after a time longer than L/c has elapsed (where L is the spatial spread of the entangled state and c is the speed of light). In shorter times, a level of correlation compatible with LR would be observed instead. A simple model following this hypothesis is described. It has not been disproved by any of the performed experiments to date. A test achievable with accessible means is proposed. The data recorded in a similar but incomplete experiment (which was done in 2012 with a different purpose, and repeated in 2018 producing essentially the same results) are analyzed, and are found consistent with the described model. Yet, we stress that a specific experiment is absolutely needed. | quantum physics |
Medical image slice interpolation is an active field of research. The methods for this task can be categorized into two broad groups: intensity-based and object-based interpolation methods. While intensity-based methods are generally easier to perform and less computationally expensive, object-based methods are capable of producing more accurate results and account for deformable changes in the objects within the slices. In this paper, the performance of two well-known object-based interpolation methods is analyzed and compared. Here, a deformable registration-based method specifically designed for medical applications and a learning-based method, trained for video frame interpolation, are considered. While the deformable registration-based technique is capable of accurate modeling of the changes in the shapes of the objects within slices, the learning-based method is able to produce results with similar accuracy, but with a much sharper appearance in a fraction of the time. This is despite the fact that the learning-based approach is not trained on medical images and rather is trained using regular video footage. Nevertheless, experiments show that the method is capable of producing accurate slice interpolation results. | electrical engineering and systems science
A predictive distribution over a sequence of $N+1$ events is said to be "frequency mimicking" whenever the probability for the final event conditioned on the outcome of the first $N$ events equals the relative frequency of successes among them. Infinitely extendible exchangeable distributions in which this property universally inheres are known to have several annoying concomitant properties. We motivate frequency mimicking assertions over a limited subdomain in practical problems of finite inference, and we identify their computable coherent implications. We provide some computed examples using reference distributions, and we introduce computational software to generate any specification. The software derives from an inversion of the finite form of the exchangeability representation theorem. Three new theorems delineate the extent of the usefulness of such distributions, and we show why it may not be appropriate to extend the frequency mimicking assertions for a specified value of $N$ to any arbitrary larger size of $N$. The constructive results identify the source and structure of "adherent masses" in the limit of a sequence of finitely additive distributions. Appendices develop a novel geometrical representation of conditional probabilities which illuminates the analysis. | statistics
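Written out as a formula (in my notation, not necessarily the authors'), the frequency mimicking condition described verbally above reads as follows for exchangeable indicator events.

```latex
% Frequency mimicking condition for exchangeable indicators X_1,...,X_{N+1} in {0,1}:
% the predictive probability of a final success equals the observed relative frequency.
\[
  P\!\left( X_{N+1} = 1 \;\middle|\; \sum_{i=1}^{N} X_i = k \right) \;=\; \frac{k}{N},
  \qquad k = 0, 1, \dots, N .
\]
```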
We identify string corrections to the EM memory effect. Though largely negligible in the low-energy limit, the effect becomes relevant in high-energy collisions and in extreme events. We illustrate our findings in a simple unoriented bosonic string model. Thanks to the coherent effect of the infinite tower of open string resonances, the corrections are non-perturbative in $\alpha'$, modulated in retarded time and slowly decaying even at large distances from the source. Remarkably compact expressions obtain for special choices of the kinematics in tree-level 4-point amplitudes. We discuss further corrections occurring at higher points and the exponential damping resulting from broadening and shifting of the massive poles due to loops. Finally we estimate the range of the parameters and masses for detectability in semi-realistic (Type I) contexts and propose a rationale for this string memory effect. | high energy physics theory
Scanning micro-mirror actuators are silicon-based oscillatory micro-electro-mechanical systems (MEMS). They enable laser distance measurements for automotive LIDAR applications as well as projection modules for the consumer market. For MEMS applications, the geometric structure is typically designed to serve a number of functional requirements. Most importantly, the mode spectrum contains a single high-Q mode, the drive mode, which per design is expected to yield the only resonantly excited geometric motion during operation. Yet here, we report on the observation of a resonant three-mode excitation via a process known as spontaneous parametric down-conversion. We show that this phenomenon, most extensively studied in the field of nonlinear optics, originates from three-wave coupling induced by geometric nonlinearities. In combination with further Duffing-type nonlinearities, the micro-mirror displays a variety of nonlinear dynamical behaviour ranging from stationary state bifurcations to dynamical instabilities observable via amplitude modulations. We are able to explain and emulate all experimental observations using a single fundamental model. In particular, our analysis allows us to understand the conditions for the onset of three-wave down-conversion, which, if not accounted for in the design of the MEMS structure, can have a drastic impact on its functionality, even leading to fracture. | condensed matter
In this paper, we propose an active learning method for an inverse problem that aims to find an input that achieves a desired structured output. The proposed method provides new acquisition functions for minimizing the error between the desired structured output and the prediction of a Gaussian process model, by effectively incorporating the correlation between multiple outputs of the underlying multi-valued black-box output functions. The effectiveness of the proposed method is verified by applying it to two synthetic shape-search problems and to real data. In the real-data experiment, we tackle the search for input parameters that achieve the desired crystal growth rate in silicon carbide (SiC) crystal growth modeling, a problem from materials informatics. | statistics
Vector bosons heavier than $10^{-22}$ eV can be viable dark matter candidates with distinctive experimental signatures. Ultralight dark matter generally requires a non-thermal origin to achieve the observed density, while still behaving like a pressureless fluid at late times. We show that such a production mechanism naturally occurs for vectors whose mass originates from a dark Higgs. If the dark Higgs has a large field value after inflation, the energy in the Higgs field can be efficiently transferred to vectors through parametric resonance. Computing the resulting abundance and spectra requires careful treatment of the transverse and longitudinal components, whose dynamics are governed by distinct differential equations. We study these equations in detail and find that the mass of the vector may be as low as $10 ^{ - 18 }$ eV, while making up the dominant dark matter abundance. This opens up a wide mass range of vector dark matter as cosmologically viable, further motivating their experimental search. | high energy physics phenomenology |
We use the terms "$\infty$-categories" and "$\infty$-functors" to mean the objects and morphisms in an "$\infty$-cosmos." Quasi-categories, Segal categories, complete Segal spaces, naturally marked simplicial sets, iterated complete Segal spaces, $\theta_n$-spaces, and fibered versions of each of these are all $\infty$-categories in this sense. We show that the basic category theory of $\infty$-categories and $\infty$-functors can be developed from the axioms of an $\infty$-cosmos; indeed, most of the work is internal to a strict 2-category of $\infty$-categories, $\infty$-functors, and natural transformations. In the $\infty$-cosmos of quasi-categories, we recapture precisely the same theory developed by Joyal and Lurie, although in most cases our definitions, which are 2-categorical rather than combinatorial in nature, present a new incarnation of the standard concepts. In the first lecture, we define an $\infty$-cosmos and introduce its "homotopy 2-category," using formal category theory to define and study equivalences and adjunctions between $\infty$-categories. In the second lecture, we study (co)limits of diagrams taking values in an $\infty$-category and the relationship between (co)limits and adjunctions. In the third lecture, we introduce comma $\infty$-categories, which are used to encode the universal properties of (co)limits and adjointness and prove "model independence" results. In the fourth lecture, we introduce (co)cartesian fibrations, describe the calculus of "modules" between $\infty$-categories, and use this framework to prove the Yoneda lemma and develop the theory of pointwise Kan extensions of $\infty$-functors. | mathematics |
Power systems solvers are vital tools in planning, operating, and optimizing electrical distribution networks. The current generation of solvers employs computationally expensive iterative methods to compute sequential solutions. To accelerate these simulations, this paper proposes a novel method that replaces the physics-based solvers with data-driven models for many steps of the simulation. In this method, computationally inexpensive data-driven models learn from training data generated by the power flow solver and are used to predict system solutions. Clustering is used to build a separate model for each operating mode of the system. Heuristic methods are developed to choose between the model and solver at each step, managing the trade-off between error and speed. For the IEEE 123-bus test system, this methodology is shown to reduce simulation time for a typical quasi-steady state time-series simulation by avoiding the solver for 86.7% of test samples, achieving a median prediction error of 0.049%. | electrical engineering and systems science
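The general pattern described above (cluster operating conditions, train a cheap model per cluster, fall back to the physics solver when a heuristic flags the prediction) can be sketched as follows. The choice of KMeans and Ridge, the distance-to-centroid trust heuristic, and the stubbed "solver" are illustrative assumptions, not the paper's actual models or thresholds.

```python
# Generic sketch of the surrogate-with-fallback pattern (placeholders throughout):
# cluster operating points, fit one cheap model per cluster, and fall back to the
# physics-based power-flow solver when the prediction is not trusted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def physics_solver(x):                  # stand-in for the expensive AC power flow
    return np.tanh(x @ W)               # toy "true" mapping: loads -> voltages

rng = np.random.default_rng(4)
W = rng.normal(size=(8, 3))
X_train = rng.uniform(size=(2000, 8))                # historical load profiles
Y_train = physics_solver(X_train)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_train)
models = {c: Ridge().fit(X_train[km.labels_ == c], Y_train[km.labels_ == c])
          for c in range(4)}

def predict(x, trust_radius=0.6):
    c = km.predict(x[None, :])[0]
    # Heuristic: only trust the surrogate near its cluster centre.
    if np.linalg.norm(x - km.cluster_centers_[c]) > trust_radius:
        return physics_solver(x[None, :])[0], "solver"
    return models[c].predict(x[None, :])[0], "surrogate"

print(predict(rng.uniform(size=8)))
```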
We describe new irreducible components of the moduli space of rank $2$ semistable torsion free sheaves on the three-dimensional projective space whose generic point corresponds to non-locally free sheaves whose singular locus is either 0-dimensional or consists of a line plus disjoint points. In particular, we prove that the moduli spaces of semistable sheaves with Chern classes $(c_1,c_2,c_3)=(-1,2n,0)$ and $(c_1,c_2,c_3)=(0,n,0)$ always contain at least one rational irreducible component. As an application, we prove that the number of such components grows as the second Chern class grows, and compute the exact number of irreducible components of the moduli spaces of rank 2 semistable torsion free sheaves with Chern classes $(c_1,c_2,c_3)=(-1,2,m)$ for all possible values of $m$; all components turn out to be rational. Furthermore, we prove that these moduli spaces are connected, showing that some of the sheaves considered here are smoothable. | mathematics
In this work, we study kink collisions in a scalar field model with scalar-kinetic coupling. This model supports kink/antikink solutions with inner structure in the energy density. The collision of two such kinks is simulated using the Fourier spectral method. We numerically calculate how the critical velocity and the widths of the first three two-bounce windows vary with the model parameters. After that, we report some interesting collision results including two-bion escape final states, kink-bion-antikink intermediate states and kink or antikink intertwined final states. These results show that kinks with inner structure in the energy density have properties similar to those of double kinks. | high energy physics theory
In this paper we report on high-current-density ion beam profile diagnostics based on a slit system, a reliable method capable of withstanding high thermal loads. The task arose within the development of a point-like neutron source for neutron radiography. In previous research, it was suggested to construct such a source as a D-D neutron generator based on a high-current gasdynamic ion source, which utilises the plasma of an electron cyclotron resonance discharge sustained by powerful millimeter-wave gyrotron radiation. This device is able to produce focused D+ beams with a characteristic diameter of 1 mm, total current above 100 mA, and current density at a level of several A/cm^2. Studying the profile of such intense beams, in order to obtain the best focusing efficiency and minimize the neutron-producing area, appeared to be a challenging task. The paper also demonstrates the possibility of fast neutron imaging with a point-like powerful neutron generator (neutron yield on the level of 10^10 1/s). | physics
Kusner asked if $n+1$ points is the maximum number of points in $\mathbb{R}^n$ such that the $\ell_p$ distance between any two points is $1$. We present an improvement to the best known upper bound when $p$ is large in terms of $n$, as well as a generalization of the bound to $s$-distance sets. We also study equilateral sets in the $\ell_p$ sums of Euclidean spaces, deriving upper bounds on the size of an equilateral set for $p=\infty$, for even $p$, and for any $1\le p<\infty$. | mathematics
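A few standard examples (textbook facts, not results of the paper above) show why the question is natural: scaled standard basis vectors already give $n$ equilateral points in every $\ell_p$, the regular simplex gives $n+1$ for $p=2$, and $p=\infty$ behaves very differently.

```latex
% Scaled standard basis vectors form an equilateral set of size n in every l_p:
\[
  \| e_i - e_j \|_p = 2^{1/p} \ (i \neq j)
  \quad\Longrightarrow\quad
  \{\, 2^{-1/p} e_1, \dots, 2^{-1/p} e_n \,\} \text{ is equilateral with common distance } 1 .
\]
% For p = 2 the regular simplex adds one more point (n+1 in total), while for
% p = \infty the hypercube vertices \{0,1\}^n form an equilateral set of size 2^n,
% since any two distinct vertices differ by exactly 1 in the sup norm.
```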
Selective withdrawal is a desired phenomenon in transferring oil from large caverns in the US Strategic Petroleum Reserve, because entrainment of oil during withdrawal poses a risk of contaminating the environment. In order to predict a critical submergence depth at a critical flow rate, a selective withdrawal experiment at a high Reynolds number was conducted. A tube was positioned through a liquid-liquid interface to draw the lower liquid upwards. Analysis of the normal stress balance across the interface produced a Weber number, utilizing dynamic pressure scaling, that predicted the transition to entrainment. An inviscid flow analysis, using Bernoulli's principle and assuming an ellipsoidal control volume surface for the iso-velocity profile, produced a linear relationship between the Weber number and the scaled critical submergence depth. The analytical model was validated using the experimental data, resulting in a robust model for predicting the transition from selective withdrawal to entrainment. | physics
We explore the photon transfer in a nonlinear parity-time-symmetric system of two coupled cavities, which contains nonlinear gain and loss dependent on the intracavity photon number. The analytical solution for the steady state gives a saturated gain, which satisfies the parity-time symmetry automatically. The eigenfrequency self-adapts to the nonlinear saturated gain to reach the maximum efficiency in the steady state. We find that the saturated gain in the weak coupling regime does not match the loss in the steady state, signalling the appearance of spontaneous symmetry breaking. The photon transmission efficiency in the parity-time-symmetric regime is robust against the variation of the coupling strength, improving on conventional methods that tune the frequency or the coupling strength to maintain optimal efficiency. Our scheme provides an experimental platform for realizing robust photon transfer in cavities with nonlinear parity-time symmetry. | quantum physics
We reconsider complex scalar singlet dark matter stabilised by a $\mathbb{Z}_{3}$ symmetry. We refine the stability bounds on the potential and use constraints from unitarity on scattering at finite energy to place a stronger lower limit on the direct detection cross section. In addition, we improve the treatment of the thermal freeze-out by including the evolution of the dark matter temperature and its feedback onto the relic abundance. In the regions where the freeze-out is dominated by resonant annihilation or semi-annihilation, the dark matter decouples kinetically from the plasma very early, around the onset of the chemical decoupling. This results in a modification of the required coupling to the Higgs, which turns out to be at most a few per cent in the semi-annihilation region, thus giving credence to the standard approach to the relic density calculation in this regime. In contrast, for dark matter mass just below the Higgs resonance, the modification of the Higgs invisible width and direct and indirect detection signals can be as large as a factor of $6.7$. The model is then currently allowed for $56.8$ GeV to $58.4$ GeV (depending on the details of early kinetic decoupling) $\lesssim M_{S} \lesssim 62.8$ GeV and for $M_{S} \gtrsim 122$ GeV if the freeze-out is dominated by semi-annihilation. We show that the whole large semi-annihilation region will be probed by the near-future measurements at the XENONnT experiment. | high energy physics phenomenology