Columns: text (string, lengths 11 to 9.77k characters); label (string, lengths 2 to 104 characters)
In this paper we study the extension of Painlev\'e/gauge theory correspondence to circular quivers by focusing on the special case of $SU(2)$ $\mathcal{N}=2^*$ theory. We show that the Nekrasov-Okounkov partition function of this gauge theory provides an explicit combinatorial expression and a Fredholm determinant formula for the tau-function describing isomonodromic deformations of $SL_2$ flat connections on the one-punctured torus. This is achieved by reformulating the Riemann-Hilbert problem associated to the latter in terms of chiral conformal blocks of a free-fermionic algebra. This viewpoint provides the exact solution of the renormalization group flow of the $SU(2)$ $\mathcal{N}=2^*$ theory on self-dual $\Omega$-background and, in the Seiberg-Witten limit, an elegant relation between the IR and UV gauge couplings.
high energy physics theory
There is compelling evidence for a highly energetic Seyfert explosion (10^{56-57} erg) that occurred in the Galactic Centre a few million years ago. The clearest indications are the X-ray/gamma-ray "10 kpc bubbles" identified by the ROSAT and Fermi satellites. In an earlier paper, we suggested another manifestation of this nuclear activity, i.e. elevated H-alpha emission along a section of the Magellanic Stream due to a burst (or flare) of ionizing radiation from Sgr A*. We now provide further evidence for a powerful flare event: UV absorption line ratios (in particular CIV/CII, SiIV/SiII) observed by the Hubble Space Telescope reveal that some Stream clouds towards both galactic poles are highly ionized by a source capable of producing ionization energies up to at least 50 eV. We show that these clouds are caught in a beam of bipolar, radiative "ionization cones" from a Seyfert nucleus associated with Sgr A*. In our model, the biconic axis is tilted by about 15 deg from the South Galactic Pole with an opening angle of roughly 60 deg. For the Stream at such large Galactic distances (D > 75 kpc), nuclear activity is a plausible explanation for all of the observed signatures: elevated H-alpha emission and H ionization fraction (X_e > 0.5), enhanced CIV/CII and SiIV/SiII ratios, and high CIV and SiIV column densities. Wind-driven "shock cones" are ruled out because the Fermi bubbles lose their momentum and energy to the Galactic corona long before reaching the Stream. The nuclear flare event must have had a radiative UV luminosity close to the Eddington limit (f_E ~ 0.1-1). Our time-dependent Seyfert flare models adequately explain the observations and indicate the Seyfert flare event took place T_o = 3.5 +/- 1 Myr ago. The timing estimates are consistent with the mechanical timescales needed to explain the X-ray/gamma-ray bubbles in leptonic jet/wind models (2-8 Myr).
astrophysics
The parameters in Monte Carlo (MC) event generators are tuned on experimental measurements by evaluating the goodness of fit between the data and the MC predictions. The relative importance of each measurement is adjusted manually in an often time-consuming, iterative process to meet different experimental needs. In this work, we introduce several optimization formulations and algorithms with new decision criteria for streamlining and automating this process. These algorithms are designed for two formulations: bilevel optimization and robust optimization. The two formulations are applied to the datasets used in the ATLAS A14 tune and to the dedicated hadronization datasets generated by the Sherpa generator, respectively. The corresponding tuned generator parameters are compared using three metrics. We compare the quality of our automatic tunes to the published ATLAS A14 tune. Moreover, we analyze the impact of a pre-processing step that excludes data that cannot be described by the physics models used in the MC event generators.
mathematics
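The bilevel tuning idea in the abstract above can be illustrated with a toy numerical sketch: an inner fit of generator parameters against weighted data, and an outer loop over candidate weights. Everything here (the surrogate mc_prediction, the pseudo-data, the weight grid) is an invented stand-in, not the actual A14/Sherpa setup.

```python
# Toy bilevel tuning: the outer level picks per-dataset weights, the inner
# level fits "generator" parameters by minimising the weighted chi^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def mc_prediction(theta, x):
    """Stand-in for an MC generator response at observable bins x."""
    return theta[0] * np.exp(-theta[1] * x)

# Two pseudo-datasets (bins, measured values, uncertainties).
x = np.linspace(0.1, 2.0, 20)
data = [mc_prediction([1.0, 1.5], x) + 0.02 * rng.standard_normal(20),
        mc_prediction([1.0, 0.8], x) + 0.02 * rng.standard_normal(20)]
sigma = [np.full(20, 0.02), np.full(20, 0.02)]

def inner_fit(weights):
    """Inner level: fit theta for a fixed choice of dataset weights."""
    def chi2(theta):
        return sum(w * np.sum(((d - mc_prediction(theta, x)) / s) ** 2)
                   for w, d, s in zip(weights, data, sigma))
    res = minimize(chi2, x0=[0.5, 1.0], method="Nelder-Mead")
    return res.x, res.fun

# Outer level: score each candidate weighting by the fit it induces.
for weights in ([1.0, 1.0], [2.0, 1.0], [1.0, 2.0]):
    theta, obj = inner_fit(weights)
    print(weights, theta.round(3), round(obj, 2))
```

In the paper the outer-level decision criteria are the novel contribution; here the candidate weights are simply enumerated to show the structure of the problem.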
We extend the general formalism discussed in the previous paper to two models with charge fluctuations: the interacting resonant level model and the Anderson impurity model. In the interacting resonant level model, we find the exact time-evolving wavefunction and calculate the steady state impurity occupancy to leading order in the interaction. In the Anderson impurity model, we find the non-equilibrium steady state for small or large Coulomb repulsion $U$, and we find that the steady state current to leading order in $U$ agrees with a Keldysh perturbation theory calculation.
condensed matter
Since most of the research on grey forecasting models is focused on developing novel models and improving accuracy, relatively limited attention has been paid to the modelling mechanism and the relationships among diverse kinds of models. This paper aims to unify and reconstruct continuous-time grey models, highlighting the differences and similarities among different models. First, the unified form of grey models is proposed and simplified into a reduced-order ordinary differential equation. Then, integral matching, which consists of integral transformation and least squares, is proposed to estimate the structural parameter and initial value simultaneously. The cumulative sum operator, an essential element in grey modelling, proves to be the discrete approximation of the integral transformation formula. Next, grey models are reconstructed by the integral matching-based ordinary differential equations. Finally, the existing grey models are compared with the reconstructed models through extensive simulation studies, and a real-world example shows how to apply and further verify the reconstructed model.
statistics
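The claim above that the cumulative sum operator is a discrete approximation of the integral transformation can be checked numerically; a minimal sketch, with an arbitrary toy series standing in for real data:

```python
# Check that the 1-AGO (cumulative sum) operator used in grey modelling
# approximates the integral transformation y(t) = integral of x(s) ds.
import numpy as np

t = np.arange(1.0, 21.0)         # unit-spaced sampling, as in grey models
x = 5.0 * np.exp(0.1 * t)        # toy data sequence (illustrative choice)

cusum = np.cumsum(x)             # 1-AGO operator: x1[k] = sum_{i<=k} x0[i]
# Continuous counterpart: trapezoidal estimate of the running integral,
# shifted by x[0] so both sequences start from the same initial value.
integral = np.concatenate(([x[0]],
    x[0] + np.cumsum(0.5 * (x[1:] + x[:-1]))))

print(np.max(np.abs(cusum - integral) / cusum))  # small relative deviation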
Tensor linear regression is an important and useful tool for analyzing tensor data. To deal with high dimensionality, CANDECOMP/PARAFAC (CP) low-rank constraints are often imposed on the coefficient tensor parameter in the (penalized) $M$-estimation. However, we show that the corresponding optimization may not be attainable, and when this happens, the estimator is not well-defined. This is closely related to a phenomenon, called CP degeneracy, in low-rank tensor approximation problems. In this article, we provide useful results on CP degeneracy in tensor regression problems. In addition, we provide a general penalized strategy as a solution to overcome CP degeneracy. The asymptotic properties of the resulting estimator are also studied. Numerical experiments are conducted to illustrate our findings.
statistics
We re-derive the first law of black hole mechanics in the context of the Heterotic Superstring effective action compactified on a torus to leading order in alpha prime, using Wald's formalism, covariant Lie derivatives and momentum maps. The Kalb-Ramond field strength of this theory has Abelian Chern-Simons terms, which induce Nicolai-Townsend transformations of the Kalb-Ramond field. We show how to deal with all these gauge symmetries, deriving the first law in terms of manifestly gauge-invariant quantities. In the presence of Chern-Simons terms, several definitions of the conserved charges exist, but the formalism picks out only one of them to play a role in the first law. This work is a first step towards the derivation of the first law at first order in alpha prime, where more complicated, non-Abelian, Lorentz ("gravitational") and Yang-Mills Chern-Simons terms are included in the Kalb-Ramond field strength. The derivation of a first law is a necessary step towards the derivation of a manifestly gauge-invariant entropy formula, which is still lacking in the literature. In turn, this entropy formula is needed to compare unambiguously macroscopic and microscopic black hole entropies.
high energy physics theory
We introduce an approach for imposing physically informed inductive biases in learned simulation models. We combine graph networks with a differentiable ordinary differential equation integrator as a mechanism for predicting future states, and a Hamiltonian as an internal representation. We find that our approach outperforms baselines without these biases in terms of predictive accuracy, energy accuracy, and zero-shot generalization to time-step sizes and integrator orders not experienced during training. This advances the state-of-the-art of learned simulation, and in principle is applicable beyond physical domains.
computer science
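A minimal sketch of the integrator component described above: a leapfrog step driven by a Hamiltonian. In the paper the Hamiltonian is a learned graph-network function; here an analytic harmonic oscillator stands in so the energy-conservation property is easy to verify.

```python
# Leapfrog (symplectic) integration of Hamilton's equations. Swapping the
# analytic hamiltonian/grad_H pair for a differentiable learned network
# gives the structure used in Hamiltonian graph-network simulators.
import numpy as np

def hamiltonian(q, p, k=1.0, m=1.0):
    return p**2 / (2 * m) + 0.5 * k * q**2

def grad_H(q, p, k=1.0, m=1.0):
    return k * q, p / m                  # dH/dq, dH/dp

def leapfrog(q, p, dt, n_steps):
    for _ in range(n_steps):
        p -= 0.5 * dt * grad_H(q, p)[0]  # half kick
        q += dt * grad_H(q, p)[1]        # drift
        p -= 0.5 * dt * grad_H(q, p)[0]  # half kick
    return q, p

q, p = 1.0, 0.0
E0 = hamiltonian(q, p)
q, p = leapfrog(q, p, dt=0.1, n_steps=1000)
print(abs(hamiltonian(q, p) - E0))       # energy drift stays small
```

The symplectic structure of the integrator, not just its accuracy order, is what gives the long-horizon energy behaviour the abstract refers to.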
Liquid-liquid phase separation has emerged as one of the important paradigms in the chemical physics as well as biophysics of charged macromolecular systems. We elucidate an equilibrium phase separation mechanism based on charge regulation, i.e., protonation-deprotonation equilibria controlled by pH, in an idealized macroion system which can serve as a proxy for simple coacervation. First, a low-density density-functional calculation reveals the dominance of two-particle configurations coupled by ion adsorption on neighboring macroions. Then a binary cell model, solved on the Debye-H\"uckel as well as the full nonlinear Poisson-Boltzmann level, unveils the charge-symmetry breaking as inducing the phase separation between low- and high-density phases as a function of pH. These results can be identified as a charge symmetry broken complex coacervation between chemically identical macroions.
condensed matter
We show that the dynamical Casimir effect in an optomechanical system can be achieved under incoherent mechanical pumping. We adopt a fully quantum-mechanical approach for both the cavity field and the oscillating mirror. The dynamics is then evaluated using a recently-developed master equation approach in the dressed picture, including both zero and finite temperature photonic reservoirs. This analysis shows that the dynamical Casimir effect can be observed even when the mean value of the mechanical displacement is zero. This opens up new possibilities for the experimental observation of this effect. We also calculate cavity emission spectra both in the resonant and the dispersive regimes, providing useful information on the emission process.
quantum physics
Here we develop two quantum-computational models for supervised and unsupervised classification tasks in the quantum world. Presuming that the states of a set of given quantum systems (or objects) belong to one of two known classes, the objective here is to decide to which of these classes each system belongs -- without knowing its state. The supervised binary classification algorithm is based on having a training sample of quantum systems whose class memberships are already known. The unsupervised binary classification algorithm, however, uses a quantum oracle which knows the class memberships of the states of the computational basis. Both algorithms require the ability to evaluate the fidelity between states of the quantum systems with unknown states, for which we also develop a general scheme.
quantum physics
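A hedged classical emulation of the fidelity-based decision rule sketched above: assign an unknown pure state to the class whose training states it overlaps most with, using $F(\psi,\phi) = |\langle\psi|\phi\rangle|^2$. The state generation and the mean-fidelity rule are illustrative stand-ins for the quantum subroutines developed in the paper.

```python
# Classical simulation of fidelity-based binary classification of pure states.
import numpy as np

rng = np.random.default_rng(1)

def random_state(dim, anchor, spread):
    """A normalized state near `anchor` (anchor=0 gives a uniform-ish state)."""
    v = anchor + spread * (rng.standard_normal(dim) + 1j * rng.standard_normal(dim))
    return v / np.linalg.norm(v)

def fidelity(psi, phi):
    return abs(np.vdot(psi, phi)) ** 2    # |<psi|phi>|^2 for pure states

dim = 4
anchors = [random_state(dim, 0, 1.0), random_state(dim, 0, 1.0)]
train = {c: [random_state(dim, anchors[c], 0.3) for _ in range(20)]
         for c in (0, 1)}

def classify(psi):
    scores = [np.mean([fidelity(psi, t) for t in train[c]]) for c in (0, 1)]
    return int(np.argmax(scores))

test = random_state(dim, anchors[0], 0.3)
print(classify(test))   # expected: 0
```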
Biomolecular processes are typically modeled using chemical reaction networks coupled to infinitely large chemical reservoirs. A difference in chemical potential between these reservoirs can drive the system into a non-equilibrium steady state (NESS). In reality, these processes take place in finite systems containing a finite number of molecules. In such systems, a NESS can be reached with the help of an externally driven pump for which we introduce a simple model. Crucial parameters are the pumping rate and the finite size of the chemical reservoir. We apply this model to a simple biochemical oscillator, the Brusselator, and quantify the performance using the number of coherent oscillations. As a surprising result, we find that higher precision can be achieved with finite-size reservoirs even though the corresponding current fluctuations are larger than in the ideal infinite case.
condensed matter
Testing is one of the most important steps in software development, as it ensures the quality of software. Continuous Integration (CI) is a widely used testing system that can report software quality to the developer in a timely manner during the development process. Performance, especially scalability, is another key factor for High Performance Computing (HPC) applications. Though there are many applications and tools to profile the performance of HPC applications, none of them is integrated into continuous integration. On the other hand, no current continuous integration tools provide easy-to-use scalability test capabilities. In this work, we propose BeeSwarm, a scalability test system that can be easily applied to current CI test environments, enabling scalability test capability for HPC developers. As a showcase, BeeSwarm is integrated into Travis CI and GitLab CI to execute the scalability test workflow on the Chameleon cloud.
computer science
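A sketch of the kind of check a CI scalability stage could run, assuming a simple strong-scaling criterion: time a fixed workload at several worker counts and fail the pipeline if parallel efficiency drops below a threshold. The workload, worker counts, and threshold are placeholders, not BeeSwarm's actual workflow.

```python
# Toy strong-scaling test suitable for a CI job: measures parallel
# efficiency t(1) / (w * t(w)) and asserts it stays above a floor.
import time
from multiprocessing import Pool

def work(chunk):
    return sum(i * i for i in chunk)

def timed_run(n_workers, n=2_000_000):
    chunks = [range(i, n, n_workers) for i in range(n_workers)]
    t0 = time.perf_counter()
    with Pool(n_workers) as pool:
        pool.map(work, chunks)
    return time.perf_counter() - t0

if __name__ == "__main__":
    t1 = timed_run(1)
    for w in (2, 4):
        efficiency = t1 / (w * timed_run(w))
        print(f"{w} workers: efficiency {efficiency:.2f}")
        assert efficiency > 0.5, "scalability regression"   # placeholder floor
```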
Quantum image processing employs quantum computing to capture, manipulate, and recover images in various formats. This requires representations of encoded images using the quantum mechanical composition of any potential computing hardware. In this study, a quantum hue, saturation, and lightness (QHSL) color model is proposed to organize and conceptualize color-assignment attributes using the properties of quantum mechanics (i.e., entanglement and parallelism). The proposed color model is used to define representations of a two-dimensional QHSL image and investigate its data storage, chromatic transformation, and pseudocolor applications. The QHSL representation is introduced for the production of quantum images using triple perceptually relevant components. The efficient use of QHSL images is further explored for applications in the fields of computer vision and image analysis.
quantum physics
This paper provides sufficient conditions for the existence of solutions for two-person zero-sum games with possibly noncompact decision sets for both players. Payoff functions may be unbounded, and we do not assume any convexity/concavity-type conditions. For such games, the expected payoff may not exist for some pairs of strategies. The results of the paper imply several classic results, and they are illustrated with the number guessing game. The paper also provides sufficient conditions for the existence of a value and solutions for each player.
mathematics
In theories with long-range forces like QED or perturbative gravity, loop corrections lead to vanishing amplitudes. There are two well-known procedures to address these infrared divergences: dressing of asymptotic states and inclusion of soft emission. Although both yield the same IR-finite rates, we point out that they are not equivalent since they encode different infrared scales. In particular, dressing states are independent of the resolution scale of radiation. Instead, they define radiative vacua in the von Neumann space. After a review of these concepts, the goal of this paper is to present a combined formalism that can simultaneously describe both dressing and radiation. This unified approach allows us to tackle the problem of quantum decoherence due to tracing over unresolved radiation. We obtain an IR-finite density matrix with non-vanishing off-diagonal elements and estimate how its purity depends on scattering kinematics and the resolution scale. Along the way, we comment on collinear divergences as well as the connection of large gauge transformations and dressing.
high energy physics theory
The evaluation of higher-order cross-sections is an important component in the search for new physics, both at hadron colliders and elsewhere. For most new physics processes of interest, total cross-sections are known at next-to-leading order (NLO) in the strong coupling $\alpha_s$, and often beyond, via either higher-order terms at fixed powers of $\alpha_s$, or multi-emission resummation. However, the computation time for such higher-order cross-sections is prohibitively expensive, and precludes efficient evaluation in parameter-space scans beyond two dimensions. Here we describe the software tool $\textsf{xsec}$, which allows for fast evaluation of cross-sections based on the use of machine-learning regression, using distributed Gaussian processes trained on a pre-generated sample of parameter points. This first version of the code provides all NLO Minimal Supersymmetric Standard Model strong-production cross-sections at the LHC, for individual flavour final states, evaluated in a fraction of a second. Moreover, it calculates regression errors, as well as estimates of errors from higher-order contributions, from uncertainties in the parton distribution functions, and from the value of $\alpha_s$. While we focus on a specific phenomenological model of supersymmetry, the method readily generalises to any process where it is possible to generate a sufficient training sample.
high energy physics phenomenology
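A minimal sketch of the regression idea behind xsec: a Gaussian process trained on pre-generated (parameter point, log cross-section) pairs and evaluated quickly with an uncertainty estimate. The two-parameter toy scaling law replaces the actual NLO calculations, and a single GP stands in for the distributed GPs used by the tool.

```python
# Gaussian-process regression of log10(cross-section) over parameter points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Pretend parameter points (e.g., two sparticle masses, in GeV) and targets.
X_train = rng.uniform(200.0, 2000.0, size=(80, 2))
y_train = -4.0 * np.log10(X_train.sum(axis=1)) + 10.0   # invented scaling law

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=500.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
)
gp.fit(X_train, y_train)

X_new = np.array([[800.0, 1200.0]])
mean, std = gp.predict(X_new, return_std=True)   # fast, with regression error
print(mean, std)
```

The `return_std` output plays the role of the regression error quoted by the tool; the PDF and alpha_s uncertainties it also reports come from separate inputs, not from the GP itself.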
We investigate the properties of local minima of the energy landscape of a continuous non-convex optimization problem, the spherical perceptron with piecewise linear cost function, and show that they are critical and marginally stable, displaying a set of pseudogaps, singularities and non-linear excitations whose properties appear to be in the same universality class as jammed packings of hard spheres. The piecewise linear perceptron problem appears as an evolution of the purely linear perceptron optimization problem that was recently investigated in [1]. Its cost function contains two non-analytic points where the derivative has a jump. Correspondingly, in the non-convex/glassy phase, these two points give rise to four pseudogaps in the force distribution, and this induces four power laws in the gap distribution as well. In addition, one can define an extended notion of isostaticity and show that local minima again appear to be isostatic in this phase. We believe that our results generalize naturally to more complex cases, with a proliferation of non-linear excitations as the number of non-analytic points in the cost function is increased.
condensed matter
In this paper, we consider the $H_{\infty}$ optimal control problem for a Markovian jump linear system (MJLS) over a lossy communication network. It is assumed that the controller communicates with each actuator through a different communication channel. We solve the $H_{\infty}$ optimization problem for a Transmission Control Protocol (TCP) using the theory of dynamic games and obtain a state-feedback controller. The infinite horizon $H_{\infty}$ optimization problem is analyzed as a limiting case of the finite horizon optimization problem. Then, we obtain the corresponding state-feedback controller, and show that it stabilizes the closed-loop system in the face of random packet dropouts.
electrical engineering and systems science
Reinforcement learning (RL) in discrete action spaces is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension, making it challenging to apply existing on-policy gradient-based deep RL algorithms efficiently. To effectively operate in multidimensional discrete action spaces, we construct a critic to estimate action-value functions, apply it to correlated actions, and combine these critic-estimated action values to control the variance of gradient estimation. We follow a rigorous statistical analysis to design how to generate and combine these correlated actions, and how to sparsify the gradients by shutting down the contributions from certain dimensions. These efforts result in a new discrete-action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques. We demonstrate these properties on OpenAI Gym benchmark tasks, and illustrate how discretizing the action space could benefit the exploration phase and hence facilitate convergence to a better locally optimal solution thanks to the flexibility of discrete policies.
statistics
We show that every positive definite closed 4-manifold with $b_2^+>1$ and without 1-handles has a vanishing stable cohomotopy Seiberg-Witten invariant, and thus admits no symplectic structure. We also show that every closed oriented 4-manifold with $b_2^+\not\equiv 1$ and $b_2^-\not\equiv 1\pmod{4}$ and without 1-handles admits no symplectic structure for at least one orientation of the manifold. In fact, relaxing the 1-handle condition, we prove these results under more general conditions which are much easier to verify.
mathematics
MFiX-Exa is a new code being actively developed at Lawrence Berkeley National Laboratory and the National Energy Technology Laboratory as part of the U.S. Department of Energy's Exascale Computing Project. The starting point for the MFiX-Exa code development was the extraction of basic computational fluid dynamics (CFD) and discrete element method (DEM) capabilities from the existing MFiX-DEM code, which were refactored into an AMReX code architecture, herein referred to as the preliminary MFiX-Exa code. Although drastic changes to the codebase will be required to produce an exascale-capable application, benchmarking of the originating code helps to establish a valid starting point for future development. In this work, four benchmark cases are considered, each corresponding to experimental data sets with a history of CFD-DEM validation. We find that the preliminary MFiX-Exa code compares favorably with classic MFiX-DEM simulation predictions for three slugging/bubbling fluidized beds and one spout-fluid bed. Agreement with the experimental data, which comprise several measurement techniques including particle tracking velocimetry, positron emission particle tracking and magnetic resonance imaging, is also acceptable (within the accuracy expected from previous CFD-DEM benchmarking and validation exercises). The work concludes with an overview of planned developmental work and potential benchmark cases to validate new MFiX-Exa capabilities.
physics
This paper presents a novel model-reference reinforcement learning control method for uncertain autonomous surface vehicles. The proposed control combines a conventional control method with deep reinforcement learning. With the conventional control, we can ensure the learning-based control law provides closed-loop stability for the overall system, and potentially increase the sample efficiency of the deep reinforcement learning. With the reinforcement learning, we can directly learn a control law to compensate for modeling uncertainties. In the proposed control, a nominal system is employed for the design of a baseline control law using a conventional control approach. The nominal system also defines the desired performance for uncertain autonomous vehicles to follow. In comparison with traditional deep reinforcement learning methods, our proposed learning-based control can provide stability guarantees and better sample efficiency. We demonstrate the performance of the new algorithm via extensive simulation results.
electrical engineering and systems science
We obtain new bounds of exponential sums modulo a prime $p$ with sparse polynomials $a_0x^{n_0} + \cdots + a_{\nu}x^{n_\nu}$. The bounds depend on various greatest common divisors of exponents $n_0, \ldots, n_\nu$ and their differences. In particular, two new bounds for binomials are obtained, improving previous results in broad ranges of parameters.
mathematics
When a two-dimensional material adhered onto a compliant substrate is subjected to compression, it can undergo a buckling instability, yielding a periodic rippling. Interestingly, when black phosphorus flakes are compressed along the zig-zag crystal direction, the flake buckles forming ripples with a 40% longer period than that obtained when the compression is applied along the armchair direction. This anisotropic buckling stems from the puckered honeycomb crystal structure of black phosphorus, and a quantitative analysis of the ripple period allows us to determine the Young's modulus of few-layer black phosphorus along the armchair direction (EbP_AC = 35.1 +- 6.3 GPa) and the zig-zag direction (EbP_ZZ = 93.3 +- 21.8 GPa).
physics
We propose a cloud-based multimodal dialog platform for the remote assessment and monitoring of Amyotrophic Lateral Sclerosis (ALS) at scale. This paper presents our vision, technology setup, and an initial investigation of the efficacy of the various acoustic and visual speech metrics automatically extracted by the platform. 82 healthy controls and 54 people with ALS (pALS) were instructed to interact with the platform and completed a battery of speaking tasks designed to probe the acoustic, articulatory, phonatory, and respiratory aspects of their speech. We find that multiple acoustic (rate, duration, voicing) and visual (higher order statistics of the jaw and lip) speech metrics show statistically significant differences between controls, bulbar symptomatic and bulbar pre-symptomatic patients. We report on the sensitivity and specificity of these metrics using five-fold cross-validation. We further conducted a LASSO-LARS regression analysis to uncover the relative contributions of various acoustic and visual features in predicting the severity of patients' ALS (as measured by their self-reported ALSFRS-R scores). Our results provide encouraging evidence of the utility of automatically extracted audiovisual analytics for scalable remote patient assessment and monitoring in ALS.
electrical engineering and systems science
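The LASSO-LARS severity regression mentioned above, sketched on synthetic stand-ins for the extracted speech metrics; the feature dimensions, sparsity pattern, and regularization strength are invented for illustration.

```python
# Sparse regression of a severity score on speech metrics via LASSO-LARS.
import numpy as np
from sklearn.linear_model import LassoLars
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

n_subjects, n_features = 136, 12     # ~82 controls + 54 pALS in the study
X = rng.standard_normal((n_subjects, n_features))   # pseudo acoustic/visual metrics
true_coef = np.zeros(n_features)
true_coef[[0, 3, 7]] = [2.0, -1.5, 1.0]             # a few informative features
y = X @ true_coef + 0.5 * rng.standard_normal(n_subjects)  # pseudo ALSFRS-R

model = LassoLars(alpha=0.05)
model.fit(StandardScaler().fit_transform(X), y)

# Non-zero coefficients indicate which metrics carry predictive weight.
selected = np.flatnonzero(model.coef_)
print(selected, model.coef_[selected])
```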
The New Horizons spacecraft's flyby of Kuiper Belt Object (KBO) (486958) Arrokoth revealed a bilobed shape with highly flattened lobes both aligned to its equatorial plane, and a rotational axis almost aligned to the orbital plane (obliquity ~99 deg). Arrokoth belongs to the Cold Classical Kuiper Belt Object population that occupies dynamically undisturbed orbits around the Sun, and as such, is a primitive object that formed in situ. Therefore, whether its shape is primordial or evolutionary carries important implications for understanding the evolution of both KBOs and potentially their dynamically derived objects, Centaurs and Jupiter Family Comets (JFCs). Applying our mass loss driven shape evolution model (MONET), here we suggest that the current shape of Arrokoth could be of evolutionary origin due to volatile outgassing on a timescale of about 1 to 100 Myr, while its spin state would not be significantly affected. We further argue that such a process may be ubiquitous in the evolution of the shape of KBOs shortly after their formation. This shape-changing process could also be reactivated when KBOs dynamically evolve to become Centaurs and then JFCs and receive dramatically increased solar heating.
astrophysics
High-mobility wireless communications have received a lot of attention in the past few years. In this paper, we consider an angle-domain Doppler shift compensation scheme to reduce the channel time variation for the high-mobility uplink from a high-speed terminal (HST) to a base station (BS). We propose to minimize the Doppler spread by antenna weighting under the constraint of maintaining radiation efficiency. The sequential parametric convex approximation (SPCA) algorithm is exploited to solve the above non-convex problem. Moreover, in order to save RF chains, we further exploit the idea of antenna selection for high-mobility wireless communications. Simulations verify the effectiveness of the proposed schemes.
electrical engineering and systems science
This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration. Selective-Backprop uses the output of a training example's forward pass to decide whether to use that example to compute gradients and update parameters, or to skip immediately to the next example. By reducing the number of computationally-expensive backpropagation steps performed, Selective-Backprop accelerates training. Evaluation on CIFAR10, CIFAR100, and SVHN, across a variety of modern image models, shows that Selective-Backprop converges to target error rates up to 3.5x faster than with standard SGD and between 1.02--1.8x faster than a state-of-the-art importance sampling approach. Further acceleration of 26% can be achieved by using stale forward pass results for selection, thus also skipping forward passes of low priority examples.
computer science
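A simplified sketch of the selection mechanism described above: use per-example forward-pass losses to decide which examples enter the backward pass. A fixed percentile threshold stands in for the paper's probabilistic, loss-CDF-based selection, and the model and data are toys.

```python
# Selective backprop sketch: forward everything cheaply, backprop only the
# hardest examples (highest per-example loss).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss(reduction="none")    # per-example losses

for _ in range(100):                               # toy training loop
    x = torch.randn(128, 32)
    y = torch.randint(0, 10, (128,))

    with torch.no_grad():                          # cheap selection pass
        losses = loss_fn(model(x), y)
    keep = losses >= losses.quantile(0.5)          # keep the hardest half

    opt.zero_grad()
    loss_fn(model(x[keep]), y[keep]).mean().backward()  # selected examples only
    opt.step()
```

Note one simplification: the paper reuses the forward pass outputs for the update, whereas this sketch runs a second forward on the kept examples for clarity.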
A passive parity-time-symmetric medium provides a feasible scheme to investigate non-Hermitian systems experimentally. Here, we design a passive PT-symmetric acoustic grating with a period equal to that of the exact PT-symmetric medium. This treatment enhances the diffraction ability of a passive PT-symmetric grating with more compact modulation. Above all, it eliminates the first-order disturbance present in previous diffraction-grating designs. Additional cavities and small leaked holes on the top plate of a 2D waveguide are used to construct a parity-time-symmetric potential. The combination of additional cavities and leaked holes makes it possible to modulate the real and imaginary parts of the refractive index simultaneously. When the real and imaginary parts of the refractive index are balanced in modulation, asymmetric diffraction can be observed between a pair of oblique incident waves. This demonstration provides a feasible way to construct a passive parity-time-symmetric acoustic medium. It opens new possibilities for further investigation of acoustic wave control in non-Hermitian systems.
physics
Some deep neural networks are invariant to certain input transformations; for example, PointNet is permutation invariant to the input point cloud. In this paper, we demonstrate that this property can be powerful in defending against gradient-based attacks. Specifically, we apply random input transformations to which the networks we want to defend are invariant. Extensive experiments demonstrate that the proposed scheme outperforms SOTA defense methods, driving the attack success rate to nearly zero.
computer science
The concepts and tools from the theory of non-Hermitian quantum systems are used to investigate the dynamics of a quantum thermal machine. This approach allows us to characterize in full generality the analytical time-dependent dynamics of an autonomous quantum thermal machine, by solving a non-Hermitian Liouvillian for an arbitrary initial state. We show that the thermal machine features a number of exceptional points for experimentally realistic parameters. The signatures of a third-order exceptional point are demonstrated in both the short- and long-time regimes. As these points correspond to regimes of critical decay towards the steady state, in analogy with a critically damped oscillator, our work opens interesting possibilities for the precise control of the dynamics of quantum thermal machines.
quantum physics
The purpose of the present paper is to introduce new subclasses of the class of bi-univalent functions defined in the open unit disc. Furthermore, we obtain estimates on the coefficients $|a_{2}|$ and $|a_{3}|$ for functions in these subclasses. Some results related to this work will be briefly indicated.
mathematics
We show that a categorical generalization of the Poincar\'e symmetry which is based on the $n$-crossed modules becomes natural and simple when $n=3$ and that the corresponding 3-form and 4-form gauge fields have to be a Dirac spinor and a Lorentz scalar, respectively. Hence by using a Poincar\'e 4-group we naturally incorporate fermionic and scalar matter into the corresponding 4-connection. The internal symmetries can be included into the 4-group structure by using a 3-crossed module based on the $SO(3,1) \times K$ group, so that for $K=U(1) \times SU(2) \times SU(3)$ we can include the Standard Model into this categorification scheme.
high energy physics theory
This paper presents a systematic method to analyze the stability and robustness of uncertain Quantum Input-Output Networks (QIONs). A general form of uncertainty is introduced into quantum networks in the SLH formalism. The results of this paper are built upon the notion of uncertainty decomposition, wherein the quantum network is decomposed into nominal (certain) and uncertain sub-networks in cascade connection. Sufficient conditions for robust stability are derived using two different methods. In the first approach, a generalized small-gain theorem is presented, and in the second approach, robust stability is analyzed within the framework of Lyapunov theory. In the second method, the robust stability problem is reformulated as the feasibility of a Linear Matrix Inequality (LMI), which can be examined using well-established systematic methods in the literature.
quantum physics
We consider a gravitational perturbation of the Jackiw-Teitelboim (JT) gravity with an arbitrary dilaton potential and study the condition under which the quadratic action can be seen as a $T\bar{T}$-deformation of the matter action. As a special case, the flat-space JT gravity discussed by Dubovsky et al. [arXiv:1706.06604] is included. Another interesting example is a hyperbolic dilaton potential. This case is equivalent to a classical Liouville gravity with a negative cosmological constant and then a finite $T\bar{T}$-deformation of the matter action is realized as a gravitational perturbation on AdS$_2$.
high energy physics theory
In this note, we study the action of $O(d,d)$ transformations on the integrable structure of two-dimensional non-linear sigma models via the doubled formalism. We construct the Lax pairs associated with the $O(d,d)$-transformed model and find that they are in general non-local because they depend on the winding modes. We conclude that every $O(d,d;\mathbb{R})$ deformation preserves integrability. As an application we compute the Lax pairs for continuous families of deformations, such as $J\bar{J}$ marginal deformations and TsT transformations of the three-sphere with $H$-flux.
high energy physics theory
Magnetic charge propagation in bulk frustrated materials has yielded a paradigm-shift in science, allowing the symmetry between electricity and magnetism to be studied. Recent work is now suggesting magnetic charge dynamics upon the spin-ice surface may have important implications in determining the ordering and associated phase space. Here we detail a 3D artificial spin-ice, a nanostructured array of magnetic islands which captures the exact geometry of bulk systems, allowing field-driven dynamics of magnetic charge to be directly visualized upon the surface. Using magnetic microscopy, we observe vastly different magnetic charge dynamics along two principal directions. These striking differences are found to be due to the surface termination and associated coordination, which yields different energetics and interaction strengths for magnetic charges upon the surface.
condensed matter
In 2012, Diem introduced a new figure of merit for cryptographic sequences called expansion complexity. Recently, a series of papers has been published on the analysis of expansion complexity and on testing sequences in terms of this new measure of randomness. In this paper, we continue this analysis. First we study the expansion complexity in terms of the Gr\"obner basis of the underlying polynomial ideal. Next, we prove bounds on the expansion complexity for random sequences. Finally, we study the expansion complexity of sequences defined by differential equations, including the inversive generator.
mathematics
We numerically study the possibility of many-body localization transition in a disordered quantum dimer model on the honeycomb lattice. By using the peculiar constraints of this model and state-of-the-art exact diagonalization and time evolution methods, we probe both eigenstates and dynamical properties and conclude on the existence of a localization transition, on the available time and length scales (system sizes of up to N=108 sites). We critically discuss these results and their implications.
condensed matter
In this paper, we argue that in many basic algorithms for machine learning, including support vector machine (SVM) for classification, principal component analysis (PCA) for dimensionality reduction, and regression for dependency estimation, we need the inner products of the data samples, rather than the data samples themselves. Motivated by the above observation, we introduce the problem of private inner product retrieval for distributed machine learning, where we have a system including a database of some files, duplicated across some non-colluding servers. A user intends to retrieve a subset of specific size of the inner products of the data files with minimum communication load, without revealing any information about the identity of the requested subset. For achievability, we use the algorithms for multi-message private information retrieval. For the converse, we establish that as the length of the files becomes large, the set of all inner products converges to independent random variables with uniform distribution, and derive the rate of convergence. To prove that, we construct special dependencies among sequences of the sets of all inner products with different lengths, which form a time-homogeneous irreducible Markov chain, without affecting the marginal distribution. We show that this Markov chain has a uniform distribution as its unique stationary distribution, with a rate of convergence dominated by the second largest eigenvalue of the transition probability matrix. This allows us to develop a converse, which converges to a tight bound in some cases, as the size of the files becomes large. While this converse is based on the one in multi-message private information retrieval, due to the nature of retrieving inner products instead of the data itself, some changes are made to reach the desired result.
computer science
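The premise of the abstract above, that algorithms such as SVM need only inner products, can be made concrete with scikit-learn's precomputed-kernel interface; the data here are synthetic.

```python
# SVM training from the Gram matrix of inner products alone.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X_train = rng.standard_normal((60, 5))
y_train = (X_train[:, 0] > 0).astype(int)
X_test = rng.standard_normal((10, 5))

G_train = X_train @ X_train.T        # inner products among training samples
G_test = X_test @ X_train.T          # inner products of test vs training

clf = SVC(kernel="precomputed").fit(G_train, y_train)
print(clf.predict(G_test))           # raw samples never touch the classifier
```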
We consider a scenario where an SU(2) triplet scalar acts as the portal for a scalar dark matter particle. We identify regions of the parameter space, where such a triplet coexists with the usual Higgs doublet consistently with all theoretical as well as neutrino, accelerator and dark matter constraints, and the triplet-dominated neutral state has substantial invisible branching fraction. LHC signals are investigated for such regions, in the final state {\em same-sign dilepton + $\ge$ 2 jets + $\not E_T$}. While straightforward detectability at the high-luminosity run is predicted for some benchmark points in a cut-based analysis, there are other benchmarks where one has to resort to gradient boosting/neural network techniques in order to achieve appreciable signal significance.
high energy physics phenomenology
We study the decoherence process induced by a spin chain environment on a central spin consisting of R spins, and we apply it to the dynamics of quantum correlations (QCs) of three interacting qubits. In order to see the impact of the initially prepared state of the spin chain environment on the decoherence process, we assume the spin chain environment is prepared in one of two ways, namely, either the ground state or the vacuum state in the momentum space. We develop a general heuristic analysis for the environment prepared in these states, in order to understand the decoherence process as a function of the physical parameters. We show that the decoherence process is mainly determined by the choice of the initially prepared state, the number of spins of the chain, the coupling strength, the anisotropy parameter and the distance from the quantum critical point. In fact, in the strong coupling regime, the decoherence process does not appear for the environment prepared in the vacuum state, and it behaves in an oscillatory manner in the case of evolution from the ground state. On the other hand, in the weak coupling regime and far from the quantum critical point, decoherence induced by the ground state is weaker than that of the vacuum state. Finally, we show that QCs are completely shielded from decoherence in the case of evolution from the W state and obey the same dynamics as the decoherence factors for the GHZ state.
quantum physics
We study generating functions of ordinary and plane partitions coloured by the action of a finite subgroup of the corresponding special linear group. After reviewing known results for the case of ordinary partitions, we formulate a conjecture concerning a factorisation property of the generating function of coloured plane partitions that can be thought of as an orbifold analogue of a conjecture of Maulik et al., now a theorem, in three-dimensional Donaldson-Thomas theory. We study natural quantisations of the generating functions arising from geometry, discuss a quantised version of our conjecture, and prove a positivity result for the quantised coloured plane partition function under a geometric assumption.
mathematics
Carbonaceous nano-grains are present at the surface of protoplanetary disks around Herbig Ae/Be stars, where most of the central star UV energy is dissipated. Efficiently coupled to the gas, nano-grains are able to trace the disk outer flaring parts, and possibly the gaps from which the larger grains are missing. We examine the spatial distribution and evolution of the nano-dust emission in the (pre-)transitional disk HD100546 that shows annular gaps, rings, and spirals, and reveals rich carbon nano-dust spectroscopic signatures (aromatic, aliphatic) in a wide spatial range (~20-200au). We analyse adaptive optics spectroscopic observations from 3 to 4um and imaging and spectroscopic observations from 8 to 12um. We compare the data to model predictions using the THEMIS model with the radiative transfer code POLARIS, calculating the heating of micro- and nanometric dust grains for a given disk structure. The aromatic features at 3.3, 8.6 and 11.3um, as well as the aliphatic ones from 3.4 to 3.5um, are spatially extended, with band morphologies depending on local physical conditions. The aliphatic-to-aromatic band ratio 3.4/3.3 increases with the distance from the star, suggesting UV processing. In the 8-12um observed spectra, features characteristic of aromatic particles and crystalline silicates are detected, with their relative contribution changing with distance to the star. The model predicts that the features and adjacent continuum are due to different combinations of grain sub-populations, with a dependence on the UV field intensity. Shorter wavelength features are dominated by the smallest grains (< 0.7nm) throughout the disk, while at longer wavelengths the emission close to the star is dominated by a mix of several grain populations, and far away from the star by the largest nano-grain population.
astrophysics
We study a scalable alternative to robust gradient descent (RGD) techniques that can be used when the gradients can be heavy-tailed, though this will be unknown to the learner. The core technique is simple: instead of trying to robustly aggregate gradients at each step, which is costly and leads to sub-optimal dimension dependence in risk bounds, we choose a candidate which does not diverge too far from the majority of cheap stochastic sub-processes run for a single pass over partitioned data. In addition to formal guarantees, we also provide empirical analysis of robustness to perturbations to experimental conditions, under both sub-Gaussian and heavy-tailed data. The result is a procedure that is simple to implement, trivial to parallelize, which keeps the formal strength of RGD methods but scales much better to large learning problems.
statistics
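A sketch of the core selection technique described above, under the assumption that a plain Euclidean distance between candidates is an adequate divergence measure: run cheap single-pass SGD on disjoint partitions, then return the candidate with the smallest median distance to the others. Model, loss, and partition count are illustrative choices.

```python
# Robust candidate selection among parallel SGD sub-processes under
# heavy-tailed label noise (linear regression toy problem).
import numpy as np

rng = np.random.default_rng(5)
d, k, n_per = 10, 8, 200           # dimension, partitions, samples/partition
w_true = rng.standard_normal(d)

def sgd_on_partition():
    X = rng.standard_normal((n_per, d))
    y = X @ w_true + rng.standard_t(df=2, size=n_per)  # heavy-tailed noise
    w = np.zeros(d)
    for i in range(n_per):                    # single pass, constant step
        w -= 0.01 * (X[i] @ w - y[i]) * X[i]  # squared-loss gradient step
    return w

candidates = np.array([sgd_on_partition() for _ in range(k)])
dists = np.linalg.norm(candidates[:, None] - candidates[None, :], axis=2)
scores = np.median(dists, axis=1)             # distance to the majority
w_hat = candidates[np.argmin(scores)]         # "closest to the crowd" wins
print(np.linalg.norm(w_hat - w_true))
```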
The formation of $O^{-}$ and $O_{2}^{-}$ ions via dissociative electron attachment to a pulsed supersonic jet of $O_{2}$ molecules containing weakly bound small van der Waals clusters seeded in a beam of argon is reported. The energy dependence of the $O^{-}$ and $O_{2}^{-}$ yield exhibits three peaks near 7, 11 and 16 eV incident electron energies. The 7 eV peak arises from the $^{2}\Pi_{u}$ state of $O_{2}^{-}$ whereas, the 11 and 16 eV peaks are ascribed to two distinct resonance states: $^{2}\Sigma_{g}^{+}$ and $^{2}\Sigma_{u}^{+}$ states of $O_{2}^{-}$, respectively, via a violation of the $\Sigma^{+} \rightleftharpoons \Sigma^{-}$ selection rule. The dependence of the cross-section of these two new peaks at $\sim$11 and $\sim$16 eV on the proportion of the carrier gas is also investigated and an optimum proportion has been observed experimentally which gives the lowest temperature of 14.86 K and highest Mach number of 72.31 for the pulsed supersonic jet.
physics
Quality and intelligibility of speech signals are degraded under additive background noise which is a critical problem for hearing aid and cochlear implant users. Motivated to address this problem, we propose a novel speech enhancement approach using a deep spectrum image translation network. To this end, we suggest a new architecture, called VGG19-UNet, where a deep fully convolutional network known as VGG19 is embedded at the encoder part of an image-to-image translation network, i.e. U-Net. Moreover, we propose a perceptually-modified version of the spectrum image that is represented in Mel frequency and power-law non-linearity amplitude domains, representing good approximations of human auditory perception model. By conducting experiments on a real challenge in speech enhancement, i.e. unseen noise environments, we show that the proposed approach outperforms other enhancement methods in terms of both quality and intelligibility measures, represented by PESQ and ESTOI, respectively.
electrical engineering and systems science
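A sketch of the perceptually-modified spectrum image described above: a Mel-frequency spectrogram with power-law amplitude compression. The exponent 0.3 is a common power-law choice assumed here, not a value taken from the paper.

```python
# Mel-frequency spectrogram with power-law compression, normalised to [0, 1]
# so it can be fed to an image-to-image network such as a U-Net.
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))          # any mono signal works
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
S_img = S ** 0.3                                     # power-law non-linearity

S_img = (S_img - S_img.min()) / (S_img.max() - S_img.min())
print(S_img.shape)                                   # (n_mels, n_frames)
```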
Sophisticated generative adversarial network (GAN) models are now able to synthesize highly realistic human faces that are difficult to discern from real ones visually. In this work, we show that GAN synthesized faces can be exposed with the inconsistent corneal specular highlights between two eyes. The inconsistency is caused by the lack of physical/physiological constraints in the GAN models. We show that such artifacts exist widely in high-quality GAN synthesized faces and further describe an automatic method to extract and compare corneal specular highlights from two eyes. Qualitative and quantitative evaluations of our method suggest its simplicity and effectiveness in distinguishing GAN synthesized faces.
computer science
We study the geometric response of three-dimensional non-Hermitian crystalline systems with non-trivial point gap topology. For systems with four-fold rotation symmetry, we show that in the presence of disclination lines with a total Frank angle which is an integer multiple of $2\pi$, there can be non-trivial, one-dimensional point gap topology along the direction of disclination lines. This results in disclination-induced non-Hermitian skin effects. We extend the recently proposed non-Hermitian field theory approach to describe this phenomenon as a Euclidean Wen-Zee term. Furthermore, by doubling a non-Hermitian Hamiltonian to a Hermitian 3D chiral topological insulator, we show that the disclination-induced skin modes are zero modes of the surface Dirac fermion(s) in the presence of a pseudo-magnetic flux induced by disclinations.
condensed matter
Local and fast structural probes using synchrotron radiation have shown nanoscale striped puddles and nanoscale phase separation in doped perovskites. It is known that the striped phases in doped perovskites are due to competing interactions involving charge, spin and lattice degrees of freedom, but while many theoretical models for spin and charge stripes have been proposed, we are missing theoretical models describing the complex lattice striped modulations in doped perovskites. In this work we show that two different stripes can be represented as a superposition of a pair of stripes, U(theta_n) and D(theta_n), characterized by perovskite tilts, where one of the pair is rotated relative to the other by an angle Delta(theta_n) = pi/2. The spatial distribution of the U and D stripes reduces to all possible maps in the well-known mathematical four-color theorem. Both the periodic striped puddles and random structures can be represented by using planar graphs with a chromatic number. To observe the colors in mapping experiments, it is necessary to recover variously oriented tilting effects from the replica. It is established that there is an interplay between the annihilation/creation of new stripes and the ordering/disordering of tilts in relation to the theta_n angle in the CuO2 plane, where the characteristic shape of the stripes coincides with the tilting-ordered regions. By their origin, the boundaries between the stripes should be atomically sharp.
condensed matter
This paper makes clear the requirements on subsystems of a linear time-invariant (LTI) networked dynamic system (NDS) under which subsystem interconnections can be estimated from external output measurements. In this NDS, subsystems may have distinctive dynamics, and subsystem interconnections are arbitrary. It is assumed that the system matrices of each subsystem depend on its (pseudo) first-principle parameters (FPPs) through a linear fractional transformation (LFT). It has been proven that if, in each subsystem, the transfer function matrix (TFM) from its internal inputs to its external outputs is of full normal column rank (FNCR), while the TFM from its external inputs to its internal outputs is of full normal row rank (FNRR), then the structure of the NDS is identifiable. Moreover, under some particular situations, such as when there is no direct information transmission from an internal input to an internal output in each subsystem, a necessary and sufficient condition is established for NDS structure identifiability. A matrix-valued polynomial (MVP) rank based equivalent condition is further derived, which depends affinely on subsystem (pseudo) FPPs and can be verified independently for each subsystem. From this condition, some necessary conditions are obtained for both subsystem dynamics and its (pseudo) FPPs, using the Kronecker canonical form (KCF) of a matrix pencil.
electrical engineering and systems science
The structure of the molten salt (LiF)$_{0.465}$(NaF)$_{0.115}$(KF)$_{0.42}$ (FLiNaK), a potential coolant for molten salt nuclear reactors, has been studied by ab initio molecular dynamics simulations and neutron total scattering experiments. We find that the salt retains well-defined short-range structural correlations out to approximately 9 Angstroms at typical reactor operating temperatures. The experimentally determined pair distribution function can be described with quantitative accuracy by the molecular dynamics simulations. These results indicate that the essential ionic interactions are properly captured by the simulations, providing a launching point for future studies of FLiNaK and other molten salts for nuclear reactor applications.
condensed matter
We present a combined experimental and theoretical study of the mutual neutralization process in collisions of lithium ions (Li+) with deuterium anions (D-) at collision energies below 1 eV. We employ a merged-beam apparatus to determine total and state-to-state mutual neutralization cross sections. We perform nuclear dynamics calculations using the multi-channel Landau-Zener model based on accurate ab initio molecular data. We obtain an excellent agreement between the experimental and theoretical results over the energy range covered in this work. We show that the basis sets used in the ab initio calculations have a limited influence on the total cross section, but strongly impact the results obtained for the partial cross sections or the reaction branching ratios. This demonstrates the important role of high-precision measurements to validate the theoretical approaches used to study gas-phase reactive processes. Finally, we compute mutual neutralization rate coefficients for Li+ + H- and Li+ + D-, and discuss their significance for astrochemistry models.
physics
Vilenkin groups, introduced by F. Ya. Vilenkin, form a class of locally compact abelian groups. The present paper gives a characterization of Parseval frame multiwavelets associated with multiresolution analysis (MRA) in the Vilenkin group. Further, we introduce the pseudo-scaling function along with a class of generalized low-pass filters and study their properties in the Vilenkin group.
mathematics
In network data analysis, it is becoming common to work with a collection of graphs that exhibit \emph{heterogeneity}. For example, neuroimaging data from patient cohorts are increasingly available. A critical analytical task is to identify communities, and graph Laplacian-based methods are routinely used. However, these methods are currently limited to a single network and do not provide measures of uncertainty on the community assignment. In this work, we propose a probabilistic network model called the ``Spiked Laplacian Graph'' that considers each network as an invertible transform of the Laplacian, with its eigenvalues modeled by a modified spiked structure. This effectively reduces the number of parameters in the eigenvectors, and their sign patterns allow efficient estimation of the community structure. Further, the posterior distribution of the eigenvectors provides uncertainty quantification for the community estimates. Subsequently, we introduce a Bayesian non-parametric approach to address the issue of heterogeneity in a collection of graphs. Theoretical results are established on the posterior consistency of the procedure and provide insights on the trade-off between model resolution and accuracy. We illustrate the performance of the methodology on synthetic data sets, as well as a neuroscience study related to brain activity in working memory. Keywords: Hierarchical Community Detection, Isoperimetric Constant, Mixed-Effect Eigendecomposition, Normalized Graph Cut, Stiefel Manifold
statistics
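The Laplacian-based baseline that the abstract builds on can be shown in a few lines: sign patterns of the leading non-trivial eigenvector of the normalized graph Laplacian give the community assignment. The two-block toy graph is illustrative; the paper's contribution adds the spiked Bayesian model and uncertainty quantification on top of this.

```python
# Spectral community detection on a two-block stochastic block model.
import numpy as np

rng = np.random.default_rng(6)
n = 40                                     # 20 nodes per block
P = np.full((n, n), 0.05)                  # sparse between-block edges
P[:20, :20] = P[20:, 20:] = 0.4            # dense within-block edges
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # symmetric, no self-loops

deg = np.maximum(A.sum(axis=1), 1e-12)
L = np.eye(n) - A / np.sqrt(np.outer(deg, deg))   # normalized Laplacian
vals, vecs = np.linalg.eigh(L)

fiedler = vecs[:, 1]                       # second-smallest eigenvalue
labels = (fiedler > 0).astype(int)         # sign pattern -> communities
print(labels)
```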
From the perspective of many body physics, the transmon qubit architectures currently developed for quantum computing are systems of coupled nonlinear quantum resonators. A significant amount of intentional frequency detuning (disorder) is required to protect individual qubit states against the destabilizing effects of nonlinear resonator coupling. Here we investigate the stability of this variant of a many-body localized (MBL) phase for system parameters relevant to current quantum processors of two different types, those using untunable qubits (IBM type) and those using tunable qubits (Delft/Google type). Applying three independent diagnostics of localization theory - a Kullback-Leibler analysis of spectral statistics, statistics of many-body wave functions (inverse participation ratios), and a Walsh transform of the many-body spectrum - we find that these computing platforms are dangerously close to a phase of uncontrollable chaotic fluctuations.
quantum physics
Shallow water environments create a challenging channel for communications. In this paper, we focus on the challenges posed by the frequency-selective signal distortion called the Doppler effect. We explore the design and performance of machine learning (ML) based demodulation methods --- (1) Deep Belief Network-feed forward Neural Network (DBN-NN) and (2) Deep Belief Network-Convolutional Neural Network (DBN-CNN) --- in the physical layer of Shallow Water Acoustic Communication (SWAC). The proposed method comprises an ML-based feature extraction method and a classification technique. First, the feature extraction converts the received signals to feature images. Next, the classification model correlates the images to a corresponding binary representative. An analysis of the proposed ML-based demodulation shows that, despite the presence of instantaneous frequencies, the performance of the algorithm is invariant, with a small 2 dB error margin in terms of bit error rate (BER).
electrical engineering and systems science
We show that the $4d$ ${\cal N}=1$ $SU(3)$ $N_f=6$ SQCD is the model obtained when compactifying the rank one E-string theory on a three punctured sphere (a trinion) with a particular value of flux. The $SU(6)\times SU(6)\times U(1)$ global symmetry of the theory, when decomposed into the $SU(2)^3\times U(1)^3\times SU(6)$ subgroup, corresponds to the three $SU(2)$ symmetries associated to the three punctures and the $U(1)^3 \times SU(6)$ subgroup of the $E_8$ symmetry of the E-string theory. All the puncture symmetries are manifest in the UV and thus we can construct ordinary Lagrangians flowing in the IR to any compactification of the E-string theory. We generalize this claim and argue that the ${\cal N}=1$ $SU(N+2)$ SQCD in the middle of the conformal window, $N_f=2N+4$, is the theory obtained by compactifying the $6d$ minimal $(D_{N+3},D_{N+3})$ conformal matter SCFT on a sphere with two maximal $SU(N+1)$ punctures, one minimal $SU(2)$ puncture, and with a particular value of flux. The $SU(2N+4)\times SU(2N+4)\times U(1)$ symmetry of the UV Lagrangian decomposes into $SU(N+1)^2\times SU(2)$ puncture symmetries and the $U(1)^3\times SU(2N+4)$ subgroup of the $SO(12+4N)$ symmetry group of the $6d$ SCFT. The models constructed from the trinions exhibit a variety of interesting strong coupling effects. For example, one of the dualities arising geometrically from different pair-of-pants decompositions of a four punctured sphere is an $SU(N+2)$ generalization of the Intriligator-Pouliot duality of $SU(2)$ SQCD with $N_f=4$, which is a degenerate, $N=0$, instance of our discussion.
high energy physics theory
We study the processes $K\bar{K} \to \phi$, $\pi D \to D^\ast$, $\pi \bar{D} \to \bar{D}^\ast$, and the production of $\psi (3770)$, $\psi (4040)$, $\psi (4160)$, and $\psi (4415)$ mesons in collisions of charmed mesons or charmed strange mesons. The process of 2-to-1 meson-meson scattering involves a quark and an antiquark from the two initial mesons annihilating into a gluon and subsequently the gluon being absorbed by the spectator quark or antiquark. Transition amplitudes for the scattering process derive from the transition potential in conjunction with mesonic quark-antiquark wave functions and the relative-motion wave function of the two initial mesons. We derive these transition amplitudes in the partial wave expansion of the relative-motion wave function of the two initial mesons so that parity and total-angular-momentum conservation are maintained. We calculate flavor and spin matrix elements in accordance with the transition potential and unpolarized cross sections for the reactions using the transition amplitudes. Cross sections for the production of $\psi (4040)$, $\psi (4160)$, and $\psi (4415)$ relate to nodes in their radial wave functions. We suggest the production of $\psi (4040)$, $\psi (4160)$, and $\psi (4415)$ as probes of hadronic matter that results from the quark-gluon plasma created in ultrarelativistic heavy-ion collisions.
high energy physics phenomenology
The Betz limit expresses the maximum proportion of the kinetic energy flux incident on an energy conversion device that can be extracted from an unbounded flow. The derivation of the Betz limit requires an assumption of steady flow through a notional actuator disk that is stationary in the streamwise direction. The present derivation relaxes the assumptions of steady flow and streamwise actuator disk stationarity, which expands the physically realizable parameter space of flow conditions upstream and downstream of the actuator disk. A key consequence of this generalization is the existence of unsteady motions that can, in principle, lead to energy conversion efficiencies that exceed the Betz limit not only transiently, but also in time-averaged performance. Potential physical implementations of these unsteady motions are speculated upon.
physics
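For orientation, the steady actuator-disk baseline that the abstract relaxes fits in two lines; this is the textbook derivation, not the paper's unsteady generalization.

```latex
% With axial induction factor a (the fractional velocity deficit at the disk),
% momentum theory gives the power coefficient
\[
  C_P \;=\; \frac{P}{\tfrac{1}{2}\rho A U_\infty^{3}} \;=\; 4a(1-a)^{2},
\]
% and dC_P/da = 4(1-a)(1-3a) = 0 selects a = 1/3, so
\[
  C_P^{\max} \;=\; \frac{16}{27} \;\approx\; 0.593 \qquad \text{(the Betz limit)}.
\]
```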
We examine the remnant phase of radio galaxies using three-dimensional hydrodynamical simulations of relativistic jets propagating through cluster environments. By switching the jets off once the lobes have reached a certain length we can study how the energy distribution between the lobes and shocked intra-cluster medium compares to that of an active source, as well as calculate synchrotron emission properties of the remnant sources. We see that as a result of disturbed cluster gas beginning to settle back into the initial cluster potential, streams of dense gas are pushed along the jet axis behind the remnant lobes, causing them to rise out of the cluster faster than they would due to buoyancy. This leads to increased adiabatic losses and a rapid dimming. The rapid decay of total flux density and surface brightness may explain the small number of remnant sources found in samples with a high flux density limit and may cause analytic models to overestimate the remnant fraction expected in sensitive surveys such as those now being carried out with LOFAR.
astrophysics
The adverse effects of a sea water environment on the fatigue life of woven carbon fiber/vinyl ester composites are established at room temperature in view of the long-term survivability of offshore structures. It is observed that the influence of sea water saturation on the fatigue life is more pronounced when the maximum cyclic displacement approaches the maximum quasi-static deflection; that is, the reduction in the number of cycles to failure is comparable between dry and sea water saturated samples at lower strain ranges (~37% at 0.46% strain), but drastically different at higher strain ranges (~90% at 0.62% strain). Key damage modes that manifest during fatigue loading are also identified, and a non-linear model is established for predicting the low cycle fatigue life of these composites in dry and sea water saturated conditions.
physics
The isomerisation of azo dyes can induce conformational changes that have potential applications in medicine and environmental protection. We developed an agar diffusion assay to test the capture and release of biologically active molecules from an azo electro-optic polymer, Poly(Disperse Red 1 methacrylate) (DR1/PMMA). The assay monitors the growth of bacteria placed in soft agar under a glass coverslip. Antibiotics can then be applied to the coverslip, resulting in the clearance of the area under the coverslip due to growth inhibition. This assay demonstrates that DR1/PMMA is able to capture either tetracycline or ampicillin, and the relative amount of DR1/PMMA required for capture was determined. Finally, the active antibiotics can be released from DR1/PMMA by exposure to green light but not by exposure to white light or heat.
physics
The placement of a magnetic monopole into an electrically-neutral chiral plasma with a non-zero axial density results in an electric polarization of the matter. The electric current produced by the chiral magnetic effect is balanced by charge diffusion and Ohmic dissipation, which generates a non-trivial charge distribution. In turn, the latter induces a separation of chiralities along the magnetic field of the monopole due to the chiral separation effect. We find the stationary states of such a system, with vanishing total electric current and a stationary axial current balanced by the chiral anomaly. In this solution, the monopole becomes "dressed" with an electric charge that is proportional to the averaged chiral density of the matter, forming a chiral dyon. The interplay between the chiral effects on the one hand and the presence of the monopole's magnetic field on the other may affect the evolution of the monopole density in the early Universe, contribute to the process of baryogenesis, and can also be instrumental for the detection of relic monopoles using chiral materials.
high energy physics phenomenology
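The quantitative ingredient behind the balance described above is the standard chiral magnetic effect current; in natural units, for a single Dirac fermion species of charge $e$,

```latex
\[
  \mathbf{j} \;=\; \sigma \mathbf{E} \;+\; \frac{e^{2}}{2\pi^{2}}\,\mu_{5}\,\mathbf{B},
\]
% so a stationary state with vanishing total electric current requires the Ohmic
% and chiral magnetic contributions to cancel, sigma E = -(e^2/2pi^2) mu_5 B,
% which fixes the induced charge distribution around the monopole.
```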
We construct a class of backgrounds with a warp factor and anti-de Sitter asymptotics, which are dual to boundary systems that have a ground state with a short-range two-point correlation function. The solutions of probe scalar fields on these backgrounds are obtained by means of confluent hypergeometric functions. The explicit analytical expressions of a class of short-range correlation functions on the boundary and the correlation lengths $\xi$ are derived from the gravity computation. The two-point function calculated from the gravity side is explicitly shown to decay exponentially with respect to separation in the infrared. Such a feature inevitably appears in confining gauge theories and certain strongly correlated condensed matter systems.
high energy physics theory
We develop a general method for incentive-based programming of hybrid quantum-classical computing systems using reinforcement learning, and apply this to solve combinatorial optimization problems on both simulated and real gate-based quantum computers. Relative to a set of randomly generated problem instances, agents trained through reinforcement learning techniques are capable of producing short quantum programs which generate high quality solutions on both types of quantum resources. We observe generalization to problems outside of the training set, as well as generalization from the simulated quantum resource to the physical quantum resource.
quantum physics
For over 40 years, physicists have considered possible uses for neutrino detectors in nuclear nonproliferation, arms control, and fissile materials security. Neutrinos are an attractive fission signature because they readily pass through matter. The same property makes neutrinos challenging to detect in systems that would be practical for nuclear security applications. This colloquium presents a broad overview of several potential neutrino applications, including the near-field monitoring of known reactors, far-field monitoring of known or discovery of undeclared reactors, detection of reactor waste streams, and detection of nuclear explosions. We conclude that recent detector advances have made near-field monitoring feasible. Farther-field reactor detection and waste stream monitoring are possible in some cases with further research and development. Very long-range reactor monitoring and nuclear explosion detection do not appear feasible for the foreseeable future due to considerable physical and/or practical constraints.
physics
In this paper, we use the instantaneous Bethe-Salpeter method to calculate the semi-leptonic and non-leptonic production of the orbitally excited scalar $ D_0^* $ in $ B $ meson decays. When the final state is the $ 1P $ state $ D_0^*(2400) $, our theoretical decay rate is consistent with experimental data. For the $ D_J^*(3000) $ final state, which was recently observed by the LHCb collaboration and is here treated as the orbitally excited scalar $D^{*}_{0}(2P)$, the rate is of the order of $ 10^{-4} \sim 10^{-6}$. We find that the special node structure of the $D^{*}_{0}(2P)$ wave function possibly results in the suppression of its branching ratio and its anomalously large uncertainty. The production rate of the $3P$ state is of the order of $ 10^{-5}$.
high energy physics phenomenology
Aaronson and Ambainis (SICOMP `18) showed that any partial function on $N$ bits that can be computed with an advantage $\delta$ over a random guess by making $q$ quantum queries, can also be computed classically with an advantage $\delta/2$ by a randomized decision tree making ${O}_q(N^{1-\frac{1}{2q}}\delta^{-2})$ queries. Moreover, they conjectured the $k$-Forrelation problem -- a partial function that can be computed with $q = \lceil k/2 \rceil$ quantum queries -- to be a suitable candidate for exhibiting such an extremal separation. We prove their conjecture by showing a tight lower bound of $\widetilde{\Omega}(N^{1-1/k})$ for the randomized query complexity of $k$-Forrelation, where the advantage $\delta = 2^{-O(k)}$. By standard amplification arguments, this gives an explicit partial function that exhibits an $O_\epsilon(1)$ vs $\Omega(N^{1-\epsilon})$ separation between bounded-error quantum and randomized query complexities, where $\epsilon>0$ can be made arbitrarily small. Our proof also gives the same bound for the closely related but non-explicit $k$-Rorrelation function introduced by Tal (FOCS `20). Our techniques rely on classical Gaussian tools, in particular, Gaussian interpolation and Gaussian integration by parts, and in fact, give a more general statement. We show that to prove lower bounds for $k$-Forrelation against a family of functions, it suffices to bound the $\ell_1$-weight of the Fourier coefficients between levels $k$ and $(k-1)k$. We also prove new interpolation and integration by parts identities that might be of independent interest in the context of rounding high-dimensional Gaussian vectors.
quantum physics
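For concreteness, the (2-function) Forrelation quantity behind the problem is written out below; the $k$-function generalization chains $k$ Boolean functions in the same way. The decision thresholds are constants whose exact values we do not reproduce here.

```latex
\[
  \Phi_{f,g} \;=\; \frac{1}{2^{3n/2}} \sum_{x,\,y \,\in\, \{0,1\}^{n}} f(x)\,(-1)^{x\cdot y}\, g(y),
  \qquad N = 2^{n},
\]
% and the decision problem is to distinguish \Phi_{f,g} at least a constant from
% |\Phi_{f,g}| at most a smaller constant. k-Forrelation chains k functions with
% phases (-1)^{x_i \cdot x_{i+1}} and normalization 2^{-(k+1)n/2}.
```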
Methods for modeling large driven dissipative quantum systems are becoming increasingly urgent due to recent experimental progress in a number of photonic platforms. We demonstrate the positive-P method to be ideal for this purpose across a wide range of parameters, focusing on the archetypal driven dissipative Bose-Hubbard model. Notably, these parameters include intermediate regimes where interactions and dissipation are comparable, and especially cases with low occupations for which common semiclassical approximations can break down. The presence of dissipation can alleviate instabilities in the method that are known to occur for closed systems, allowing the simulation of dynamics up to and including the steady state. Throughout the parameter space of the model, we determine the magnitude of dissipation that is sufficient to make the method useful and stable, finding its region of applicability to be complementary to that of truncated Wigner. We then demonstrate its use in a number of examples with nontrivial quantum correlations, including the urgent open problem of large and highly non-uniform systems with as many as tens of thousands of sites.
quantum physics
In two-dimensional heterostructures, crystalline atomic layers with differing lattice parameters can stack directly one on another. The resultant close proximity of atomic lattices with differing periodicity can lead to new phenomena. For umklapp processes, this opens the possibility for interlayer umklapp scattering, where interactions are mediated by the transfer of momenta to or from the lattice in the neighbouring layer. Using angle-resolved photoemission spectroscopy to study a graphene on InSe heterostructure, we present evidence that interlayer umklapp processes can cause hybridization between bands from neighbouring layers in regions of the Brillouin zone where bands from only one layer are expected, despite no evidence for moiré-induced replica bands. This phenomenon manifests itself as 'ghost' anti-crossings in the InSe electronic dispersion. Applied to a range of suitable pairs of two-dimensional materials (2DMs), this phenomenon of interlayer umklapp hybridization can be used to create strong mixing of their electronic states, giving a new tool for twist-controlled band structure engineering.
condensed matter
While molecular gas mass is usually derived from $^{12}$CO($J$=1-0), the most fundamental line for exploring molecular gas, it is often derived instead from $^{12}$CO($J$=2-1) assuming a constant $^{12}$CO($J$=2-1)/$^{12}$CO($J$=1-0) line ratio ($R_{2/1}$). We present variations of $R_{2/1}$ and the effects of the assumption that $R_{2/1}$ is constant in 24 nearby galaxies, using $^{12}$CO data obtained with the Nobeyama 45-m radio telescope and the IRAM 30-m telescope. The median of $R_{2/1}$ for all galaxies is 0.61, and the mean of $R_{2/1}$ weighted by $^{12}$CO($J$=1-0) integrated intensity is 0.66 with a standard deviation of 0.19. The radial variation of $R_{2/1}$ shows that it is high (~0.8) in the inner ~1 kpc, while its median in disks is nearly constant at 0.60 when all galaxies are compiled. When a constant $R_{2/1}$ of 0.7 is adopted, we find that the total molecular gas mass derived from $^{12}$CO($J$=2-1) is underestimated/overestimated by ~20%, and at most by 35%. The scatter of the molecular gas surface density within each galaxy becomes larger by ~30%, and at most by 120%. Indices of the spatially resolved Kennicutt-Schmidt relation by $^{12}$CO($J$=2-1) are underestimated by 10-20%, and at most by 39%, in 17 out of 24 galaxies. $R_{2/1}$ has good positive correlations with star-formation rate and infrared color, and a negative correlation with molecular gas depletion time. There is a clear tendency of increasing $R_{2/1}$ with increasing kinetic temperature ($T_{\rm kin}$). Further, we found that not only $T_{\rm kin}$ but also the pressure of the molecular gas is important for understanding variations of $R_{2/1}$. Special consideration should be given when discussing molecular gas mass and molecular gas properties inferred from $^{12}$CO($J$=2-1) instead of $^{12}$CO($J$=1-0).
astrophysics
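A back-of-the-envelope sketch of how the assumed $R_{2/1}$ propagates into a mass estimate; the conversion factor is the standard Galactic $\alpha_{\rm CO}$ and the CO(2-1) luminosity is hypothetical, so this only illustrates the mechanics of the bias, not the paper's pipeline.

```python
ALPHA_CO = 4.35   # Msun / (K km/s pc^2), standard Galactic CO(1-0) conversion factor

def molecular_gas_mass(l_co21, r21):
    """Mass from a CO(2-1) line luminosity, converted to CO(1-0) via R_21."""
    return ALPHA_CO * l_co21 / r21

l_co21 = 1.0e8                                     # hypothetical [K km/s pc^2]
m_assumed = molecular_gas_mass(l_co21, r21=0.70)   # widely assumed constant ratio
m_median  = molecular_gas_mass(l_co21, r21=0.61)   # median reported in the abstract

print(f"bias factor: {m_assumed / m_median:.2f}")  # 0.61/0.70 ~ 0.87, i.e. ~13% low
```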
Millimeter wave wireless systems rely heavily on directional communication in narrow steerable beams. Tools to measure the spatial and temporal nature of the channel are necessary to evaluate beamforming and related algorithms. This paper presents a novel 60 GHz phased-array-based directional channel sounder and data analysis procedure that can accurately extract paths and their transmit and receive directions. The gains along each path can also be measured for analyzing blocking scenarios. The sounder is validated in an indoor office environment.
electrical engineering and systems science
We investigate the group gradings on the algebra of upper triangular matrices over an arbitrary field, viewed as a Lie algebra. These results were obtained a few years earlier by the same authors. We provide streamlined proofs and present a complete classification of the isomorphism classes of the gradings. We also provide a classification of the practical isomorphism classes of the gradings, a better alternative notion for considering two gradings as essentially the same object. Finally, we investigate in detail the case where the characteristic of the base field is $2$, a topic that was neglected in previous works.
mathematics
A time-changed mixed fractional Brownian motion is an iterated process constructed as the superposition of a mixed fractional Brownian motion and another process. In this paper we consider mixed fractional Brownian motion with parameters $a$, $b$ and $H\in(0,1)$, time-changed by two processes, the gamma and tempered stable subordinators. We present their main properties, paying particular attention to long-range dependence. We deduce that the mixed fractional Brownian motion time-changed by the gamma and tempered stable subordinators has the long-range dependence property for all $H\in(0,1)$.
mathematics
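Under the usual definition $M_t = a\,B_t + b\,B^H_t$ with independent Brownian and fractional Brownian components, the covariance structure from which long-range dependence statements start is

```latex
\[
  \operatorname{Cov}(M_t, M_s)
  \;=\; a^{2}\min(t,s) \;+\; \frac{b^{2}}{2}\!\left(t^{2H} + s^{2H} - |t-s|^{2H}\right),
\]
% the fBm part giving increment autocovariance ~ b^2 H(2H-1) n^{2H-2} at large
% lag n (for H != 1/2), the usual route to long-range dependence results.
```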
We study rotor walk, a deterministic counterpart of the simple random walk, on infinite transient graphs. We show that the final rotor configuration of the rotor walk follows the law of the wired uniform spanning forest oriented toward infinity (OWUSF) measure when the initial rotor configuration is sampled from OWUSF. This result holds for all graphs for which each tree in the wired spanning forest has one single end almost surely. This answers a question posed in a previous work of the author (Chan 2018).
mathematics
The contributions of this paper are twofold. First, we show the potential interest of Complex-Valued Neural Networks (CVNNs) for classification tasks on complex-valued datasets. To highlight this assertion, we investigate an example of complex-valued data in which the real and imaginary parts are statistically dependent through the property of non-circularity. In this context, the performance of fully connected feed-forward CVNNs is compared against a real-valued equivalent model. The results show that CVNNs perform better for a wide variety of architectures and data structures. CVNN accuracy presents a statistically higher mean and median and a lower variance than that of a Real-Valued Neural Network (RVNN). Furthermore, if no regularization technique is used, CVNNs exhibit lower overfitting. The second contribution is the release of a Python library (Barrachina 2019) using TensorFlow as back-end that enables the implementation and training of CVNNs, in the hope of motivating further research in this area.
statistics
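Since the released library is TensorFlow-based, a minimal sketch of the basic building block, a complex-valued dense layer, may be useful; this is a generic construction (class name and initializer choices are ours), not the library's actual API.

```python
import numpy as np
import tensorflow as tf

class ComplexDense(tf.keras.layers.Layer):
    """Minimal complex dense layer: z -> z @ (Wr + i Wi) + (br + i bi)."""

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        n = int(input_shape[-1])
        init = tf.keras.initializers.GlorotUniform()
        self.wr = self.add_weight(shape=(n, self.units), initializer=init)
        self.wi = self.add_weight(shape=(n, self.units), initializer=init)
        self.br = self.add_weight(shape=(self.units,), initializer="zeros")
        self.bi = self.add_weight(shape=(self.units,), initializer="zeros")

    def call(self, z):
        w = tf.complex(self.wr, self.wi)
        b = tf.complex(self.br, self.bi)
        return tf.matmul(z, w) + b

# Non-circular toy input: real and imaginary parts statistically dependent.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
z = tf.constant(x + 1j * 0.5 * x, dtype=tf.complex64)
print(ComplexDense(3)(z).shape)  # (4, 3)
```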
Most existing studies on the double/debiased machine learning method concentrate on estimating the causal parameter recovered from the first-order orthogonal score function. In this paper, we construct the $k^{\mathrm{th}}$-order orthogonal score function for estimating the average treatment effect (ATE) and present an algorithm that enables us to obtain the debiased estimator recovered from the score function. Such a higher-order orthogonal estimator is more robust to misspecification of the propensity score than the first-order one is. Besides, it has the merit of being applicable with many machine learning methodologies such as Lasso, Random Forests, Neural Nets, etc. We also conduct comprehensive experiments to test the power of the estimator we construct from the score function, using both simulated and real datasets.
statistics
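For reference, the first-order orthogonal (doubly robust) score for the ATE that the abstract's $k^{\mathrm{th}}$-order construction generalizes is

```latex
\[
  \psi(W;\tau,\eta) \;=\; \mu_1(X) - \mu_0(X)
  \;+\; \frac{T\,\bigl(Y-\mu_1(X)\bigr)}{e(X)}
  \;-\; \frac{(1-T)\,\bigl(Y-\mu_0(X)\bigr)}{1-e(X)} \;-\; \tau,
\]
% with nuisance eta = (mu_0, mu_1, e): E[psi] = 0 at the true ATE tau, and the
% derivative with respect to the nuisances vanishes at the truth (Neyman
% orthogonality), which makes first-order errors in fitted nuisances harmless.
```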
$\texttt{HEPfit}$ is a flexible open-source tool which, given the Standard Model or any of its extensions, allows one to $\textit{i)}$ fit the model parameters to a given set of experimental observables and $\textit{ii)}$ obtain predictions for observables. $\texttt{HEPfit}$ can be used either in Monte Carlo mode, to perform a Bayesian Markov Chain Monte Carlo analysis of a given model, or as a library, to obtain predictions of observables for a given point in the parameter space of the model, allowing $\texttt{HEPfit}$ to be used in any statistical framework. In the present version, around a thousand observables have been implemented in the Standard Model and in several new physics scenarios. In this paper, we describe the general structure of the code as well as the models and observables implemented in the current release.
high energy physics phenomenology
We have used catalogues from several Galactic plane surveys and dedicated observations to investigate the relationship between various maser species and Galactic star forming clumps, as identified by the ATLASGAL survey. The maser transitions of interest are the 6.7 & 12.2 GHz methanol masers, 22.2 GHz water masers, and the masers emitting in the four ground-state hyperfine structure transitions of hydroxyl. We find clump association rates for the water, hydroxyl and methanol masers to be 56, 39 and 82 per cent respectively, within the Galactic longitude range $60^{\circ} > l > -60^{\circ}$. We investigate the differences in physical parameters between maser-associated clumps and the full ATLASGAL sample, and find that clumps coincident with maser emission are more compact, with increased densities and luminosities. However, we find the physical conditions within the clumps are similar for the different maser species. A volume density threshold of $n$(H$_{2}$) > 10$^{4.1}$ cm$^{-3}$ for the 6.7 GHz methanol maser found in our previous study is shown to be consistent across all maser species investigated. We find the luminosity and mass limits required for the production of maser emission to be 500 L$_{\odot}$ and 6 M$_{\odot}$, respectively. The evolutionary phase of maser-associated clumps is investigated using the L/M ratio of clumps coincident with maser emission, and these have similar L/M ranges (~10$^{0.2}$ - 10$^{2.7}$ L$_{\odot}$/M$_{\odot}$) regardless of the associated transitions. This implies that the conditions required for the production of maser emission occur only during a relatively narrow period of a star's evolution. Lower limits on the statistical lifetimes of each maser species are derived, ranging from ~0.4 - 2 x 10$^{4}$ yrs, in good agreement with the "straw man" evolutionary model previously presented.
astrophysics
A symmetry-preserving Poincar\'e-covariant quark+diquark Faddeev equation treatment of the nucleon is used to deliver parameter-free predictions for the nucleon's axial and induced pseudoscalar form factors, $G_A$ and $G_P$, respectively. The result for $G_A$ can reliably be represented by a dipole form factor characterised by an axial charge $g_A=G_A(0)=1.25(3)$ and a mass-scale $M_A = 1.23(3) m_N$, where $m_N$ is the nucleon mass; and regarding $G_P$, the induced pseudoscalar charge $g_p^\ast = 8.80(23)$, the ratio $g_p^\ast/g_A = 7.04(22)$, and the pion pole dominance Ansatz is found to provide a reliable estimate of the directly computed result. The ratio of flavour-separated quark axial charges is also calculated: $g_A^d/g_A^u=-0.16(2)$. This value expresses a marked suppression of the size of the $d$-quark component relative to that found in nonrelativistic quark models and owes to the presence of strong diquark correlations in the nucleon Faddeev wave function -- both scalar and axial-vector, with the scalar diquark being dominant. The predicted form for $G_A$ should provide a sound foundation for analyses of the neutrino-nucleus and antineutrino-nucleus cross-sections that are relevant to modern accelerator neutrino experiments.
high energy physics phenomenology
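Written out, the dipole parameterization quoted above is

```latex
\[
  G_A(Q^{2}) \;\simeq\; \frac{g_A}{\bigl(1 + Q^{2}/M_A^{2}\bigr)^{2}},
  \qquad g_A = 1.25(3), \quad M_A = 1.23(3)\, m_N ,
\]
% while pion pole dominance estimates the induced pseudoscalar form factor as
% G_P(Q^2) proportional to G_A(Q^2)/(Q^2 + m_pi^2), the Ansatz the abstract
% reports to agree with the directly computed result.
```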
The SAVVY project aims to improve the analyses of adverse event (AE) data in clinical trials through the use of survival techniques that appropriately deal with varying follow-up times and competing events (CEs). Although statistical methodologies have advanced, AE analyses often use the incidence proportion, the incidence density, or a non-parametric Kaplan-Meier estimator (KME), which either ignore censoring or CEs. In an empirical study including randomized clinical trials from several sponsor organisations, these potential sources of bias are investigated. The main aim is to compare the estimators that are typically used in AE analysis to the Aalen-Johansen estimator (AJE) as the gold standard. Here, one-sample findings are reported, while a companion paper considers the consequences when comparing treatment groups. Estimators are compared with descriptive statistics, graphical displays and a random effects meta-analysis. The influence of different factors on the size of the bias is investigated in a meta-regression. Comparisons are conducted at the maximum follow-up time and at earlier evaluation time points. The definition of CEs includes not only death before AE but also end of follow-up for AEs due to events possibly related to the disease course or the treatment. Ten sponsor organisations provided 17 trials including 186 types of AEs. The one-minus-KME estimate was on average about 1.2-fold larger than the AJE. The leading factors influencing the bias were the amounts of censoring and of CEs. In consequence, the average bias using the incidence proportion was less than 5%. Assuming constant hazards and using incidence densities was hardly an issue provided that CEs were accounted for. There is a need to improve the guidelines for reporting risks of AEs so that the KME and the incidence proportion are replaced by the AJE with an appropriate definition of CEs.
statistics
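A minimal synthetic illustration of the one-sample contrast, assuming the Python lifelines package (the exponential rates and censoring scheme below are made up): treating CEs as censored makes 1 - KME overshoot the AJE.

```python
import numpy as np
from lifelines import KaplanMeierFitter, AalenJohansenFitter

rng = np.random.default_rng(1)
n = 2000
t_ae   = rng.exponential(10, n)      # latent time to AE (event 1)
t_ce   = rng.exponential(6, n)       # latent time to competing event (event 2)
t_cens = rng.uniform(0, 15, n)       # administrative censoring

time  = np.minimum.reduce([t_ae, t_ce, t_cens])
event = np.select([t_ae == time, t_ce == time], [1, 2], default=0)

km = KaplanMeierFitter().fit(time, event_observed=(event == 1))  # CEs censored
aj = AalenJohansenFitter().fit(time, event, event_of_interest=1)

print("1 - KME:", 1 - km.survival_function_.iloc[-1, 0])  # biased upward
print("AJE:    ", aj.cumulative_density_.iloc[-1, 0])
```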
Automata learning has been successfully applied in the verification of hardware and software. The size of the automaton model learned is a bottleneck for scalability, and hence optimizations that enable learning of compact representations are important. This paper exploits monads, both as a mathematical structure and a programming construct, to design, prove correct, and implement a wide class of such optimizations. The former perspective on monads allows us to develop a new algorithm and accompanying correctness proofs, building upon a general framework for automata learning based on category theory. The new algorithm is parametric on a monad, which provides a rich algebraic structure to capture non-determinism and other side-effects. We show that our approach allows us to uniformly capture existing algorithms, develop new ones, and add optimizations. The latter perspective allows us to effortlessly translate the theory into practice: we provide a Haskell library implementing our general framework, and we show experimental results for two specific instances: non-deterministic and weighted automata.
computer science
We introduce and solve a model of interacting electrons and phonons that is a natural generalization of the Sachdev-Ye-Kitaev model and that becomes superconducting at low temperatures. In the normal state, two non-Fermi-liquid fixed points with distinct universal exponents emerge. At weak coupling, superconductivity prevents the onset of low-temperature quantum criticality, reminiscent of the behavior in several heavy-electron and iron-based materials. At strong coupling, pairing of highly incoherent fermions sets in deep in the non-Fermi-liquid regime, a behavior qualitatively similar to that in underdoped cuprate superconductors. The pairing of incoherent time-reversal partners is protected by a mechanism similar to Anderson's theorem for disordered superconductors. The superconducting ground state is characterized by coherent quasiparticle excitations and higher-order bound states thereof, revealing that it is no longer an ideal gas of Cooper pairs, but a strongly coupled pair fluid. The normal-state incoherency primarily acts to suppress the weight of the superconducting coherence peak and reduce the condensation energy. Based on this we expect strong superconducting fluctuations, in particular at strong coupling.
condensed matter
Loss of power and the difficulty of clearly describing treatment differences are key issues in designing and analyzing a clinical trial where non-proportional hazards are a possibility. A log-rank test may be very inefficient, and the interpretation of the hazard ratio estimated using Cox regression is potentially problematic. In this case, the current ICH E9 (R1) addendum would suggest designing a trial with a clinically relevant estimand, e.g., expected life gain. This approach considers appropriate analysis methods for supporting the chosen estimand. However, such an approach is case specific and may suffer from a lack of power for important choices of the underlying alternative hypothesis distribution. On the other hand, there may be a desire to have robust power under different deviations from proportional hazards. Also, we would contend that no single number adequately describes the treatment effect under non-proportional hazards scenarios. The cross-pharma working group has proposed a combination test to provide robust power under a variety of alternative hypotheses. These can be specified for the primary analysis at the design stage, and methods appropriately accounting for the correlations among the combined tests are efficient for a variety of scenarios. We provide design and analysis considerations based on a combination test under different non-proportional hazard types and present a straw man proposal for practitioners. The proposals are illustrated with a real-life example and simulations.
statistics
Valtancoli, in the paper [P. Valtancoli, Canonical transformations and minimal length, J. Math. Phys. 56, 122107 (2015)], has shown how the deformation of canonical transformations can be made compatible with deformed Poisson brackets. Based on this work and through an appropriate canonical transformation, we solve the problem of the one-dimensional (1D) damped harmonic oscillator in the classical limit of Snyder-de Sitter (SdS) space. We show that the equations of motion can be described by trigonometric functions with frequency and period depending on the deformation and damping parameters. We finally discuss the influence of these parameters on the motion of the system.
high energy physics theory
In extreme learning machines (ELM) the hidden-layer coefficients are randomly set and fixed, while the output-layer coefficients of the neural network are computed by a least squares method. The randomly-assigned coefficients in ELM are known to influence its performance and accuracy significantly. In this paper we present a modified batch intrinsic plasticity (modBIP) method for pre-training the random coefficients in the ELM neural networks. The current method is devised based on the same principle as the batch intrinsic plasticity (BIP) method, namely, by enhancing the information transmission in every node of the neural network. It differs from BIP in two prominent aspects. First, modBIP does not involve the activation function in its algorithm, and it can be applied with any activation function in the neural network. In contrast, BIP employs the inverse of the activation function in its construction, and requires the activation function to be invertible (or monotonic). The modBIP method can work with the often-used non-monotonic activation functions (e.g. Gaussian, swish, Gaussian error linear unit, and radial-basis type functions), with which BIP breaks down. Second, modBIP generates target samples on random intervals with a minimum size, which leads to highly accurate computation results when combined with ELM. The combined ELM/modBIP method is markedly more accurate than ELM/BIP in numerical simulations. Ample numerical experiments are presented with shallow and deep neural networks for function approximation and boundary/initial value problems with partial differential equations. They demonstrate that the combined ELM/modBIP method produces highly accurate simulation results, and that its accuracy is insensitive to the random-coefficient initializations in the neural network. This is in sharp contrast with the ELM results without pre-training of the random coefficients.
physics
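The ELM baseline that modBIP pre-trains is compact enough to sketch; the snippet below shows only the random-hidden-layer/least-squares structure, not the modBIP target-sample generation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=200, activation=np.tanh):
    W = rng.normal(size=(X.shape[1], n_hidden))    # random hidden weights, then frozen
    b = rng.normal(size=n_hidden)
    H = activation(X @ W + b)                      # hidden-layer feature matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta, activation=np.tanh):
    return activation(X @ W + b) @ beta

X = np.linspace(0, 1, 400)[:, None]
y = np.sin(6 * np.pi * X[:, 0])
W, b, beta = elm_fit(X, y)
print(np.max(np.abs(elm_predict(X, W, b, beta) - y)))  # max fit residual
```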
In this paper, mid-wave infrared photodetection based on an InAs/GaSb type-II superlattice (T2SL) p-i-n photodetector grown directly on a Si substrate is demonstrated and characterized. The excitation-power dependence of the integrated intensity from photoluminescence measurements reveals a power law $I \sim P^{0.74}$, indicating that a defect-related process plays an important role in the predominant recombination channel for photogenerated carriers. At 70 K, the device exhibits a dark current density of 2.3 A/cm$^{2}$ under -0.1 V bias. Arrhenius analysis of the dark current shows activation energies much less than half of the active-layer bandgap, which suggests that the device is mainly limited by surface leakage and defect-assisted tunneling, consistent with the photoluminescence analysis. The detector shows a 50% cutoff wavelength at ~5.5 μm at 70 K under a bias of -0.1 V. The corresponding peak responsivity and specific detectivity are 1.2 A/W and 1.3×10$^{9}$ cm Hz$^{1/2}$/W, respectively. Based on these optoelectronic characterization results, reduction of defects by optimizing the III/V-Si interface and suppression of surface leakage channels are argued to be the main factors for performance improvement in this Si-based T2SL detector, towards a low-cost, large-format MWIR detection system on a Si photonics platform.
physics
In this paper, we associate a class of Hurwitz matrix polynomials with Stieltjes positive definite matrix sequences. This connection leads to an extension of two classical criteria of Hurwitz stability for real polynomials to matrix polynomials: tests for Hurwitz stability via positive definiteness of block-Hankel matrices built from matricial Markov parameters and via matricial Stieltjes continued fractions. We obtain further conditions for Hurwitz stability in terms of block-Hankel minors and quasiminors, which may be viewed as a weak version of the total positivity criterion.
mathematics
We study the comparison problem of distribution equality between two random samples under a right censoring scheme. To address this problem, we design a series of tests based on energy distance and kernel mean embeddings. We calibrate our tests using permutation methods and prove that they are consistent against all fixed continuous alternatives. To evaluate our proposed tests, we simulate survival curves from previous clinical trials. Additionally, we provide practitioners with a set of recommendations on how to select parameters/distances for the delay effect problem. Based on the parameter-tuning method we propose, we show that our tests demonstrate a considerable gain in statistical power against classical survival tests.
statistics
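A minimal uncensored version of one of the tests, energy distance calibrated by permutation; adapting it to right censoring is the paper's contribution and is not attempted here.

```python
import numpy as np

def energy_statistic(x, y):
    x = x.reshape(-1, 1); y = y.reshape(-1, 1)
    return (2 * np.abs(x - y.T).mean()
            - np.abs(x - x.T).mean()
            - np.abs(y - y.T).mean())

def permutation_pvalue(x, y, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    obs = energy_statistic(x, y)
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                         # uniform relabelling of samples
        if energy_statistic(pooled[:len(x)], pooled[len(x):]) >= obs:
            hits += 1
    return (1 + hits) / (1 + n_perm)

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 100)   # e.g. control-arm survival times (no censoring)
y = rng.exponential(1.5, 100)   # e.g. treatment arm
print(permutation_pvalue(x, y))
```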
We observe that under Lorentz transformations the tick number of a moving common clock remains unchanged, that is, the hand of the clock never runs slow, but the time interval between two consecutive ticks contracts, so relative time has to be recorded using the tau-clocks required by the transformations instead of unreal slowing clocks. It is thus argued that, when measured using rest common clocks or their equivalent, the velocity of light emitted by a moving source, which is the quasi-velocity of foreign light, depends on the source velocity. Nevertheless, the velocity of foreign light that should be measured using tau-clocks is independent of the source velocity. The velocity of native light emitted by a source at rest obeys the postulate of relativity, in accordance with both the Maxwell equations and the result of the Michelson-Morley experiment. On the other hand, the velocity of foreign light obeys both Ritz's emission theory, up to the Lorentz factor, and the postulate of the constancy of the light velocity when measured using tau-clocks. Thus the emission theory does not conflict with special relativity. The present argument leads to the logical consequence that the so-called positive conclusions of experiments testing the constancy of the velocity of light emitted by moving sources, if common clocks or their equivalent are used instead of tau-clocks, exactly contradict Lorentz transformations.
physics
We systematically calculate the isospin violating decay $D_s^*\to D_s\pi^0$ within heavy meson chiral perturbation theory up to $\mathcal{O}(p^3)$, including the loop diagrams. The $\mathcal{O}(p^3)$ tree-level amplitudes contain four undetermined low-energy constants (LECs). We use two strategies to estimate them. With the nonanalytic dominance approximation, we get $\Gamma[D_s^\ast\to D_s\pi^0]=(3.38\pm0.12)$ eV. With the naturalness assumption, we give a possible range for the isospin violating decay width, $[1.11-6.88]$ eV. We find that the contribution of the $\mathcal{O}(p^3)$ corrections might be significant.
high energy physics phenomenology
We present GLNet, a self-supervised framework for learning depth, optical flow, camera pose and intrinsic parameters from monocular video, addressing the difficulty of acquiring realistic ground-truth for such tasks. We propose three contributions: 1) we design new loss functions that capture multiple geometric constraints (e.g. epipolar geometry) as well as an adaptive photometric loss that supports multiple moving objects, rigid and non-rigid; 2) we extend the model such that it predicts camera intrinsics, making it applicable to uncalibrated video; and 3) we propose several online refinement strategies that rely on the symmetry of our self-supervised loss in training and testing, in particular optimizing model parameters and/or the output of different tasks, thus leveraging their mutual interactions. The idea of jointly optimizing the system output, under all geometric and photometric constraints, can be viewed as a dense generalization of classical bundle adjustment. We demonstrate the effectiveness of our method on KITTI and Cityscapes, where we outperform previous self-supervised approaches on multiple tasks. We also show good generalization for transfer learning in YouTube videos.
computer science
I discuss the implementation of the Peccei-Quinn mechanism in a minimal realization of the Pati-Salam partial unification scheme. The axion mass is shown to be related to the Pati-Salam breaking scale and it is predicted via a two-loop renormalization group analysis to be in the window $m_a \in [10^{-11}, \, 3 \times 10^{-7}]$ eV, as a function of a sliding Left-Right symmetry breaking scale. This parameter space will be fully covered by the late phases of the axion Dark Matter experiments ABRACADABRA and CASPEr-Electric. A Left-Right symmetry breaking scenario as low as 20 TeV is obtained for a Pati-Salam breaking of the order of the reduced Planck mass.
high energy physics phenomenology
Large-scale association analysis between multivariate responses and predictors is of great practical importance, as exemplified by modern business applications including social media marketing and crisis management. Despite the rapid methodological advances, how to obtain scalable estimators with tuning-free regularization parameters remains unclear under general noise covariance structures. In this paper, we develop a new methodology called sequential scaled sparse factor regression (SESS), based on a new viewpoint that the problem of recovering a jointly low-rank and sparse regression coefficient matrix can be decomposed into several univariate-response sparse regressions through regular eigenvalue decomposition. It combines the strengths of sequential estimation and scaled sparse regression, thus sharing the scalability and the tuning-free property for sparsity parameters inherited from the two approaches. The stepwise convex formulation, sequential factor regression framework, and tuning insensitiveness make SESS highly scalable for big data applications. Comprehensive theoretical justifications with new insights into high-dimensional multi-response regressions are also provided. We demonstrate the scalability and effectiveness of the proposed method by simulation studies and stock short interest data analysis.
statistics
We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs exploit the power of both neural networks, as function approximators, and logic programming, as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules and generalize to large-scale tasks (such as sorting longer arrays). In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world. Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone.
computer science
Doping can profoundly affect the electronic- and optical-structure of semiconductors. Here we address the effect of surplus charges on non-radiative (NR) exciton and trion decay in doped semiconducting single-wall carbon nanotubes. The dependence of exciton photoluminescence quantum yields and exciton decay on the doping level, with its characteristically stretched-exponential kinetics, is attributed to diffusion-limited NR decay at charged impurity sites. By contrast, trion decay is unimolecular with a rate constant of $2.0\,\rm ps^{-1}$. Our experiments thus show that charged impurities not only trap trions and scavenge mobile excitons but that they also facilitate efficient NR energy dissipation for both.
condensed matter
We present an experimental optical implementation of a parallel-in-time discrete model of quantum evolution, based on the entanglement between the quantum system and a finite dimensional quantum clock. The setup is based on a programmable spatial light modulator which entangles the polarization and transverse spatial degrees of freedom of a single photon. It enables the simulation of a qubit history state containing the whole evolution of the system, capturing its main features in a simple and configurable scheme. We experimentally determine the associated system-time entanglement, which is a measure of distinguishable quantum evolution, and also the time average of observables, which in the present realization can be obtained through one single measurement.
quantum physics
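The discrete history state presumably underlying this realization is, schematically,

```latex
\[
  |\Psi\rangle \;=\; \frac{1}{\sqrt{N}} \sum_{t=0}^{N-1} |t\rangle_{C} \otimes U_{t}\,|\psi_0\rangle_{S},
\]
% where |t>_C are orthogonal clock states and U_t is the evolution up to step t.
% The system-time entanglement of |Psi> vanishes only if all U_t |psi_0> coincide
% up to a phase, which is the sense in which it quantifies distinguishable evolution.
```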
BitLocker is a full-disk encryption feature available in recent Windows versions. It is designed to protect data by providing encryption for entire volumes, and it makes use of a number of different authentication methods. In this paper we present a solution, named BitCracker, to attempt the decryption, by means of a dictionary attack, of memory units encrypted by BitLocker with a user-supplied password or the recovery password. For that purpose, we resort to GPUs (Graphics Processing Units), which are by now widely used as general-purpose coprocessors in high performance computing applications. The BitLocker decryption process requires the computation of a very large number of SHA-256 hashes as well as AES, so we propose a very fast solution, highly tuned for Nvidia GPUs, for both of them. We analyze the performance of our CUDA implementation on several Nvidia GPUs and we carry out a comparison of our SHA-256 implementation with the Hashcat password cracking tool. Finally, we present our OpenCL version, recently released as a plugin of the John The Ripper tool.
computer science
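The hot loop that BitCracker offloads to the GPU is easy to caricature in Python. The 1,048,576-round SHA-256 count matches BitLocker's documented key stretching, but the message layout is simplified and the salt/wordlist are placeholders, so this is a schematic of the workload, not a working decryptor.

```python
import hashlib

ROUNDS = 0x100000  # 1,048,576 SHA-256 iterations, as in BitLocker's key derivation

def stretch(password: str, salt: bytes) -> bytes:
    # BitLocker hashes the UTF-16LE password; the chaining below is simplified.
    digest = hashlib.sha256(password.encode("utf-16-le")).digest()
    for _ in range(ROUNDS):
        digest = hashlib.sha256(digest + salt).digest()
    return digest

def dictionary_attack(wordlist, salt: bytes, target: bytes):
    # Embarrassingly parallel over candidates -- the property the GPU exploits.
    for candidate in wordlist:
        if stretch(candidate, salt) == target:
            return candidate
    return None

salt = b"\x00" * 16                   # placeholder salt
target = stretch("hunter2", salt)     # pretend this came from the volume header
print(dictionary_attack(["letmein", "hunter2"], salt, target))
```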