We analyze the anomalous magnetic moment of the muon, $g-2$, in the $\mu\nu$SSM. This $R$-parity violating model solves the $\mu$ problem and simultaneously reproduces neutrino data, with only the addition of right-handed neutrinos. In the framework of the $\mu\nu$SSM, light left muon-sneutrino and wino masses can be obtained naturally, driven by neutrino physics. This enhances the dominant chargino-sneutrino loop contribution to the muon $g-2$, closing the gap between the theoretical computation and the experimental data. To analyze the parameter space, we sample the $\mu\nu$SSM using a likelihood data-driven method, paying special attention to reproducing the current experimental data on neutrino and Higgs physics, as well as flavor observables such as $B$ and $\mu$ decays. We then apply the constraints from LHC searches for multilepton + MET events to the viable regions found. These searches can probe such regions through chargino-chargino, chargino-neutralino and neutralino-neutralino pair production. We conclude that significant regions of the parameter space of the $\mu\nu$SSM can explain the muon $g-2$ data.
high energy physics phenomenology
An infinite dimensional system such as a quantum harmonic oscillator offers a potentially unbounded Hilbert space for computation, but accessing and manipulating the entire state space requires a physically unrealistic amount of energy. When such a quantum harmonic oscillator is coupled to a qubit, for example via a Jaynes-Cummings interaction, it is well known that the total Hilbert space can be separated into independently accessible subspaces of constant energy, but the number of subspaces is still infinite. Nevertheless, a closed four-dimensional Hilbert space can be analytically constructed from the lowest energy states of the qubit-oscillator system. We extend this idea and show how a $d$-dimensional Hilbert space can be analytically constructed, which is closed under a finite set of unitary operations resulting solely from manipulating standard Jaynes-Cummings Hamiltonian terms. Moreover, we prove that the first-order sideband pulses and carrier pulses comprise a universal set for quantum operations on the qubit-oscillator qudit. This work suggests that the combination of a qubit and a bosonic system may serve as hardware-efficient quantum resources for quantum information processing.
quantum physics
Effective Field Theory (EFT) is the successful paradigm underlying modern theoretical physics, including the "Core Theory" of the Standard Model of particle physics plus Einstein's general relativity. I will argue that EFT grants us a unique insight: each EFT model comes with a built-in specification of its domain of applicability. Hence, once a model is tested within some domain (of energies and interaction strengths), we can be confident that it will continue to be accurate within that domain. Currently, the Core Theory has been tested in regimes that include all of the energy scales relevant to the physics of everyday life (biology, chemistry, technology, etc.). Therefore, we have reason to be confident that the laws of physics underlying the phenomena of everyday life are completely known.
physics
Machine learning is nowadays a standard technique for data analysis within software applications. Software engineers need quality assurance techniques that are suitable for these new kinds of systems. Within this article, we discuss the question of whether standard software testing techniques that have been part of textbooks for decades are also useful for the testing of machine learning software. Concretely, we try to determine generic smoke tests that can be used to assert that basic functions can be executed without crashing. We found that we can derive such tests using techniques similar to equivalence classes and boundary value analysis. Moreover, we found that these concepts can also be applied to hyperparameters, to further improve the quality of the smoke tests. Even though our approach is almost trivial, we were able to find bugs in all three machine learning libraries that we tested, and severe bugs in two of the three libraries. This demonstrates that common software testing techniques are still valid in the age of machine learning and that they are suitable for finding and preventing severe bugs, even in mature machine learning libraries.
computer science
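As an illustration of the kind of smoke test described in the abstract above, the following hedged sketch applies equivalence-class and boundary-value thinking to a scikit-learn estimator; the library, the estimator, and the specific boundary values are assumptions for illustration, not the tests actually used in the study.

```python
# Minimal smoke-test sketch: assert that fit/predict run without crashing for
# representative ("equivalence class") input data and boundary-value
# hyperparameters. Illustrative only; not the study's actual test suite.
import numpy as np
from sklearn.linear_model import LogisticRegression

def smoke_test_logistic_regression():
    rng = np.random.default_rng(0)
    # Equivalence classes of input data: dense floats, integers, constant column.
    datasets = [
        rng.normal(size=(20, 3)),
        rng.integers(0, 5, size=(20, 3)).astype(float),
        np.hstack([rng.normal(size=(20, 2)), np.zeros((20, 1))]),
    ]
    y = np.tile([0, 1], 10)                  # both classes present
    # Boundary values for a hyperparameter (smallest / typical / large).
    for C in (1e-6, 1.0, 1e6):
        for X in datasets:
            model = LogisticRegression(C=C, max_iter=200)
            model.fit(X, y)                  # must not raise
            preds = model.predict(X)         # must not raise
            assert preds.shape == (20,)      # basic sanity on output shape

if __name__ == "__main__":
    smoke_test_logistic_regression()
    print("smoke tests passed")
```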
Hirschfeldt and Jockusch (2016) introduced a two-player game in which winning strategies for one or the other player precisely correspond to implications and non-implications between $\Pi^1_2$ principles over $\omega$-models of $\mathsf{RCA}_0$. They also introduced a version of this game that similarly captures provability over $\mathsf{RCA}_0$. We generalize and extend this game-theoretic framework to other formal systems, and establish a certain compactness result that shows that if an implication $\mathsf{Q} \to \mathsf{P}$ between two principles holds, then there exists a winning strategy that achieves victory in a number of moves bounded by a number independent of the specific run of the game. This compactness result generalizes an old proof-theoretic fact noted by H.~Wang (1981), and has applications to the reverse mathematics of combinatorial principles. We also demonstrate how this framework leads to a new kind of analysis of the logical strength of mathematical problems that refines both that of reverse mathematics and that of computability-theoretic notions such as Weihrauch reducibility, allowing for a kind of fine-structural comparison between $\Pi^1_2$ principles that has both computability-theoretic and proof-theoretic aspects, and can help us distinguish between these, for example by showing that a certain use of a principle in a proof is ``purely proof-theoretic'', as opposed to relying on its computability-theoretic strength. We give examples of this analysis to a number of principles at the level of $\mathsf{B}\Sigma^0_2$, uncovering new differences between their logical strengths.
mathematics
The quirk particle is subject to the Lorentz force and a long-range infracolor force, while suffering relatively large ionization energy loss inside the detector. It can be constrained indirectly by mono-jet searches or searched for directly through co-planar hits if the confinement scale is not too low ($\Lambda \gtrsim 100$ eV). Taking into account the ionization energy loss inside the tracker, we improve the co-planar search. We also solve the equation of motion for quirks numerically, including all of the important contributions. Based on our selection strategy, the $\sim 100$ fb$^{-1}$ dataset at the LHC will be able to probe colored fermion/scalar quirks with masses up to 2.1/1.1 TeV, and color-neutral fermion/scalar quirks with masses up to 450/150 GeV, respectively.
high energy physics phenomenology
The sampling of probability distributions specified up to a normalization constant is an important problem in both machine learning and statistical mechanics. While classical stochastic sampling methods such as Markov Chain Monte Carlo (MCMC) or Langevin Dynamics (LD) can suffer from slow mixing times, there is a growing interest in using normalizing flows in order to learn the transformation of a simple prior distribution to the given target distribution. Here we propose a generalized and combined approach to sample target densities: Stochastic Normalizing Flows (SNF) -- an arbitrary sequence of deterministic invertible functions and stochastic sampling blocks. We show that stochasticity overcomes expressivity limitations of normalizing flows resulting from the invertibility constraint, whereas trainable transformations between sampling steps improve efficiency of pure MCMC/LD along the flow. By invoking ideas from non-equilibrium statistical mechanics we derive an efficient training procedure by which both the sampler's and the flow's parameters can be optimized end-to-end, and by which we can compute exact importance weights without having to marginalize out the randomness of the stochastic blocks. We illustrate the representational power, sampling efficiency and asymptotic correctness of SNFs on several benchmarks including applications to sampling molecular systems in equilibrium.
statistics
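The following toy sketch illustrates only the core structural idea named in the abstract above: alternating a deterministic invertible transformation with a stochastic (Langevin-type) sampling block. The specific flow (an affine scale-and-shift), the target density, and the step sizes are assumptions for illustration and do not reproduce the paper's trainable SNF or its exact importance-weight computation.

```python
# Toy sketch of a stochastic-normalizing-flow "pass": alternate a simple
# invertible deterministic map with overdamped Langevin steps targeting an
# unnormalized density. Illustrative only; no training, no importance weights.
import numpy as np

def log_target(x):
    # Unnormalized double-well target (assumed for illustration).
    return -(x**2 - 1.0)**2

def grad_log_target(x):
    return -4.0 * x * (x**2 - 1.0)

def affine_flow(x, scale, shift):
    # Deterministic invertible block (would be trainable in an SNF).
    return scale * x + shift

def langevin_block(x, step, n_steps, rng):
    # Stochastic block: a few overdamped Langevin updates toward the target.
    for _ in range(n_steps):
        x = x + step * grad_log_target(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

rng = np.random.default_rng(1)
x = rng.normal(size=5000)                        # samples from the simple prior
for scale, shift in [(0.8, 0.5), (1.1, -0.3)]:   # assumed flow parameters
    x = affine_flow(x, scale, shift)             # deterministic invertible layer
    x = langevin_block(x, step=0.05, n_steps=20, rng=rng)

print("sample mean/std after SNF-style pass:", x.mean(), x.std())
```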
Hybrid beamforming (HBF) design is a crucial stage in millimeter wave (mmWave) multi-user multi-input multi-output (MU-MIMO) systems. However, conventional HBF methods still suffer from high complexity and rely strongly on the quality of channel state information. We propose an extreme learning machine (ELM) framework to jointly optimize the transmitting and receiving beamformers. Specifically, to provide accurate labels for training, we first propose a fractional-programming and majorization-minimization based HBF method (FP-MM-HBF). Then, an ELM based HBF (ELM-HBF) framework is proposed to increase the robustness of the beamformers. Both FP-MM-HBF and ELM-HBF can provide a higher system sum-rate compared with existing methods. Moreover, ELM-HBF not only provides robust HBF performance, but also requires very short computation time.
electrical engineering and systems science
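For readers unfamiliar with extreme learning machines, the sketch below shows the generic ELM principle mentioned in the abstract above: a fixed random hidden layer followed by a closed-form least-squares readout. The regression task, sizes, and synthetic data are assumptions for illustration; this is not the paper's FP-MM-HBF labeling scheme or beamforming architecture.

```python
# Generic extreme learning machine (ELM) regression sketch: random hidden
# layer, ridge least-squares readout. Illustrates only the ELM principle,
# not the hybrid-beamforming design of the paper.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=200, reg=1e-3, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def fit(self, X, Y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
        self.b = rng.normal(size=self.n_hidden)                 # random biases
        H = np.tanh(X @ self.W + self.b)                        # hidden features
        # Ridge-regularized least-squares readout (the only trained part).
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ Y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage on synthetic data (stand-in features and targets, assumed).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
Y = np.sin(X[:, :2]) + 0.05 * rng.normal(size=(500, 2))
model = ELMRegressor().fit(X[:400], Y[:400])
print("test MSE:", np.mean((model.predict(X[400:]) - Y[400:]) ** 2))
```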
Any gravitational scattering amplitude takes a remarkably simple factorized form at tree level in multi-Regge kinematics (MRK), where the produced particles are strongly ordered in rapidity. Very recently, it was shown that the scattering equations also have a very simple structure in MRK. In this paper we study Einstein gravity amplitudes in MRK in the framework of the scattering equations. We present a new derivation of the multi-Regge factorization of tree-level amplitudes with any number of external gravitons and any helicity configuration.
high energy physics theory
The trans-Planckian censorship conjecture implies that single-field models of inflation require an extreme fine-tuning of the initial conditions due to the very low scale of inflation. In this work, we show how a quantum cosmological proposal -- namely the tunneling wavefunction -- naturally provides the necessary initial conditions without requiring such fine-tunings. More generally, we show how the tunneling wavefunction can provide suitable initial conditions for hilltop inflation models, the latter being typically preferred by the swampland constraints.
high energy physics theory
A single-crystal sample of chromium doped with ca. 0.2 at.% $^{119}$Sn was studied by means of transmission M\"ossbauer spectroscopy in the temperature range of 310-315 K. An anomaly in the temperature behavior of the center shift was found at ca. 313 K, a temperature that coincides well with the N\'eel temperature of chromium. The anomaly gives evidence that the vibrations of atoms in the studied system are affected by the magnetism.
condensed matter
Animals with a tendency to align their velocities to an average of their neighbors' may flock, as illustrated by the Vicsek model and its variants. If, in addition, they feel a systematic contrarian tendency, the result may be a time-periodic adjustment of the flock or period doubling in time. This is demonstrated by analyzing a modified Vicsek model of self-propelled particles and its corresponding kinetic equation, valid for a large number of particles. We have carried out a stability and bifurcation analysis of the order-disorder transition to spatially uniform stationary or time-periodic solutions that are characterized by their complex order parameters. Where direct numerical simulations differ from the theoretical predictions, they indicate the formation of spatiotemporal structures. Strikingly, we have found that increasing the usual alignment noise may favor flocking and that an optimum noise produces the strongest possible order parameter.
condensed matter
We consider the multiple products of relevant and marginal scalar composite operators at the Gaussian fixed-point in $D=4$ dimensions. This amounts to perturbative construction of the $\phi^4$ theory where the parameters of the theory are momentum dependent sources. Using the exact renormalization group (ERG) formalism, we show how the scaling properties of the sources are given by the short-distance singularities of the multiple products.
high energy physics theory
Chest x-ray imaging is widely used for the diagnosis of pneumothorax and there has been significant interest in developing automated methods to assist in image interpretation. We present an image classification pipeline which detects pneumothorax as well as the various types of chest tubes that are commonly used to treat pneumothorax. Our multi-stage algorithm is based on lung segmentation followed by pneumothorax classification, including classification of patches that are most likely to contain pneumothorax. This algorithm achieves state of the art performance for pneumothorax classification on an open-source benchmark dataset. Unlike previous work, this algorithm shows comparable performance on data with and without chest tubes and thus has an improved clinical utility. To evaluate these algorithms in a realistic clinical scenario, we demonstrate the ability to identify real cases of missed pneumothorax in a large dataset of chest x-ray studies.
electrical engineering and systems science
Given entropy's central role in multiple areas of physics and science, one important task is to develop a systematic and unifying approach to defining entropy. Games of chance become a natural candidate for characterising the uncertainty of a physical system, as a system's performance in gambling games depends solely on the uncertainty of its output. In this work, we construct families of games which induce pre-orders corresponding to majorization, conditional majorization, and channel majorization. Finally, we provide operational interpretations for all pre-orders, show the relevance of these results to dynamical resource theories, and find the only asymptotically continuous classical dynamic entropy.
quantum physics
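For reference, the standard (unconditional) majorization pre-order that the abstract above refers to can be stated as follows; the conditional and channel variants treated in the paper are refinements that are not reproduced here.

```latex
% Standard majorization pre-order on probability vectors (background
% definition only, not the paper's game-theoretic construction).
\[
  p \succ q
  \quad\Longleftrightarrow\quad
  \sum_{i=1}^{k} p_i^{\downarrow} \;\ge\; \sum_{i=1}^{k} q_i^{\downarrow}
  \quad \text{for all } k = 1,\dots,n,
\]
% where $p^{\downarrow}$ denotes the entries of $p$ sorted in non-increasing
% order and both vectors sum to one.
```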
Abuse on the Internet is an important societal problem of our time. Millions of Internet users face harassment, racism, personal attacks, and other types of abuse across various platforms. The psychological effects of abuse on individuals can be profound and lasting. Consequently, over the past few years, there has been a substantial research effort towards automated abusive language detection in the field of NLP. In this position paper, we discuss the role that modeling of users and online communities plays in abuse detection. Specifically, we review and analyze the state of the art methods that leverage user or community information to enhance the understanding and detection of abusive language. We then explore the ethical challenges of incorporating user and community information, laying out considerations to guide future research. Finally, we address the topic of explainability in abusive language detection, proposing properties that an explainable method should aim to exhibit. We describe how user and community information can facilitate the realization of these properties and discuss the effective operationalization of explainability in view of the properties.
computer science
The iterative qubit coupled cluster (iQCC) method is a systematic variational approach to solving the electronic structure problem on universal quantum computers. It is able to use arbitrarily shallow quantum circuits at the expense of iterative canonical transformations of the Hamiltonian and rebuilding of the circuit. Here we present a variety of a posteriori corrections to the iQCC energies to reduce the number of iterations needed to achieve the desired accuracy. Our energy corrections are based on a low-order perturbation theory series that can be efficiently evaluated on a classical computer. Moreover, capturing a part of the total energy perturbatively allows us to formulate the qubit active-space concept, in which only a subset of all qubits is treated variationally. As a result, a further reduction of quantum resource requirements is achieved. We demonstrate the utility and efficiency of our approach numerically on the examples of the 10-qubit N$_2$ molecule dissociation, the 24-qubit H$_2$O symmetric stretch, and 56-qubit singlet-triplet gap calculations for the technologically important complex tris-(2-phenylpyridine)iridium(III), Ir(ppy)$_3$.
quantum physics
Binary systems formed by early-type stars with strong winds are known to display variable non-thermal radio emission, thermal X-rays, and, at least in one case (Eta Carinae), $\gamma$ rays. Some of these systems are quite eccentric, and the conditions for efficient particle acceleration and $\gamma$-ray production might manifest only occasionally. In this paper I briefly review the physics of colliding wind binaries with emphasis on the observational signatures of non-thermal particle acceleration. I discuss, in particular, the case of the system HD 93129A, which is made up of an O2 If* star (the primary) and an O3.5 V star (the secondary). The primary is among the earliest, hottest, most massive, most luminous, and windiest O stars in the Galaxy. The periastron passage during 2018 will offer an outstanding observational window that will be exploited by an international multi-wavelength campaign.
astrophysics
This article studies connections between group actions and their corresponding vector spaces. Given an action of a group $G$ on a nonempty set $X$, we examine the space $L(X)$ of scalar-valued functions on $X$ and its fixed subspace: $$ L^G(X) = \{f\in L(X)\colon f(a\cdot x) = f(x) \textrm{ for all }a\in G, x\in X\}. $$ In particular, we show that $L^G(X)$ is an invariant of the action of $G$ on $X$. In the case when the action is finite, we compute the dimension of $L^G(X)$ in terms of fixed points of $X$ and prove several prominent results for $L^G(X)$, including Bessel's inequality and Frobenius reciprocity.
mathematics
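A plausible form of the dimension count alluded to in the abstract above, assuming the standard orbit-counting (Burnside) argument for a finite action; the paper's precise statement may differ.

```latex
% Since a G-invariant function is constant on orbits, dim L^G(X) equals the
% number of orbits, which Burnside's lemma expresses via fixed points:
\[
  \dim L^G(X) \;=\; \#\{\text{orbits of } G \text{ on } X\}
  \;=\; \frac{1}{|G|}\sum_{a\in G} \bigl|\{x\in X : a\cdot x = x\}\bigr|.
\]
```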
This document summarises the talks and discussions that took place during the VBSCan Mid-Term Scientific Meeting workshop. The VBSCan COST action is dedicated to the coordinated study of vector boson scattering (VBS) from the phenomenological and experimental points of view, for the best exploitation of the data that will be delivered by existing and future particle colliders.
high energy physics phenomenology
${\bf Background}$ Details of the kinetic pathways governing enzymatic cleavage of hyaluronic acid (HA) by hyaluronidase are still widely uncharted. Capillary electrophoresis-based assays were used for accurate quantification of enzymatic products. A crowding agent was also employed to mimic excluded-volume constraints typical of in-vivo conditions. $ {\bf Scope}$ Introduce a comprehensive kinetic model describing the late-stage degradation of HA by hyaluronidase and identify the relevant kinetic pathways and the associated rates. ${\bf Major Conclusions}$ All relevant fragmentation and transglycosylation pathways and rates were identified. Two dimers forming a tetramer is the dominant recombination pathway. Macromolecular and self-crowding slow down the kinetics but do not alter the underlying mechanisms. ${\bf General Significance}$ Our results bring a novel and comprehensive quantitative insight into enzymatic HA degradation. Rationalizing the effect of crowding brings the intricate conditions of in-vivo settings a little closer, and also stands as a powerful tool to pinpoint relevant kinetic pathways in complex systems.
condensed matter
The discovery of alternating superconducting and insulating ground-states in magic angle graphene has suggested an intriguing analogy with cuprate high-$T_c$ materials. Here we argue that the network states of small angle twisted bilayer graphene (TBG) afford a further perspective on the cuprates by emulating their stripe-ordered phases, as in La$_{1.875}$Ba$_{0.125}$CuO$_4$. We show that the spin and valley quantum numbers of stripes in TBG fractionalize, developing characteristic signatures in the tunneling density of states and the magnetic noise spectrum of impurity spins. By examining the coupling between the charge rivers we determine the superconducting transition temperature. Our study suggests that magic angle graphene can be used for a controlled emulation of stripe superconductivity and for quantum sensing experiments of emergent anyonic excitations.
condensed matter
We investigate the properties of a quantum walk which can simulate the behavior of a spin $1/2$ particle in a model with an ordinary spatial dimension, and one extra dimension with warped geometry between two branes. Such a setup constitutes a $1+1$ dimensional version of the Randall-Sundrum model, which plays an important role in high energy physics. In the continuum spacetime limit, the quantum walk reproduces the Dirac equation corresponding to the model, which allows one to anticipate some of the properties that can be reproduced by the quantum walk. In particular, we observe that the probability distribution becomes, at large time steps, concentrated near the "low energy" brane, and can be approximated as the lowest eigenstate of the continuum Hamiltonian that is compatible with the symmetries of the model. In this way, we obtain a localization effect whose strength is controlled by a warp coefficient. In other words, here localization arises from the geometry of the model, at variance with the usual effect that originates from random irregularities, as in Anderson localization. In summary, we establish an interesting correspondence between a high energy physics model and localization in quantum walks.
quantum physics
The role of the feedback effect on physical reservoir computing is studied theoretically by solving the vortex-core dynamics in a nanostructured ferromagnet. Although the spin-transfer torque due to the feedback current makes the vortex dynamics complex, it is clarified that the feedback effect does not always contribute to the enhancement of the memory function in a physical reservoir. The memory function, characterized by the correlation coefficient between the input data and the dynamical response of the vortex core, becomes large when the delay time of the feedback current is not an integral multiple of the pulse width. On the other hand, the memory function remains small when the delay time is an integral multiple of the pulse width. As a result, a periodic behavior for the short-term memory capacity is observed with respect to the delay time, the phenomenon of which can be attributed to correlations between the virtual neurons via the feedback current.
condensed matter
In this paper, we propose a simple algorithm to cluster nonnegative data lying in disjoint subspaces. We analyze its performance in relation to a certain measure of correlation between said subspaces. We use our clustering algorithm to develop a matrix completion algorithm which can outperform standard matrix completion algorithms on data matrices satisfying certain natural conditions.
statistics
A new equation of motion, derived previously by imposing a Neumann boundary condition on the cosmological perturbation equations (Shenavar 2016a), is investigated. By studying the precession of the perihelion, it is shown that the new equation of motion suggests a small, though detectable, correction to the orbits of solar system objects. Then a system of particles is surveyed to gain a better understanding of galactic structures. The general form of the force law is also introduced, from which the rotation curve and mass discrepancy of axisymmetric disks of stars are derived. In addition, it is suggested that the mass discrepancy as a function of centripetal acceleration becomes significant near a constant acceleration $ 2c_{1}a_{0} $, where $c_{1}$ is the Neumann constant and $ a_{0} = 6.59 \times 10^{-10} $ $m/s^{2}$ is a fundamental acceleration. Furthermore, it is shown that a critical surface density equal to $ \sigma_{0}=a_{0}/G $, in which $G$ is the Newton gravitational constant, plays a significant role in the rotation curve and mass discrepancy plots. The specific form of the NFW mass density profile at small radii, $ \rho \propto 1/r $, is explained as well. Finally, the present model is tested using a sample of 39 LSB galaxies, for which we show that the rotation curve fits are generally acceptable. The derived mass-to-light ratios are also found to lie within plausible bounds, except for the galaxy F571-8.
astrophysics
In this paper it is shown that $C_\beta$-smooth functions can be approximated by neural networks with parameters $\{0,\pm \frac{1}{2}, \pm 1, 2\}$. The depth, width and the number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this means that nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in $[-1,1]$.
statistics
We propose to search for millicharged particles in electron colliders operated with the center-of-mass energies at ${\cal O}$(1-10) GeV, which include Belle II, BESIII, BaBar, and also the proposed experiment STCF. We use the monophoton final state to probe the parameter space of millicharged particles at electron colliders. We find that electron colliders have sensitivity to the previously unexplored parameter space for millicharged particles with MeV-GeV mass: $\epsilon \lesssim {\cal O}(10^{-1})$ for $0.5$ GeV $\lesssim m \lesssim 3.5$ GeV in BaBar, $\epsilon \lesssim {\cal O}(10^{-3})$ for $0.1$ GeV $\lesssim m \lesssim 1.5$ GeV in BESIII, $\epsilon \lesssim 10^{-3}-10^{-2}$ for $0.1$ GeV $\lesssim m \lesssim 4$ GeV in Belle II, and $\epsilon \lesssim {\cal O}(10^{-4})$ for $1$ MeV $\lesssim m \lesssim 1$ GeV in STCF.
high energy physics phenomenology
We construct explicit complete Ricci-flat metrics on the total spaces of certain vector bundles over flag manifolds of the group $SU(n)$, for all K\"ahler classes. These metrics are natural generalizations of the metrics of Candelas-de la Ossa on the conifold, Pando Zayas-Tseytlin on the canonical bundle over $\mathbb{CP}^1\times \mathbb{CP}^1$, as well as the metrics on canonical bundles over flag manifolds, recently constructed by van Coevering.
high energy physics theory
We introduce a novel type of self-bound droplet which carries an emergent color charge. We consider a system of particles hopping on a lattice and interacting via a commensurately sign-changing potential which is attractive at a short range. The droplet formation is heralded by spontaneous crystallization into topologically distinct domains. This endows each droplet with an emergent color charge governing their mutual interactions: attractive for equal colors and repulsive otherwise. The number of allowed colors is fixed only by the discrete spatial symmetries of the sign-changing part of the interaction potential. With increasing interaction range, the droplets become progressively more mobile, with their color charge still being energetically protected, allowing for nontrivial viscous dynamics of the interacting droplet plasmas formed during cooling. Sign-changing potentials with a short-range attraction appear quite naturally for light-mediated interactions and we concretely propose a realization in state-of-the-art experiments with cold atoms in a multimode optical cavity.
condensed matter
The kinematic distributions of the lepton pairs produced in the decay of the Standard Model Higgs to ZZ and WW are related to the polarization fractions of the virtual vector bosons. The full amplitude can be decomposed analytically into a sum of polarized terms. Several observables, in particular the invariant mass of two charged leptons, one from each of the bosons, and the lepton angular distribution in the vector boson center of mass are shown to be sensitive to the boson polarizations.
high energy physics phenomenology
Magellanic Bridge C (MB-C) is a metal-poor ($\sim$1/5 $Z_{\odot}$) low-density star-forming region located 59 kpc away in the Magellanic Bridge, offering a resolved view of the star formation process under conditions different from those in the Galaxy. From Atacama Large Millimetre Array CO (1-0) observations, we detect molecular clumps associated with candidate young stellar objects (YSOs), pre-main sequence (PMS) stars, and filamentary structure identified in far-infrared imaging. YSOs and PMS stars form in molecular gas with densities between 17-200 $M_{\odot}$ pc$^{-2}$, and have ages between $\lesssim$0.1-3 Myr. YSO candidates in MB-C have lower extinction than their Galactic counterparts. Otherwise, our results suggest that the properties and morphologies of molecular clumps, YSOs, and PMS stars in MB-C present no patent differences with respect to their Galactic counterparts, tentatively suggesting that the bottleneck to forming stars in regions similar to MB-C is the conversion of atomic gas to molecular gas.
astrophysics
We propose a non-universal $U(1)_{X}$ gauge extension of the Standard Model (SM), together with an additional Peccei-Quinn (PQ) global symmetry, to address the mass hierarchy and strong CP problems. The scheme allows us to distinguish among fermion families and to generate the fermionic mass spectrum of the SM particles. The symmetry breaking is performed by two scalar Higgs doublets and two scalar Higgs singlets, one of which contains the axion, which turns out to be a candidate for Cold Dark Matter. The exotic sector is composed of one up-type heavy quark $T$, two down-type heavy quarks $J^{1,2}$, two heavy charged leptons $E,\mathcal{E}$, one additional right-handed neutrino $\nu_{R}^{e,\mu,\tau}$ per family, and an invisible axion $a$. In addition, the large energy scale associated with the breaking of the PQ symmetry gives masses to the right-handed neutrinos in such a manner that the active neutrinos acquire $eV$-scale masses through the see-saw mechanism. On the other hand, from the non-linear effective Lagrangian, the flavour-changing couplings of the down-type quarks and charged leptons to the axion are considered. The $\tau\to ea$ branching ratio is of the order of $10^{-2}$.
high energy physics phenomenology
Given a graph $G$, a 2-coloring of the edges of $K_n$ is said to contain a balanced copy of $G$ if we can find a copy of $G$ such that half of its edges is in each color class. If there exists an integer $k$ such that, for $n$ sufficiently large, every 2-coloring of $K_n$ with more than $k$ edges in each color contains a balanced copy of $G$, then we say that $G$ is balanceable. The smallest integer $k$ such that this holds is called the balancing number of $G$. In this paper, we define a more general variant of the balancing number, the list balancing number, by considering 2-list edge colorings of $K_n$, where every edge $e$ has an associated list $L(e)$ which is a nonempty subset of the color set $\{r,b\}$. In this case, edges $e$ with $L(e) = \{r,b\}$ act as jokers in the sense that their color can be chosen $r$ or $b$ as needed. In contrast to the balancing number, every graph has a list balancing number. Moreover, if the balancing number exists, then it coincides with the list balancing number. We give the exact value of the list balancing number for all cycles except for $4k$-cycles for which we give tight bounds. In addition, we give general bounds for the list balancing number of non-balanceable graphs based on the extremal number of its subgraphs, and study the list balancing number of $K_5$, which turns out to be surprisingly large.
mathematics
We study the $\mathsf{SL}(2)$ transformation properties of spherically symmetric perturbations of the Bertotti-Robinson universe and identify an invariant $\mu$ that characterizes the backreaction of these linear solutions. The only backreaction allowed by Birkhoff's theorem is one that destroys the $AdS_2\times S^2$ boundary and builds the exterior of an asymptotically flat Reissner-Nordstr\"om black hole with $Q=M\sqrt{1-\mu/4}$. We call such backreaction with boundary condition change an anabasis. We show that the addition of linear anabasis perturbations to Bertotti-Robinson may be thought of as a boundary condition that defines a connected $AdS_2\times S^2$. The connected $AdS_2$ is a nearly-$AdS_2$ with its $\mathsf{SL}(2)$ broken appropriately for it to maintain connection to the asymptotically flat region of Reissner-Nordstr\"om. We perform a backreaction calculation with matter in the connected $AdS_2\times S^2$ and show that it correctly captures the dynamics of the asymptotically flat black hole.
high energy physics theory
We present a game-based approach to teach Bell inequalities and quantum cryptography at high school. The approach is based on kinesthetic activities and allows students to experience and discover quantum features and their applications first-hand. We represent quantum states by the orientation of students, and mimic quantitative random behaviour and measurements using dice and apps.
physics
Within the machine learning community, the widely-used uniform convergence framework has been used to answer the question of how complex, over-parameterized models can generalize well to new data. This approach bounds the test error of the worst-case model one could have fit to the data, but it has fundamental limitations. Inspired by the statistical mechanics approach to learning, we formally define and develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers from several model classes. We apply our method to compute this distribution for several real and synthetic datasets, with both linear and random feature classification models. We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model on the same datasets, indicating that "bad" classifiers are extremely rare. We provide theoretical results in a simple setting in which we characterize the full asymptotic distribution of test errors, and we show that these indeed concentrate around a value $\varepsilon^*$, which we also identify exactly. We then formalize a more general conjecture supported by our empirical findings. Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice, and that approaches based on the statistical mechanics of learning may offer a promising alternative.
statistics
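A hedged numerical sketch of the kind of experiment the abstract above describes: sample many interpolating random-feature classifiers on a toy dataset and inspect the empirical distribution of their test errors. The dataset, feature map, and brute-force sampling scheme are assumptions for illustration; the paper characterizes this distribution precisely rather than by sampling.

```python
# Sketch: empirical distribution of test errors over many interpolating
# random-feature classifiers on a synthetic problem. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, n_feat, n_models = 10, 40, 2000, 200, 300

# Toy binary classification data (assumed for illustration).
w_true = rng.normal(size=d)
X = rng.normal(size=(n_train + n_test, d))
y = np.sign(X @ w_true + 0.3 * rng.normal(size=n_train + n_test))
Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

test_errors = []
for _ in range(n_models):
    W = rng.normal(size=(d, n_feat))                 # fresh random feature map
    Phi_tr, Phi_te = np.tanh(Xtr @ W), np.tanh(Xte @ W)
    # Min-norm least squares: with n_feat >> n_train this interpolates ytr.
    theta = np.linalg.pinv(Phi_tr) @ ytr
    assert np.allclose(Phi_tr @ theta, ytr, atol=1e-6)   # interpolation check
    test_errors.append(np.mean(np.sign(Phi_te @ theta) != yte))

test_errors = np.array(test_errors)
print("typical test error:", np.median(test_errors),
      "worst sampled:", test_errors.max())
```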
When epidemiologic studies are conducted in a subset of the population, selection bias can threaten the validity of causal inference. This bias can occur whether or not that selected population is the target population, and can occur even in the absence of exposure-outcome confounding. However, it is often difficult to quantify the extent of selection bias, and sensitivity analysis can be challenging to undertake and to understand. In this article we demonstrate that the magnitude of the bias due to selection can be bounded by simple expressions defined by parameters characterizing the relationships between unmeasured factor(s) responsible for the bias and the measured variables. No functional form assumptions are necessary about those unmeasured factors. Using knowledge about the selection mechanism, researchers can account for the possible extent of selection bias by specifying the size of the parameters in the bounds. We also show that the bounds, which differ depending on the target population, result in summary measures that can be used to calculate the minimum magnitude of the parameters required to shift a risk ratio to the null. The summary measure can be used to determine the overall strength of selection that would be necessary to explain away a result. We then show that the bounds and summary measures can be simplified in certain contexts or with certain assumptions. Using examples with varying selection mechanisms, we also demonstrate how researchers can implement these simple sensitivity analyses.
statistics
In the eighties, Schr\"oder studied a quantum mechanical model where the stationary states of Schr\"odinger's equation obey nonlocal boundary conditions on a circle in the plane. For such a problem, we perform a detailed one-loop calculation for three choices of the kernel characterizing the nonlocal boundary conditions. In such cases, the $\zeta(0)$ value is found to coincide with the one resulting from Robin boundary conditions. The detailed technique here developed may be useful for studying one-loop properties of quantum field theory and quantum gravity if nonlocal boundary conditions are imposed.
high energy physics theory
Here I develop the connection between thermodynamics, entanglement, and gravity. I begin by showing that the classical null energy condition (NEC) can arise as a consequence of the second law of thermodynamics applied to local holographic screens. This is accomplished by essentially reversing the steps of Hawking's area theorem, leading to the Ricci convergence condition as an input, from which an application of Einstein's equations yields the NEC -- even in the presence of 1-loop quantum corrections to the Bekenstein-Hawking entropy formula. Then, by attributing thermodynamics to the stretched horizon of future lightcones -- a timelike hypersurface generated by a collection of radially accelerating observers with constant and uniform proper acceleration -- I derive Einstein's equations from the Clausius relation $T\Delta S_{\text{rev}}=Q$, where $\Delta S_{\text{rev}}$ is the reversible entropy change. Based on this derivation I uncover a local first law of gravity, $\Delta E=T\Delta S-W$, connecting gravitational entropy $S$ to matter energy $E$ and work $W$. I then provide an entanglement interpretation of stretched lightcone thermodynamics by extending the entanglement equilibrium proposal. Using the $\text{AdS}_{3}/\text{CFT}_{2}$ correspondence, I then provide a microscopic explanation of the `thermodynamic volume' in extended black hole thermodynamics and reveal the super-entropicity of $\text{AdS}_{3}$ black holes is due to the gravitational entropy overcounting the number of available dual $\text{CFT}_{2}$ states. Finally, I conclude by providing a recent generalization of the extended first law of entanglement, and study its non-trivial 2+1- and 1+1-dimensional limits, including an extended first law for Jackiw-Teitelboim gravity. This thesis is self-contained and pedagogical by including useful background content relevant to emergent gravity.
high energy physics theory
We examine the behavior of the Ramond ground states in the D1-D5 CFT after a deformation of the free-orbifold sigma model on target space $({\mathbb T}^4)^N / S_N$ by a marginal interaction operator. These states are compositions of Ramond ground states of the twisted and untwisted sectors. They are characterized by a conjugacy class of $S_N$ and by the set of their "spins", including both R-charge and "internal" SU(2) charge. We compute the four-point functions of an arbitrary Ramond ground state with its conjugate and two interaction operators, for genus-zero covering surfaces representing the leading orders in the large-$N$ expansion. We examine short distance limits of these four-point functions, shedding light on the dynamics of the interacting theory. We find the OPEs and a collection of structure constants of the ground states with the interaction operators and a set of resulting non-BPS twisted operators. We also calculate the integrals of the four-point functions over the positions of the interaction operators and show that they vanish. This provides an explicit demonstration that the Ramond ground states remain protected against deformations away from the free orbifold point, as expected from algebraic considerations using the spectral flow of the ${\mathcal N} = (4,4)$ superconformal algebra with central charge $c = 6N$.
high energy physics theory
Coronavirus disease (COVID-19) is a severe ongoing novel pandemic that emerged in Wuhan, China, in December 2019. As of October 13, the outbreak has spread rapidly across the world, affecting over 38 million people and causing over 1 million deaths. In this article, I analysed several time series forecasting methods to predict the spread of the COVID-19 second wave in Italy, over the period after October 13, 2020. I used an autoregressive model (ARIMA), an exponential smoothing state space model (ETS), a neural network autoregression model (NNAR), and the following hybrid combinations of them: ARIMA-ETS, ARIMA-NNAR, ETS-NNAR, and ARIMA-ETS-NNAR. Regarding the data, I forecasted the number of patients hospitalized with mild symptoms and the number in intensive care units (ICU). The data refer to the period February 21, 2020-October 13, 2020 and are extracted from the website of the Italian Ministry of Health (www.salute.gov.it). The results show that i) the hybrid models, except for ARIMA-ETS, are better at capturing the linear and non-linear epidemic patterns, outperforming the respective single models; and ii) the number of COVID-19-related patients hospitalized with mild symptoms and in ICU will increase rapidly in the coming weeks, reaching the peak in about 50-60 days, i.e. in mid-December 2020 at the earliest. To tackle the upcoming COVID-19 second wave, on the one hand, it is necessary to hire healthcare workers and provide sufficient hospital facilities, protective equipment, and ordinary and intensive care beds; on the other hand, it may be useful to enhance social distancing by improving public transport and adopting a double-shift schooling system, for example.
statistics
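A minimal hybrid-forecasting sketch in the spirit of the abstract above, using Python stand-ins: ARIMA and exponential smoothing from statsmodels, an NNAR-like lagged neural network from scikit-learn, and their simple average. The synthetic series, model orders, and averaging scheme are assumptions for illustration; they are not the paper's R-based models or the Ministry of Health data.

```python
# Sketch: ARIMA, ETS-like, and NNAR-like forecasts plus a simple hybrid
# average on a synthetic series standing in for hospitalization counts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(200)
series = 50 + 0.4 * t + 10 * np.sin(t / 15) + rng.normal(0, 2, size=200)
h = 14  # forecast horizon in days (assumed)

arima_fc = ARIMA(series, order=(2, 1, 2)).fit().forecast(steps=h)
ets_fc = ExponentialSmoothing(series, trend="add").fit().forecast(h)

# NNAR-like model: MLP on the last p values, iterated one step at a time.
p = 7
X = np.array([series[i:i + p] for i in range(len(series) - p)])
y = series[p:]
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
window, nnar_fc = list(series[-p:]), []
for _ in range(h):
    nxt = mlp.predict(np.array(window[-p:]).reshape(1, -1))[0]
    nnar_fc.append(nxt)
    window.append(nxt)

hybrid_fc = (np.asarray(arima_fc) + np.asarray(ets_fc) + np.asarray(nnar_fc)) / 3
print("hybrid 14-day forecast:", np.round(hybrid_fc, 1))
```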
In the classical picture of insulators, electrons occupy localized Wannier (atomic-like) orbitals. However, a recent theoretical construction has identified band insulators whose electron filling conflicts with any such atomic description. The electronic wave functions of these insulators, termed filling-enforced quantum band insulators (feQBIs), display a necessary degree of quantum entanglement due to the non-atomic filling. Currently, little is known about the relation between feQBIs and conventional topological invariants. In this work, we study such relations for a particularly interesting example of a half-filling feQBI realized in space group 106 with spinless electrons. We prove that any 4-band feQBI in space group 106 with filling 2 must have a nontrivial topological invariant, namely the $\mathbb{Z}_2$ glide invariant, and thus must have a quantized magnetoelectric polarizability $\theta=\pi$. Such a locking between electron filling and a topological invariant raises intriguing questions for the classification of topological phases. In particular, it points to a 3D analog of the well-known locking between a 2D topological invariant, the Chern number, and filling of 2D electrons in a magnetic field.
condensed matter
The study of scattered polarized light has led to important advances in distinct fields such as astronomy, atmospheric sciences and bio-imaging. In random diffusing media, light disorientation and the scrambling of its polarization state appear to always occur together. Their apparent inseparability suggests a profound connection between optical transport and depolarization. Here, we present experimental evidence of their equivalence and quantify their relationship in colloidal suspensions of microscopic constituents. In particular, a proportionality relation between optical transport lengths and their depolarization counterparts is provided. This equivalence imposes depolarization whenever light traverses random media and holds for wide spectral ranges and scatterer concentrations. Our results clarify the connection between microscopic processes and measurable polarization signatures.
physics
Speech enhancement is a crucial task for several applications. Among the most explored techniques are the Wiener filter and the LogMMSE, but approaches exploring deep learning adapted to this task, such as SEGAN, have presented relevant results. This study compared the performance of the mentioned techniques in 85 noise conditions regarding quality, intelligibility, and distortion; and concluded that classical techniques continue to exhibit superior results for most scenarios, but, in severe noise scenarios, SEGAN performed better and with lower variance.
electrical engineering and systems science
We developed an optical-resolution photoacoustic microscope (OR-PAM) capable of changing its focal plane at high speed by using a tunable acoustic gradient (TAG) lens. In our system, the TAG lens is designed to continuously change the focal plane of the OR-PAM by modulating its refractive power with a fast-changing ultrasonic standing wave. The raster scanning of the microscope and the laser pulses are synchronized to the same phase of the driving signal of the TAG lens. By selecting the synchronized phase, an arbitrary focal plane can be chosen within a range of about 1 millimeter in 1.4 microseconds in our system. A phantom composed of tungsten wires is used to verify the varifocal capability of the system. An in-vivo study was carried out to further demonstrate the large depth of field (DoF) achieved by our method without any moving parts.
physics
We prove three conjectures, related to the paperfolding sequence, in a recent paper [arXiv:2005.04066] of P. Barry.
mathematics
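For context, the regular paperfolding sequence mentioned in the abstract above can be generated from its standard rule as sketched below; this is background only and is not the content of Barry's conjectures or their proofs.

```python
# Background sketch: the regular paperfolding sequence (OEIS A014577),
# generated from the standard rule on the odd part of n.
def paperfold(n: int) -> int:
    """Return the n-th term (n >= 1) of the regular paperfolding sequence."""
    while n % 2 == 0:   # strip factors of two to get the odd part m
        n //= 2
    return 1 if n % 4 == 1 else 0

print([paperfold(n) for n in range(1, 17)])
# -> [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1]
```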
We put forward the idea that classical blockchains and smart contracts are potentially useful primitives not only for classical cryptography, but for quantum cryptography as well. Abstractly, a smart contract is a functionality that allows parties to deposit funds, and release them upon fulfillment of algorithmically checkable conditions, and can thus be employed as a formal tool to enforce monetary incentives. In this work, we give the first example of the use of smart contracts in a quantum setting. We describe a simple hybrid classical-quantum payment system whose main ingredients are a classical blockchain capable of handling stateful smart contracts, and quantum lightning, a strengthening of public-key quantum money introduced by Zhandry (Eurocrypt'19). Our hybrid payment system employs quantum states as banknotes and a classical blockchain to settle disputes and to keep track of the valid serial numbers. It has several desirable properties: it is decentralized, requiring no trust in any single entity; payments are as quick as quantum communication, regardless of the total number of users; when a quantum banknote is damaged or lost, the rightful owner can recover the lost value.
quantum physics
Weakly stationary Gaussian processes (GPs) are the principal tool in the statistical approaches to the design and analysis of computer experiments (or Uncertainty Quantification). Such processes are fitted to computer model output using a set of training runs to learn the parameters of the process covariance kernel. The stationarity assumption is often adequate, yet can lead to poor predictive performance when the model response exhibits nonstationarity, for example, if its smoothness varies across the input space. In this paper, we introduce a diagnostic-led approach to fitting nonstationary GP emulators by specifying finite mixtures of region-specific covariance kernels. Our method first fits a stationary GP and, if traditional diagnostics exhibit nonstationarity, those diagnostics are used to fit appropriate mixing functions for a covariance kernel mixture designed to capture the nonstationarity, ensuring an emulator that is continuous in parameter space and readily interpretable. We compare our approach to the principal nonstationary GP models in the literature and illustrate its performance on a number of idealised test cases and in an application to modelling the cloud parameterization of the French climate model.
statistics
In this work we show that various algorithms, ubiquitous in convex optimization (e.g. proximal-gradient, alternating projections and averaged projections) generate self-contracted sequences $\{x_{k}\}_{k\in\mathbb{N}}$. As a consequence, a novel universal bound for the \emph{length} ($\sum_{k\ge 0}\Vert x_{k+1}-x_k\Vert$) can be deduced. In addition, this bound is independent of both the concrete data of the problem (sets, functions) as well as the stepsize involved, and only depends on the dimension of the space.
mathematics
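A hedged numerical sketch of one of the algorithms named in the abstract above (alternating projections), together with the path-length quantity $\sum_{k\ge 0}\Vert x_{k+1}-x_k\Vert$ that the universal bound controls. The two convex sets, the starting point, and the stopping tolerance are assumptions for illustration.

```python
# Sketch: alternating projections between a ball and a halfspace in the
# plane, recording the total path length of the generated sequence.
import numpy as np

def project_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    # Projection onto {y : a.y >= b}.
    gap = b - a @ x
    return x if gap <= 0 else x + gap * a / (a @ a)

center, radius = np.array([0.0, 0.0]), 1.0
a, b = np.array([1.0, 1.0]), 1.3     # halfspace x + y >= 1.3 (meets the ball)

x = np.array([3.0, -2.0])            # assumed starting point
path_length, n_iter = 0.0, 0
for _ in range(500):
    x_new = project_ball(project_halfspace(x, a, b), center, radius)
    path_length += np.linalg.norm(x_new - x)
    n_iter += 1
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

print("iterations:", n_iter, "total path length:", path_length)
```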
Mars shares many similarities and characteristics with Earth, including various geological features and planetary structure. The remarkable bimodal distribution of elevations on both planets is one of the most striking global features, suggesting similar geodynamic processes of crustal differentiation on Earth and Mars. There are also several pieces of evidence, based on geographic features resembling ancient shorelines, for the existence of an ancient martian ocean in the northern hemisphere, covering nearly one third of the planet's surface. However, the interpretation of some features as ancient shorelines has been thoroughly challenged, which has left the existence of a primordial martian ocean controversial. Moreover, if oceans were formerly present on Mars, there is still a large ambiguity about the volume of water, with estimates ranging over $4$ orders of magnitude. Here we map the martian sea level problem onto a percolation model, which provides strong evidence that the longest iso-height line on Mars that separates the northern and southern hemispheres acts as a critical level height with divergent correlation length and plays the same role as the present mean sea level does on Earth. Our results unravel remarkable similarities between Mars and Earth, posing a testable hypothesis about the level of the ancient ocean on Mars that can be answered by future investigations and spacecraft exploration.
astrophysics
To manage the huge number of flexible distributed energy resources (DERs) in distribution networks, the virtual power plant (VPP) has been introduced in industry. The VPP can optimally dispatch these resources in a clustered way and provide flexibility for the operation of the power system as a whole. Most existing works formulate the equivalent power flexibility of the aggregated DERs as deterministic optimization models without considering their uncertainties. In this paper, we introduce the stochastic power flexibility range (PFR) to describe the power flexibility of the VPP, which is formulated as a chance constrained optimization model. In this model, both the operational constraints and the randomness of the DERs' output are incorporated, and a combined model- and data-driven solution is proposed to obtain the stochastic PFR and the cost function of the VPP. Finally, numerical tests are conducted to verify the correctness and efficiency of the proposed method.
electrical engineering and systems science
This paper describes the demonstration of linearly polarized picosecond pulse shaping with variable profiles including symmetric and non-symmetric intensity distributions. Important characteristics such as stability and transmission were studied, resulting in highly reliable performance of this fan-type birefringent shaping system. This variable temporal shaping technique is applicable over a wide range of laser parameters and may lead to new opportunities for many potential applications. A new double-pass variable temporal shaping method that significantly reduces the required crystal quantity is also proposed in this paper.
physics
Mixed-monotone systems are separable via a decomposition function into increasing and decreasing components, and this decomposition function allows for embedding the system dynamics in a higher-order monotone embedding system. Embedding the system dynamics in this way facilitates the efficient over-approximation of reachable sets with hyperrectangles; however, unlike the monotonicity property, which can be applied to compute, e.g., the tightest hyperrectangle containing a reachable set, the application of the mixed-monotonicity property generally results in conservative reachable set approximations. In this work, we explore conservatism in the method and we consider, in particular, embedding systems that are monotone with respect to an alternative partial order. This alternate embedding system is constructed with a decomposition function for a related system, formed via a linear transformation of the initial state-space. We show how these alternate embedding systems allow for computing reachable sets with improved fidelity, i.e., reduced conservatism.
electrical engineering and systems science
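A hedged sketch of the basic embedding construction the abstract above builds on, for a simple linear system and before any change of coordinates: the decomposition function keeps decreasing off-diagonal dependence on the "opposite" state so that the doubled system is monotone, and integrating it yields a hyperrectangular over-approximation of the reachable set. The system, initial box, and Euler integration are assumptions for illustration; the coordinate-transformed embeddings proposed in the paper are not implemented here.

```python
# Sketch: hyperrectangular reachable-set over-approximation via a
# mixed-monotone embedding system for x1' = -x1 - x2, x2' = x1 - x2.
import numpy as np

def f(x):
    # True dynamics (assumed example system).
    return np.array([-x[0] - x[1], x[0] - x[1]])

def d(x, xh):
    # Decomposition function: diagonal terms use x; an off-diagonal term uses
    # x when the dependence is increasing and xh when it is decreasing.
    return np.array([-x[0] - xh[1],   # -x2 is decreasing in x2 -> use xh[1]
                      x[0] - x[1]])   # +x1 is increasing in x1 -> use x[0]

dt, T = 0.01, 2.0
lo, hi = np.array([0.5, -0.5]), np.array([1.0, 0.5])   # initial box (assumed)

# Integrate the embedding system [lo'; hi'] = [d(lo, hi); d(hi, lo)].
box_lo, box_hi = lo.copy(), hi.copy()
for _ in range(int(T / dt)):
    box_lo, box_hi = box_lo + dt * d(box_lo, box_hi), box_hi + dt * d(box_hi, box_lo)

# Sanity check: sampled true (Euler) trajectories stay inside the final box.
rng = np.random.default_rng(0)
for _ in range(100):
    x = lo + rng.random(2) * (hi - lo)
    for _ in range(int(T / dt)):
        x = x + dt * f(x)
    assert np.all(x >= box_lo - 1e-6) and np.all(x <= box_hi + 1e-6)

print("over-approximating box at T = 2:", box_lo, box_hi)
```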
We question the use of triangle diagrams in the broken phase of the Standard Model Effective Field Theory (SMEFT) to derive sum-rules on certain dimension-six coefficients, as recently put forward by Cata, Kilian and Kreher in Ref. [arXiv:2011.09976]. Indeed, the aforementioned sum-rules are violated at tree level in simple consistent BSM scenarios. The underlying reason is that gauge-invariant combinations of Goldstone bosons and massive gauge fields are allowed to couple to matter currents which are not conserved. We show this in a toy model by computing the relevant triangle diagrams, as well as by working out Wess--Zumino terms in the bosonic EFT below all fermion masses. The same approach applies also to the Standard Model and it provides a convenient and unusual way to check that the SM is anomaly free. We then apply similar techniques to the SMEFT in presence of various dimension-6 operators.
high energy physics phenomenology
In this feature article we summarise and highlight aspects of the treatment of four-quark states with functional methods. Model approaches to those exotic mesons almost inevitably have to assume certain internal structures, e.g. by grouping quarks and antiquarks into (anti-)diquark clusters or heavy-light $q\bar{q}$ pairs. Functional methods using Dyson-Schwinger and Bethe-Salpeter equations can be formulated without such prejudice and therefore have the potential to put these assumptions to test and discriminate between such models. So far, functional methods have been used to study the light scalar-meson sector and the heavy-light sector with a pair of charmed and a pair of light quarks in different quantum number channels. For all these states, the dominant components in terms of internal two-body clustering have been identified. It turns out that chiral symmetry breaking plays an important role for the dominant clusters in the light meson sector (in particular for the scalar mesons) and that this property is carried over to the heavy-light sector. Diquark-antidiquark components, on the other hand, turn out to be almost negligible for most states with the exception of open-charm heavy-light exotics.
high energy physics phenomenology
Approximate Bayesian methods can mitigate overconfidence in ReLU networks. However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident. We suggest to fix this by considering an infinite number of ReLU features over the input domain that are never part of the training process and thus remain at prior values. Perhaps surprisingly, we show that this model leads to a tractable Gaussian process (GP) term that can be added to a pre-trained BNN's posterior at test time with negligible cost overhead. The BNN then yields structured uncertainty in the proximity of training data, while the GP prior calibrates uncertainty far away from them. As a key contribution, we prove that the added uncertainty yields cubic predictive variance growth, and thus the ideal uniform (maximum entropy) confidence in multi-class classification far from the training data.
computer science
Quasicrystals are nonperiodic structures having no translational symmetry but nonetheless possessing long-range order. The material properties of quasicrystals, particularly their low-temperature behavior, defy easy description. We present a compact optical setup for creating quasicrystal optical potentials with 5-fold symmetry using interference of nearly co-propagating beams for use in ultracold atom quantum simulation experiments. We verify the optical design through numerical simulations and demonstrate a prototype system. We also discuss generating phason excitations and quantized transport in the quasicrystal through phase modulation of the beams.
physics
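A hedged sketch of the basic mechanism behind the 5-fold quasicrystalline potential described in the abstract above: the intensity pattern of five interfering plane waves with in-plane wavevectors separated by 72 degrees. The wavelength, equal amplitudes, zero relative phases, and grid are assumptions for illustration and do not reproduce the paper's optical beam layout.

```python
# Sketch: quasiperiodic intensity pattern from five equal-amplitude plane
# waves with wavevectors at 72-degree increments (illustrative parameters).
import numpy as np

k = 2 * np.pi          # wavenumber in arbitrary units (assumed)
angles = 2 * np.pi * np.arange(5) / 5
kvecs = k * np.stack([np.cos(angles), np.sin(angles)], axis=1)

n = 400
xs = np.linspace(-5, 5, n)
X, Y = np.meshgrid(xs, xs)

field = np.zeros((n, n), dtype=complex)
for kx, ky in kvecs:
    field += np.exp(1j * (kx * X + ky * Y))   # equal amplitudes, zero phases

intensity = np.abs(field) ** 2                 # optical potential follows intensity
print("intensity range:", intensity.min(), intensity.max())
# Plotting intensity (e.g., with matplotlib's imshow) reveals the
# quasiperiodic pattern with 10-fold-symmetric diffraction structure.
```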
Recently, hollow thermoplastic microspheres, such as Expancel made by Nouryon, have emerged as an innovative filler material for use in polymer-matrix composites. The resulting all-polymer syntactic foam takes on excellent damage tolerance properties, strong recoverability under large strains, and favourable energy dissipation characteristics. Despite finding increasing usage in various industries and applications, including in coatings, films, sealants, packaging, composites for microfluidics, medical ultrasonics and cementious composites, there is a near-complete absence of statistical geometrical information for Expancel microspheres. Further, their mechanical properties have not yet been reported. In this work we characterise the geometrical quantities of two classes of Expancel thermoplastic microspheres using X-ray computed tomography, focused ion beam and electron microscopy. We also observe the spatial distribution of microspheres within a polyurethane-matrix syntactic foam. We show that the volume-weighted polydisperse shell diameter in both classes of microsphere follows a normal distribution. Interestingly, polydispersity of the shell wall thickness is not observed and in particular the shell thickness is not correlated to the shell diameter. We employ the measured geometrical information in analytical micromechanical techniques in the small strain regime to determine, for the first time, estimates of the Young's modulus and Poisson's ratio of the microsphere shell material. Our results contribute to potential future improvements in the design and fabrication of syntactic foams that employ thermoplastic microspheres. Given the breadth of fields which utilise thermoplastic microspheres, we anticipate that our results, together with the methods used, will be of use in a much broader context in future materials research.
physics
Extended defects in crystals, such as dislocations, stacking faults and grain boundaries, play a crucial role in determining a wide variety of materials properties. Extended defects can also lead to novel electronic properties in two-dimensional materials, as demonstrated by recent discoveries of emergent electronic phenomena in twisted graphene bilayers. This paper describes several approaches to construct crystallographic models of two-dimensional extended defects in crystals for first-principles electronic structure calculations, including (i) crystallographic models to parameterize generalized cohesive zone models for fracture studies and meso-scale models of dislocations and (ii) crystallographic models of twisted bilayers. The approaches are implemented in an open source software package called MultiShifter.
condensed matter
We provide a formal definition of p-brane Newton--Cartan (pNC) geometry and establish some foundational results. Our approach is the same followed in the literature for foundations of Newton--Cartan Gravity. Our results provide control of aspects of pNC geometry that are otherwise unclear when using the usual gauge language of non-relativistic theories of gravity. In particular, we obtain a set of necessary and sufficient conditions that a pNC structure must satisfy in order to admit torsion-free, compatible affine connections, and determine the space formed by the latter. Since pNC structures interpolate between Leibnizian structures for p=0 and Lorentzian structures for p=d-1 (with d the dimension of the spacetime manifold), the present work also constitutes a generalisation of results of Newton--Cartan and (pseudo-) Riemannian geometry.
high energy physics theory
A buncher cavity has been developed for the muons accelerated by a radio-frequency quadrupole linac (RFQ). The buncher cavity is designed for $\beta=v/c=0.04$ at an operational frequency of 324 MHz. It employs a double-gap structure operated in the TEM mode for the required effective voltage with compact dimensions, in order to account for the limited space of the experiment. The measured resonant frequency and unloaded quality factor are 323.95 MHz and $3.06\times10^3$, respectively. The buncher cavity was successfully operated for longitudinal bunch size measurement of the muons accelerated by the RFQ.
physics
To improve accuracy in calculating QCD effects, we propose a method for renormalon subtraction in the context of the operator-product expansion. The method enables subtracting renormalons of various powers in $\Lambda_{\rm QCD}$ efficiently and simultaneously from single-scale observables. We apply it to different observables and examine consistency with theoretical expectations.
high energy physics phenomenology
The cosmological relaxion can address the hierarchy problem, while its coherent oscillations can constitute dark matter in the present universe. We consider the possibility that the relaxion forms gravitationally bound objects that we denote as relaxion stars. The density of these stars would be higher than that of the local dark matter density, resulting in enhanced signals in table-top detectors, among others. Furthermore, we raise the possibility that these objects may be trapped by an external gravitational potential, such as that of the Earth or the Sun. This leads to formation of relaxion halos of even greater density. We discuss several interesting implications of relaxion halos, as well as detection strategies to probe them.
high energy physics phenomenology
Molecular vibrations play a critical role in the charge transport properties of weakly van der Waals bonded organic semiconductors. To understand which specific phonon modes contribute most strongly to the electron-phonon coupling and ensuing thermal energetic disorder in some of the most widely studied high mobility molecular semiconductors, state-of-the-art quantum mechanical simulations of the vibrational modes and the ensuing electron-phonon coupling constants are combined with experimental measurements of the low-frequency vibrations using inelastic neutron scattering and terahertz time-domain spectroscopy. In this way, the long-axis sliding motion is identified as a killer phonon mode, which in some molecules contributes more than 80% to the total thermal disorder. Based on this insight, a way to rationalize mobility trends between different materials and derive important molecular design guidelines for new high mobility molecular semiconductors is suggested.
condensed matter
In livestock farming, animal health directly influences productivity. For dairy cows, many health conditions can be evaluated by trained observers based on visual appearance and movement. However, to manually evaluate every cow in a commercial farm is expensive and impractical. This paper introduces a video-analytic system which automatically detects the cow structure from captured video sequences. A side-view cow structural model is designed to describe the spatial positions of the joints (keypoints) of the cow, and we develop a system using deep learning to automatically extract the structural model from videos. The proposed detection system can detect multiple cows in the same frame and provide robust performance under practical challenges like obstacles (fences) and poor illumination. Compared to other object detection methods, this system provides better detection results and successfully isolates the keypoints of each cow even when they are close to each other.
electrical engineering and systems science
We introduce a two-sorted algebraic theory whose models are states of MV-algebras and, to within a categorical equivalence that extends Mundici's well-known one, states of Abelian lattice-groups with (strong order) unit. We discuss free states, and their relation to the universal state of an MV-algebra. We clarify the relationship of such universal states with the theory of affine representations of lattice-groups. Main result: The universal state of any locally finite MV-algebra---in particular, of any Boolean algebra---has semisimple codomain.
mathematics
Reliability-based design optimization (RBDO) provides a rational and sound framework for finding the optimal design while taking uncertainties into account. The main issue in implementing RBDO methods, particularly stochastic simulation-based ones, is the computational burden arising from the evaluation of reliability constraints. In this contribution, we propose an efficient method which approximates the failure probability functions (FPF) to decouple the reliability analysis from the optimization. Based on the augmentation concept, the approximation of the FPF is equivalent to density estimation of failure design samples. Unlike traditional density estimation schemes, where the estimation is conducted in the entire design space, in the proposed method we iteratively partition the design space into several subspaces according to the distribution of failure design samples. Numerical results of an illustrative example indicate that the proposed method can improve the computational performance considerably.
statistics
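A minimal sketch of the augmentation idea behind approximating the failure probability function: under an instrumental density over the design variable, the FPF is proportional to the density of failure design samples, which can be estimated with a simple kernel density estimate. The limit-state function and distributions below are invented for illustration, and the paper's iterative partitioning of the design space is not reproduced.

```python
# Hedged sketch of the augmentation idea: the failure probability as a
# function of the design variable is recovered from the density of
# "failure design samples" divided by the instrumental design density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def limit_state(d, x):
    # failure when g < 0; d is the design variable, x a random variable
    return d - x

# augmented sampling: designs from an instrumental density, randoms from their model
designs = rng.uniform(1.0, 5.0, size=100_000)
randoms = rng.normal(loc=2.0, scale=1.0, size=100_000)
fail = limit_state(designs, randoms) < 0

kde = gaussian_kde(designs[fail])        # density of failure samples in design space
overall_pf = fail.mean()
instrumental_pdf = 1.0 / (5.0 - 1.0)     # uniform(1, 5)

def fpf(d):
    """Approximate failure probability as a function of the design d."""
    return overall_pf * kde(d) / instrumental_pdf

print(fpf(np.array([1.5, 3.0, 4.5])))    # e.g. fpf(3.0) should be close to 1 - Phi(1) ~ 0.16
```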
We provide the reader with an accessible yet rigorous introduction to Bayesian optimisation with Gaussian processes (BOGP) for the purpose of solving a wide variety of radio resource management (RRM) problems. We believe that BOGP is a powerful tool that has been somewhat overlooked in RRM research, although it elegantly addresses pressing requirements for fast convergence, safe exploration, and interpretability. BOGP also provides a natural way to exploit prior knowledge during optimization. After explaining the nuts and bolts of BOGP, we delve into more advanced topics, such as the choice of the acquisition function and the optimization of dynamic performance functions. Finally, we put the theory into practice for the RRM problem of uplink open-loop power control (OLPC) in 5G cellular networks, for which BOGP is able to converge to almost optimal solutions in tens of iterations without significant performance drops during exploration.
computer science
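A minimal BOGP loop of the kind described above, on a toy one-dimensional objective rather than the OLPC problem: fit a Gaussian process to the observations, maximise an expected-improvement acquisition on a grid, and evaluate the selected point. The objective, kernel and hyperparameters are illustrative only.

```python
# Minimal Bayesian optimisation loop with a GP surrogate and an
# expected-improvement acquisition, on a toy 1-D objective.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return -np.sin(3 * x) - x ** 2 + 0.7 * x   # toy performance function

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))            # initial design
y = objective(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmax(y)], "best value:", y.max())
```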
Although the Navier-Stokes equation (NSE) is derived under angular-momentum conservation (AMC), numerical simulation methods often lack it. Here, we reveal that AMC violations result from implementation of the degenerated viscous terms of NSE. To maintain AMC, these degenerated terms must be separately integrated in accordance with their stress origins. As observed in particle-based hydrodynamics methods, the violation causes artificial rotations in multi-component fluids with different viscosities. At the interface between two fluids or with a mobile solid object, AMC must be satisfied, whereas AMC can be neglected in bulk fluids. We also clarify that the condition for constant fluid rotation as a rigid body in a container rotating at a constant speed is not the AMC of the stresses, but the invariance of the viscous forces under a global rotation. To confirm our theory, we simulated the circular laminar flows of single- and binary-component fluids using two-dimensional Lagrangian finite volume methods. The results show excellent agreement with the analytical predictions for fluids with and without AMC.
physics
We study active learning (AL) based on Gaussian Processes (GPs) for efficiently enumerating all of the local minimum solutions of a black-box function. This problem is challenging due to the fact that local solutions are characterized by their zero gradient and positive-definite Hessian properties, but those derivatives cannot be directly observed. We propose a new AL method in which the input points are sequentially selected such that the confidence intervals of the GP derivatives are effectively updated for enumerating local minimum solutions. We theoretically analyze the proposed method and demonstrate its usefulness through numerical experiments.
statistics
We treat the problem of color enhancement as an image translation task, which we tackle using both supervised and unsupervised learning. Unlike traditional image-to-image generators, our translation is performed using a global parameterized color transformation instead of learning to directly map image information. In the supervised case, every training image is paired with a desired target image and a convolutional neural network (CNN) learns the parameters of the transformation from the expert-retouched images. In the unpaired case, we employ two-way generative adversarial networks (GANs) to learn these parameters and apply a circularity constraint. We achieve state-of-the-art results compared to both supervised (paired data) and unsupervised (unpaired data) image enhancement methods on the MIT-Adobe FiveK benchmark. Moreover, we show the generalization capability of our method, by applying it to photos from the early 20th century and to dark video frames.
computer science
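To make the notion of a global parameterized color transformation concrete, the sketch below applies a single 3x3 color matrix plus bias identically to every pixel; this is the kind of transformation such a network could regress, but the CNN/GAN that predicts the parameters is not included and the parameter values are arbitrary.

```python
# Illustration of a *global* parameterized color transformation: one 3x3
# color matrix and a bias applied identically to every pixel.
import numpy as np

def apply_global_color_transform(image, matrix, bias):
    """image: (H, W, 3) float array in [0, 1]; returns the transformed image."""
    h, w, _ = image.shape
    flat = image.reshape(-1, 3)
    out = flat @ matrix.T + bias
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))

# slightly warmer, higher-contrast look (illustrative parameters only)
M = np.array([[1.10, 0.02, 0.00],
              [0.00, 1.05, 0.00],
              [0.00, 0.00, 0.95]])
b = np.array([0.02, 0.00, -0.02])

enhanced = apply_global_color_transform(img, M, b)
print(enhanced.shape, enhanced.min(), enhanced.max())
```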
The interplay of black hole and cosmological horizons introduces distinctive thermodynamic behavior for deSitter black holes, including well-known upper bounds for the mass and entropy. We point to a new such feature, a Schottky peak in the heat capacity of Schwarzschild-deSitter (SdS) black holes. With this behavior in mind, we explore statistical models for the underlying quantum degrees of freedom of SdS holes. While a simple two-state spin model gives Schottky behavior, in order to capture the non-equilibrium nature of the SdS system we consider a system with a large number of non-interacting spins. We examine to what extent constrained states of this system reproduce the thermodynamic properties of the black hole. We also review results of a recent study of particle production in SdS spacetimes in light of the Schottky anomaly and our spin models.
high energy physics theory
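For reference, the two-state spin model mentioned above has the textbook Schottky heat capacity $C/k_B = x^2 e^x/(1+e^x)^2$ with $x=\Delta/(k_B T)$; the short script below locates its single broad peak numerically (a sanity check, not part of the paper's analysis).

```python
# Heat capacity of a simple two-state (Schottky) spin model:
# C/k_B = x^2 e^x / (1 + e^x)^2 with x = Delta/(k_B T).
# Units with k_B = Delta = 1; the script locates the Schottky peak numerically.
import numpy as np

def schottky_heat_capacity(T, delta=1.0):
    x = delta / T
    return x ** 2 * np.exp(x) / (1.0 + np.exp(x)) ** 2

T = np.linspace(0.05, 3.0, 2000)
C = schottky_heat_capacity(T)
print("peak at k_B T / Delta ~", T[np.argmax(C)], "with C/k_B ~", C.max())
# expected: a single broad peak near k_B T ~ 0.42 Delta, C_max ~ 0.44 k_B
```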
We propose the use of Monte Carlo histogram reweighting to extrapolate predictions of machine learning methods. In our approach, we treat the output from a convolutional neural network as an observable in a statistical system, enabling its extrapolation over continuous ranges in parameter space. We demonstrate our proposal using the phase transition in the two-dimensional Ising model. By interpreting the output of the neural network as an order parameter, we explore connections with known observables in the system and investigate its scaling behaviour. A finite size scaling analysis is conducted based on quantities derived from the neural network that yields accurate estimates for the critical exponents and the critical temperature. The method improves the prospects of acquiring precision measurements from machine learning in physical systems without an order parameter and those where direct sampling in regions of parameter space might not be possible.
condensed matter
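The reweighting step underlying the proposal can be written compactly: an observable sampled at inverse temperature $\beta_0$ is extrapolated to a nearby $\beta$ via $\langle O\rangle_\beta=\sum_i O_i w_i/\sum_i w_i$ with $w_i=e^{-(\beta-\beta_0)E_i}$. The sketch below applies this to synthetic energies and a placeholder for the per-configuration network output; it is not the paper's Ising/CNN pipeline.

```python
# Single-histogram reweighting of an observable measured at beta0 to a nearby beta.
import numpy as np

def reweight(observable, energies, beta0, beta):
    """Reweight <observable> sampled at beta0 to a nearby beta."""
    shift = -(beta - beta0) * (energies - energies.mean())  # mean-shift for numerical stability
    w = np.exp(shift)
    return np.sum(observable * w) / np.sum(w)

rng = np.random.default_rng(0)
n_sites = 32 * 32
energies = n_sites * rng.normal(loc=-1.4, scale=0.02, size=10_000)  # synthetic per-configuration energies
outputs = 1.0 / (1.0 + np.exp(50.0 * (energies / n_sites + 1.4)))    # placeholder for the CNN output

beta0 = 0.44
for beta in (0.43, 0.44, 0.45):
    print(beta, reweight(outputs, energies, beta0, beta))
```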
We discuss different approaches to photon isolation in fixed-order calculations and present a new next-to-next-to-leading order (NNLO) QCD calculation of $R_{13/8}^\gamma$, the ratio of the inclusive isolated photon cross section at 8 TeV and 13 TeV, differential in the photon transverse momentum, which was recently measured by the ATLAS collaboration.
high energy physics phenomenology
For many causal effect parameters of interest, doubly robust machine learning (DRML) estimators $\hat{\psi}_{1}$ are the state-of-the-art, incorporating the good prediction performance of machine learning; the decreased bias of doubly robust estimators; and the analytic tractability and bias reduction of sample splitting with cross fitting. Nonetheless, even in the absence of confounding by unmeasured factors, the nominal $(1 - \alpha)$ Wald confidence interval $\hat{\psi}_{1} \pm z_{\alpha / 2} \widehat{\mathsf{se}} [\hat{\psi}_{1}]$ may still undercover even in large samples, because the bias of $\hat{\psi}_{1}$ may be of the same or even larger order than its standard error of order $n^{-1/2}$. In this paper, we introduce essentially assumption-free tests that (i) can falsify the null hypothesis that the bias of $\hat{\psi}_{1}$ is of smaller order than its standard error, (ii) can provide an upper confidence bound on the true coverage of the Wald interval, and (iii) are valid under the null under no smoothness/sparsity assumptions on the nuisance parameters. The tests, which we refer to as \underline{A}ssumption \underline{F}ree \underline{E}mpirical \underline{C}overage \underline{T}ests (AFECTs), are based on a U-statistic that estimates part of the bias of $\hat{\psi}_{1}$.
statistics
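For orientation, the sketch below computes a standard cross-fitted doubly robust (AIPW) estimate of an average treatment effect together with its nominal 95% Wald interval, i.e. the object whose true coverage the proposed AFECTs interrogate. The data are simulated, the nuisance learners are arbitrary choices, and the AFECT U-statistic itself is not implemented.

```python
# Cross-fitted AIPW (doubly robust) estimate of the ATE with its nominal Wald CI.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 5))
propensity = 1.0 / (1.0 + np.exp(-X[:, 0]))
A = rng.binomial(1, propensity)
Y = 2.0 * A + X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n)   # true ATE = 2

psi = np.empty(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    e = GradientBoostingClassifier().fit(X[train], A[train]).predict_proba(X[test])[:, 1]
    e = np.clip(e, 0.01, 0.99)
    m1 = GradientBoostingRegressor().fit(X[train][A[train] == 1], Y[train][A[train] == 1]).predict(X[test])
    m0 = GradientBoostingRegressor().fit(X[train][A[train] == 0], Y[train][A[train] == 0]).predict(X[test])
    a, y = A[test], Y[test]
    psi[test] = m1 - m0 + a * (y - m1) / e - (1 - a) * (y - m0) / (1 - e)

ate = psi.mean()
se = psi.std(ddof=1) / np.sqrt(n)
print(f"ATE = {ate:.3f}, 95% Wald CI = ({ate - 1.96 * se:.3f}, {ate + 1.96 * se:.3f})")
```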
Consider a scalar conservation law with discontinuous flux \begin{equation*}\tag{1} \quad u_{t}+f(x,u)_{x}=0, \qquad f(x,u)= \begin{cases} f_l(u)\ &\text{if}\ x<0,\\ f_r(u)\ & \text{if} \ x>0, \end{cases} \end{equation*} where $u=u(x,t)$ is the state variable and $f_{l}$, $f_{r}$ are strictly convex maps. We study the Cauchy problem for (1) from the point of view of control theory, regarding the initial datum as a control. Letting $u(x,t)\doteq \mathcal{S}_t^{AB} \overline u(x)$ denote the solution of the Cauchy problem for (1), with initial datum $u(\cdot,0)=\overline u$, that satisfies at $x=0$ the interface entropy condition associated to a connection $(A,B)$ (see \cite{MR2195983}), we analyze the family of profiles that can be attained by (1) at a given time $T>0$: \begin{equation*} \mathcal{A}^{AB}(T)=\left\{\mathcal{S}_T^{AB} \,\overline u : \ \overline u\in{\bf L}^\infty(\mathbb{R})\right\}. \end{equation*} We provide a full characterization of $\mathcal{A}^{AB}(T)$ as a class of functions in $BV_{loc}(\mathbb{R}\setminus\{0\})$ that satisfy suitable Ole\v{\i}nik-type inequalities, and that admit one-sided limits at $x=0$ which satisfy specific conditions related to the interface entropy criterion. Relying on this characterization, we establish the ${\bf L^1}_{loc}$-compactness of the set of attainable profiles when the initial data $\overline u$ vary in a given class of uniformly bounded functions, taking values in closed convex sets. We also discuss some applications of these results to optimization problems arising in porous media flow models for oil recovery and in traffic flow.
mathematics
Spatially structured light fields applied to semiconductor quantum dots yield fundamentally different absorption spectra than homogeneous beams. In this paper, we theoretically discuss the resulting spectra for different light beams using a cylindrical multipole expansion. For the description of the quantum dots we employ a model based on the effective mass approximation including Coulomb and valence band mixing. The combination of a single spatially structured light beam and state mixing allows all exciton states in the quantum dot to become optically addressable. Furthermore, we demonstrate that the beams can be tailored such that single states are selectively excited, without the need of spectral separation. Using this selectivity, we propose a method to measure the exciton wave function of the quantum dot eigenstate. The measurement goes beyond electron density measurements by revealing the spatial phase information of the exciton wave function. Thereby polarization sensitive measurements are generalized by including the infinitely large spatial degree of freedom.
condensed matter
Galaxy clusters are a promising probe of late-time structure growth, but constraints on cosmology from cluster abundances are currently limited by systematics in their inferred masses. One unmitigated systematic effect in weak-lensing mass inference is ignoring the presence of baryons and treating the entire cluster as a dark matter halo. In this work we present a new flexible model for cluster densities that captures both the baryonic and dark matter profiles, a new general technique for calculating the lensing signal of an arbitrary density profile, and a methodology for stacking those lensing signals to appropriately model stacked weak-lensing measurements of galaxy cluster catalogues. We test this model on 1400 simulated clusters. Similarly to previous studies, we find that a dark matter-only model overestimates the average mass by $7.5\%$, but including our baryonic term reduces that to $0.7\%$. Additionally, to mitigate the computational complexity of our model, we construct an emulator (surrogate model) which accurately interpolates our model for parameter inference, while being much faster to use than the raw model. We also provide an open-source software framework for our model and emulator, called maszcal, which will serve as a platform for continued efforts to improve these mass-calibration techniques. In this work, we detail our model, the construction of the emulator, and the tests which we used to validate that our model does mitigate bias. Lastly, we describe tests of the emulator's accuracy.
astrophysics
Making decisions in complex environments is a key challenge in artificial intelligence (AI). Situations involving multiple decision makers are particularly complex, leading to computational intractability of principled solution methods. A body of work in AI has tried to mitigate this problem by distilling interaction to its essence: how does the policy of one agent influence another agent? If we can find more compact representations of such influence, this can help us deal with the complexity, for instance by searching the space of influences rather than the space of policies. However, so far these notions of influence have been restricted in their applicability to special cases of interaction. In this paper we formalize influence-based abstraction (IBA), which facilitates the elimination of latent state factors without any loss in value, for a very general class of problems described as factored partially observable stochastic games (fPOSGs). On the one hand, this generalizes existing descriptions of influence, and thus can serve as the foundation for improvements in scalability and other insights in decision making in complex multiagent settings. On the other hand, since the presence of other agents can be seen as a generalization of single agent settings, our formulation of IBA also provides a sufficient statistic for decision making under abstraction for a single agent. We also give a detailed discussion of the relations to such previous works, identifying new insights and interpretations of these approaches. In these ways, this paper deepens our understanding of abstraction in a wide range of sequential decision making settings, providing the basis for new approaches and algorithms for a large class of problems.
computer science
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction. Most previous works on VP approached the problem by planning in a learned latent space, resulting in low-quality visual plans and difficult training algorithms. Here, instead, we propose a simple VP method that plans directly in image space and displays competitive performance. We build on the semi-parametric topological memory (SPTM) method: image samples are treated as nodes in a graph, the graph connectivity is learned from image sequence data, and planning can be performed using conventional graph search methods. We propose two modifications on SPTM. First, we train an energy-based graph connectivity function using contrastive predictive coding that admits stable training. Second, to allow zero-shot planning in new domains, we learn a conditional VAE model that generates images given a context of the domain, and use these hallucinated samples for building the connectivity graph and planning. We show that this simple approach significantly outperforms the state-of-the-art VP methods, in terms of both plan interpretability and success rate when using the plan to guide a trajectory-following controller. Interestingly, our method can pick up non-trivial visual properties of objects, such as their geometry, and account for them in the plans.
computer science
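The planning half of the approach, in miniature: image samples are nodes, edges connect pairs whose connectivity score exceeds a threshold, and a visual plan is a shortest path through the graph. In the sketch below the scores are random placeholders rather than outputs of the learned contrastive connectivity function.

```python
# Graph-based planning over image samples: nodes are images, edges connect
# pairs with high connectivity score, a plan is a shortest path.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_images = 50
scores = rng.random((n_images, n_images))          # placeholder connectivity scores
scores = (scores + scores.T) / 2                   # symmetrize for an undirected graph

graph = nx.Graph()
graph.add_nodes_from(range(n_images))
threshold = 0.9
for i in range(n_images):
    for j in range(i + 1, n_images):
        if scores[i, j] > threshold:
            # higher connectivity -> lower traversal cost
            graph.add_edge(i, j, weight=1.0 - scores[i, j])

start, goal = 0, n_images - 1
if nx.has_path(graph, start, goal):
    plan = nx.shortest_path(graph, start, goal, weight="weight")
    print("visual plan (node indices):", plan)
else:
    print("no plan found; lower the threshold or add samples")
```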
Together with the second generation REBO reactive potential, replica-exchange molecular dynamics simulations coupled with systematic quenching were used to generate a broad set of isomers for neutral C$_n$ clusters with $n=24$, 42, and 60. All the minima were sorted in energy and analyzed using order parameters to monitor the evolution of their structural and chemical properties. The structural diversity measured by the fluctuations in these various indicators is found to increase significantly with energy, the number of carbon rings, especially 6-membered, exhibiting a monotonic decrease in favor of low-coordinated chains and branched structures. A systematic statistical analysis between the various parameters indicates that energetic stability is mainly driven by the amount of sp$^2$ hybridization, more than any geometrical parameter. The astrophysical relevance of these results is discussed in the light of the recent detection of C$_{60}$ and C$_{60}^+$ fullerenes in the interstellar medium.
physics
The thermal history of cosmic gas in the Dark Ages remains largely unknown. It is important to quantify the impact of relevant physics on the IGM temperature between $z=10$ and $z \sim 30$, in order to interpret recent and upcoming observations, including results reported by EDGES. We revisit the gas heating due to structure formation shocks in this era, using a set of fixed grid cosmological hydrodynamical simulations performed by three different codes. In all our simulations, the cosmic gas is predicted to be in a multiphase state already at $z>30$. The gas surrounding high density peaks gradually develops a relation sharper than $T \propto \rho^{2/3}$, approximately $T \propto \rho^{2}$, from $z=30$ to $z=11$, possibly due to shock heating. Meanwhile, the gas in void regions tends to have a large local Mach number, and its thermal state varies significantly from code to code. In the redshift range $11-20$, the mass fraction of gas shock-heated above the CMB temperature in our simulations is larger than previous semi-analytical results by a factor of 2 to 8. At $z=15$, the fraction varies from $\sim 19\%$ to $52 \%$ among different codes. Between $z=11$ and $z=20$, the gas temperature $<1/T_{\rm{K}}>_M^{-1}$ is predicted to be $\sim 10-20$ K by two codes, much higher than in the adiabatic cooling model and in some previous works. However, in our simulations performed with RAMSES, $<1/T_{\rm{K}}>_M^{-1}$ is predicted to be even below the temperature required to explain the EDGES result. Given that different codes give different predictions, it currently seems challenging to make solid predictions for the gas temperature at $z \sim 17$ in simulations.
astrophysics
Existing trials have not taken sufficient account of their population representativeness, which can lower treatment effectiveness when applied in real-world clinical practice. We analyzed the eligibility criteria of Bevacizumab colorectal cancer treatment trials, assessed their a priori generalizability, and examined how it affects patient outcomes when applied in real-world clinical settings. To do so, we extracted patient-level data from a large collection of electronic health records (EHRs) from the OneFlorida consortium. We built a zero-inflated negative binomial model using a composite patient-trial generalizability (cPTG) score to predict patients' clinical outcomes (i.e., number of serious adverse events, SAEs). Our study results provide a body of evidence that 1) the cPTG scores can predict patient outcomes, and 2) patients who are more similar to the study population of the trials that were used to develop the treatment have a significantly lower likelihood of experiencing serious adverse events.
statistics
Although the quantum-classical Liouville equation (QCLE) arises by cutting off the exact equation of motion for a coupled nuclear-electronic system at order 1 ($\hbar^0$), we show that the QCLE does include Berry's phase effects and Berry's forces (which are proportional to a higher order, $\hbar^1$). Thus, the fundamental equation underlying mixed quantum-classical dynamics does not need a correction for Berry's phase effects and is valid for the case of complex Hamiltonians. Furthermore, we also show that, even though Tully's surface hopping model ignores Berry's phase, Berry's phase effects are included automatically within Ehrenfest dynamics. These findings should be of great importance if we seek to model coupled nuclear-electronic dynamics for systems with spin-orbit coupling, where the complex nature of the Hamiltonian is paramount.
physics
We propose the approach of model-based differentially private synthesis (modips) in the Bayesian framework for releasing individual-level surrogate/synthetic datasets with privacy guarantees given the original data. The modips technique integrates the concept of differential privacy into model-based data synthesis. We introduce several variants of the general modips approach and different procedures for obtaining privacy-preserving posterior samples, a key step in modips. The uncertainty from the sanitization and synthesis processes in modips can be accounted for by releasing multiple synthetic datasets and quantified via an inferential combination rule that is proposed in this paper. We run empirical studies to examine the impacts of the number of synthetic sets and the privacy budget allocation schemes on the inference based on synthetic data.
statistics
We consider the problem of uncertainty estimation in the context of (non-Bayesian) deep neural classification. In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand. We demonstrate that such techniques tend to introduce biased estimates for instances whose predictions are supposed to be highly confident. We argue that this deficiency is an artifact of the dynamics of training with SGD-like optimizers, and it has some properties similar to overfitting. Based on this observation, we develop an uncertainty estimation algorithm that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered (and way before they are ready for actual classification). We present extensive experiments indicating that the proposed algorithm provides uncertainty estimates that are consistently better than all known methods.
computer science
Controlling the electronic properties via bandstructure engineering is at the heart of modern semiconductor devices. Here, we extend this concept to semimetals where, utilizing LuSb as a model system, we show that quantum confinement lifts carrier compensation and differentially affects the mobility of the electron and hole-like carriers resulting in a strong modification in its large, non-saturating magnetoresistance behavior. Bonding mismatch at the heteroepitaxial interface of a semimetal (LuSb) and a semiconductor (GaSb) leads to the emergence of a novel, two-dimensional, interfacial hole gas and is accompanied by a charge transfer across the interface that provides another avenue to modify the electronic structure and magnetotransport properties in the ultra-thin limit. Our work lays out a general strategy of utilizing confined thin film geometries and heteroepitaxial interfaces to engineer electronic structure in semimetallic systems, which allows control over their magnetoresistance behavior and simultaneously, provides insights into its origin.
condensed matter
High-dimensional data are ubiquitous in contemporary science and finding methods to compress them is one of the primary goals of machine learning. Given a dataset lying in a high-dimensional space (in principle hundreds to several thousands of dimensions), it is often useful to project it onto a lower-dimensional manifold, without loss of information. Identifying the minimal dimension of such a manifold is a challenging problem known in the literature as intrinsic dimension estimation (IDE). Traditionally, most IDE algorithms are either based on multiscale principal component analysis (PCA) or on the notion of correlation dimension (and, more generally, on k-nearest-neighbors distances). These methods are affected, in different ways, by a severe curse of dimensionality. In particular, none of the existing algorithms can provide accurate ID estimates in the extreme locally undersampled regime, i.e. in the limit where the number of samples in any local patch of the manifold is less than (or of the same order of) the ID of the dataset. Here we introduce a new ID estimator that leverages simple properties of the tangent space of a manifold to overcome these shortcomings. The method is based on the full correlation integral, going beyond the limit of small radius used for the estimation of the correlation dimension. Our estimator alleviates the extreme undersampling problem, intractable with other methods. Based on this insight, we explore a multiscale generalization of the algorithm. We show that it is capable of (i) identifying multiple dimensionalities in a dataset, and (ii) providing accurate estimates of the ID of extremely curved manifolds. In particular, we test the method on manifolds generated from global transformations of high-contrast images, relevant for invariant object recognition and considered a challenge for state-of-the-art ID estimators.
computer science
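As a point of comparison for the approach above, the classical correlation-integral baseline can be written in a few lines: estimate $C(r)$, the fraction of point pairs closer than $r$, and read the intrinsic dimension off the slope of $\log C(r)$ versus $\log r$ at small $r$. The paper's full-correlation-integral estimator and its multiscale extension are not reproduced here.

```python
# Classical correlation-integral (Grassberger-Procaccia style) ID baseline:
# C(r) = fraction of point pairs closer than r; ID ~ slope of log C(r) vs log r.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# a 2-D manifold (a plane) embedded linearly in 10 dimensions
latent = rng.normal(size=(2000, 2))
embedding = rng.normal(size=(2, 10))
data = latent @ embedding

d = pdist(data)
radii = np.logspace(np.log10(np.percentile(d, 0.5)), np.log10(np.percentile(d, 10)), 20)
corr_integral = np.array([(d < r).mean() for r in radii])

slope, _ = np.polyfit(np.log(radii), np.log(corr_integral), 1)
print("estimated intrinsic dimension ~", slope)   # should be close to 2
```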
We establish the Miyaoka-Yau inequality in terms of orbifold Chern classes for the tangent sheaf of any complex projective variety of general type with klt singularities and nef canonical divisor. In case equality is attained for a variety with at worst terminal singularities, we prove that the associated canonical model is the quotient of the unit ball by a discrete group action.
mathematics
Residuals are a key component of diagnosing model fit. The usual practice is to compute standardized residuals using expected values and standard deviations of the observed data, then use these values to detect outliers and assess model fit. Approximate normality of these residuals is key for this process to have good properties, but in many modeling contexts, especially for complex, multi-level models, normality may not hold. In these cases, outlier detection and model diagnostics are not properly calibrated. Alternatively, as we demonstrate, residuals computed from the percentile location of a datum's value in its full predictive distribution lead to well-calibrated evaluations of model fit. We generalize an approach described by Dunn and Smyth (1996) and evaluate properties mathematically, via case studies and by simulation. In addition, we show that the standard residuals can be calibrated to mimic the percentile approach, but that this extra step is avoided by directly using percentile-based residuals. For both the percentile-based residuals and the calibrated standard residuals, the use of full predictive distributions with the appropriate location, spread and shape is necessary for valid assessments.
statistics
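A minimal version of the Dunn and Smyth (1996) randomized quantile residual for a discrete (Poisson) response is sketched below: the percentile of each datum in its predictive distribution, randomized over the probability mass at the observed value and mapped through the standard normal quantile function. In the multilevel Bayesian setting discussed above, the Poisson CDF would be replaced by the full posterior predictive CDF.

```python
# Randomized quantile (percentile-based) residuals for a Poisson response.
import numpy as np
from scipy.stats import poisson, norm

def randomized_quantile_residuals(y, mu, rng):
    lower = poisson.cdf(y - 1, mu)        # CDF just below the observed count
    upper = poisson.cdf(y, mu)            # CDF at the observed count
    u = rng.uniform(lower, upper)         # randomize within the point mass
    return norm.ppf(u)

rng = np.random.default_rng(0)
mu = np.full(5000, 3.0)                    # fitted means from some model
y = rng.poisson(mu)                        # data consistent with the model
r = randomized_quantile_residuals(y, mu, rng)
print("mean ~ 0:", r.mean(), " sd ~ 1:", r.std())   # approximately N(0, 1) when the model fits
```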
We study the long-time behavior of the system consisting of a one-dimensional barotropic viscous compressible fluid and a freely moving point mass. In our previous paper, we showed that the velocity of the point mass $V(t)$ satisfies an upper bound $|V(t)|\leq C(t+1)^{-3/2}$ ($t\geq 0$) for some $C>1$; but we were unable to give a corresponding lower bound. In this paper, by refining the previously obtained pointwise estimates for the fluid variables, we give a necessary and sufficient condition for $V(t)$ to satisfy a lower bound $C^{-1}(t+1)^{-3/2}\leq |V(t)|$ ($t\gg 1$); when this condition is not met, we obtain a faster decay estimate $|V(t)|\leq C(t+1)^{-7/4}$ ($t\geq 0$). This work thus gives a satisfactory answer to the long-time behavior of $V(t)$ (at least for regular and small enough solutions). The main idea of the proof is to refine the leading order terms of the solutions used in the previous paper so that the behavior of the flow around the point mass can be more precisely understood.
mathematics
There exist dynamical systems possessing symmetry transformations whose conserved Noether charges feature an explicit time dependence in their functional representation over phase space. The generators of such symmetries certainly do not commute with the Hamiltonian, and yet these charges are conserved observables for the classical and quantised dynamics. Furthermore, within the Hamiltonian formalism and in the case of global symmetries, such charges may be gauged to allow for arbitrary time dependent symmetry transformations, simply by extending the Hamiltonian to include the Noether charges as first-class constraints. An explicit illustration of these issues is presented in a simple and most familiar model that applies also to the constant gravitational force. This note draws its primary motivation from the quest towards a theory of quantum gravity, seeking to better understand the tension between the local Equivalence Principle of the gravitational interaction and the fundamental principles of Quantum Mechanics by considering the formulation of quantum systems relative to reference frames that are inertial or noninertial, and thus accelerated relative to one another through arbitrary time dependent spatial translations.
high energy physics theory
We propose a scheme for preparation of entangled coherent states for the motion of an ion in a two-dimensional anisotropic trap. In the scheme, the ion is driven by four laser beams along different directions in the ion trap plane, resulting in carrier excitation and couplings between the internal and external degrees of freedom. When the total quantum number of the vibrational modes initially has a definite parity, the competition between the unitary dynamics and spontaneous emission will force the system to evolve to a steady state, where the vibrational modes are in a two-mode cat state. We show that the method can be extended to realization of entangled coherent states for three vibrational modes of an ion in a three-dimensional anisotropic trap.
quantum physics
We study the critical behavior of the nonequilibrium dynamics and of the steady states emerging from the competition between coherent and dissipative dynamics close to quantum phase transitions. The latter is induced by the coupling of the system with a Markovian bath, such that the evolution of the system's density matrix can be effectively described by a Lindblad master equation. We devise general scaling behaviors for the out-of-equilibrium evolution and the stationary states emerging in the large-time limit for generic initial conditions, in terms of the parameters of the Hamiltonian providing the coherent driving and those associated with the dissipative interactions with the environment. Our framework is supported by numerical results for the dynamics of a one-dimensional lattice fermion gas undergoing a quantum Ising transition, in the presence of dissipative mechanisms which include local pumping and decay of particles.
condensed matter
We present a novel approach to image restoration that leverages ideas from localized structured prediction and non-linear multi-task learning. We optimize a penalized energy function regularized by a sum of terms measuring the distance between patches to be restored and clean patches from an external database gathered beforehand. The resulting estimator comes with strong statistical guarantees leveraging local dependency properties of overlapping patches. We derive the corresponding algorithms for energies based on the mean-squared and Euclidean norm errors. Finally, we demonstrate the practical effectiveness of our model on different image restoration problems using standard benchmarks.
computer science
In this paper, we consider solving a distributed optimization problem (DOP) with coupling constraints in a multi-agent network based on the proximal gradient method. In this problem, each agent aims to minimize an individual cost function composed of both smooth and non-smooth parts. To this end, we derive the dual problem via the concept of the Fenchel conjugate, which results in two kinds of dual problems: consensus-based constrained and augmented unconstrained problems. In the first scenario, we propose a fully distributed dual proximal gradient (D-DPG) algorithm, where the agents can make updates only with the dual information of their neighbours and local step-sizes. Moreover, if the non-smooth parts of the objective functions have certain simple structures, the agents only need to update dual variables with some simple operations, which can reduce the overall computational complexity. In the second scenario, an augmented dual proximal gradient (A-DPG) algorithm is proposed, which allows for asymmetric interpretations of the global constraints for the agents and can be more efficient than the D-DPG algorithm in some special-structured DOPs. Based on the A-DPG algorithm, an asynchronous dual proximal gradient (Asyn-DPG) algorithm is proposed for asynchronous networks where each agent updates its strategy with a heterogeneous step-size and possibly outdated dual information of others. In all the discussed scenarios, analytical (ergodic) convergence rates are derived. The effectiveness of the proposed algorithms is verified by solving a social welfare optimization problem in the electricity market.
mathematics
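The basic building block these algorithms rest on is the proximal gradient step: a gradient step on the smooth part followed by the proximal operator of the non-smooth part. The sketch below shows it for a small centralized lasso problem, where the $\ell_1$ prox is soft-thresholding; the distributed, dual and asynchronous machinery of the paper is not reproduced.

```python
# Proximal gradient (ISTA) on a small centralized lasso problem:
# minimize 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=100)

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
x = np.zeros(20)
for _ in range(500):
    grad = A.T @ (A @ x - b)                # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)

print("recovered nonzero entries at indices:", np.flatnonzero(np.round(x, 2)))
```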
Anthropogenic skyglow dominates views of the natural night sky in most urban settings, and the associated emission of artificial light at night (ALAN) into the environment of cities involves a number of known and suspected negative externalities. One approach to lowering consumption of ALAN in cities is dimming or extinguishing publicly owned outdoor lighting during overnight hours; however, there are few reports in the literature about the efficacy of these programs. Here we report the results of one of the largest municipal lighting dimming experiments to date, involving $\sim$20,000 roadway luminaires owned and operated by the City of Tucson, Arizona, U.S. We analyzed both single-channel and spatially resolved ground-based measurements of broadband night sky radiance obtained during the tests, determining that the zenith sky brightness during the tests decreased by ($-5.4\pm0.9$)% near the city center and ($-3.6\pm0.9$)% at an adjacent suburban location on nights when the output of the street lighting system was dimmed from 90% of its full power draw to 30% after local midnight. Modeling these changes with a radiative transfer code yields results suggesting that street lights account for about ($14\pm1$)% of light emissions resulting in skyglow seen over the city. A separate derivation from first principles implies that street lighting contributes only 2-3% of light seen at the zenith over Tucson. We discuss this inconsistency and suggest routes for future work.
astrophysics
The North Polar Spur (NPS) is one of the largest structures observed in the Milky Way in both the radio and soft x-rays. While several predictions have been made regarding the origin of the NPS, modelling the structure is difficult without precise distance constraints. In this paper, we determine accurate distances to the southern terminus of the NPS and toward latitudes ranging up to 55$^{\circ}$. First, we fit for the distance and extinction to stars toward the NPS using optical and near-infrared photometry and Gaia DR2 astrometry. We model these per-star distance-extinction estimates as being caused by dust screens at unknown distances, which we fit for using a nested sampling algorithm. We then compare the extinction to the Spur derived from our 3D dust modelling with integrated independent measures from XMM-Newton X-ray absorption and HI column density measures. We find that we can account for nearly 100% of the total column density of the NPS as lying within 140 pc for latitudes $>26^{\circ}$ and within 700 pc for latitudes $< 11^{\circ}$. Based on the results, we conclude that the NPS is not associated with the Galactic Centre or the Fermi bubbles. Instead, it is likely associated, especially at higher latitudes, with the Sco-Cen association.
astrophysics
In this paper we prove the equivalence among (i) the weakly coupled worldsheet string theory described by the coset sigma model $\frac{SL(2,\mathbb{R})_k\times U(1)}{U(1)}\times S^3 \times T^4$ with $SL(2,\mathbb{R})$ WZW level $k\geq 2$, (ii) the full near horizon theory of the NS5 branes with $k$ NS5 branes wrapping $T^4\times S^1$, $p\gg1$ F1 strings wrapping $S^1$ and $n$ units of momentum along the $S^1$ and (iii) the single trace $T\bar{T}$ deformation of string theory in $AdS_3\times S^3\times T^4$. As a check we compute the spectrum of the spacetime theory by performing BRST quantization of the coset description of the worldsheet theory and show that it matches exactly with the one derived in the case of single trace $T\bar{T}$ deformed string theory in $AdS_3$. Secondly, we compute the two-point correlation function of local operators of the spacetime theory using the worldsheet coset approach and reproduce the same two-point function from the supergravity approach.
high energy physics theory