- Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis For DDIM-Type Samplers. We develop a framework for non-asymptotic analysis of deterministic samplers used for diffusion generative modeling. Several recent works have analyzed stochastic samplers using tools like Girsanov's theorem and a chain rule variant of the interpolation argument. Unfortunately, these techniques give vacuous bounds when applied to deterministic samplers. We give a new operational interpretation for deterministic sampling by showing that one step along the probability flow ODE can be expressed as two steps: 1) a restoration step that runs gradient ascent on the conditional log-likelihood at some infinitesimally previous time, and 2) a degradation step that runs the forward process using noise pointing back towards the current iterate. This perspective allows us to extend denoising diffusion implicit models to general, non-linear forward processes. We then develop the first polynomial convergence bounds for these samplers under mild conditions on the data distribution. 3 authors · Mar 6, 2023
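A minimal sketch (not the paper's code) of this two-step view for the standard variance-preserving DDIM update, assuming a pretrained score model `score_fn` and forward-process coefficients `alpha`, `sigma`; the restoration step recovers a denoised estimate via Tweedie's formula, and the degradation step re-noises it with the noise direction implied by the current iterate.

```python
import torch

def ddim_step(x_t, t, s, score_fn, alpha, sigma):
    """One deterministic DDIM step from time t to an earlier time s,
    written as restoration followed by degradation.

    score_fn(x, t) ~ grad_x log p_t(x) is an assumed pretrained score model;
    alpha(t), sigma(t) are the (assumed) VP forward-process coefficients,
    i.e. x_t = alpha(t) * x_0 + sigma(t) * eps.
    """
    # Restoration: move toward higher conditional log-likelihood.
    # For a VP diffusion this is Tweedie's formula for E[x_0 | x_t].
    x0_hat = (x_t + sigma(t) ** 2 * score_fn(x_t, t)) / alpha(t)

    # Degradation: rerun the forward process to time s, but with the
    # noise chosen to point back toward the current iterate x_t.
    eps_back = (x_t - alpha(t) * x0_hat) / sigma(t)
    x_s = alpha(s) * x0_hat + sigma(s) * eps_back
    return x_s
```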
- MINDE: Mutual Information Neural Diffusion Estimation. In this work we present a new method for the estimation of Mutual Information (MI) between random variables. Our approach is based on an original interpretation of the Girsanov theorem, which allows us to use score-based diffusion models to estimate the Kullback-Leibler divergence between two densities as a difference between their score functions. As a by-product, our method also enables the estimation of the entropy of random variables. Armed with such building blocks, we present a general recipe to measure MI, which unfolds in two directions: one uses conditional diffusion processes, whereas the other uses joint diffusion processes that allow simultaneous modelling of two random variables. Our results, which derive from a thorough experimental protocol over all the variants of our approach, indicate that our method is more accurate than the main alternatives from the literature, especially for challenging distributions. Furthermore, our method passes MI self-consistency tests, including data processing and additivity under independence, which remain a pain point for existing methods. 3 authors · Oct 13, 2023
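A hedged sketch of the kind of estimator this enables, assuming two learned score networks `score_p` and `score_q` for the diffused marginals of the two densities under the same forward SDE with diffusion coefficient `g(t)`; the Girsanov-based identity reduces the KL divergence to a time-integrated expected squared score difference. Function names and the Monte Carlo discretization here are illustrative, not the paper's code.

```python
import torch

def kl_via_scores(samples_p, score_p, score_q, diffuse, g, n_times=128):
    """Monte Carlo estimate of KL(p || q) as
    0.5 * E_{t,x} [ g(t)^2 * || score_p(x_t, t) - score_q(x_t, t) ||^2 ],
    with t uniform on (0, 1) and x_t = diffuse(x_0, t) the forward process.

    score_p, score_q: assumed pretrained score models for the two densities;
    diffuse(x0, t):   assumed sampler of the forward (noising) process;
    g(t):             assumed diffusion coefficient of the forward SDE.
    """
    total = 0.0
    for _ in range(n_times):
        t = torch.rand(())                      # t ~ U(0, 1)
        x_t = diffuse(samples_p, t)             # noise the p-samples to time t
        diff = score_p(x_t, t) - score_q(x_t, t)
        total += 0.5 * g(t) ** 2 * diff.pow(2).sum(dim=-1).mean()
    return total / n_times

# MI(X, Y) = KL(p(x, y) || p(x) p(y)) can then be assembled from such
# KL terms, using either conditional or joint diffusion processes.
```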
- Continuous-Time Functional Diffusion Processes. We introduce Functional Diffusion Processes (FDPs), which generalize score-based diffusion models to infinite-dimensional function spaces. FDPs require a new mathematical framework to describe the forward and backward dynamics, and several extensions to derive practical training objectives. These include infinite-dimensional versions of the Girsanov theorem, in order to compute an ELBO, and of the sampling theorem, in order to guarantee that functional evaluations on a countable set of points are equivalent to infinite-dimensional functions. We use FDPs to build a new breed of generative models in function spaces, which do not require specialized network architectures and can work with any kind of continuous data. Our results on real data show that FDPs achieve high-quality image generation, using a simple MLP architecture with orders of magnitude fewer parameters than existing diffusion models. 6 authors · Mar 1, 2023
- Block occurrences in the binary expansion. The binary sum-of-digits function $s$ returns the number of ones in the binary expansion of a nonnegative integer. Cusick's Hamming weight conjecture states that, for all integers $t \geq 0$, the set of nonnegative integers $n$ such that $s(n+t) \geq s(n)$ has asymptotic density strictly larger than $1/2$. We are concerned with the block-additive function $r$ returning the number of (overlapping) occurrences of the block $11$ in the binary expansion of $n$. The main result of this paper is a central limit-type theorem for the difference $r(n+t)-r(n)$: the corresponding probability function is uniformly close to a Gaussian, where the uniform error tends to $0$ as the number of blocks of ones in the binary expansion of $t$ tends to $\infty$. 2 authors · Aug 31, 2023
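A small illustration (mine, not the paper's) of the quantity studied: $r$ counts overlapping occurrences of the block 11 in the binary expansion, and the empirical distribution of $r(n+t) - r(n)$ for a fixed shift $t$ can be tabulated directly to see the near-Gaussian shape.

```python
from collections import Counter

def r(n: int) -> int:
    """Number of (overlapping) occurrences of the block '11'
    in the binary expansion of n."""
    count = 0
    while n:
        if (n & 3) == 3:    # two adjacent ones at the current position
            count += 1
        n >>= 1
    return count

# Empirical distribution of r(n + t) - r(n) for a fixed shift t,
# which the paper shows is uniformly close to a Gaussian.
N = 1 << 18
t = 0b110110            # a shift t with several blocks of ones
diffs = Counter(r(n + t) - r(n) for n in range(N))
for d in sorted(diffs):
    print(d, diffs[d] / N)
```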
- Multi-layer random features and the approximation power of neural networks. A neural architecture with randomly initialized weights, in the infinite-width limit, is equivalent to a Gaussian Random Field whose covariance function is the so-called Neural Network Gaussian Process (NNGP) kernel. We prove that a reproducing kernel Hilbert space (RKHS) defined by the NNGP contains only functions that can be approximated by the architecture. To achieve a given approximation error, the required number of neurons in each layer is determined by the RKHS norm of the target function. Moreover, the approximation can be constructed from a supervised dataset by a random multi-layer representation of an input vector, together with training of the last layer's weights. For a 2-layer NN and a domain equal to an $(n-1)$-dimensional sphere in $\mathbb{R}^n$, we compare the number of neurons required by Barron's theorem and by the multi-layer features construction. We show that if the eigenvalues of the integral operator of the NNGP decay slower than $k^{-n-\frac{2}{3}}$, where $k$ is the order of an eigenvalue, then our theorem guarantees a more succinct neural network approximation than Barron's theorem. We also perform computational experiments to verify our theoretical findings; they show that realistic neural networks easily learn target functions even when neither theorem gives any guarantees. 1 author · Apr 26, 2024
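A minimal sketch of the construction the result concerns, under my own illustrative choices (ReLU features, Gaussian first-layer weights): the random hidden representation is frozen and only the last layer is fit by least squares, with the width playing the role of the neuron count in the approximation bounds.

```python
import numpy as np

def random_feature_fit(X, y, width=512, rng=None):
    """2-layer random-features approximation: freeze a random first layer,
    train only the last layer's weights by least squares.

    X: (m, n) inputs (e.g. points on the sphere S^{n-1}); y: (m,) targets.
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], width)) / np.sqrt(X.shape[1])
    H = np.maximum(X @ W, 0.0)                  # random ReLU features
    c, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit last layer only
    return lambda Z: np.maximum(Z @ W, 0.0) @ c

# Example: approximate a smooth target on the unit sphere in R^3.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
f_hat = random_feature_fit(X, np.sin(3 * X[:, 0]), width=1024, rng=1)
```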
- The Fyodorov-Hiary-Keating Conjecture. I. By analogy with conjectures for random matrices, Fyodorov-Hiary-Keating and Fyodorov-Keating proposed precise asymptotics for the maximum of the Riemann zeta function in a typical short interval on the critical line. In this paper, we settle the upper bound part of their conjecture in a strong form. More precisely, we show that the measure of those $T \leq t \leq 2T$ for which $\max_{|h| \leq 1} |\zeta(1/2 + it + ih)| > \frac{e^y \log T}{(\log\log T)^{3/4}}$ is bounded by $Cy e^{-2y}$ uniformly in $y \geq 1$. This is expected to be optimal for $y = O(\sqrt{\log\log T})$. This upper bound is sharper than what is known in the context of random matrices, since it gives (uniform) decay rates in $y$. In a subsequent paper we will obtain matching lower bounds. 3 authors · Jul 2, 2020
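A quick numerical illustration (not from the paper) of the normalized maximum appearing in the statement, using mpmath's zeta: for a modest height $t$, it probes $\max_{|h| \leq 1} |\zeta(1/2 + it + ih)|$ on a grid, divided by $\log T / (\log\log T)^{3/4}$. The grid size and height are arbitrary choices for illustration.

```python
import math
import mpmath

def normalized_short_interval_max(t, grid=100):
    """Approximate max_{|h| <= 1} |zeta(1/2 + i t + i h)| on a grid,
    normalized by log T / (log log T)^(3/4) with T = t."""
    m = max(abs(mpmath.zeta(mpmath.mpc(0.5, t + h)))
            for h in (-1 + 2 * k / grid for k in range(grid + 1)))
    return float(m) / (math.log(t) / math.log(math.log(t)) ** 0.75)

print(normalized_short_interval_max(1e5))
```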
- Reverse mathematics and a Ramsey-type König's Lemma. In this paper, we propose a weak regularity principle which is similar to both weak König's lemma and Ramsey's theorem. We begin by studying the computational strength of this principle in the context of reverse mathematics. We then analyze different ways of generalizing this principle. 1 author · Nov 10, 2011
- Approximating the Convex Hull via Metric Space Magnitude. The magnitude of a finite metric space, and the related notion of magnitude functions on metric spaces, is an active area of research in algebraic topology. Magnitude originally arose in the context of biology, where it represents the number of effective species in an environment; when applied to a one-parameter family of metric spaces $tX$ with scale parameter $t$, the magnitude captures much of the underlying geometry of the space. Prior work has mostly focussed on properties of magnitude in a global sense; in this paper we restrict to finite subsets of Euclidean space and investigate the individual components of the magnitude. We give an explicit formula for the corrected inclusion-exclusion principle, and define a quantity associated with each point, called the moment, which gives an intrinsic ordering to the points. We exploit this to form an algorithm which approximates the convex hull. 3 authors · Aug 7, 2019
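A brief sketch of the underlying computation, following the standard definition of magnitude (my code, not the paper's moment-based algorithm): for a finite $X \subset \mathbb{R}^n$ at scale $t$, form the similarity matrix $Z_{ij} = e^{-t\,\|x_i - x_j\|}$, solve $Zw = \mathbf{1}$ for the weight vector, and sum the weights; these per-point weights are the "individual components" the abstract refers to, and using the largest raw weights as hull candidates is a simplification of the paper's approach.

```python
import numpy as np

def magnitude_weights(X, t=1.0):
    """Weight vector w solving Z w = 1 for the similarity matrix
    Z_ij = exp(-t * ||x_i - x_j||); magnitude(tX) = sum(w).
    Points near the convex hull tend to carry larger weight,
    which is what the paper's approximation exploits."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Z = np.exp(-t * D)
    return np.linalg.solve(Z, np.ones(len(X)))

X = np.random.default_rng(0).standard_normal((100, 2))
w = magnitude_weights(X, t=5.0)
print("magnitude:", w.sum())
print("candidate hull points:", np.argsort(w)[-10:])  # largest weights
```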