Schema (column | type): categories | string, doi | string, id | string, year | float64, venue | string, link | string, updated | string, published | string, title | string, abstract | string, authors | list.
Note: categories, doi, year, and venue are null for every record below.
2406.03919 — Vectorized Conditional Neural Fields: A Framework for Solving Time-dependent Parametric Partial Differential Equations
Authors: Jan Hagnberger, Marimuthu Kalimuthu, Daniel Musekamp, Mathias Niepert
Link: http://arxiv.org/pdf/2406.03919v2 | Published: 2024-06-06T10:02:06Z | Updated: 2024-07-13T12:32:25Z
Abstract: Transformer models are increasingly used for solving Partial Differential Equations (PDEs). Several adaptations have been proposed, all of which suffer from the typical problems of Transformers, such as quadratic memory and time complexity. Furthermore, all prevalent architectures for PDE solving lack at least one of several desirable properties of an ideal surrogate model, such as (i) generalization to PDE parameters not seen during training, (ii) spatial and temporal zero-shot super-resolution, (iii) continuous temporal extrapolation, (iv) support for 1D, 2D, and 3D PDEs, and (v) efficient inference for longer temporal rollouts. To address these limitations, we propose Vectorized Conditional Neural Fields (VCNeFs), which represent the solution of time-dependent PDEs as neural fields. Contrary to prior methods, however, VCNeFs compute, for a set of multiple spatio-temporal query points, their solutions in parallel and model their dependencies through attention mechanisms. Moreover, VCNeF can condition the neural field on both the initial conditions and the parameters of the PDEs. An extensive set of experiments demonstrates that VCNeFs are competitive with and often outperform existing ML-based surrogate models.
2406.03920 — Towards Physically Consistent Deep Learning For Climate Model Parameterizations
Authors: Birgit Kühbacher, Fernando Iglesias-Suarez, Niki Kilbertus, Veronika Eyring
Link: http://arxiv.org/pdf/2406.03920v1 | Published: 2024-06-06T10:02:49Z | Updated: 2024-06-06T10:02:49Z
Abstract: Climate models play a critical role in understanding and projecting climate change. Due to their complexity, their horizontal resolution of ~40-100 km remains too coarse to resolve processes such as clouds and convection, which need to be approximated via parameterizations. These parameterizations are a major source of systematic errors and large uncertainties in climate projections. Deep learning (DL)-based parameterizations, trained on computationally expensive, short high-resolution simulations, have shown great promise for improving climate models in that regard. However, their lack of interpretability and tendency to learn spurious non-physical correlations result in reduced trust in the climate simulation. We propose an efficient supervised learning framework for DL-based parameterizations that leads to physically consistent models with improved interpretability and negligible computational overhead compared to standard supervised training. First, key features determining the target physical processes are uncovered. Subsequently, the neural network is fine-tuned using only those relevant features. We show empirically that our method robustly identifies a small subset of the inputs as actual physical drivers, therefore removing spurious non-physical relationships. This results in by-design physically consistent and interpretable neural networks while maintaining the predictive performance of standard black-box DL-based parameterizations. Our framework represents a crucial step in addressing a major challenge in data-driven climate model parameterizations by respecting the underlying physical processes, and may also benefit physically consistent deep learning in other research fields.
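The two-step recipe this abstract describes — first uncover the features that actually drive the target, then fine-tune using only those — can be illustrated with a deliberately simple stand-in: correlation-based feature ranking followed by a least-squares refit. This is only a sketch of the pattern, not the paper's attribution method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
# Target depends only on features 0 and 3; the rest are spurious inputs.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.05 * rng.normal(size=n)

# Step 1: uncover key features (here: absolute correlation with the target).
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
selected = np.argsort(-scores)[:2]

# Step 2: refit the model using only the physically relevant inputs.
Xs = X[:, selected]
w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
```

With enough samples the spurious correlations stay near noise level, so the true drivers are recovered and the refit keeps predictive performance.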
2406.03923 — Latent Neural Operator for Solving Forward and Inverse PDE Problems
Authors: Tian Wang, Chuang Wang
Link: http://arxiv.org/pdf/2406.03923v2 | Published: 2024-06-06T10:04:53Z | Updated: 2024-06-09T15:42:57Z
Abstract: Neural operators effectively solve PDE problems from data without knowing the explicit equations, learning the map from input sequences of observed samples to predicted values. Most existing works build the model in the original geometric space, leading to high computational costs when the number of sample points is large. We present the Latent Neural Operator (LNO), which solves PDEs in a latent space. In particular, we first propose Physics-Cross-Attention (PhCA), which transforms representations from the geometric space to the latent space; we then learn the operator in the latent space, and finally recover the real-world geometric space via the inverse PhCA map. Our model retains the flexibility to decode values at any position, not limited to locations defined in the training set, and can therefore naturally perform interpolation and extrapolation tasks that are particularly useful for inverse problems. Moreover, the proposed LNO improves both prediction accuracy and computational efficiency. Experiments show that LNO reduces GPU memory by 50%, speeds up training 1.8 times, and reaches state-of-the-art accuracy on four out of six benchmarks for forward problems and on a benchmark for an inverse problem.
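The encode-decode pattern described here — project a variable number of geometric observations onto a fixed set of latent tokens, operate in latent space, then decode at arbitrary query locations — can be sketched with plain scaled dot-product cross-attention. PhCA itself is more specialized, so treat this as an illustrative stand-in under that assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries attend to keys/values."""
    d = queries.shape[-1]
    weights = softmax(queries @ keys.T / np.sqrt(d))
    return weights @ values

rng = np.random.default_rng(0)
n_points, n_latent, d = 500, 32, 16
point_feats = rng.normal(size=(n_points, d))    # samples at geometric locations
latent_tokens = rng.normal(size=(n_latent, d))  # learned fixed-size queries

# Encode: a fixed number of latent tokens attends to arbitrarily many points.
latent = cross_attention(latent_tokens, point_feats, point_feats)

# Decode: arbitrary query locations attend back to the latent tokens, so
# values can be predicted at positions not present in the training set.
new_points = rng.normal(size=(777, d))
decoded = cross_attention(new_points, latent, latent)
```

The key property is that the latent representation has fixed size regardless of how many sample points are observed, which is where the computational saving comes from.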
2406.03924 — Statistical Multicriteria Benchmarking via the GSD-Front
Authors: Christoph Jansen, Georg Schollmeyer, Julian Rodemann, Hannah Blocher, Thomas Augustin
Link: http://arxiv.org/pdf/2406.03924v1 | Published: 2024-06-06T10:06:27Z | Updated: 2024-06-06T10:06:27Z
Abstract: Given the vast number of classifiers that have been (and continue to be) proposed, reliable methods for comparing them are becoming increasingly important. The desire for reliability is broken down into three main aspects: (1) Comparisons should allow for different quality metrics simultaneously. (2) Comparisons should take into account the statistical uncertainty induced by the choice of benchmark suite. (3) The robustness of the comparisons under small deviations in the underlying assumptions should be verifiable. To address (1), we propose to compare classifiers using a generalized stochastic dominance ordering (GSD) and present the GSD-front as an information-efficient alternative to the classical Pareto-front. For (2), we propose a consistent statistical estimator for the GSD-front and construct a statistical test for whether a (potentially new) classifier lies in the GSD-front of a set of state-of-the-art classifiers. For (3), we relax our proposed test using techniques from robust statistics and imprecise probabilities. We illustrate our concepts on the benchmark suite PMLB and on the platform OpenML.
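As a point of reference for the GSD-front, the classical Pareto-front it is presented as an alternative to is easy to compute: a classifier is on the front iff no other classifier weakly dominates it on every metric while strictly beating it on at least one. A minimal sketch:

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated rows (higher is better on every metric)."""
    n = scores.shape[0]
    front = []
    for i in range(n):
        dominated = any(
            np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i])
            for j in range(n) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Rows: classifiers; columns: quality metrics (e.g. accuracy, AUC).
scores = np.array([
    [0.90, 0.85],
    [0.88, 0.92],
    [0.85, 0.80],  # dominated by the first classifier on both metrics
])
```

`pareto_front(scores)` keeps the first two classifiers, since neither dominates the other. The paper's GSD-front refines this by being more information-efficient, and its statistical estimation is not shown here.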
2406.03932 — Breeding Programs Optimization with Reinforcement Learning
Authors: Omar G. Younis, Luca Corinzia, Ioannis N. Athanasiadis, Andreas Krause, Joachim M. Buhmann, Matteo Turchetta
Link: http://arxiv.org/pdf/2406.03932v1 | Published: 2024-06-06T10:17:51Z | Updated: 2024-06-06T10:17:51Z
Abstract: Crop breeding is crucial in improving agricultural productivity while potentially decreasing land usage, greenhouse gas emissions, and water consumption. However, breeding programs are challenging due to long turnover times, high-dimensional decision spaces, long-term objectives, and the need to adapt to rapid climate change. This paper introduces the use of Reinforcement Learning (RL) to optimize simulated crop breeding programs. RL agents are trained to make optimal crop selection and cross-breeding decisions based on genetic information. To benchmark RL-based breeding algorithms, we introduce a suite of Gym environments. The study demonstrates the superiority of RL techniques over standard practices in terms of genetic gain when simulated in silico using real-world genomic maize data.
2406.03944 — Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples
Authors: Dake Bu, Wei Huang, Taiji Suzuki, Ji Cheng, Qingfu Zhang, Zhiqiang Xu, Hau-San Wong
Link: http://arxiv.org/pdf/2406.03944v1 | Published: 2024-06-06T10:38:01Z | Updated: 2024-06-06T10:38:01Z
Abstract: Neural-network-based active learning (NAL) is a cost-effective data selection technique that utilizes neural networks to select and train on a small subset of samples. While existing work has successfully developed various effective or theory-justified NAL algorithms, the understanding of the two commonly used query criteria of NAL, uncertainty-based and diversity-based, remains in its infancy. In this work, we move one step forward by offering a unified explanation for the success of both query criteria from a feature-learning view. Specifically, we consider a feature-noise data model comprising easy-to-learn or hard-to-learn features disrupted by noise, and conduct an analysis of 2-layer NN-based NAL in the pool-based scenario. We provably show that both uncertainty-based and diversity-based NAL are inherently amenable to one and the same principle: striving to prioritize samples that contain yet-to-be-learned features. We further prove that this shared principle is the key to their success of achieving a small test error within a small labeled set. By contrast, strategy-free passive learning exhibits a large test error due to inadequate learning of yet-to-be-learned features, necessitating a significantly larger label complexity for a sufficient test-error reduction. Experimental results validate our findings.
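The uncertainty-based query criterion the abstract analyzes can be sketched in its most common form, uncertainty sampling: query the pool points whose predictive distribution has the highest entropy. This is a generic illustration, not the paper's theoretical setup.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of class probabilities."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def uncertainty_query(probs, k):
    """Indices of the k pool points the model is least certain about."""
    return np.argsort(-predictive_entropy(probs))[:k]

# Predicted class probabilities for three unlabeled pool points.
probs = np.array([
    [0.99, 0.01],
    [0.55, 0.45],   # near the decision boundary -> highest entropy
    [0.90, 0.10],
])
```

Here `uncertainty_query(probs, 1)` selects the boundary point, which in the paper's language is the sample most likely to contain yet-to-be-learned features.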
2406.03946 — A Probabilistic Approach to Learning the Degree of Equivariance in Steerable CNNs
Authors: Lars Veefkind, Gabriele Cesa
Link: http://arxiv.org/pdf/2406.03946v1 | Published: 2024-06-06T10:45:19Z | Updated: 2024-06-06T10:45:19Z
Abstract: Steerable convolutional neural networks (SCNNs) enhance task performance by modelling geometric symmetries through equivariance constraints on weights. Yet, unknown or varying symmetries can lead to overconstrained weights and decreased performance. To address this, this paper introduces a probabilistic method to learn the degree of equivariance in SCNNs. We parameterise the degree of equivariance as a likelihood distribution over the transformation group using Fourier coefficients, offering the option to model layer-wise and shared equivariance. These likelihood distributions are regularised to ensure an interpretable degree of equivariance across the network. Advantages include the applicability to many types of equivariant networks through the flexible framework of SCNNs and the ability to learn equivariance with respect to any subgroup of any compact group without requiring additional layers. Our experiments reveal competitive performance on datasets with mixed symmetries, with learnt likelihood distributions that are representative of the underlying degree of equivariance.
2406.03947 — Weight-based Decomposition: A Case for Bilinear MLPs
Authors: Michael T. Pearce, Thomas Dooms, Alice Rigg
Link: http://arxiv.org/pdf/2406.03947v1 | Published: 2024-06-06T10:46:51Z | Updated: 2024-06-06T10:46:51Z
Abstract: Gated Linear Units (GLUs) have become a common building block in modern foundation models. Bilinear layers drop the non-linearity in the "gate" but still have comparable performance to other GLUs. An attractive quality of bilinear layers is that they can be fully expressed in terms of a third-order tensor and linear operations. Leveraging this, we develop a method to decompose the bilinear tensor into a set of sparsely interacting eigenvectors that show promising interpretability properties in preliminary experiments for shallow image classifiers (MNIST) and small language models (Tiny Stories). Since the decomposition is fully equivalent to the model's original computations, bilinear layers may be an interpretability-friendly architecture that helps connect features to the model weights. Application of our method may not be limited to pretrained bilinear models since we find that language models such as TinyLlama-1.1B can be finetuned into bilinear variants.
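The claim that a bilinear layer "can be fully expressed in terms of a third-order tensor" is easy to verify numerically: for y_i = (Wx)_i (Vx)_i, the tensor with entries B[i,j,k] = W[i,j] V[i,k] reproduces the layer as a quadratic form in x. A minimal check (the eigendecomposition of B is the paper's contribution and is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in))
V = rng.normal(size=(d_out, d_in))
x = rng.normal(size=d_in)

# Bilinear layer: elementwise product of two linear maps, no nonlinearity.
y_gate = (W @ x) * (V @ x)

# Equivalent third-order tensor: B[i, j, k] = W[i, j] * V[i, k].
B = np.einsum("ij,ik->ijk", W, V)
y_tensor = np.einsum("ijk,j,k->i", B, x, x)
```

Both paths produce identical outputs, which is what makes weight-based decompositions of B exactly faithful to the model's computation.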
2406.03978 — Mini Honor of Kings: A Lightweight Environment for Multi-Agent Reinforcement Learning
Authors: Lin Liu, Jian Zhao, Cheng Hu, Zhengtao Cao, Youpeng Zhao, Zhenbin Ye, Meng Meng, Wenjun Wang, Zhaofeng He, Houqiang Li, Xia Lin, Lanxiao Huang
Link: http://arxiv.org/pdf/2406.03978v2 | Published: 2024-06-06T11:42:33Z | Updated: 2024-06-16T12:01:11Z
Abstract: Games are widely used as research environments for multi-agent reinforcement learning (MARL), but they pose three significant challenges: limited customization, high computational demands, and oversimplification. To address these issues, we introduce the first publicly available map editor for the popular mobile game Honor of Kings and design a lightweight environment, Mini Honor of Kings (Mini HoK), for researchers to conduct experiments. Mini HoK is highly efficient, allowing experiments to be run on personal PCs or laptops while still presenting sufficient challenges for existing MARL algorithms. We have tested our environment on common MARL algorithms and demonstrated that these algorithms have yet to find optimal solutions within this environment. This facilitates the dissemination and advancement of MARL methods within the research community. Additionally, we hope that more researchers will leverage the Honor of Kings map editor to develop innovative and scientifically valuable new maps. Our code and user manual are available at: https://github.com/tencent-ailab/mini-hok.
2406.03980 — Position: Embracing Negative Results in Machine Learning
Authors: Florian Karl, Lukas Malte Kemeter, Gabriel Dax, Paulina Sierak
Link: http://arxiv.org/pdf/2406.03980v1 | Published: 2024-06-06T11:51:12Z | Updated: 2024-06-06T11:51:12Z
Abstract: Publications proposing novel machine learning methods are often primarily rated by exhibited predictive performance on selected problems. In this position paper we argue that predictive performance alone is not a good indicator for the worth of a publication. Using it as such even fosters problems such as inefficiency of the machine learning research community as a whole and wrong incentives for researchers. We therefore put out a call for the publication of "negative" results, which can help alleviate some of these problems and improve the scientific output of the machine learning research community. To substantiate our position, we present the advantages of publishing negative results and provide concrete measures for the community to move towards a paradigm where their publication is normalized.
2406.03997 — HackAtari: Atari Learning Environments for Robust and Continual Reinforcement Learning
Authors: Quentin Delfosse, Jannis Blüml, Bjarne Gregori, Kristian Kersting
Link: http://arxiv.org/pdf/2406.03997v1 | Published: 2024-06-06T12:17:05Z | Updated: 2024-06-06T12:17:05Z
Abstract: Artificial agents' adaptability to novelty and alignment with intended behavior is crucial for their effective deployment. Reinforcement learning (RL) leverages novelty as a means of exploration, yet agents often struggle to handle novel situations, hindering generalization. To address these issues, we propose HackAtari, a framework introducing controlled novelty to the most common RL benchmark, the Atari Learning Environment. HackAtari allows us to create novel game scenarios (including simplification for curriculum learning), to swap the game elements' colors, as well as to introduce different reward signals for the agent. We demonstrate that current agents trained on the original environments exhibit robustness failures, and evaluate HackAtari's efficacy in enhancing RL agents' robustness and aligning behavior through experiments using C51 and PPO. Overall, HackAtari can be used to improve the robustness of current and future RL algorithms, enabling neuro-symbolic RL, curriculum RL, causal RL, as well as LLM-driven RL. Our work underscores the significance of developing interpretable RL agents.
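HackAtari's actual API is not shown in this abstract. Purely to illustrate the wrapper pattern such frameworks build on — intercepting observations to swap colors and substituting the reward signal — here is a toy sketch in which every class, name, and signature is hypothetical:

```python
class RecolorRewardWrapper:
    """Hypothetical gym-style wrapper: swaps one 'color' value in the
    observation and replaces the reward signal, mimicking the kind of
    controlled novelty HackAtari injects."""

    def __init__(self, env, old_color, new_color, reward_fn):
        self.env = env
        self.old_color, self.new_color = old_color, new_color
        self.reward_fn = reward_fn

    def _recolor(self, obs):
        return [self.new_color if px == self.old_color else px for px in obs]

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._recolor(obs), self.reward_fn(obs, reward), done, info

class DummyEnv:
    """Stand-in environment emitting a fixed 4-pixel observation."""
    def step(self, action):
        return [0, 1, 2, 1], 1.0, False, {}

env = RecolorRewardWrapper(DummyEnv(), old_color=1, new_color=9,
                           reward_fn=lambda obs, r: -r)
obs, r, done, info = env.step(0)
```

An agent that overfits to pixel colors or to the original reward will fail under such a wrapper, which is exactly the robustness gap the paper measures.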
2406.03999 — Unveiling the Dynamics of Information Interplay in Supervised Learning
Authors: Kun Song, Zhiquan Tan, Bochao Zou, Huimin Ma, Weiran Huang
Link: http://arxiv.org/pdf/2406.03999v1 | Published: 2024-06-06T12:17:57Z | Updated: 2024-06-06T12:17:57Z
Abstract: In this paper, we use matrix information theory as an analytical tool to analyze the dynamics of the information interplay between data representations and classification head vectors in the supervised learning process. Specifically, inspired by the theory of Neural Collapse, we introduce the matrix mutual information ratio (MIR) and matrix entropy difference ratio (HDR) to assess the interactions of data representations and class classification heads in supervised learning, and we determine the theoretical optimal values for MIR and HDR when Neural Collapse happens. Our experiments show that MIR and HDR can effectively explain many phenomena occurring in neural networks, for example, the standard supervised training dynamics, linear mode connectivity, and the performance of label smoothing and pruning. Additionally, we use MIR and HDR to gain insights into the dynamics of grokking, which is an intriguing phenomenon observed in supervised training, where the model demonstrates generalization capabilities long after it has learned to fit the training data. Furthermore, we introduce MIR and HDR as loss terms in supervised and semi-supervised learning to optimize the information interactions among samples and classification heads. The empirical results provide evidence of the method's effectiveness, demonstrating that the utilization of MIR and HDR not only aids in comprehending the dynamics throughout the training process but can also enhance the training procedure itself.
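The exact definitions of MIR and HDR follow the paper, but the basic ingredient of matrix information theory — an entropy computed from the spectrum of a trace-normalized Gram matrix of representations — can be sketched as follows. The normalization choices here are assumptions for illustration:

```python
import numpy as np

def matrix_entropy(Z):
    """Von Neumann-style entropy of the trace-normalized Gram matrix of
    L2-normalized representations Z (rows = samples)."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    K = Z @ Z.T
    eigvals = np.linalg.eigvalsh(K / np.trace(K))
    eigvals = eigvals[eigvals > 1e-12]   # drop numerically zero modes
    return float(-np.sum(eigvals * np.log(eigvals)))

rng = np.random.default_rng(0)
collapsed = np.tile(rng.normal(size=(1, 16)), (32, 1))  # identical reps
spread = rng.normal(size=(32, 16))                      # diverse reps
```

Fully collapsed representations have a rank-one Gram matrix and hence (near-)zero entropy, while diverse representations have high entropy; quantities like MIR/HDR compare such spectral measures between representations and classifier heads.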
2406.04012 — Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians
Authors: Tom Huix, Anna Korba, Alain Durmus, Eric Moulines
Link: http://arxiv.org/pdf/2406.04012v2 | Published: 2024-06-06T12:38:59Z | Updated: 2024-06-10T09:32:49Z
Abstract: Variational inference (VI) is a popular approach in Bayesian inference that looks for the best approximation of the posterior distribution within a parametric family, minimizing a loss that is typically the (reverse) Kullback-Leibler (KL) divergence. Despite its empirical success, the theoretical properties of VI have only received attention recently, and mostly when the parametric family is the one of Gaussians. This work aims to contribute to the theoretical study of VI in the non-Gaussian case by investigating the setting of Mixtures of Gaussians with fixed covariance and constant weights. In this view, VI over this specific family can be cast as the minimization of a mollified relative entropy, i.e., the KL divergence between the convolution (with respect to a Gaussian kernel) of an atomic measure supported on Diracs, and the target distribution. The support of the atomic measure corresponds to the localization of the Gaussian components. Hence, solving variational inference becomes equivalent to optimizing the positions of the Diracs (the particles), which can be done through gradient descent and takes the form of an interacting particle system. We study two sources of error of variational inference in this context when optimizing the mollified relative entropy. The first one is an optimization result, that is, a descent lemma establishing that the algorithm decreases the objective at each iteration. The second one is an approximation error that upper bounds the objective between an optimal finite mixture and the target distribution.
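The objective described here — the KL divergence between a Gaussian-smoothed atomic measure and the target, minimized over the particle positions — can be demonstrated in 1D with deterministic grid quadrature and finite-difference gradient descent. This is a numerical toy, not the paper's interacting particle system or its analysis:

```python
import numpy as np

# Quadrature grid on [-6, 6].
xs = np.linspace(-6, 6, 2001)
dx = xs[1] - xs[0]
sigma = 0.5  # fixed component standard deviation (fixed covariance)

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def q_density(particles):
    """Equal-weight Gaussian mixture: atomic measure convolved with N(0, sigma^2)."""
    return np.mean([gauss(xs, m, sigma) for m in particles], axis=0)

p = gauss(xs, 2.0, 1.0)  # target: N(2, 1)

def kl(particles):
    q = q_density(particles) + 1e-300
    return np.sum(q * (np.log(q) - np.log(p + 1e-300))) * dx

# Gradient descent on particle positions via forward finite differences.
particles = np.array([-2.0, -1.0, 0.0, 1.0])
kl0 = kl(particles)
lr, h = 0.5, 1e-4
for _ in range(100):
    base = kl(particles)
    grad = np.array([
        (kl(np.where(np.arange(4) == i, particles + h, particles)) - base) / h
        for i in range(4)
    ])
    particles -= lr * grad
kl1 = kl(particles)
```

The particles drift toward the target's mass and the mollified relative entropy decreases monotonically for a small enough step size, which is the content of the paper's descent lemma in this simplified setting.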
2406.04029 — Pre-trained Transformer Uncovers Meaningful Patterns in Human Mobility Data
Authors: Alameen Najjar
Link: http://arxiv.org/pdf/2406.04029v1 | Published: 2024-06-06T12:59:46Z | Updated: 2024-06-06T12:59:46Z
Abstract: We empirically demonstrate that a transformer pre-trained on country-scale unlabeled human mobility data learns embeddings capable, through fine-tuning, of developing a deep understanding of the target geography and its corresponding mobility patterns. Utilizing an adaptation framework, we evaluate the performance of our pre-trained embeddings in encapsulating a broad spectrum of concepts directly and indirectly related to human mobility. This includes basic notions, such as geographic location and distance, and extends to more complex constructs, such as administrative divisions and land cover. Our extensive empirical analysis reveals a substantial performance boost gained from pre-training, reaching up to 38% in tasks such as tree-cover regression. We attribute this result to the ability of the pre-training to uncover meaningful patterns hidden in the raw data, beneficial for modeling relevant high-level concepts. The pre-trained embeddings emerge as robust representations of regions and trajectories, potentially valuable for a wide range of downstream applications.
2406.04035 — STEMO: Early Spatio-temporal Forecasting with Multi-Objective Reinforcement Learning
Authors: Wei Shao, Yufan Kang, Ziyan Peng, Xiao Xiao, Lei Wang, Yuhui Yang, Flora D Salim
Link: http://arxiv.org/pdf/2406.04035v3 | Published: 2024-06-06T13:03:51Z | Updated: 2024-06-18T09:16:33Z
Abstract: Accuracy and timeliness are often conflicting goals in prediction tasks. Premature predictions may yield a higher rate of false alarms, whereas delaying predictions to gather more information can render them too late to be useful. In applications such as wildfires, crimes, and traffic jams, timely forecasting is vital for safeguarding human life and property. Consequently, finding a balance between accuracy and timeliness is crucial. In this paper, we propose an early spatio-temporal forecasting model based on multi-objective reinforcement learning that can either implement an optimal policy given a preference or infer the preference based on a small number of samples. The model addresses two primary challenges: 1) enhancing the accuracy of early forecasting and 2) providing the optimal policy for determining the most suitable prediction time for each area. Our method demonstrates superior performance on three large-scale real-world datasets, surpassing existing methods in early spatio-temporal forecasting tasks.
2406.04038 — Road Network Representation Learning with the Third Law of Geography
Authors: Haicang Zhou, Weiming Huang, Yile Chen, Tiantian He, Gao Cong, Yew-Soon Ong
Link: http://arxiv.org/pdf/2406.04038v1 | Published: 2024-06-06T13:04:43Z | Updated: 2024-06-06T13:04:43Z
Abstract: Road network representation learning aims to learn compressed and effective vectorized representations for road segments that are applicable to numerous tasks. In this paper, we identify the limitations of existing methods, particularly their overemphasis on the distance effect as outlined in the First Law of Geography. In response, we propose to endow road network representation with the principles of the recent Third Law of Geography. To this end, we propose a novel graph contrastive learning framework that employs geographic configuration-aware graph augmentation and spectral negative sampling, ensuring that road segments with similar geographic configurations yield similar representations, and vice versa, aligning with the principles stated in the Third Law. The framework further fuses the Third Law with the First Law through a dual contrastive learning objective to effectively balance the implications of both laws. We evaluate our framework on two real-world datasets across three downstream tasks. The results show that the integration of the Third Law significantly improves the performance of road segment representations in downstream tasks.
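The contrastive objective underlying frameworks like this one is typically an InfoNCE-style loss: pull each segment's representation toward its augmented (positive) view and push it away from in-batch negatives. A generic numpy sketch — the paper's dual objective and spectral negative sampling are not reproduced here:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE: each anchor's positive is its same-index row; all other
    rows in the batch act as negatives. Lower loss = better alignment."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))  # matched views
mismatched = info_nce(z, rng.normal(size=(8, 16)))          # random pairing
```

Representations whose positive views are nearby get a much lower loss than randomly paired ones, which is the signal the encoder is trained on.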
2406.04039 — Shaping History: Advanced Machine Learning Techniques for the Analysis and Dating of Cuneiform Tablets over Three Millennia
Authors: Danielle Kapon, Michael Fire, Shai Gordin
Link: http://arxiv.org/pdf/2406.04039v1 | Published: 2024-06-06T13:05:32Z | Updated: 2024-06-06T13:05:32Z
Abstract: Cuneiform tablets, emerging in ancient Mesopotamia around the late fourth millennium BCE, represent one of humanity's earliest writing systems. Characterized by wedge-shaped marks on clay tablets, these artifacts provided insight into Mesopotamian civilization across various domains. Traditionally, the analysis and dating of these tablets rely on subjective assessment of shape and writing style, leading to uncertainties in pinpointing their exact temporal origins. Recent advances in digitization have revolutionized the study of cuneiform by enhancing accessibility and analytical capabilities. Our research uniquely focuses on the silhouettes of tablets as significant indicators of their historical periods, diverging from most studies that concentrate on textual content. Utilizing an unprecedented dataset of over 94,000 images from the Cuneiform Digital Library Initiative collection, we apply deep learning methods to classify cuneiform tablets, covering over 3,000 years of history. By leveraging statistical and computational techniques, as well as generative modeling through Variational Auto-Encoders (VAEs), we achieve substantial advancements in the automatic classification of these ancient documents, focusing on the tablets' silhouettes as key predictors. Our classification approach begins with a Decision Tree using height-to-width ratios and culminates with a ResNet50 model, achieving a 61% macro F1-score for tablet silhouettes. Moreover, we introduce novel VAE-powered tools to enhance explainability and enable researchers to explore changes in tablet shapes across different eras and genres. This research contributes to document analysis and diplomatics by demonstrating the value of large-scale data analysis combined with statistical methods. These insights offer valuable tools for historians and epigraphists, enriching our understanding of cuneiform tablets and the cultures that produced them.
2406.04041 — Linear Opinion Pooling for Uncertainty Quantification on Graphs
Authors: Clemens Damke, Eyke Hüllermeier
Link: http://arxiv.org/pdf/2406.04041v1 | Published: 2024-06-06T13:10:37Z | Updated: 2024-06-06T13:10:37Z
Abstract: We address the problem of uncertainty quantification for graph-structured data, or, more specifically, the problem of quantifying the predictive uncertainty in (semi-supervised) node classification. Key questions in this regard concern the distinction between two different types of uncertainty, aleatoric and epistemic, and how to support uncertainty quantification by leveraging the structural information provided by the graph topology. Challenging assumptions and postulates of state-of-the-art methods, we propose a novel approach that represents (epistemic) uncertainty in terms of mixtures of Dirichlet distributions and refers to the established principle of linear opinion pooling for propagating information between neighboring nodes in the graph. The effectiveness of this approach is demonstrated in a series of experiments on a variety of graph-structured datasets.
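Linear opinion pooling in its simplest form is a convex combination of neighbors' predictive distributions. The toy below pools Dirichlet means over a small graph with row-normalized adjacency weights; the paper's actual construction works with mixtures of Dirichlet distributions and is richer than this sketch.

```python
import numpy as np

# Per-node Dirichlet parameters (rows: nodes, cols: classes).
alphas = np.array([
    [10.0, 1.0, 1.0],   # node 0: confident evidence for class 0
    [1.0, 10.0, 1.0],   # node 1: confident evidence for class 1
    [1.0, 1.0, 1.0],    # node 2: uninformative prior
])

# Row-normalized adjacency (with self-loops) as the pooling weights.
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)

means = alphas / alphas.sum(axis=1, keepdims=True)  # Dirichlet means
pooled = W @ means   # linear opinion pool over each node's neighborhood
```

Each pooled row is still a valid probability distribution, and uninformative nodes (like node 2) inherit opinion mass from their more confident neighbors.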
2406.04043 — Energy-based Epistemic Uncertainty for Graph Neural Networks
Authors: Dominik Fuchsgruber, Tom Wollschläger, Stephan Günnemann
Link: http://arxiv.org/pdf/2406.04043v2 | Published: 2024-06-06T13:13:29Z | Updated: 2024-07-01T11:56:17Z
Abstract: In domains with interdependent data, such as graphs, quantifying the epistemic uncertainty of a Graph Neural Network (GNN) is challenging as uncertainty can arise at different structural scales. Existing techniques neglect this issue or only distinguish between structure-aware and structure-agnostic uncertainty without combining them into a single measure. We propose GEBM, an energy-based model (EBM) that provides high-quality uncertainty estimates by aggregating energy at different structural levels that naturally arise from graph diffusion. In contrast to logit-based EBMs, we provably induce an integrable density in the data space by regularizing the energy function. We introduce an evidential interpretation of our EBM that significantly improves the predictive robustness of the GNN. Our framework is a simple and effective post hoc method applicable to any pre-trained GNN that is sensitive to various distribution shifts. It consistently achieves the best separation of in-distribution and out-of-distribution data on 6 out of 7 anomaly types while having the best average rank over shifts on *all* datasets.
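For context, the logit-based EBM baseline this abstract contrasts with scores inputs by the energy E(x) = -T · logsumexp(logits/T): confident, high-magnitude logits give low energy, while flat logits give high energy, flagging likely out-of-distribution inputs. A minimal sketch (GEBM's graph-diffusion aggregation is not shown):

```python
import numpy as np

def logit_energy(logits, T=1.0):
    """Energy score E(x) = -T * logsumexp(logits / T), computed stably.
    Higher energy suggests the input is further out of distribution."""
    m = logits.max(axis=1)
    return -(m + T * np.log(np.exp((logits - m[:, None]) / T).sum(axis=1)))

in_dist = np.array([[8.0, 0.5, 0.3]])   # confident prediction
ood = np.array([[0.4, 0.5, 0.3]])       # flat, low-magnitude logits
```

Thresholding the energy then separates in-distribution from out-of-distribution inputs; GEBM improves on this by aggregating energies across structural scales of the graph.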
2406.04047 — Slicing Mutual Information Generalization Bounds for Neural Networks
Authors: Kimia Nadjahi, Kristjan Greenewald, Rickard Brüel Gabrielsson, Justin Solomon
Link: http://arxiv.org/pdf/2406.04047v1 | Published: 2024-06-06T13:15:37Z | Updated: 2024-06-06T13:15:37Z
Abstract: The ability of machine learning (ML) algorithms to generalize well to unseen data has been studied through the lens of information theory, by bounding the generalization error with the input-output mutual information (MI), i.e., the MI between the training data and the learned hypothesis. Yet, these bounds have limited practicality for modern ML applications (e.g., deep learning), due to the difficulty of evaluating MI in high dimensions. Motivated by recent findings on the compressibility of neural networks, we consider algorithms that operate by slicing the parameter space, i.e., trained on random lower-dimensional subspaces. We introduce new, tighter information-theoretic generalization bounds tailored for such algorithms, demonstrating that slicing improves generalization. Our bounds offer significant computational and statistical advantages over standard MI bounds, as they rely on scalable alternative measures of dependence, i.e., disintegrated mutual information and $k$-sliced mutual information. Then, we extend our analysis to algorithms whose parameters do not need to exactly lie on random subspaces, by leveraging rate-distortion theory. This strategy yields generalization bounds that incorporate a distortion term measuring model compressibility under slicing, thereby tightening existing bounds without compromising performance or requiring model compression. Building on this, we propose a regularization scheme enabling practitioners to control generalization through compressibility. Finally, we empirically validate our results and achieve the computation of non-vacuous information-theoretic generalization bounds for neural networks, a task that was previously out of reach.
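"Training on a random lower-dimensional subspace" means freezing θ = θ0 + Pz for a fixed random projection P and optimizing only the low-dimensional coordinates z. A toy demonstration on linear least squares (the bounds themselves are not computed here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Full parameter space: D = 100 weights; we optimize only d = 5
# subspace coordinates z of theta = theta0 + P @ z.
n, D, d = 200, 100, 5
X = rng.normal(size=(n, D))
w_true = rng.normal(size=D)
y = X @ w_true

theta0 = np.zeros(D)
P = rng.normal(size=(D, d)) / np.sqrt(D)  # random slicing directions
z = np.zeros(d)

def loss(z):
    r = X @ (theta0 + P @ z) - y
    return 0.5 * np.mean(r ** 2)

loss0 = loss(z)
lr = 0.1
for _ in range(300):
    r = X @ (theta0 + P @ z) - y
    grad = P.T @ (X.T @ r) / n   # chain rule: gradient lives in the subspace
    z -= lr * grad
loss1 = loss(z)
```

Only d numbers are learned, so the hypothesis carries far less information about the training data than the full D-dimensional model, which is the mechanism the sliced MI bounds exploit.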
2406.04052 — Multivector Neurons: Better and Faster O(n)-Equivariant Clifford Graph Neural Networks
Authors: Cong Liu, David Ruhe, Patrick Forré
Link: http://arxiv.org/pdf/2406.04052v2 | Published: 2024-06-06T13:17:44Z | Updated: 2024-07-10T11:24:42Z
Abstract: Most current deep learning models equivariant to $O(n)$ or $SO(n)$ either consider mostly scalar information such as distances and angles or have a very high computational complexity. In this work, we test a few novel message passing graph neural networks (GNNs) based on Clifford multivectors, structured similarly to other prevalent equivariant models in geometric deep learning. Our approach leverages efficient invariant scalar features while simultaneously performing expressive learning on multivector representations, particularly through the use of the equivariant geometric product operator. By integrating these elements, our methods outperform established efficient baseline models on an N-body simulation task and a protein denoising task while maintaining high efficiency. In particular, we push the state-of-the-art error on the N-body dataset to 0.0035 (averaged over 3 runs), an 8% improvement over recent methods. Our implementation is available on GitHub.
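The geometric product the abstract refers to can be made concrete in the smallest non-trivial Clifford algebra, Cl(2,0), with basis {1, e1, e2, e12}. For two vectors it decomposes into an invariant scalar part (the dot product) and a bivector part (the wedge/signed area). A hand-rolled sketch, not the paper's implementation:

```python
import numpy as np

# Basis order: [1, e1, e2, e12]. TABLE[i][j] = (sign, index) of basis_i * basis_j,
# derived from e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12.
TABLE = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],
    [(1, 1), (1, 0), (1, 3), (1, 2)],
    [(1, 2), (-1, 3), (1, 0), (-1, 1)],
    [(1, 3), (-1, 2), (1, 1), (-1, 0)],
]

def gp(a, b):
    """Geometric product of two multivectors (length-4 coefficient arrays)."""
    out = np.zeros(4)
    for i in range(4):
        for j in range(4):
            sign, k = TABLE[i][j]
            out[k] += sign * a[i] * b[j]
    return out

u = np.array([0.0, 1.0, 2.0, 0.0])   # vector u = e1 + 2*e2
v = np.array([0.0, 3.0, 1.0, 0.0])   # vector v = 3*e1 + e2
uv = gp(u, v)   # scalar part = u.v, e12 part = signed area of u ^ v
```

Because both parts transform predictably under rotations (the scalar is invariant, the bivector is equivariant), layers built from geometric products preserve O(n) structure by construction.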
2406.04055 — Leveraging SPD Matrices on Riemannian Manifolds in Quantum Classical Hybrid Models for Structural Health Monitoring
Authors: Azadeh Alavi, Sanduni Jayasinghe
Link: http://arxiv.org/pdf/2406.04055v1 | Published: 2024-06-06T13:21:28Z | Updated: 2024-06-06T13:21:28Z
Abstract: Real-time finite element modeling (FEM) of bridges assists modern structural health monitoring systems by providing comprehensive insights into structural integrity. This capability is essential for ensuring the safe operation of bridges and preventing sudden catastrophic failures. However, the computational cost of FEM and the need for real-time analysis pose significant challenges. Additionally, the input data is a 7-dimensional vector, while the output is a 1017-dimensional vector, making accurate and efficient analysis particularly difficult. In this study, we propose a novel hybrid quantum-classical Multilayer Perceptron pipeline leveraging Symmetric Positive Definite (SPD) matrices and Riemannian manifolds for effective data representation. To maintain the integrity of the qubit structure, we utilize SPD matrices, ensuring data representation is well aligned with the quantum computational framework. Additionally, the method leverages polynomial feature expansion to capture nonlinear relationships within the data. The proposed pipeline combines classical fully connected neural network layers with quantum circuit layers to enhance model performance and efficiency. Our experiments focused on various configurations of such hybrid models to identify the optimal structure for accurate and efficient real-time analysis. The best-performing model achieved a Mean Squared Error of 0.00031, significantly outperforming traditional methods.
null | null | 2406.04056 | null | null | http://arxiv.org/pdf/2406.04056v2 | 2024-06-11T20:53:28Z | 2024-06-06T13:25:14Z | Bisimulation Metrics are Optimal Transport Distances, and Can be
Computed Efficiently | We propose a new framework for formulating optimal transport distances between Markov chains. Previously known formulations studied couplings between the entire joint distribution induced by the chains, and derived solutions via a reduction to dynamic programming (DP) in an appropriately defined Markov decision process. This formulation has, however, not led to particularly efficient algorithms so far, since computing the associated DP operators requires fully solving a static optimal transport problem, and these operators need to be applied numerous times during the overall optimization process. In this work, we develop an alternative perspective by considering couplings between a flattened version of the joint distributions that we call discounted occupancy couplings, and show that calculating optimal transport distances in the full space of joint distributions can be equivalently formulated as solving a linear program (LP) in this reduced space. This LP formulation allows us to port several algorithmic ideas from other areas of optimal transport theory. In particular, our formulation makes it possible to introduce an appropriate notion of entropy regularization into the optimization problem, which in turn enables us to directly calculate optimal transport distances via a Sinkhorn-like method we call Sinkhorn Value Iteration (SVI). We show both theoretically and empirically that this method converges quickly to an optimal coupling, essentially at the same computational cost of running vanilla Sinkhorn in each pair of states. Along the way, we point out that our optimal transport distance exactly matches the common notion of bisimulation metrics between Markov chains, and thus our results also apply to computing such metrics, and in fact our algorithm turns out to be significantly more efficient than the best known methods developed so far for this purpose. | [
"['Sergio Calo' 'Anders Jonsson' 'Gergely Neu' 'Ludovic Schwartz'\n 'Javier Segovia-Aguas']"
] |
null | null | 2406.04068 | null | null | http://arxiv.org/pdf/2406.04068v1 | 2024-06-06T13:33:45Z | 2024-06-06T13:33:45Z | Reassessing How to Compare and Improve the Calibration of Machine
Learning Models | A machine learning model is calibrated if its predicted probability for an outcome matches the observed frequency for that outcome conditional on the model prediction. This property has become increasingly important as the impact of machine learning models has continued to spread to various domains. As a result, there are now a dizzying number of recent papers on measuring and improving the calibration of (specifically deep learning) models. In this work, we reassess the reporting of calibration metrics in the recent literature. We show that there exist trivial recalibration approaches that can appear seemingly state-of-the-art unless calibration and prediction metrics (i.e. test accuracy) are accompanied by additional generalization metrics such as negative log-likelihood. We then derive a calibration-based decomposition of Bregman divergences that can be used to both motivate a choice of calibration metric based on a generalization metric, and to detect trivial calibration. Finally, we apply these ideas to develop a new extension to reliability diagrams that can be used to jointly visualize calibration as well as the estimated generalization error of a model. | [
"['Muthu Chidambaram' 'Rong Ge']"
] |
null | null | 2406.04070 | null | null | http://arxiv.org/pdf/2406.04070v1 | 2024-06-06T13:34:43Z | 2024-06-06T13:34:43Z | Batch-in-Batch: a new adversarial training framework for initial
perturbation and sample selection | Adversarial training methods commonly generate an independent initial perturbation for adversarial samples from a simple uniform distribution, and obtain the training batch for the classifier without selection. In this work, we propose a simple yet effective training framework called Batch-in-Batch (BB) to enhance model robustness. Specifically, it involves a joint construction of initial values that can simultaneously generate $m$ sets of perturbations from the original batch set to provide more diversity for adversarial samples, and also includes various sample selection strategies that enable the trained models to have smoother losses and avoid overconfident outputs. Through extensive experiments on three benchmark datasets (CIFAR-10, SVHN, CIFAR-100) with two networks (PreActResNet18 and WideResNet28-10) that are used in both the single-step (Noise-Fast Gradient Sign Method, N-FGSM) and multi-step (Projected Gradient Descent, PGD-10) adversarial training, we show that models trained within the BB framework consistently have higher adversarial accuracy across various adversarial settings, notably achieving over a 13% improvement on the SVHN dataset with an attack radius of 8/255 compared to the N-FGSM baseline model. Furthermore, experimental analysis of the efficiency of both the proposed initial perturbation method and sample selection strategies validates our insights. Finally, we show that our framework is cost-effective in terms of computational resources, even with a relatively large value of $m$. | [
"['Yinting Wu' 'Pai Peng' 'Bo Cai' 'Le Li']"
] |
null | null | 2406.04071 | null | null | http://arxiv.org/pdf/2406.04071v1 | 2024-06-06T13:36:41Z | 2024-06-06T13:36:41Z | Dynamic angular synchronization under smoothness constraints | Given an undirected measurement graph $\mathcal{H} = ([n], \mathcal{E})$, the classical angular synchronization problem consists of recovering unknown angles $\theta_1^*,\dots,\theta_n^*$ from a collection of noisy pairwise measurements of the form $(\theta_i^* - \theta_j^*) \mod 2\pi$, for all $\{i,j\} \in \mathcal{E}$. This problem arises in a variety of applications, including computer vision, time synchronization of distributed networks, and ranking from pairwise comparisons. In this paper, we consider a dynamic version of this problem where the angles, and also the measurement graphs, evolve over $T$ time points. Assuming a smoothness condition on the evolution of the latent angles, we derive three algorithms for joint estimation of the angles over all time points. Moreover, for one of the algorithms, we establish non-asymptotic recovery guarantees for the mean-squared error (MSE) under different statistical models. In particular, we show that the MSE converges to zero as $T$ increases under milder conditions than in the static setting. This includes the setting where the measurement graphs are highly sparse and disconnected, and also when the measurement noise is large and can potentially increase with $T$. We complement our theoretical results with experiments on synthetic data. | [
"['Ernesto Araya' 'Mihai Cucuringu' 'Hemant Tyagi']"
] |
null | null | 2406.04081 | null | null | http://arxiv.org/pdf/2406.04081v1 | 2024-06-06T13:51:39Z | 2024-06-06T13:51:39Z | Bootstrapping Expectiles in Reinforcement Learning | Many classic Reinforcement Learning (RL) algorithms rely on a Bellman operator, which involves an expectation over the next states, leading to the concept of bootstrapping. To introduce a form of pessimism, we propose to replace this expectation with an expectile. In practice, this can be very simply done by replacing the $L_2$ loss with a more general expectile loss for the critic. Introducing pessimism in RL is desirable for various reasons, such as tackling the overestimation problem (for which classic solutions are double Q-learning or the twin-critic approach of TD3) or robust RL (where transitions are adversarial). We study these two cases empirically. For the overestimation problem, we show that the proposed approach, ExpectRL, provides better results than a classic twin-critic. On robust RL benchmarks, involving changes of the environment, we show that our approach is more robust than classic RL algorithms. We also introduce a variation of ExpectRL combined with domain randomization which is competitive with state-of-the-art robust RL agents. Finally, we also extend ExpectRL with a mechanism for automatically choosing the expectile value, that is, the degree of pessimism. | [
"['Pierre Clavier' 'Emmanuel Rachelson' 'Erwan Le Pennec' 'Matthieu Geist']"
] |
null | null | 2406.04088 | null | null | http://arxiv.org/pdf/2406.04088v1 | 2024-06-06T13:58:41Z | 2024-06-06T13:58:41Z | Deterministic Uncertainty Propagation for Improved Model-Based Offline
Reinforcement Learning | Current approaches to model-based offline Reinforcement Learning (RL) often incorporate uncertainty-based reward penalization to address the distributional shift problem. While these approaches have achieved some success, we argue that this penalization introduces excessive conservatism, potentially resulting in suboptimal policies through underestimation. We identify the lack of a reliable uncertainty estimator capable of propagating uncertainties through the Bellman operator as an important cause of over-penalization. The common approach to calculating the penalty term relies on sampling-based uncertainty estimation, resulting in high variance. To address this challenge, we propose a novel method termed Moment Matching Offline Model-Based Policy Optimization (MOMBO). MOMBO learns a Q-function using moment matching, which allows us to deterministically propagate uncertainties through the Q-function. We evaluate MOMBO's performance across various environments and demonstrate empirically that MOMBO is a more stable and sample-efficient approach. | [
"['Abdullah Akgül' 'Manuel Haußmann' 'Melih Kandemir']"
] |
null | null | 2406.04089 | null | null | http://arxiv.org/pdf/2406.04089v1 | 2024-06-06T13:59:51Z | 2024-06-06T13:59:51Z | On Limitation of Transformer for Learning HMMs | Despite the remarkable success of Transformer-based architectures in various sequential modeling tasks, such as natural language processing, computer vision, and robotics, their ability to learn basic sequential models, like Hidden Markov Models (HMMs), is still unclear. This paper investigates the performance of Transformers in learning HMMs and their variants through extensive experimentation and compares them to Recurrent Neural Networks (RNNs). We show that Transformers consistently underperform RNNs in both training speed and testing accuracy across all tested HMM models. There are even challenging HMM instances where Transformers struggle to learn, while RNNs can successfully do so. Our experiments further reveal the relation between the depth of Transformers and the longest sequence length they can effectively learn, based on the types and the complexity of HMMs. To address the limitation of transformers in modeling HMMs, we demonstrate that a variant of the Chain-of-Thought (CoT), called $\textit{block CoT}$, applied in the training phase can help transformers to reduce the evaluation error and to learn longer sequences at the cost of increased training time. Finally, we complement our empirical findings by theoretical results proving the expressiveness of transformers in approximating HMMs with logarithmic depth. | [
"['Jiachen Hu' 'Qinghua Liu' 'Chi Jin']"
] |
null | null | 2406.04090 | null | null | http://arxiv.org/pdf/2406.04090v1 | 2024-06-06T14:01:28Z | 2024-06-06T14:01:28Z | Interpretable Lightweight Transformer via Unrolling of Learned Graph
Smoothness Priors | We build interpretable and lightweight transformer-like neural networks by unrolling iterative optimization algorithms that minimize graph smoothness priors -- the quadratic graph Laplacian regularizer (GLR) and the $\ell_1$-norm graph total variation (GTV) -- subject to an interpolation constraint. The crucial insight is that a normalized signal-dependent graph learning module amounts to a variant of the basic self-attention mechanism in conventional transformers. Unlike "black-box" transformers that require learning of large key, query and value matrices to compute scaled dot products as affinities and subsequent output embeddings, resulting in huge parameter sets, our unrolled networks employ shallow CNNs to learn low-dimensional features per node to establish pairwise Mahalanobis distances and construct sparse similarity graphs. At each layer, given a learned graph, the target interpolated signal is simply a low-pass filtered output derived from the minimization of an assumed graph smoothness prior, leading to a dramatic reduction in parameter count. Experiments for two image interpolation applications verify the restoration performance, parameter efficiency and robustness to covariate shift of our graph-based unrolled networks compared to conventional transformers. | [
"['Tam Thuc Do' 'Parham Eftekhar' 'Seyed Alireza Hosseini' 'Gene Cheung'\n 'Philip Chou']"
] |
null | null | 2406.04093 | null | null | http://arxiv.org/pdf/2406.04093v1 | 2024-06-06T14:10:12Z | 2024-06-06T14:10:12Z | Scaling and evaluating sparse autoencoders | Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the properties of autoencoder scaling is difficult due to the need to balance reconstruction and sparsity objectives and the presence of dead latents. We propose using k-sparse autoencoders [Makhzani and Frey, 2013] to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. Additionally, we find modifications that result in few dead latents, even at the largest scales we tried. Using these techniques, we find clean scaling laws with respect to autoencoder size and sparsity. We also introduce several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects. These metrics all generally improve with autoencoder size. To demonstrate the scalability of our approach, we train a 16 million latent autoencoder on GPT-4 activations for 40 billion tokens. We release training code and autoencoders for open-source models, as well as a visualizer. | [
"['Leo Gao' 'Tom Dupré la Tour' 'Henk Tillman' 'Gabriel Goh' 'Rajan Troll'\n 'Alec Radford' 'Ilya Sutskever' 'Jan Leike' 'Jeffrey Wu']"
] |
null | null | 2406.04098 | null | null | http://arxiv.org/pdf/2406.04098v1 | 2024-06-06T14:13:38Z | 2024-06-06T14:13:38Z | A Large-Scale Neutral Comparison Study of Survival Models on
Low-Dimensional Data | This work presents the first large-scale neutral benchmark experiment focused on single-event, right-censored, low-dimensional survival data. Benchmark experiments are essential in methodological research to scientifically compare new and existing model classes through proper empirical evaluation. Existing benchmarks in the survival literature are often narrow in scope, focusing, for example, on high-dimensional data. Additionally, they may lack appropriate tuning or evaluation procedures, or are qualitative reviews, rather than quantitative comparisons. This comprehensive study aims to fill the gap by neutrally evaluating a broad range of methods and providing generalizable conclusions. We benchmark 18 models, ranging from classical statistical approaches to many common machine learning methods, on 32 publicly available datasets. The benchmark tunes for both a discrimination measure and a proper scoring rule to assess performance in different settings. Evaluating on 8 survival metrics, we assess discrimination, calibration, and overall predictive performance of the tested models. Using discrimination measures, we find that no method significantly outperforms the Cox model. However, (tuned) Accelerated Failure Time models were able to achieve significantly better results with respect to overall predictive performance as measured by the right-censored log-likelihood. Machine learning methods that performed comparably well include Oblique Random Survival Forests under discrimination, and Cox-based likelihood-boosting under overall predictive performance. We conclude that for predictive purposes in the standard survival analysis setting of low-dimensional, right-censored data, the Cox Proportional Hazards model remains a simple and robust method, sufficient for practitioners. | [
"['Lukas Burk' 'John Zobolas' 'Bernd Bischl' 'Andreas Bender'\n 'Marvin N. Wright' 'Raphael Sonabend']"
] |
null | null | 2406.04099 | null | null | http://arxiv.org/pdf/2406.04099v1 | 2024-06-06T14:15:12Z | 2024-06-06T14:15:12Z | Enhancing Weather Predictions: Super-Resolution via Deep Diffusion
Models | This study investigates the application of deep-learning diffusion models for the super-resolution of weather data, a novel approach aimed at enhancing the spatial resolution and detail of meteorological variables. Leveraging the capabilities of diffusion models, specifically the SR3 and ResDiff architectures, we present a methodology for transforming low-resolution weather data into high-resolution outputs. Our experiments, conducted using the WeatherBench dataset, focus on the super-resolution of the two-meter temperature variable, demonstrating the models' ability to generate detailed and accurate weather maps. The results indicate that the ResDiff model, further improved by incorporating physics-based modifications, significantly outperforms traditional SR3 methods in terms of Mean Squared Error (MSE), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR). This research highlights the potential of diffusion models in meteorological applications, offering insights into their effectiveness, challenges, and prospects for future advancements in weather prediction and climate analysis. | [
"['Jan Martinů' 'Petr Šimánek']"
] |
null | null | 2406.04103 | null | null | http://arxiv.org/pdf/2406.04103v1 | 2024-06-06T14:20:21Z | 2024-06-06T14:20:21Z | Multistep Distillation of Diffusion Models via Moment Matching | We present a new method for making diffusion models faster to sample. The method distills many-step diffusion models into few-step models by matching conditional expectations of the clean data given noisy data along the sampling trajectory. Our approach extends recently proposed one-step methods to the multi-step case, and provides a new perspective by interpreting these approaches in terms of moment matching. By using up to 8 sampling steps, we obtain distilled models that outperform not only their one-step versions but also their original many-step teacher models, obtaining new state-of-the-art results on the Imagenet dataset. We also show promising results on a large text-to-image model where we achieve fast generation of high resolution images directly in image space, without needing autoencoders or upsamplers. | [
"['Tim Salimans' 'Thomas Mensink' 'Jonathan Heek' 'Emiel Hoogeboom']"
] |
null | null | 2406.04105 | null | null | http://arxiv.org/pdf/2406.04105v1 | 2024-06-06T14:21:15Z | 2024-06-06T14:21:15Z | From Tissue Plane to Organ World: A Benchmark Dataset for Multimodal
Biomedical Image Registration using Deep Co-Attention Networks | Correlating neuropathology with neuroimaging findings provides a multiscale view of pathologic changes in the human organ spanning the meso- to micro-scales, and is an emerging methodology expected to shed light on numerous disease states. To gain the most information from this multimodal, multiscale approach, it is desirable to identify precisely where a histologic tissue section was taken from within the organ in order to correlate with the tissue features in exactly the same organ region. Histology-to-organ registration poses an extra challenge, as any given histologic section can capture only a small portion of a human organ. Making use of the capabilities of state-of-the-art deep learning models, we unlock the potential to address and solve such intricate challenges. Therefore, we create the ATOM benchmark dataset, sourced from diverse institutions, with the primary objective of transforming this challenge into a machine learning problem and delivering outstanding outcomes that enlighten the biomedical community. The performance of our RegisMCAN model demonstrates the potential of deep learning to accurately predict where a subregion extracted from an organ image was obtained from within the overall 3D volume. The code and dataset can be found at: https://github.com/haizailache999/Image-Registration/tree/main | [
"['Yifeng Wang' 'Weipeng Li' 'Thomas Pearce' 'Haohan Wang']"
] |
null | null | 2406.04112 | null | null | http://arxiv.org/pdf/2406.04112v2 | 2024-06-10T02:05:26Z | 2024-06-06T14:29:49Z | Compressible Dynamics in Deep Overparameterized Low-Rank Learning &
Adaptation | While overparameterization in machine learning models offers great benefits in terms of optimization and generalization, it also leads to increased computational requirements as model sizes grow. In this work, we show that by leveraging the inherent low-dimensional structures of data and compressible dynamics within the model parameters, we can reap the benefits of overparameterization without the computational burdens. In practice, we demonstrate the effectiveness of this approach for deep low-rank matrix completion as well as fine-tuning language models. Our approach is grounded in theoretical findings for deep overparameterized low-rank matrix recovery, where we show that the learning dynamics of each weight matrix are confined to an invariant low-dimensional subspace. Consequently, we can construct and train compact, highly compressed factorizations possessing the same benefits as their overparameterized counterparts. In the context of deep matrix completion, our technique substantially improves training efficiency while retaining the advantages of overparameterization. For language model fine-tuning, we propose a method called "Deep LoRA", which improves the existing low-rank adaptation (LoRA) technique, leading to reduced overfitting and a simplified hyperparameter setup, while maintaining comparable efficiency. We validate the effectiveness of Deep LoRA on natural language tasks, particularly when fine-tuning with limited data. Our code is available at https://github.com/cjyaras/deep-lora-transformers. | [
"['Can Yaras' 'Peng Wang' 'Laura Balzano' 'Qing Qu']"
] |
null | null | 2406.04136 | null | null | http://arxiv.org/pdf/2406.04136v1 | 2024-06-06T14:57:48Z | 2024-06-06T14:57:48Z | Legal Judgment Reimagined: PredEx and the Rise of Intelligent AI
Interpretation in Indian Courts | In the era of Large Language Models (LLMs), predicting judicial outcomes poses significant challenges due to the complexity of legal proceedings and the scarcity of expert-annotated datasets. Addressing this, we introduce \textbf{Pred}iction with \textbf{Ex}planation (\texttt{PredEx}), the largest expert-annotated dataset for legal judgment prediction and explanation in the Indian context, featuring over 15,000 annotations. This groundbreaking corpus significantly enhances the training and evaluation of AI models in legal analysis, with innovations including the application of instruction tuning to LLMs. This method has markedly improved the predictive accuracy and explanatory depth of these models for legal judgments. We employed various transformer-based models, tailored for both general and Indian legal contexts. Through rigorous lexical, semantic, and expert assessments, our models effectively leverage \texttt{PredEx} to provide precise predictions and meaningful explanations, establishing it as a valuable benchmark for both the legal profession and the NLP community. | [
"['Shubham Kumar Nigam' 'Anurag Sharma' 'Danush Khanna' 'Noel Shallum'\n 'Kripabandhu Ghosh' 'Arnab Bhattacharya']"
] |
null | null | 2406.04137 | null | null | http://arxiv.org/pdf/2406.04137v1 | 2024-06-06T14:57:52Z | 2024-06-06T14:57:52Z | Optimal Batched Linear Bandits | We introduce the E$^4$ algorithm for the batched linear bandit problem, incorporating an Explore-Estimate-Eliminate-Exploit framework. With a proper choice of exploration rate, we prove E$^4$ achieves the finite-time minimax optimal regret with only $O(\log\log T)$ batches, and the asymptotically optimal regret with only $3$ batches as $T\rightarrow\infty$, where $T$ is the time horizon. We further prove a lower bound on the batch complexity of linear contextual bandits showing that any asymptotically optimal algorithm must require at least $3$ batches in expectation as $T\rightarrow\infty$, which indicates E$^4$ achieves the asymptotic optimality in regret and batch complexity simultaneously. To the best of our knowledge, E$^4$ is the first algorithm for linear bandits that simultaneously achieves the minimax and asymptotic optimality in regret with the corresponding optimal batch complexities. In addition, we show that with another choice of exploration rate E$^4$ achieves an instance-dependent regret bound requiring at most $O(\log T)$ batches, and maintains the minimax optimality and asymptotic optimality. We conduct thorough experiments to evaluate our algorithm on randomly generated instances and the challenging \textit{End of Optimism} instances \citep{lattimore2017end} which were shown to be hard to learn for optimism based algorithms. Empirical results show that E$^4$ consistently outperforms baseline algorithms with respect to regret minimization, batch complexity, and computational efficiency. | [
"['Xuanfei Ren' 'Tianyuan Jin' 'Pan Xu']"
] |
null | null | 2406.04142 | null | null | http://arxiv.org/pdf/2406.04142v1 | 2024-06-06T15:08:06Z | 2024-06-06T15:08:06Z | Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and
Practical Performance | Stochastic gradient descent with momentum, also known as Stochastic Heavy Ball method (SHB), is one of the most popular algorithms for solving large-scale stochastic optimization problems in various machine learning tasks. In practical scenarios, tuning the step-size and momentum parameters of the method is a prohibitively expensive and time-consuming process. In this work, inspired by the recent advantages of stochastic Polyak step-size in the performance of stochastic gradient descent (SGD), we propose and explore new Polyak-type variants suitable for the update rule of the SHB method. In particular, using the Iterate Moving Average (IMA) viewpoint of SHB, we propose and analyze three novel step-size selections: MomSPS$_{\max}$, MomDecSPS, and MomAdaSPS. For MomSPS$_{\max}$, we provide convergence guarantees for SHB to a neighborhood of the solution for convex and smooth problems (without assuming interpolation). If interpolation is also satisfied, then using MomSPS$_{\max}$, SHB converges to the true solution at a fast rate matching the deterministic HB. The other two variants, MomDecSPS and MomAdaSPS, are the first adaptive step-sizes for SHB that guarantee convergence to the exact minimizer without prior knowledge of the problem parameters and without assuming interpolation. The convergence analysis of SHB is tight and obtains the convergence guarantees of SGD with stochastic Polyak step-sizes as a special case. We supplement our analysis with experiments that validate the theory and demonstrate the effectiveness and robustness of the new algorithms. | [
"['Dimitris Oikonomou' 'Nicolas Loizou']"
] |
null | null | 2406.04143 | null | null | http://arxiv.org/pdf/2406.04143v1 | 2024-06-06T15:08:16Z | 2024-06-06T15:08:16Z | Do Language Models Understand Morality? Towards a Robust Detection of
Moral Content | The task of detecting moral values in text has significant implications in various fields, including natural language processing, social sciences, and ethical decision-making. Previously proposed supervised models often suffer from overfitting, leading to hyper-specialized moral classifiers that struggle to perform well on data from different domains. To address this issue, we introduce novel systems that leverage abstract concepts and common-sense knowledge acquired from Large Language Models and Natural Language Inference models during previous stages of training on multiple data sources. By doing so, we aim to develop versatile and robust methods for detecting moral values in real-world scenarios. Our approach uses the GPT 3.5 model as a zero-shot ready-made unsupervised multi-label classifier for moral values detection, eliminating the need for explicit training on labeled data. We compare it with a smaller NLI-based zero-shot model. The results show that the NLI approach achieves competitive results compared to the Davinci model. Furthermore, we conduct an in-depth investigation of the performance of supervised systems in the context of cross-domain multi-label moral value detection. This involves training supervised models on different domains to explore their effectiveness in handling data from different sources and comparing their performance with the unsupervised methods. Our contributions encompass a thorough analysis of both supervised and unsupervised methodologies for cross-domain value detection. We introduce the Davinci model as a state-of-the-art zero-shot unsupervised moral values classifier, pushing the boundaries of moral value detection without the need for explicit training on labeled data. Additionally, we perform a comparative evaluation of our approach with the supervised models, shedding light on their respective strengths and weaknesses. | [
"['Luana Bulla' 'Aldo Gangemi' 'Misael Mongiovì']"
] |
null | null | 2406.04144 | null | null | http://arxiv.org/pdf/2406.04144v1 | 2024-06-06T15:08:41Z | 2024-06-06T15:08:41Z | Redundancy-aware Action Spaces for Robot Learning | Joint space and task space control are the two dominant action modes for controlling robot arms within the robot learning literature. Actions in joint space provide precise control over the robot's pose, but tend to suffer from inefficient training; actions in task space boast data-efficient training but sacrifice the ability to perform tasks in confined spaces due to limited control over the full joint configuration. This work analyses the criteria for designing action spaces for robot manipulation and introduces ER (End-effector Redundancy), a novel action space formulation that, by addressing the redundancies present in the manipulator, aims to combine the advantages of both joint and task spaces, offering fine-grained comprehensive control with overactuated robot arms whilst achieving highly efficient robot learning. We present two implementations of ER, ERAngle (ERA) and ERJoint (ERJ), and we show that ERJ in particular demonstrates superior performance across multiple settings, especially when precise control over the robot configuration is required. We validate our results both in simulated and real robotic environments. | [
"['Pietro Mazzaglia' 'Nicholas Backshall' 'Xiao Ma' 'Stephen James']"
] |
null | null | 2406.04148 | null | null | http://arxiv.org/pdf/2406.04148v1 | 2024-06-06T15:13:48Z | 2024-06-06T15:13:48Z | Fast Redescription Mining Using Locality-Sensitive Hashing | Redescription mining is a data analysis technique that has found applications in diverse fields. The most used redescription mining approaches involve two phases: finding matching pairs among data attributes and extending the pairs. This process is relatively efficient when the number of attributes remains limited and when the attributes are Boolean, but becomes almost intractable when the data consist of many numerical attributes. In this paper, we present new algorithms that perform the matching and extension orders of magnitude faster than the existing approaches. Our algorithms are based on locality-sensitive hashing with a tailored approach to handle the discretisation of numerical attributes as used in redescription mining. | [
"['Maiju Karjalainen' 'Esther Galbrun' 'Pauli Miettinen']"
] |
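The matching phase described in the abstract above can be illustrated with a minimal random-hyperplane LSH sketch. This is a generic signature construction, not the authors' tailored handling of discretised numerical attributes; all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signatures(X, n_planes=8):
    """Hash each column (attribute) of X to a bit signature via random
    hyperplanes; strongly correlated attributes collide with high probability."""
    Xc = X - X.mean(axis=0)                      # center each attribute
    planes = rng.standard_normal((X.shape[0], n_planes))
    return (Xc.T @ planes > 0).astype(np.uint8)  # shape (n_attrs, n_planes)

def candidate_pairs(signatures):
    """Keep only attribute pairs whose signatures collide; only these are
    passed on to the (expensive) extension phase."""
    buckets = {}
    for j, sig in enumerate(signatures):
        buckets.setdefault(sig.tobytes(), []).append(j)
    return [(a, b) for group in buckets.values()
            for i, a in enumerate(group) for b in group[i + 1:]]

# Toy data: attribute 1 is a rescaled copy of attribute 0 (an ideal
# redescription candidate); attribute 2 is independent noise.
n = 500
base = rng.standard_normal(n)
X = np.column_stack([base, 2.0 * base + 5.0, rng.standard_normal(n)])
pairs = candidate_pairs(lsh_signatures(X))   # (0, 1) collides by construction
```

Because hashing is linear in the number of attributes, this avoids the quadratic all-pairs comparison that makes many-attribute numerical data intractable.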
null | null | 2406.04153 | null | null | http://arxiv.org/pdf/2406.04153v1 | 2024-06-06T15:17:00Z | 2024-06-06T15:17:00Z | Learned Feature Importance Scores for Automated Feature Engineering | Feature engineering has demonstrated substantial utility for many machine learning workflows, such as in the small data regime or when distribution shifts are severe. Thus automating this capability can relieve much manual effort and improve model performance. Towards this, we propose AutoMAN, or Automated Mask-based Feature Engineering, an automated feature engineering framework that achieves high accuracy, low latency, and can be extended to heterogeneous and time-varying data. AutoMAN is based on effectively exploring the candidate transforms space, without explicitly manifesting transformed features. This is achieved by learning feature importance masks, which can be extended to support other modalities such as time series. AutoMAN learns feature transform importance end-to-end, incorporating a dataset's task target directly into feature engineering, resulting in state-of-the-art performance with significantly lower latency compared to alternatives. | [
"['Yihe Dong' 'Sercan Arik' 'Nathanael Yoder' 'Tomas Pfister']"
] |
null | null | 2406.04155 | null | null | http://arxiv.org/pdf/2406.04155v1 | 2024-06-06T15:17:33Z | 2024-06-06T15:17:33Z | Improving Physics-Augmented Continuum Neural Radiance Field-Based
Geometry-Agnostic System Identification with Lagrangian Particle Optimization | Geometry-agnostic system identification is a technique for identifying the geometry and physical properties of an object from video sequences without any geometric assumptions. Recently, physics-augmented continuum neural radiance fields (PAC-NeRF) has demonstrated promising results for this technique by utilizing a hybrid Eulerian-Lagrangian representation, in which the geometry is represented by the Eulerian grid representations of NeRF, the physics is described by a material point method (MPM), and they are connected via Lagrangian particles. However, a notable limitation of PAC-NeRF is that its performance is sensitive to the learning of the geometry from the first frames owing to its two-step optimization. First, the grid representations are optimized with the first frames of video sequences, and then the physical properties are optimized through video sequences utilizing the fixed first-frame grid representations. This limitation can be critical when learning of the geometric structure is difficult, for example, in a few-shot (sparse view) setting. To overcome this limitation, we propose Lagrangian particle optimization (LPO), in which the positions and features of particles are optimized through video sequences in Lagrangian space. This method allows for the optimization of the geometric structure across the entire video sequence within the physical constraints imposed by the MPM. The experimental results demonstrate that the LPO is useful for geometric correction and physical identification in sparse-view settings. | [
"['Takuhiro Kaneko']"
] |
null | null | 2406.04156 | null | null | http://arxiv.org/pdf/2406.04156v1 | 2024-06-06T15:17:51Z | 2024-06-06T15:17:51Z | Pointer-Guided Pre-Training: Infusing Large Language Models with
Paragraph-Level Contextual Awareness | We introduce "pointer-guided segment ordering" (SO), a novel pre-training technique aimed at enhancing the contextual understanding of paragraph-level text representations in large language models. Our methodology leverages a self-attention-driven pointer network to restore the original sequence of shuffled text segments, addressing the challenge of capturing the structural coherence and contextual dependencies within documents. This pre-training approach is complemented by a fine-tuning methodology that incorporates dynamic sampling, augmenting the diversity of training instances and improving sample efficiency for various downstream applications. We evaluate our method on a diverse set of datasets, demonstrating its efficacy in tasks requiring sequential text classification across scientific literature and financial reporting domains. Our experiments show that pointer-guided pre-training significantly enhances the model's ability to understand complex document structures, leading to state-of-the-art performance in downstream classification tasks. | [
"['Lars Hillebrand' 'Prabhupad Pradhan' 'Christian Bauckhage' 'Rafet Sifa']"
] |
null | null | 2406.04163 | null | null | http://arxiv.org/pdf/2406.04163v2 | 2024-06-25T10:26:49Z | 2024-06-06T15:20:37Z | Essentially Sharp Estimates on the Entropy Regularization Error in
Discrete Discounted Markov Decision Processes | We study the error introduced by entropy regularization of infinite-horizon discrete discounted Markov decision processes. We show that this error decreases exponentially in the inverse regularization strength both in a weighted KL-divergence and in value with a problem-specific exponent. We provide a lower bound matching our upper bound up to a polynomial factor. Our proof relies on the correspondence of the solutions of entropy-regularized Markov decision processes with gradient flows of the unregularized reward with respect to a Riemannian metric common in natural policy gradient methods. Further, this correspondence allows us to identify the limit of the gradient flow as the generalized maximum entropy optimal policy, thereby characterizing the implicit bias of the Kakade gradient flow which corresponds to a time-continuous version of the natural policy gradient method. We use this to show that for entropy-regularized natural policy gradient methods the overall error decays exponentially in the square root of the number of iterations improving existing sublinear guarantees. | [
"['Johannes Müller' 'Semih Cayci']"
] |
null | null | 2406.04165 | null | null | http://arxiv.org/pdf/2406.04165v1 | 2024-06-06T15:22:33Z | 2024-06-06T15:22:33Z | Repurposing Language Models into Embedding Models: Finding the
Compute-Optimal Recipe | Text embeddings are essential for many tasks, such as document retrieval, clustering, and semantic similarity assessment. In this paper, we study how to contrastively train text embedding models in a compute-optimal fashion, given a suite of pre-trained decoder-only language models. Our innovation is an algorithm that produces optimal configurations of model sizes, data quantities, and fine-tuning methods for text-embedding models at different computational budget levels. The resulting recipe, which we obtain through extensive experiments, can be used by practitioners to make informed design choices for their embedding models. Specifically, our findings suggest that full fine-tuning and low-rank adaptation fine-tuning produce optimal models at lower and higher computational budgets respectively. | [
"['Alicja Ziarko' 'Albert Q. Jiang' 'Bartosz Piotrowski' 'Wenda Li'\n 'Mateja Jamnik' 'Piotr Miłoś']"
] |
null | null | 2406.04170 | null | null | http://arxiv.org/pdf/2406.04170v2 | 2024-06-16T20:15:40Z | 2024-06-06T15:27:52Z | Element-wise Multiplication Based Physics-informed Neural Networks | As a promising framework for resolving partial differential equations (PDEs), physics-informed neural networks (PINNs) have received widespread attention from industrial and scientific fields. However, lack of expressive ability and initialization pathology issues are found to prevent the application of PINNs in complex PDEs. In this work, we propose Element-wise Multiplication Based Physics-informed Neural Networks (EM-PINNs) to resolve these issues. The element-wise multiplication operation is adopted to transform features into high-dimensional, non-linear spaces, which effectively enhance the expressive capability of PINNs. Benefiting from element-wise multiplication operation, EM-PINNs can eliminate the initialization pathologies of PINNs. The proposed structure is verified on various benchmarks. The results show that EM-PINNs have strong expressive ability. | [
"['Feilong Jiang' 'Xiaonan Hou' 'Min Xia']"
] |
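The element-wise multiplication idea can be sketched as a feature layer that multiplies two affine projections of the input, producing non-linear features without a conventional activation. This is a generic sketch, not the authors' exact architecture; layer sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def em_layer(x, W1, b1, W2, b2):
    """Element-wise multiplication block: (x W1 + b1) * (x W2 + b2).
    The Hadamard product of two affine maps yields quadratic features,
    i.e., non-linearity without a conventional activation function."""
    return (x @ W1 + b1) * (x @ W2 + b2)

d_in, d_hidden = 2, 8
W1 = rng.standard_normal((d_in, d_hidden))
W2 = rng.standard_normal((d_in, d_hidden))
b1 = rng.standard_normal(d_hidden)
b2 = rng.standard_normal(d_hidden)

x = rng.standard_normal((4, d_in))          # a batch of collocation points
h = em_layer(x, W1, b1, W2, b2)
h_scaled = em_layer(2 * x, W1, b1, W2, b2)  # responds non-linearly to scaling
```

Stacking such blocks raises the polynomial degree of the features, which is one way to read the claimed gain in expressive capability.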
null | null | 2406.04184 | null | null | http://arxiv.org/pdf/2406.04184v1 | 2024-06-06T15:40:29Z | 2024-06-06T15:40:29Z | Shield Synthesis for LTL Modulo Theories | In recent years, Machine Learning (ML) models have achieved remarkable success in various domains. However, these models also tend to demonstrate unsafe behaviors, precluding their deployment in safety-critical systems. To cope with this issue, ample research focuses on developing methods that guarantee the safe behaviour of a given ML model. A prominent example is shielding, which incorporates an external component (a "shield") that blocks unwanted behavior. Despite significant progress, shielding suffers from a main setback: it is currently geared towards properties encoded solely in propositional logics (e.g., LTL) and is unsuitable for richer logics. This, in turn, limits the widespread applicability of shielding in many real-world systems. In this work, we address this gap and extend shielding to LTL modulo theories by building upon recent advances in reactive synthesis modulo theories. This allows us to develop a novel approach for generating shields conforming to complex safety specifications in these more expressive logics. We evaluate our shields and demonstrate their ability to handle rich data with temporal dynamics. To the best of our knowledge, this is the first approach for synthesizing shields for such expressivity. | [
"['Andoni Rodriguez' 'Guy Amir' 'Davide Corsi' 'Cesar Sanchez' 'Guy Katz']"
] |
null | null | 2406.04201 | null | null | http://arxiv.org/pdf/2406.04201v1 | 2024-06-06T15:59:17Z | 2024-06-06T15:59:17Z | Towards Principled Superhuman AI for Multiplayer Symmetric Games | Multiplayer games, when the number of players exceeds two, present unique challenges that fundamentally distinguish them from the extensively studied two-player zero-sum games. These challenges arise from the non-uniqueness of equilibria and the risk of agents performing highly suboptimally when adopting equilibrium strategies. While a line of recent works developed learning systems successfully achieving human-level or even superhuman performance in popular multiplayer games such as Mahjong, Poker, and Diplomacy, two critical questions remain unaddressed: (1) What is the correct solution concept that AI agents should find? and (2) What is the general algorithmic framework that provably solves all games within this class? This paper takes the first step towards solving these unique challenges of multiplayer games by provably addressing both questions in multiplayer symmetric normal-form games. We also demonstrate that many meta-algorithms developed in prior practical systems for multiplayer games can fail to achieve even the basic goal of obtaining an agent's equal share of the total reward. | [
"['Jiawei Ge' 'Yuanhao Wang' 'Wenzhe Li' 'Chi Jin']"
] |
null | null | 2406.04208 | null | null | http://arxiv.org/pdf/2406.04208v1 | 2024-06-06T16:05:45Z | 2024-06-06T16:05:45Z | Aligning Agents like Large Language Models | Training agents to behave as desired in complex 3D environments from high-dimensional sensory information is challenging. Imitation learning from diverse human behavior provides a scalable approach for training an agent with a sensible behavioral prior, but such an agent may not perform the specific behaviors of interest when deployed. To address this issue, we draw an analogy between the undesirable behaviors of imitation learning agents and the unhelpful responses of unaligned large language models (LLMs). We then investigate how the procedure for aligning LLMs can be applied to aligning agents in a 3D environment from pixels. For our analysis, we utilize an academically illustrative part of a modern console game in which the human behavior distribution is multi-modal, but we want our agent to imitate a single mode of this behavior. We demonstrate that we can align our agent to consistently perform the desired mode, while providing insights and advice for successfully applying this approach to training agents. Project webpage at https://adamjelley.github.io/aligning-agents-like-llms . | [
"['Adam Jelley' 'Yuhan Cao' 'Dave Bignell' 'Sam Devlin' 'Tabish Rashid']"
] |
null | null | 2406.04215 | null | null | http://arxiv.org/pdf/2406.04215v1 | 2024-06-06T16:14:54Z | 2024-06-06T16:14:54Z | mCSQA: Multilingual Commonsense Reasoning Dataset with Unified Creation
Strategy by Language Models and Humans | It is very challenging to curate a dataset for language-specific knowledge and common sense in order to evaluate natural language understanding capabilities of language models. Due to the limitation in the availability of annotators, most current multilingual datasets are created through translation, which cannot evaluate such language-specific aspects. Therefore, we propose Multilingual CommonsenseQA (mCSQA) based on the construction process of CSQA but leveraging language models for a more efficient construction, e.g., by asking the LM to generate questions/answers, refine answers, and verify QAs, followed by reduced human effort for verification. The constructed dataset is a benchmark for cross-lingual language-transfer capabilities of multilingual LMs, and experimental results showed high language-transfer capabilities for questions that LMs could easily solve, but lower transfer capabilities for questions requiring deep knowledge or commonsense. This highlights the necessity of language-specific datasets for evaluation and training. Finally, our method demonstrated that multilingual LMs could create QA including language-specific knowledge, significantly reducing the dataset creation cost compared to manual creation. The datasets are available at https://huggingface.co/datasets/yusuke1997/mCSQA. | [
"['Yusuke Sakai' 'Hidetaka Kamigaito' 'Taro Watanabe']"
] |
null | null | 2406.04216 | null | null | http://arxiv.org/pdf/2406.04216v2 | 2024-06-08T11:59:08Z | 2024-06-06T16:15:34Z | What Do Language Models Learn in Context? The Structured Task Hypothesis | Large language models (LLMs) exhibit an intriguing ability to learn a novel task from in-context examples presented in a demonstration, termed in-context learning (ICL). Understandably, a swath of research has been dedicated to uncovering the theories underpinning ICL. One popular hypothesis explains ICL by task selection. LLMs identify the task based on the demonstration and generalize it to the prompt. Another popular hypothesis is that ICL is a form of meta-learning, i.e., the models learn a learning algorithm at pre-training time and apply it to the demonstration. Finally, a third hypothesis argues that LLMs use the demonstration to select a composition of tasks learned during pre-training to perform ICL. In this paper, we empirically explore these three hypotheses that explain LLMs' ability to learn in context with a suite of experiments derived from common text classification tasks. We invalidate the first two hypotheses with counterexamples and provide evidence in support of the last hypothesis. Our results suggest an LLM could learn a novel task in context via composing tasks learned during pre-training. | [
"['Jiaoda Li' 'Yifan Hou' 'Mrinmaya Sachan' 'Ryan Cotterell']"
] |
null | null | 2406.04219 | null | null | http://arxiv.org/pdf/2406.04219v2 | 2024-06-26T03:39:31Z | 2024-06-06T16:18:20Z | Multi-Agent Imitation Learning: Value is Easy, Regret is Hard | We study a multi-agent imitation learning (MAIL) problem where we take the perspective of a learner attempting to coordinate a group of agents based on demonstrations of an expert doing so. Most prior work in MAIL essentially reduces the problem to matching the behavior of the expert within the support of the demonstrations. While doing so is sufficient to drive the value gap between the learner and the expert to zero under the assumption that agents are non-strategic, it does not guarantee robustness to deviations by strategic agents. Intuitively, this is because strategic deviations can depend on a counterfactual quantity: the coordinator's recommendations outside of the state distribution their recommendations induce. In response, we initiate the study of an alternative objective for MAIL in Markov Games we term the regret gap that explicitly accounts for potential deviations by agents in the group. We first perform an in-depth exploration of the relationship between the value and regret gaps. First, we show that while the value gap can be efficiently minimized via a direct extension of single-agent IL algorithms, even value equivalence can lead to an arbitrarily large regret gap. This implies that achieving regret equivalence is harder than achieving value equivalence in MAIL. We then provide a pair of efficient reductions to no-regret online convex optimization that are capable of minimizing the regret gap (a) under a coverage assumption on the expert (MALICE) or (b) with access to a queryable expert (BLADES). | [
"['Jingwu Tang' 'Gokul Swamy' 'Fei Fang' 'Zhiwei Steven Wu']"
] |
null | null | 2406.04227 | null | null | http://arxiv.org/pdf/2406.04227v1 | 2024-06-06T16:28:04Z | 2024-06-06T16:28:04Z | R-CONV: An Analytical Approach for Efficient Data Reconstruction via
Convolutional Gradients | In the effort to learn from extensive collections of distributed data, federated learning has emerged as a promising approach for preserving privacy by using a gradient-sharing mechanism instead of exchanging raw data. However, recent studies show that private training data can be leaked through many gradient attacks. While previous analytical-based attacks have successfully reconstructed input data from fully connected layers, their effectiveness diminishes when applied to convolutional layers. This paper introduces an advanced data leakage method to efficiently exploit convolutional layers' gradients. We present a surprising finding: even with non-fully invertible activation functions, such as ReLU, we can analytically reconstruct training samples from the gradients. To the best of our knowledge, this is the first analytical approach that successfully reconstructs convolutional layer inputs directly from the gradients, bypassing the need to reconstruct layers' outputs. Prior research has mainly concentrated on the weight constraints of convolution layers, overlooking the significance of gradient constraints. Our findings demonstrate that existing analytical methods used to estimate the risk of gradient attacks lack accuracy. In some layers, attacks can be launched with less than 5% of the reported constraints. | [
"['Tamer Ahmed Eltaras' 'Qutaibah Malluhi' 'Alessandro Savino'\n 'Stefano Di Carlo' 'Adnan Qayyum' 'Junaid Qadir']"
] |
null | null | 2406.04229 | null | null | http://arxiv.org/pdf/2406.04229v1 | 2024-06-06T16:29:25Z | 2024-06-06T16:29:25Z | The CLRS-Text Algorithmic Reasoning Language Benchmark | Eliciting reasoning capabilities from language models (LMs) is a critical direction on the path towards building intelligent systems. Most recent studies dedicated to reasoning focus on out-of-distribution performance on procedurally-generated synthetic benchmarks, bespoke-built to evaluate specific skills only. This trend makes results hard to transfer across publications, slowing down progress. Three years ago, a similar issue was identified and rectified in the field of neural algorithmic reasoning, with the advent of the CLRS benchmark. CLRS is a dataset generator comprising graph execution traces of classical algorithms from the Introduction to Algorithms textbook. Inspired by this, we propose CLRS-Text -- a textual version of these algorithmic traces. Out of the box, CLRS-Text is capable of procedurally generating trace data for thirty diverse, challenging algorithmic tasks across any desirable input distribution, while offering a standard pipeline in which any additional algorithmic tasks may be created in the benchmark. We fine-tune and evaluate various LMs as generalist executors on this benchmark, validating prior work and revealing a novel, interesting challenge for the LM reasoning community. Our code is available at https://github.com/google-deepmind/clrs/tree/master/clrs/_src/clrs_text. | [
"['Larisa Markeeva' 'Sean McLeish' 'Borja Ibarz' 'Wilfried Bounsi'\n 'Olga Kozlova' 'Alex Vitvitskyi' 'Charles Blundell' 'Tom Goldstein'\n 'Avi Schwarzschild' 'Petar Veličković']"
] |
null | null | 2406.04239 | null | null | http://arxiv.org/pdf/2406.04239v1 | 2024-06-06T16:38:53Z | 2024-06-06T16:38:53Z | Solving Inverse Problems in Protein Space Using Diffusion-Based Priors | The interaction of a protein with its environment can be understood and controlled via its 3D structure. Experimental methods for protein structure determination, such as X-ray crystallography or cryogenic electron microscopy, shed light on biological processes but introduce challenging inverse problems. Learning-based approaches have emerged as accurate and efficient methods to solve these inverse problems for 3D structure determination, but are specialized for a predefined type of measurement. Here, we introduce a versatile framework to turn raw biophysical measurements of varying types into 3D atomic models. Our method combines a physics-based forward model of the measurement process with a pretrained generative model providing a task-agnostic, data-driven prior. Our method outperforms posterior sampling baselines on both linear and non-linear inverse problems. In particular, it is the first diffusion-based method for refining atomic models from cryo-EM density maps. | [
"['Axel Levy' 'Eric R. Chan' 'Sara Fridovich-Keil' 'Frédéric Poitevin'\n 'Ellen D. Zhong' 'Gordon Wetzstein']"
] |
null | null | 2406.04240 | null | null | http://arxiv.org/pdf/2406.04240v4 | 2024-07-02T19:51:54Z | 2024-06-06T16:39:00Z | Hypernetworks for Personalizing ASR to Atypical Speech | Parameter-efficient fine-tuning (PEFT) for personalizing automatic speech recognition (ASR) has recently shown promise for adapting general population models to atypical speech. However, these approaches assume a priori knowledge of the atypical speech disorder being adapted for -- the diagnosis of which requires expert knowledge that is not always available. Even given this knowledge, data scarcity and high inter/intra-speaker variability further limit the effectiveness of traditional fine-tuning. To circumvent these challenges, we first identify the minimal set of model parameters required for ASR adaptation. Our analysis of each individual parameter's effect on adaptation performance allows us to reduce Word Error Rate (WER) by half while adapting 0.03% of all weights. Alleviating the need for cohort-specific models, we next propose the novel use of a meta-learned hypernetwork to generate highly individualized, utterance-level adaptations on-the-fly for a diverse set of atypical speech characteristics. Evaluating adaptation at the global, cohort and individual-level, we show that hypernetworks generalize better to out-of-distribution speakers, while maintaining an overall relative WER reduction of 75.2% using 0.1% of the full parameter budget. | [
"['Max Müller-Eberstein' 'Dianna Yee' 'Karren Yang' 'Gautam Varma Mantena'\n 'Colin Lea']"
] |
null | null | 2406.04245 | null | null | http://arxiv.org/pdf/2406.04245v1 | 2024-06-06T16:44:08Z | 2024-06-06T16:44:08Z | Online learning of a panoply of quantum objects | In many quantum tasks, there is an unknown quantum object that one wishes to learn. An online strategy for this task involves adaptively refining a hypothesis to reproduce such an object or its measurement statistics. A common evaluation metric for such a strategy is its regret, or roughly the accumulated errors in hypothesis statistics. We prove a sublinear regret bound for learning over general subsets of positive semidefinite matrices via the regularized-follow-the-leader algorithm and apply it to various settings where one wishes to learn quantum objects. For concrete applications, we present a sublinear regret bound for learning quantum states, effects, channels, interactive measurements, strategies, co-strategies, and the collection of inner products of pure states. Our bound applies to many other quantum objects with compact, convex representations. In proving our regret bound, we establish various matrix analysis results useful in quantum information theory. This includes a generalization of Pinsker's inequality for arbitrary positive semidefinite operators with possibly different traces, which may be of independent interest and applicable to more general classes of divergences. | [
"['Akshay Bansal' 'Ian George' 'Soumik Ghosh' 'Jamie Sikora' 'Alice Zheng']"
] |
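For context, the standard quantum Pinsker inequality that the result above generalizes, stated here in its usual form for unit-trace density operators $\rho, \sigma$ (the paper's version relaxes the equal-trace requirement):

```latex
% Quantum relative entropy and Pinsker's inequality for density operators
% \rho, \sigma with \operatorname{tr}\rho = \operatorname{tr}\sigma = 1.
D(\rho \,\|\, \sigma)
  = \operatorname{tr}\!\bigl[\rho\,(\log\rho - \log\sigma)\bigr]
  \;\ge\; \tfrac{1}{2}\,\lVert \rho - \sigma \rVert_1^{2}.
```

Here $\lVert \cdot \rVert_1$ is the trace norm and the logarithm is natural; constant conventions vary across the literature.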
null | null | 2406.04250 | null | null | http://arxiv.org/pdf/2406.04250v1 | 2024-06-06T16:54:20Z | 2024-06-06T16:54:20Z | Online learning of quantum processes | Among recent insights into learning quantum states, online learning and shadow tomography procedures are notable for their ability to accurately predict expectation values even of adaptively chosen observables. In contrast to the state case, quantum process learning tasks with a similarly adaptive nature have received little attention. In this work, we investigate online learning tasks for quantum processes. Whereas online learning is infeasible for general quantum channels, we show that channels of bounded gate complexity as well as Pauli channels can be online learned in the regret and mistake-bounded models of online learning. In fact, we can online learn probabilistic mixtures of any exponentially large set of known channels. We also provide a provably sample-efficient shadow tomography procedure for Pauli channels. Our results extend beyond quantum channels to non-Markovian multi-time processes, with favorable regret and mistake bounds, as well as a shadow tomography procedure. We complement our online learning upper bounds with mistake as well as computational lower bounds. On the technical side, we make use of the multiplicative weights update algorithm, classical adaptive data analysis, and Bell sampling, as well as tools from the theory of quantum combs for multi-time quantum processes. Our work initiates a study of online learning for classes of quantum channels and, more generally, non-Markovian quantum processes. Given the importance of online learning for state shadow tomography, this may serve as a step towards quantum channel variants of adaptive shadow tomography. | [
"['Asad Raza' 'Matthias C. Caro' 'Jens Eisert' 'Sumeet Khatri']"
] |
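The multiplicative weights update algorithm mentioned above can be sketched in its textbook form: maintain a distribution over hypotheses and exponentially down-weight those that incur loss. The channel-learning specifics are abstracted away; the experts, losses, and parameters below are illustrative:

```python
import numpy as np

def mwu(loss_rounds, n_experts, eta=0.1):
    """Multiplicative weights update: keep a distribution over hypotheses
    (e.g., candidate channels) and exponentially down-weight lossy ones."""
    w = np.ones(n_experts)
    alg_total, expert_totals = 0.0, np.zeros(n_experts)
    for losses in loss_rounds:            # per-expert losses in [0, 1]
        p = w / w.sum()
        alg_total += p @ losses           # expected loss of the algorithm
        expert_totals += losses           # cumulative loss of each fixed expert
        w *= np.exp(-eta * losses)
    return alg_total, expert_totals.min()

rng = np.random.default_rng(2)
T, n = 400, 5
# Hypothesis 0 is consistently good; the rest are noisy.
rounds = [np.concatenate(([0.05], rng.uniform(0.3, 1.0, n - 1)))
          for _ in range(T)]
alg_loss, best_loss = mwu(rounds, n)
regret = alg_loss - best_loss             # grows sublinearly in T
```

The regret against the best fixed hypothesis is what the abstract's regret-bounded model controls.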
null | null | 2406.04257 | null | null | http://arxiv.org/pdf/2406.04257v1 | 2024-06-06T17:03:51Z | 2024-06-06T17:03:51Z | Data Measurements for Decentralized Data Markets | Decentralized data markets can provide more equitable forms of data acquisition for machine learning. However, to realize practical marketplaces, efficient techniques for seller selection need to be developed. We propose and benchmark federated data measurements to allow a data buyer to find sellers with relevant and diverse datasets. Diversity and relevance measures enable a buyer to make relative comparisons between sellers without requiring intermediate brokers and training task-dependent models. | [
"['Charles Lu' 'Mohammad Mohammadi Amiri' 'Ramesh Raskar']"
] |
null | null | 2406.04261 | null | null | http://arxiv.org/pdf/2406.04261v1 | 2024-06-06T17:05:09Z | 2024-06-06T17:05:09Z | Simulating, Fast and Slow: Learning Policies for Black-Box Optimization | In recent years, solving optimization problems involving black-box simulators has become a point of focus for the machine learning community due to their ubiquity in science and engineering. The simulators describe a forward process $f_{\mathrm{sim}}: (\psi, x) \rightarrow y$ from simulation parameters $\psi$ and input data $x$ to observations $y$, and the goal of the optimization problem is to find parameters $\psi$ that minimize a desired loss function. Sophisticated optimization algorithms typically require gradient information regarding the forward process, $f_{\mathrm{sim}}$, with respect to the parameters $\psi$. However, obtaining gradients from black-box simulators can often be prohibitively expensive or, in some cases, impossible. Furthermore, in many applications, practitioners aim to solve a set of related problems. Thus, starting the optimization "ab initio", i.e. from scratch, each time might be inefficient if the forward model is expensive to evaluate. To address those challenges, this paper introduces a novel method for solving classes of similar black-box optimization problems by learning an active learning policy that guides a differentiable surrogate's training and uses the surrogate's gradients to optimize the simulation parameters with gradient descent. After training the policy, downstream optimization of problems involving black-box simulators requires up to $\sim$90% fewer expensive simulator calls compared to baselines such as local surrogate-based approaches, numerical optimization, and Bayesian methods. | [
"['Fabio Valerio Massoli' 'Tim Bakker' 'Thomas Hehn'\n 'Tribhuvanesh Orekondy' 'Arash Behboodi']"
] |
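The surrogate idea in the abstract above can be illustrated with a toy 1-D sketch: query the black box at a few points, fit a differentiable surrogate, and run gradient descent on the surrogate instead of the simulator. A quadratic fit stands in for the learned neural surrogate and active-learning policy; the simulator and all constants are invented for illustration:

```python
import numpy as np

def simulator(psi):
    """Black-box forward model; assume its gradients are unavailable."""
    return (psi - 1.7) ** 2 + 0.3

# Query the simulator at a few points and fit a differentiable surrogate
# (here a quadratic; the paper instead trains a neural surrogate guided
# by a learned policy that chooses where to query).
rng = np.random.default_rng(3)
psis = rng.uniform(-3.0, 3.0, 20)
ys = np.array([simulator(p) for p in psis])
a, b, c = np.polyfit(psis, ys, 2)

# Gradient descent on the surrogate -- no simulator gradients needed.
psi, lr = 0.0, 0.1
for _ in range(200):
    psi -= lr * (2 * a * psi + b)     # d/dpsi of a*psi**2 + b*psi + c
```

All 20 simulator calls happen up front; the 200 optimization steps touch only the cheap surrogate, which is the source of the claimed savings in simulator calls.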
null | null | 2406.04267 | null | null | http://arxiv.org/pdf/2406.04267v1 | 2024-06-06T17:14:44Z | 2024-06-06T17:14:44Z | Transformers need glasses! Information over-squashing in language tasks | We study how information propagates in decoder-only Transformers, which are the architectural backbone of most existing frontier large language models (LLMs). We rely on a theoretical signal propagation analysis -- specifically, we analyse the representations of the last token in the final layer of the Transformer, as this is the representation used for next-token prediction. Our analysis reveals a representational collapse phenomenon: we prove that certain distinct sequences of inputs to the Transformer can yield arbitrarily close representations in the final token. This effect is exacerbated by the low-precision floating-point formats frequently used in modern LLMs. As a result, the model is provably unable to respond to these sequences in different ways -- leading to errors in, e.g., tasks involving counting or copying. Further, we show that decoder-only Transformer language models can lose sensitivity to specific tokens in the input, which relates to the well-known phenomenon of over-squashing in graph neural networks. We provide empirical evidence supporting our claims on contemporary LLMs. Our theory also points to simple solutions towards ameliorating these issues. | [
"['Federico Barbero' 'Andrea Banino' 'Steven Kapturowski'\n 'Dharshan Kumaran' 'João G. M. Araújo' 'Alex Vitvitskyi' 'Razvan Pascanu'\n 'Petar Veličković']"
] |
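The representational collapse phenomenon described above can be illustrated with a toy low-precision example (a stand-in, not the paper's Transformer construction): mean-pooling a length-$n$ sequence of identical tokens plus one distinct token yields representations $n/(n+1)$ that are distinct as reals but become identical in half precision once $n$ is large:

```python
import numpy as np

def pooled_repr(n):
    """Toy 'representation' of a sequence of n identical tokens followed by
    one distinct token: mean pooling gives n / (n + 1), then we round to
    half precision as a stand-in for low-precision inference."""
    return np.float16(n / (n + 1))

short_a, short_b = pooled_repr(3), pooled_repr(4)       # still distinct
long_a, long_b = pooled_repr(1000), pooled_repr(1001)   # collapse in fp16
```

Once the gap between two representations falls below the floating-point spacing near their value, no downstream head can tell the sequences apart.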
null | null | 2406.04268 | null | null | http://arxiv.org/pdf/2406.04268v1 | 2024-06-06T17:15:02Z | 2024-06-06T17:15:02Z | Open-Endedness is Essential for Artificial Superhuman Intelligence | In recent years there has been a tremendous surge in the general capabilities of AI systems, mainly fuelled by training foundation models on internet-scale data. Nevertheless, the creation of open-ended, ever self-improving AI remains elusive. In this position paper, we argue that the ingredients are now in place to achieve open-endedness in AI systems with respect to a human observer. Furthermore, we claim that such open-endedness is an essential property of any artificial superhuman intelligence (ASI). We begin by providing a concrete formal definition of open-endedness through the lens of novelty and learnability. We then illustrate a path towards ASI via open-ended systems built on top of foundation models, capable of making novel, human-relevant discoveries. We conclude by examining the safety implications of generally-capable open-ended AI. We expect that open-ended foundation models will prove to be an increasingly fertile and safety-critical area of research in the near future. | [
"['Edward Hughes' 'Michael Dennis' 'Jack Parker-Holder' 'Feryal Behbahani'\n 'Aditi Mavalankar' 'Yuge Shi' 'Tom Schaul' 'Tim Rocktaschel']"
] |
null | null | 2406.04274 | null | null | http://arxiv.org/pdf/2406.04274v1 | 2024-06-06T17:23:49Z | 2024-06-06T17:23:49Z | Self-Play with Adversarial Critic: Provable and Scalable Offline
Alignment for Language Models | This work studies the challenge of aligning large language models (LLMs) with offline preference data. We focus on alignment by Reinforcement Learning from Human Feedback (RLHF) in particular. While popular preference optimization methods exhibit good empirical performance in practice, they are not theoretically guaranteed to converge to the optimal policy and can provably fail when the data coverage is sparse by classical offline reinforcement learning (RL) results. On the other hand, a recent line of work has focused on theoretically motivated preference optimization methods with provable guarantees, but these are not computationally efficient for large-scale applications like LLM alignment. To bridge this gap, we propose SPAC, a new offline preference optimization method with self-play, inspired by the on-average pessimism technique from the offline RL literature, to be the first provable and scalable approach to LLM alignment. We both provide theoretical analysis for its convergence under single-policy concentrability for the general function approximation setting and demonstrate its competitive empirical performance for LLM alignment on a 7B Mistral model with Open LLM Leaderboard evaluations. | [
"['Xiang Ji' 'Sanjeev Kulkarni' 'Mengdi Wang' 'Tengyang Xie']"
] |
null | null | 2406.04276 | null | null | http://arxiv.org/pdf/2406.04276v1 | 2024-06-06T17:25:07Z | 2024-06-06T17:25:07Z | Generative AI-in-the-loop: Integrating LLMs and GPTs into the Next
Generation Networks | In recent years, machine learning (ML) techniques have created numerous opportunities for intelligent mobile networks and have accelerated the automation of network operations. However, complex network tasks may involve variables and considerations even beyond the capacity of traditional ML algorithms. On the other hand, large language models (LLMs) have recently emerged, demonstrating near-human-level performance in cognitive tasks across various fields. However, they remain prone to hallucinations and often lack common sense in basic tasks. Therefore, they are regarded as assistive tools for humans. In this work, we propose the concept of "generative AI-in-the-loop" and utilize the semantic understanding, context awareness, and reasoning abilities of LLMs to assist humans in handling complex or unforeseen situations in mobile communication networks. We believe that combining LLMs and ML models allows both to leverage their respective capabilities and achieve better results than either model alone. To support this idea, we begin by analyzing the capabilities of LLMs and compare them with traditional ML algorithms. We then explore potential LLM-based applications in line with the requirements of next-generation networks. We further examine the integration of ML and LLMs, discussing how they can be used together in mobile networks. Unlike existing studies, our research emphasizes the fusion of LLMs with traditional ML-driven next-generation networks and serves as a comprehensive refinement of existing surveys. Finally, we provide a case study to enhance ML-based network intrusion detection with synthesized data generated by LLMs. Our case study further demonstrates the advantages of our proposed idea. | [
"['Han Zhang' 'Akram Bin Sediq' 'Ali Afana' 'Melike Erol-Kantarci']"
] |
null | null | 2406.04280 | null | null | http://arxiv.org/pdf/2406.04280v1 | 2024-06-06T17:26:40Z | 2024-06-06T17:26:40Z | xMIL: Insightful Explanations for Multiple Instance Learning in
Histopathology | Multiple instance learning (MIL) is an effective and widely used approach for weakly supervised machine learning. In histopathology, MIL models have achieved remarkable success in tasks like tumor detection, biomarker prediction, and outcome prognostication. However, MIL explanation methods are still lagging behind, as they are limited to small bag sizes or disregard instance interactions. We revisit MIL through the lens of explainable AI (XAI) and introduce xMIL, a refined framework with more general assumptions. We demonstrate how to obtain improved MIL explanations using layer-wise relevance propagation (LRP) and conduct extensive evaluation experiments on three toy settings and four real-world histopathology datasets. Our approach consistently outperforms previous explanation attempts with particularly improved faithfulness scores on challenging biomarker prediction tasks. Finally, we showcase how xMIL explanations enable pathologists to extract insights from MIL models, representing a significant advance for knowledge discovery and model debugging in digital histopathology. | [
"['Julius Hense' 'Mina Jamshidi Idaji' 'Oliver Eberle' 'Thomas Schnake'\n 'Jonas Dippel' 'Laure Ciernik' 'Oliver Buchstab' 'Andreas Mock'\n 'Frederick Klauschen' 'Klaus-Robert Müller']"
] |
null | null | 2406.04284 | null | null | http://arxiv.org/pdf/2406.04284v1 | 2024-06-06T17:28:56Z | 2024-06-06T17:28:56Z | What is Dataset Distillation Learning? | Dataset distillation has emerged as a strategy to overcome the hurdles associated with large datasets by learning a compact set of synthetic data that retains essential information from the original dataset. While distilled data can be used to train high-performing models, little is understood about how the information is stored. In this study, we posit and answer three questions about the behavior, representativeness, and point-wise information content of distilled data. We reveal distilled data cannot serve as a substitute for real data during training outside the standard evaluation setting for dataset distillation. Additionally, the distillation process retains high task performance by compressing information related to the early training dynamics of real models. Finally, we provide a framework for interpreting distilled data and reveal that individual distilled data points contain meaningful semantic information. This investigation sheds light on the intricate nature of distilled data, providing a better understanding of how they can be effectively utilized. | [
"['William Yang' 'Ye Zhu' 'Zhiwei Deng' 'Olga Russakovsky']"
] |
null | null | 2406.04291 | null | null | http://arxiv.org/pdf/2406.04291v1 | 2024-06-06T17:37:39Z | 2024-06-06T17:37:39Z | Stratified Prediction-Powered Inference for Hybrid Language Model
Evaluation | Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data. PPI achieves this by combining small amounts of human-labeled data with larger amounts of data labeled by a reasonably accurate -- but potentially biased -- automatic system, in a way that results in tighter confidence intervals for certain parameters of interest (e.g., the mean performance of a language model). In this paper, we propose a method called Stratified Prediction-Powered Inference (StratPPI), in which we show that the basic PPI estimates can be considerably improved by employing simple data stratification strategies. Without making any assumptions on the underlying automatic labeling system or data distribution, we derive an algorithm for computing provably valid confidence intervals for population parameters (such as averages) that is based on stratified sampling. In particular, we show both theoretically and empirically that, with appropriate choices of stratification and sample allocation, our approach can provide substantially tighter confidence intervals than unstratified approaches. Specifically, StratPPI is expected to improve in cases where the performance of the autorater varies across different conditional distributions of the target data. | [
"['Adam Fisch' 'Joshua Maynez' 'R. Alex Hofer' 'Bhuwan Dhingra'\n 'Amir Globerson' 'William W. Cohen']"
] |
null | null | 2406.04299 | null | null | http://arxiv.org/pdf/2406.04299v2 | 2024-06-07T03:09:35Z | 2024-06-06T17:45:00Z | NoisyGL: A Comprehensive Benchmark for Graph Neural Networks under Label
Noise | Graph Neural Networks (GNNs) exhibit strong potential in node classification task through a message-passing mechanism. However, their performance often hinges on high-quality node labels, which are challenging to obtain in real-world scenarios due to unreliable sources or adversarial attacks. Consequently, label noise is common in real-world graph data, negatively impacting GNNs by propagating incorrect information during training. To address this issue, the study of Graph Neural Networks under Label Noise (GLN) has recently gained traction. However, due to variations in dataset selection, data splitting, and preprocessing techniques, the community currently lacks a comprehensive benchmark, which impedes deeper understanding and further development of GLN. To fill this gap, we introduce NoisyGL in this paper, the first comprehensive benchmark for graph neural networks under label noise. NoisyGL enables fair comparisons and detailed analyses of GLN methods on noisy labeled graph data across various datasets, with unified experimental settings and interface. Our benchmark has uncovered several important insights that were missed in previous research, and we believe these findings will be highly beneficial for future studies. We hope our open-source benchmark library will foster further advancements in this field. The code of the benchmark can be found in https://github.com/eaglelab-zju/NoisyGL. | [
"['Zhonghao Wang' 'Danyu Sun' 'Sheng Zhou' 'Haobo Wang' 'Jiapei Fan'\n 'Longtao Huang' 'Jiajun Bu']"
] |
null | null | 2406.04302 | null | null | http://arxiv.org/pdf/2406.04302v1 | 2024-06-06T17:48:24Z | 2024-06-06T17:48:24Z | Representational Alignment Supports Effective Machine Teaching | A good teacher should not only be knowledgeable, but should be able to communicate in a way that the student understands -- to share the student's representation of the world. In this work, we integrate insights from machine teaching and pragmatic communication with the burgeoning literature on representational alignment to characterize a utility curve defining a relationship between representational alignment and teacher capability for promoting student learning. To explore the characteristics of this utility curve, we design a supervised learning environment that disentangles representational alignment from teacher accuracy. We conduct extensive computational experiments with machines teaching machines, complemented by a series of experiments in which machines teach humans. Drawing on our findings that improved representational alignment with a student improves student learning outcomes (i.e., task accuracy), we design a classroom matching procedure that assigns students to teachers based on the utility curve. If we are to design effective machine teachers, it is not enough to build teachers that are accurate -- we want teachers that can align, representationally, to their students too. | [
"['Ilia Sucholutsky' 'Katherine M. Collins' 'Maya Malaviya' 'Nori Jacoby'\n 'Weiyang Liu' 'Theodore R. Sumers' 'Michalis Korakakis' 'Umang Bhatt'\n 'Mark Ho' 'Joshua B. Tenenbaum' 'Brad Love' 'Zachary A. Pardos'\n 'Adrian Weller' 'Thomas L. Griffiths']"
] |
null | null | 2406.04303 | null | null | http://arxiv.org/pdf/2406.04303v2 | 2024-07-02T12:39:46Z | 2024-06-06T17:49:21Z | Vision-LSTM: xLSTM as Generic Vision Backbone | Transformers are widely used as generic backbones in computer vision, despite being initially introduced for natural language processing. Recently, the Long Short-Term Memory (LSTM) has been extended to a scalable and performant architecture - the xLSTM - which overcomes long-standing LSTM limitations via exponential gating and a parallelizable matrix memory structure. In this report, we introduce Vision-LSTM (ViL), an adaptation of the xLSTM building blocks to computer vision. ViL comprises a stack of xLSTM blocks where odd blocks process the sequence of patch tokens from top to bottom while even blocks go from bottom to top. Experiments show that ViL holds promise to be further deployed as a new generic backbone for computer vision architectures. | [
"['Benedikt Alkin' 'Maximilian Beck' 'Korbinian Pöppel' 'Sepp Hochreiter'\n 'Johannes Brandstetter']"
] |
null | null | 2406.04306 | null | null | http://arxiv.org/pdf/2406.04306v1 | 2024-06-06T17:53:34Z | 2024-06-06T17:53:34Z | Semantically Diverse Language Generation for Uncertainty Estimation in
Language Models | Large language models (LLMs) can suffer from hallucinations when generating text. These hallucinations impede various applications in society and industry by making LLMs untrustworthy. Current LLMs generate text in an autoregressive fashion by predicting and appending text tokens. When an LLM is uncertain about the semantic meaning of the next tokens to generate, it is likely to start hallucinating. Thus, it has been suggested that hallucinations stem from predictive uncertainty. We introduce Semantically Diverse Language Generation (SDLG) to quantify predictive uncertainty in LLMs. SDLG steers the LLM to generate semantically diverse yet likely alternatives for an initially generated text. This approach provides a precise measure of aleatoric semantic uncertainty, detecting whether the initial text is likely to be hallucinated. Experiments on question-answering tasks demonstrate that SDLG consistently outperforms existing methods while being the most computationally efficient, setting a new standard for uncertainty estimation in LLMs. | [
"['Lukas Aichberger' 'Kajetan Schweighofer' 'Mykyta Ielanskyi'\n 'Sepp Hochreiter']"
] |
null | null | 2406.04308 | null | null | http://arxiv.org/pdf/2406.04308v1 | 2024-06-06T17:55:02Z | 2024-06-06T17:55:02Z | Approximation-Aware Bayesian Optimization | High-dimensional Bayesian optimization (BO) tasks such as molecular design often require 10,000 function evaluations before obtaining meaningful results. While methods like sparse variational Gaussian processes (SVGPs) reduce computational requirements in these settings, the underlying approximations result in suboptimal data acquisitions that slow the progress of optimization. In this paper we modify SVGPs to better align with the goals of BO: targeting informed data acquisition rather than global posterior fidelity. Using the framework of utility-calibrated variational inference, we unify GP approximation and data acquisition into a joint optimization problem, thereby ensuring optimal decisions under a limited computational budget. Our approach can be used with any decision-theoretic acquisition function and is compatible with trust region methods like TuRBO. We derive efficient joint objectives for the expected improvement and knowledge gradient acquisition functions in both the standard and batch BO settings. Our approach outperforms standard SVGPs on high-dimensional benchmark tasks in control and molecular design. | [
"['Natalie Maus' 'Kyurae Kim' 'Geoff Pleiss' 'David Eriksson'\n 'John P. Cunningham' 'Jacob R. Gardner']"
] |
null | null | 2406.04309 | null | null | http://arxiv.org/pdf/2406.04309v1 | 2024-06-06T17:55:34Z | 2024-06-06T17:55:34Z | ReFiNe: Recursive Field Networks for Cross-modal Multi-scene
Representation | The common trade-offs of state-of-the-art methods for multi-shape representation (a single model "packing" multiple objects) involve trading modeling accuracy against memory and storage. We show how to encode multiple shapes represented as continuous neural fields with a higher degree of precision than previously possible and with low memory usage. Key to our approach is a recursive hierarchical formulation that exploits object self-similarity, leading to a highly compressed and efficient shape latent space. Thanks to the recursive formulation, our method supports spatial and global-to-local latent feature fusion without needing to initialize and maintain auxiliary data structures, while still allowing for continuous field queries to enable applications such as raytracing. In experiments on a set of diverse datasets, we provide compelling qualitative results and demonstrate state-of-the-art multi-scene reconstruction and compression results with a single network per dataset. | [
"['Sergey Zakharov' 'Katherine Liu' 'Adrien Gaidon' 'Rares Ambrus']"
] |
null | null | 2406.04313 | null | null | http://arxiv.org/pdf/2406.04313v4 | 2024-07-12T16:51:07Z | 2024-06-06T17:57:04Z | Improving Alignment and Robustness with Circuit Breakers | AI systems can take harmful actions and are highly vulnerable to adversarial attacks. We present an approach, inspired by recent advances in representation engineering, that interrupts the models as they respond with harmful outputs with "circuit breakers." Existing techniques aimed at improving alignment, such as refusal training, are often bypassed. Techniques such as adversarial training try to plug these holes by countering specific attacks. As an alternative to refusal training and adversarial training, circuit-breaking directly controls the representations that are responsible for harmful outputs in the first place. Our technique can be applied to both text-only and multimodal language models to prevent the generation of harmful outputs without sacrificing utility -- even in the presence of powerful unseen attacks. Notably, while adversarial robustness in standalone image recognition remains an open challenge, circuit breakers allow the larger multimodal system to reliably withstand image "hijacks" that aim to produce harmful content. Finally, we extend our approach to AI agents, demonstrating considerable reductions in the rate of harmful actions when they are under attack. Our approach represents a significant step forward in the development of reliable safeguards to harmful behavior and adversarial attacks. | [
"['Andy Zou' 'Long Phan' 'Justin Wang' 'Derek Duenas' 'Maxwell Lin'\n 'Maksym Andriushchenko' 'Rowan Wang' 'Zico Kolter' 'Matt Fredrikson'\n 'Dan Hendrycks']"
] |
null | null | 2406.04317 | null | null | http://arxiv.org/pdf/2406.04317v1 | 2024-06-06T17:57:49Z | 2024-06-06T17:57:49Z | Regularized KL-Divergence for Well-Defined Function-Space Variational
Inference in Bayesian neural networks | Bayesian neural networks (BNN) promise to combine the predictive performance of neural networks with principled uncertainty modeling important for safety-critical systems and decision making. However, posterior uncertainty estimates depend on the choice of prior, and finding informative priors in weight-space has proven difficult. This has motivated variational inference (VI) methods that pose priors directly on the function generated by the BNN rather than on weights. In this paper, we address a fundamental issue with such function-space VI approaches pointed out by Burt et al. (2020), who showed that the objective function (ELBO) is negative infinite for most priors of interest. Our solution builds on generalized VI (Knoblauch et al., 2019) with the regularized KL divergence (Quang, 2019) and is, to the best of our knowledge, the first well-defined variational objective for function-space inference in BNNs with Gaussian process (GP) priors. Experiments show that our method incorporates the properties specified by the GP prior on synthetic and small real-world data sets, and provides competitive uncertainty estimates for regression, classification and out-of-distribution detection compared to BNN baselines with both function and weight-space priors. | [
"['Tristan Cinquin' 'Robert Bamler']"
] |
null | null | 2406.04318 | null | null | http://arxiv.org/pdf/2406.04318v1 | 2024-06-06T17:58:00Z | 2024-06-06T17:58:00Z | Adaptive Sampling of k-Space in Magnetic Resonance for Rapid Pathology
Prediction | Magnetic Resonance (MR) imaging, despite its proven diagnostic utility, remains an inaccessible imaging modality for disease surveillance at the population level. A major factor rendering MR inaccessible is lengthy scan times. An MR scanner collects measurements associated with the underlying anatomy in the Fourier space, also known as the k-space. Creating a high-fidelity image requires collecting large quantities of such measurements, increasing the scan time. Traditionally to accelerate an MR scan, image reconstruction from under-sampled k-space data is the method of choice. However, recent works show the feasibility of bypassing image reconstruction and directly learning to detect disease directly from a sparser learned subset of the k-space measurements. In this work, we propose Adaptive Sampling for MR (ASMR), a sampling method that learns an adaptive policy to sequentially select k-space samples to optimize for target disease detection. On 6 out of 8 pathology classification tasks spanning the Knee, Brain, and Prostate MR scans, ASMR reaches within 2% of the performance of a fully sampled classifier while using only 8% of the k-space, as well as outperforming prior state-of-the-art work in k-space sampling such as EMRT, LOUPE, and DPS. | [
"['Chen-Yu Yen' 'Raghav Singhal' 'Umang Sharma' 'Rajesh Ranganath'\n 'Sumit Chopra' 'Lerrel Pinto']"
] |
null | null | 2406.04320 | null | null | http://arxiv.org/pdf/2406.04320v1 | 2024-06-06T17:58:09Z | 2024-06-06T17:58:09Z | Chimera: Effectively Modeling Multivariate Time Series with
2-Dimensional State Space Models | Modeling multivariate time series is a well-established problem with a wide range of applications from healthcare to financial markets. Traditional State Space Models (SSMs) are classical approaches for univariate time series modeling due to their simplicity and expressive power to represent linear dependencies. They, however, have fundamentally limited expressive power to capture non-linear dependencies, are slow in practice, and fail to model the inter-variate information flow. Despite recent attempts to improve the expressive power of SSMs by using deep structured SSMs, the existing methods are either limited to univariate time series, fail to model complex patterns (e.g., seasonal patterns), fail to dynamically model the dependencies of variate and time dimensions, and/or are input-independent. We present Chimera that uses two input-dependent 2-D SSM heads with different discretization processes to learn long-term progression and seasonal patterns. To improve the efficiency of complex 2D recurrence, we present fast training using a new 2-dimensional parallel selective scan. We further present and discuss 2-dimensional Mamba and Mamba-2 as special cases of our 2D SSM. Our experimental evaluation shows the superior performance of Chimera on extensive and diverse benchmarks, including ECG and speech time series classification, long-term and short-term time series forecasting, and time series anomaly detection. | [
"['Ali Behrouz' 'Michele Santacatterina' 'Ramin Zabih']"
] |
null | null | 2406.04321 | null | null | http://arxiv.org/pdf/2406.04321v1 | 2024-06-06T17:58:11Z | 2024-06-06T17:58:11Z | VidMuse: A Simple Video-to-Music Generation Framework with
Long-Short-Term Modeling | In this work, we systematically study music generation conditioned solely on the video. First, we present a large-scale dataset comprising 190K video-music pairs, including various genres such as movie trailers, advertisements, and documentaries. Furthermore, we propose VidMuse, a simple framework for generating music aligned with video inputs. VidMuse stands out by producing high-fidelity music that is both acoustically and semantically aligned with the video. By incorporating local and global visual cues, VidMuse enables the creation of musically coherent audio tracks that consistently match the video content through Long-Short-Term modeling. Through extensive experiments, VidMuse outperforms existing models in terms of audio quality, diversity, and audio-visual alignment. The code and datasets will be available at https://github.com/ZeyueT/VidMuse/. | [
"['Zeyue Tian' 'Zhaoyang Liu' 'Ruibin Yuan' 'Jiahao Pan' 'Xiaoqiang Huang'\n 'Qifeng Liu' 'Xu Tan' 'Qifeng Chen' 'Wei Xue' 'Yike Guo']"
] |
null | null | 2406.04323 | null | null | http://arxiv.org/pdf/2406.04323v1 | 2024-06-06T17:58:15Z | 2024-06-06T17:58:15Z | ATraDiff: Accelerating Online Reinforcement Learning with Imaginary
Trajectories | Training autonomous agents with sparse rewards is a long-standing problem in online reinforcement learning (RL), due to low data efficiency. Prior work overcomes this challenge by extracting useful knowledge from offline data, often accomplished through the learning of action distribution from offline data and utilizing the learned distribution to facilitate online RL. However, since the offline data are given and fixed, the extracted knowledge is inherently limited, making it difficult to generalize to new tasks. We propose a novel approach that leverages offline data to learn a generative diffusion model, coined as Adaptive Trajectory Diffuser (ATraDiff). This model generates synthetic trajectories, serving as a form of data augmentation and consequently enhancing the performance of online RL methods. The key strength of our diffuser lies in its adaptability, allowing it to effectively handle varying trajectory lengths and mitigate distribution shifts between online and offline data. Because of its simplicity, ATraDiff seamlessly integrates with a wide spectrum of RL methods. Empirical evaluation shows that ATraDiff consistently achieves state-of-the-art performance across a variety of environments, with particularly pronounced improvements in complicated settings. Our code and demo video are available at https://atradiff.github.io . | [
"['Qianlan Yang' 'Yu-Xiong Wang']"
] |
null | null | 2406.04327 | null | null | http://arxiv.org/pdf/2406.04327v1 | 2024-06-06T17:59:09Z | 2024-06-06T17:59:09Z | Causal Estimation of Memorisation Profiles | Understanding memorisation in language models has practical and societal implications, e.g., studying models' training dynamics or preventing copyright infringements. Prior work defines memorisation as the causal effect of training with an instance on the model's ability to predict that instance. This definition relies on a counterfactual: the ability to observe what would have happened had the model not seen that instance. Existing methods struggle to provide computationally efficient and accurate estimates of this counterfactual. Further, they often estimate memorisation for a model architecture rather than for a specific model instance. This paper fills an important gap in the literature, proposing a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics. Using this method, we characterise a model's memorisation profile--its memorisation trends across training--by only observing its behaviour on a small set of instances throughout training. In experiments with the Pythia model suite, we find that memorisation (i) is stronger and more persistent in larger models, (ii) is determined by data order and learning rate, and (iii) has stable trends across model sizes, thus making memorisation in larger models predictable from smaller ones. | [
"['Pietro Lesci' 'Clara Meister' 'Thomas Hofmann' 'Andreas Vlachos'\n 'Tiago Pimentel']"
] |
null | null | 2406.04328 | null | null | http://arxiv.org/pdf/2406.04328v2 | 2024-07-02T22:08:03Z | 2024-06-06T17:59:09Z | The Brain's Bitter Lesson: Scaling Speech Decoding With Self-Supervised
Learning | The past few years have produced a series of spectacular advances in the decoding of speech from brain activity. The engine of these advances has been the acquisition of labelled data, with increasingly large datasets acquired from single subjects. However, participants exhibit anatomical and other individual differences, and datasets use varied scanners and task designs. As a result, prior work has struggled to leverage data from multiple subjects, multiple datasets, multiple tasks, and unlabelled datasets. In turn, the field has not benefited from the rapidly growing number of open neural data repositories to exploit large-scale data and deep learning. To address this, we develop an initial set of neuroscience-inspired self-supervised objectives, together with a neural architecture, for representation learning from heterogeneous and unlabelled neural recordings. Experimental results show that representations learned with these objectives scale with data, generalise across subjects, datasets, and tasks, and are also learned faster than using only labelled data. In addition, we set new benchmarks for two foundational speech decoding tasks. Taken together, these methods now unlock the potential for training speech decoding models with orders of magnitude more existing data. | [
"['Dulhan Jayalath' 'Gilad Landau' 'Brendan Shillingford' 'Mark Woolrich'\n 'Oiwi Parker Jones']"
] |
null | null | 2406.04329 | null | null | http://arxiv.org/pdf/2406.04329v1 | 2024-06-06T17:59:10Z | 2024-06-06T17:59:10Z | Simplified and Generalized Masked Diffusion for Discrete Data | Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterization, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models. We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.78 (CIFAR-10) and 3.42 (ImageNet 64$\times$64) bits per dimension that are comparable or better than autoregressive models of similar sizes. | [
"['Jiaxin Shi' 'Kehang Han' 'Zhe Wang' 'Arnaud Doucet'\n 'Michalis K. Titsias']"
] |
null | null | 2406.04331 | null | null | http://arxiv.org/pdf/2406.04331v1 | 2024-06-06T17:59:10Z | 2024-06-06T17:59:10Z | PaCE: Parsimonious Concept Engineering for Large Language Models | Large Language Models (LLMs) are being used for a wide variety of tasks. While they are capable of generating human-like responses, they can also produce undesirable output including potentially harmful information, racist or sexist language, and hallucinations. Alignment methods are designed to reduce such undesirable output, via techniques such as fine-tuning, prompt engineering, and representation engineering. However, existing methods face several challenges: some require costly fine-tuning for every alignment task; some do not adequately remove undesirable concepts, failing alignment; some remove benign concepts, lowering the linguistic capabilities of LLMs. To address these issues, we propose Parsimonious Concept Engineering (PaCE), a novel activation engineering framework for alignment. First, to sufficiently model the concepts, we construct a large-scale concept dictionary in the activation space, in which each atom corresponds to a semantic concept. Then, given any alignment task, we instruct a concept partitioner to efficiently annotate the concepts as benign or undesirable. Finally, at inference time, we decompose the LLM activations along the concept dictionary via sparse coding, to accurately represent the activation as a linear combination of the benign and undesirable components. By removing the latter ones from the activation, we reorient the behavior of LLMs towards alignment goals. We conduct experiments on tasks such as response detoxification, faithfulness enhancement, and sentiment revising, and show that PaCE achieves state-of-the-art alignment performance while maintaining linguistic capabilities. | [
"['Jinqi Luo' 'Tianjiao Ding' 'Kwan Ho Ryan Chan' 'Darshan Thaker'\n 'Aditya Chattopadhyay' 'Chris Callison-Burch' 'René Vidal']"
] |
null | null | 2406.04332 | null | null | http://arxiv.org/pdf/2406.04332v1 | 2024-06-06T17:59:23Z | 2024-06-06T17:59:23Z | Coarse-To-Fine Tensor Trains for Compact Visual Representations | The ability to learn compact, high-quality, and easy-to-optimize representations for visual data is paramount to many applications such as novel view synthesis and 3D reconstruction. Recent work has shown substantial success in using tensor networks to design such compact and high-quality representations. However, the ability to optimize tensor-based representations, and in particular, the highly compact tensor train representation, is still lacking. This has prevented practitioners from deploying the full potential of tensor networks for visual data. To this end, we propose 'Prolongation Upsampling Tensor Train (PuTT)', a novel method for learning tensor train representations in a coarse-to-fine manner. Our method involves the prolonging or `upsampling' of a learned tensor train representation, creating a sequence of 'coarse-to-fine' tensor trains that are incrementally refined. We evaluate our representation along three axes: (1). compression, (2). denoising capability, and (3). image completion capability. To assess these axes, we consider the tasks of image fitting, 3D fitting, and novel view synthesis, where our method shows an improved performance compared to state-of-the-art tensor-based methods. For full results see our project webpage: https://sebulo.github.io/PuTT_website/ | [
"['Sebastian Loeschcke' 'Dan Wang' 'Christian Leth-Espensen'\n 'Serge Belongie' 'Michael J. Kastoryano' 'Sagie Benaim']"
] |
null | null | 2406.04336 | null | null | http://arxiv.org/pdf/2406.04336v1 | 2024-06-06T17:59:41Z | 2024-06-06T17:59:41Z | On the Expressive Power of Spectral Invariant Graph Neural Networks | Incorporating spectral information to enhance Graph Neural Networks (GNNs) has shown promising results but raises a fundamental challenge due to the inherent ambiguity of eigenvectors. Various architectures have been proposed to address this ambiguity, referred to as spectral invariant architectures. Notable examples include GNNs and Graph Transformers that use spectral distances, spectral projection matrices, or other invariant spectral features. However, the potential expressive power of these spectral invariant architectures remains largely unclear. The goal of this work is to gain a deep theoretical understanding of the expressive power obtainable when using spectral features. We first introduce a unified message-passing framework for designing spectral invariant GNNs, called Eigenspace Projection GNN (EPNN). A comprehensive analysis shows that EPNN essentially unifies all prior spectral invariant architectures, in that they are either strictly less expressive or equivalent to EPNN. A fine-grained expressiveness hierarchy among different architectures is also established. On the other hand, we prove that EPNN itself is bounded by a recently proposed class of Subgraph GNNs, implying that all these spectral invariant architectures are strictly less expressive than 3-WL. Finally, we discuss whether using spectral features can gain additional expressiveness when combined with more expressive GNNs. | [
"['Bohang Zhang' 'Lingxiao Zhao' 'Haggai Maron']"
] |
null | null | 2406.04344 | null | null | http://arxiv.org/pdf/2406.04344v1 | 2024-06-06T17:59:56Z | 2024-06-06T17:59:56Z | Verbalized Machine Learning: Revisiting Machine Learning with Language
Models | Motivated by the large progress made by large language models (LLMs), we introduce the framework of verbalized machine learning (VML). In contrast to conventional machine learning models that are typically optimized over a continuous parameter space, VML constrains the parameter space to be human-interpretable natural language. Such a constraint leads to a new perspective of function approximation, where an LLM with a text prompt can be viewed as a function parameterized by the text prompt. Guided by this perspective, we revisit classical machine learning problems, such as regression and classification, and find that these problems can be solved by an LLM-parameterized learner and optimizer. The major advantages of VML include (1) easy encoding of inductive bias: prior knowledge about the problem and hypothesis class can be encoded in natural language and fed into the LLM-parameterized learner; (2) automatic model class selection: the optimizer can automatically select a concrete model class based on data and verbalized prior knowledge, and it can update the model class during training; and (3) interpretable learner updates: the LLM-parameterized optimizer can provide explanations for why each learner update is performed. We conduct several studies to empirically evaluate the effectiveness of VML, and hope that VML can serve as a stepping stone to stronger interpretability and trustworthiness in ML. | [
"['Tim Z. Xiao' 'Robert Bamler' 'Bernhard Schölkopf' 'Weiyang Liu']"
] |
null | null | 2406.04348 | null | null | http://arxiv.org/abs/2406.04348v1 | 2024-05-07T14:19:11Z | 2024-05-07T14:19:11Z | Gaining Insights into Group-Level Course Difficulty via Differential
Course Functioning | Curriculum Analytics (CA) studies curriculum structure and student data to ensure the quality of educational programs. One desirable property of courses within curricula is that they are not unexpectedly more difficult for students of different backgrounds. While prior work points to likely variations in course difficulty across student groups, robust methodologies for capturing such variations are scarce, and existing approaches do not adequately decouple course-specific difficulty from students' general performance levels. The present study introduces Differential Course Functioning (DCF) as an Item Response Theory (IRT)-based CA methodology. DCF controls for student performance levels and examines whether significant differences exist in how distinct student groups succeed in a given course. Leveraging data from over 20,000 students at a large public university, we demonstrate DCF's ability to detect inequities in undergraduate course difficulty across student groups described by grade achievement. We compare major pairs with high co-enrollment and transfer students to their non-transfer peers. For the former, our findings suggest a link between DCF effect sizes and the alignment of course content to student home department motivating interventions targeted towards improving course preparedness. For the latter, results suggest minor variations in course-specific difficulty between transfer and non-transfer students. While this is desirable, it also suggests that interventions targeted toward mitigating grade achievement gaps in transfer students should encompass comprehensive support beyond enhancing preparedness for individual courses. By providing more nuanced and equitable assessments of academic performance and difficulties experienced by diverse student populations, DCF could support policymakers, course articulation officers, and student advisors. | [
"['Frederik Baucks' 'Robin Schmucker' 'Conrad Borchers' 'Zachary A. Pardos'\n 'Laurenz Wiskott']"
] |
null | null | 2406.04350 | null | null | http://arxiv.org/pdf/2406.04350v1 | 2024-05-11T07:41:27Z | 2024-05-11T07:41:27Z | Prompt-guided Precise Audio Editing with Diffusion Models | Audio editing involves the arbitrary manipulation of audio content through precise control. Although text-guided diffusion models have made significant advancements in text-to-audio generation, they still face challenges in finding a flexible and precise way to modify target events within an audio track. We present a novel approach, referred to as PPAE, which serves as a general module for diffusion models and enables precise audio editing. The editing is based on the input textual prompt only and is entirely training-free. We exploit the cross-attention maps of diffusion models to facilitate accurate local editing and employ a hierarchical local-global pipeline to ensure a smoother editing process. Experimental results highlight the effectiveness of our method in various editing tasks. | [
"['Manjie Xu' 'Chenxing Li' 'Duzhen zhang' 'Dan Su' 'Wei Liang' 'Dong Yu']"
] |
null | null | 2406.04364 | null | null | http://arxiv.org/abs/2406.04364v1 | 2024-05-30T11:39:15Z | 2024-05-30T11:39:15Z | Use of a Multiscale Vision Transformer to predict Nursing Activities
Score from Low Resolution Thermal Videos in an Intensive Care Unit | Excessive caregiver workload in hospital nurses has been implicated in poorer patient care and increased worker burnout. Measurement of this workload in the Intensive Care Unit (ICU) is often done using the Nursing Activities Score (NAS), but this is usually recorded manually and sporadically. Previous work has made use of Ambient Intelligence (AmI) by using computer vision to passively derive caregiver-patient interaction times to monitor staff workload. In this letter, we propose using a Multiscale Vision Transformer (MViT) to passively predict the NAS from low-resolution thermal videos recorded in an ICU. 458 videos were obtained from an ICU in Melbourne, Australia and used to train a MViTv2 model using an indirect prediction and a direct prediction method. The indirect method predicted 1 of 8 potentially identifiable NAS activities from the video before inferring the NAS. The direct method predicted the NAS score immediately from the video. The indirect method yielded an average 5-fold accuracy of 57.21%, an area under the receiver operating characteristic curve (ROC AUC) of 0.865, a F1 score of 0.570 and a mean squared error (MSE) of 28.16. The direct method yielded a MSE of 18.16. We also showed that the MViTv2 outperforms similar models such as R(2+1)D and ResNet50-LSTM under identical settings. This study shows the feasibility of using a MViTv2 to passively predict the NAS in an ICU and monitor staff workload automatically. Our results above also show an increased accuracy in predicting NAS directly versus predicting NAS indirectly. We hope that our study can provide a direction for future work and further improve the accuracy of passive NAS monitoring. | [
"['Isaac YL Lee' 'Thanh Nguyen-Duc' 'Ryo Ueno' 'Jesse Smith' 'Peter Y Chan']"
] |
null | null | 2406.04367 | null | null | http://arxiv.org/pdf/2406.04367v1 | 2024-05-31T20:05:17Z | 2024-05-31T20:05:17Z | Physics-enhanced Neural Operator for Simulating Turbulent Transport | The precise simulation of turbulent flows is of immense importance in a variety of scientific and engineering fields, including climate science, freshwater science, and the development of energy-efficient manufacturing processes. Within the realm of turbulent flow simulation, direct numerical simulation (DNS) is widely considered to be the most reliable approach, but it is prohibitively expensive for long-term simulation at fine spatial scales. Given the pressing need for efficient simulation, there is an increasing interest in building machine learning models for turbulence, either by reconstructing DNS from alternative low-fidelity simulations or by predicting DNS based on the patterns learned from historical data. However, standard machine learning techniques remain limited in capturing complex spatio-temporal characteristics of turbulent flows, resulting in limited performance and generalizability. This paper presents a novel physics-enhanced neural operator (PENO) that incorporates physical knowledge of partial differential equations (PDEs) to accurately model flow dynamics. The model is further refined by a self-augmentation mechanism to reduce the accumulated error in long-term simulations. The proposed method is evaluated through its performance on two distinct sets of 3D turbulent flow data, showcasing the model's capability to reconstruct high-resolution DNS data, maintain the inherent physical properties of flow transport, and generate flow simulations across various resolutions. Additionally, experimental results on multiple 2D vorticity flow series, generated by different PDEs, highlight the transferability and generalizability of the proposed method. 
This confirms its applicability to a wide range of real-world scenarios in which extensive simulations are needed under diverse settings. | [
"['Shengyu Chen' 'Peyman Givi' 'Can Zheng' 'Xiaowei Jia']"
] |
null | null | 2406.04370 | null | null | http://arxiv.org/pdf/2406.04370v1 | 2024-06-01T02:08:44Z | 2024-06-01T02:08:44Z | Large Language Model Confidence Estimation via Black-Box Access | Estimating uncertainty or confidence in the responses of a model can be significant in evaluating trust not only in the responses, but also in the model as a whole. In this paper, we explore the problem of estimating confidence for responses of large language models (LLMs) with simply black-box or query access to them. We propose a simple and extensible framework where we engineer novel features and train an (interpretable) model (viz. logistic regression) on these features to estimate the confidence. We empirically demonstrate that our simple framework is effective in estimating confidence of flan-ul2, llama-13b and mistral-7b with it consistently outperforming existing black-box confidence estimation approaches on benchmark datasets such as TriviaQA, SQuAD, CoQA and Natural Questions by even over $10\%$ (on AUROC) in some cases. Additionally, our interpretable approach provides insight into features that are predictive of confidence, leading to the interesting and useful discovery that our confidence models built for one LLM generalize zero-shot across others on a given dataset. | [
"['Tejaswini Pedapati' 'Amit Dhurandhar' 'Soumya Ghosh' 'Soham Dan'\n 'Prasanna Sattigeri']"
] |
null | null | 2406.04374 | null | null | http://arxiv.org/pdf/2406.04374v1 | 2024-06-04T23:46:10Z | 2024-06-04T23:46:10Z | Dynamic Online Recommendation for Two-Sided Market with Bayesian
Incentive Compatibility | Recommender systems play a crucial role in internet economies by connecting users with relevant products or services. However, designing effective recommender systems faces two key challenges: (1) the exploration-exploitation tradeoff in balancing new product exploration against exploiting known preferences, and (2) dynamic incentive compatibility in accounting for users' self-interested behaviors and heterogeneous preferences. This paper formalizes these challenges into a Dynamic Bayesian Incentive-Compatible Recommendation Protocol (DBICRP). To address the DBICRP, we propose a two-stage algorithm (RCB) that integrates incentivized exploration with an efficient offline learning component for exploitation. In the first stage, our algorithm explores available products while maintaining dynamic incentive compatibility to determine sufficient sample sizes. The second stage employs inverse proportional gap sampling integrated with an arbitrary machine learning method to ensure sublinear regret. Theoretically, we prove that RCB achieves $O(\sqrt{KdT})$ regret and satisfies Bayesian incentive compatibility (BIC) under a Gaussian prior assumption. Empirically, we validate RCB's strong incentive gain, sublinear regret, and robustness through simulations and a real-world application on personalized warfarin dosing. Our work provides a principled approach for incentive-aware recommendation in online preference learning settings. | [
"['Yuantong Li' 'Guang Cheng' 'Xiaowu Dai']"
] |
null | null | 2406.04377 | null | null | http://arxiv.org/pdf/2406.04377v1 | 2024-06-05T22:06:57Z | 2024-06-05T22:06:57Z | Combining Graph Neural Network and Mamba to Capture Local and Global
Tissue Spatial Relationships in Whole Slide Images | In computational pathology, extracting spatial features from gigapixel whole slide images (WSIs) is a fundamental task, but due to their large size, WSIs are typically segmented into smaller tiles. A critical aspect of this analysis is aggregating information from these tiles to make predictions at the WSI level. We introduce a model that combines a message-passing graph neural network (GNN) with a state space model (Mamba) to capture both local and global spatial relationships among the tiles in WSIs. The model's effectiveness was demonstrated in predicting progression-free survival among patients with early-stage lung adenocarcinomas (LUAD). We compared the model with other state-of-the-art methods for tile-level information aggregation in WSIs, including tile-level information summary statistics-based aggregation, multiple instance learning (MIL)-based aggregation, GNN-based aggregation, and GNN-transformer-based aggregation. Additional experiments showed the impact of different types of node features and different tile sampling strategies on the model performance. This work can be easily extended to any WSI-based analysis. Code: https://github.com/rina-ding/gat-mamba. | [
"['Ruiwen Ding' 'Kha-Dinh Luong' 'Erika Rodriguez'\n 'Ana Cristina Araujo Lemos da Silva' 'William Hsu']"
] |
null | null | 2406.04378 | null | null | http://arxiv.org/pdf/2406.04378v1 | 2024-06-05T22:18:36Z | 2024-06-05T22:18:36Z | TIDMAD: Time Series Dataset for Discovering Dark Matter with AI
Denoising | Dark matter makes up approximately 85% of total matter in our universe, yet it has never been directly observed in any laboratory on Earth. The origin of dark matter is one of the most important questions in contemporary physics, and a convincing detection of dark matter would be a Nobel-Prize-level breakthrough in fundamental science. The ABRACADABRA experiment was specifically designed to search for dark matter. Although it has not yet made a discovery, ABRACADABRA has produced several dark matter search results widely endorsed by the physics community. The experiment generates ultra-long time-series data at a rate of 10 million samples per second, where the dark matter signal would manifest itself as a sinusoidal oscillation mode within the ultra-long time series. In this paper, we present the TIDMAD -- a comprehensive data release from the ABRACADABRA experiment including three key components: an ultra-long time series dataset divided into training, validation, and science subsets; a carefully-designed denoising score for direct model benchmarking; and a complete analysis framework which produces a community-standard dark matter search result suitable for publication as a physics paper. This data release enables core AI algorithms to extract the signal and produce real physics results thereby advancing fundamental science. The data downloading and associated analysis scripts are available at https://github.com/jessicafry/TIDMAD | [
"['J. T. Fry' 'Aobo Li' 'Lindley Winslow' 'Xinyi Hope Fu' 'Zhenghao Fu'\n 'Kaliroe M. W. Pappas']"
] |
null | null | 2406.04382 | null | null | http://arxiv.org/pdf/2406.04382v2 | 2024-06-13T17:53:01Z | 2024-06-06T04:05:23Z | Improving the Fairness of Deep-Learning, Short-term Crime Prediction
with Under-reporting-aware Models | Deep learning crime predictive tools use past crime data and additional behavioral datasets to forecast future crimes. Nevertheless, these tools have been shown to suffer from unfair predictions across minority racial and ethnic groups. Current approaches to address this unfairness generally propose either pre-processing methods that mitigate the bias in the training datasets by applying corrections to crime counts based on domain knowledge or in-processing methods that are implemented as fairness regularizers to optimize for both accuracy and fairness. In this paper, we propose a novel deep learning architecture that combines the power of these two approaches to increase prediction fairness. Our results show that the proposed model improves the fairness of crime predictions when compared to models with in-processing de-biasing approaches and with models without any type of bias correction, albeit at the cost of reducing accuracy. | [
"['Jiahui Wu' 'Vanessa Frias-Martinez']"
] |
null | null | 2406.04384 | null | null | http://arxiv.org/pdf/2406.04384v1 | 2024-06-06T06:52:25Z | 2024-06-06T06:52:25Z | Innovations in Cover Song Detection: A Lyrics-Based Approach | Cover songs are alternate versions of a song by a different artist. Long being a vital part of the music industry, cover songs significantly influence music culture and are commonly heard in public venues. The rise of online music platforms has further increased their prevalence, often as background music or video soundtracks. While current automatic identification methods serve adequately for original songs, they are less effective with cover songs, primarily because cover versions often significantly deviate from the original compositions. In this paper, we propose a novel method for cover song detection that utilizes the lyrics of a song. We introduce a new dataset for cover songs and their corresponding originals. The dataset contains 5078 cover songs and 2828 original songs. In contrast to other cover song datasets, it contains the annotated lyrics for the original song and the cover song. We evaluate our method on this dataset and compare it with multiple baseline approaches. Our results show that our method outperforms the baseline approaches. | [
"['Maximilian Balluff' 'Peter Mandl' 'Christian Wolff']"
] |
null | null | 2406.04391 | null | null | http://arxiv.org/pdf/2406.04391v1 | 2024-06-06T17:46:56Z | 2024-06-06T17:46:56Z | Why Has Predicting Downstream Capabilities of Frontier AI Models with
Scale Remained Elusive? | Predictable behavior from scaling advanced AI systems is an extremely desirable property. Although a well-established literature exists on how pretraining performance scales, the literature on how particular downstream capabilities scale is significantly muddier. In this work, we take a step back and ask: why has predicting specific downstream capabilities with scale remained elusive? While many factors are certainly responsible, we identify a new factor that makes modeling scaling behavior on widely used multiple-choice question-answering benchmarks challenging. Using five model families and twelve well-established multiple-choice benchmarks, we show that downstream performance is computed from negative log likelihoods via a sequence of transformations that progressively degrade the statistical relationship between performance and scale. We then reveal the mechanism causing this degradation: downstream metrics require comparing the correct choice against a small number of specific incorrect choices, meaning accurately predicting downstream capabilities requires predicting not just how probability mass concentrates on the correct choice with scale, but also how probability mass fluctuates on specific incorrect choices with scale. We empirically study how probability mass on the correct choice co-varies with probability mass on incorrect choices with increasing compute, suggesting that scaling laws for incorrect choices might be achievable. Our work also explains why pretraining scaling laws are commonly regarded as more predictable than downstream capabilities and contributes towards establishing scaling-predictable evaluations of frontier AI models. | [
"['Rylan Schaeffer' 'Hailey Schoelkopf' 'Brando Miranda' 'Gabriel Mukobi'\n 'Varun Madan' 'Adam Ibrahim' 'Herbie Bradley' 'Stella Biderman'\n 'Sanmi Koyejo']"
] |
null | null | 2406.04412 | null | null | http://arxiv.org/pdf/2406.04412v1 | 2024-06-06T18:01:02Z | 2024-06-06T18:01:02Z | Aligning Large Language Models with Self-generated Preference Data | Aligning large language models (LLMs) with human preferences becomes a key component to obtaining state-of-the-art performance, but it yields a huge cost to construct a large human-annotated preference dataset. To tackle this problem, we propose a new framework that boosts the alignment of LLMs through Self-generated Preference data (Selfie) using only a very small amount of human-annotated preference data. Our key idea is leveraging the human prior knowledge within the small (seed) data and progressively improving the alignment of LLM, by iteratively generating the responses and learning from them with the self-annotated preference data. To be specific, we propose to derive the preference label from the logits of LLM to explicitly extract the model's inherent preference. Compared to the previous approaches using external reward models or implicit in-context learning, we observe that the proposed approach is significantly more effective. In addition, we introduce a noise-aware preference learning algorithm to mitigate the risk of low quality within generated preference data. Our experimental results demonstrate that the proposed framework significantly boosts the alignment of LLMs. For example, we achieve superior alignment performance on AlpacaEval 2.0 with only 3.3% of the ground-truth preference labels in the Ultrafeedback data compared to the cases using the entire data or state-of-the-art baselines. | [
"['Dongyoung Kim' 'Kimin Lee' 'Jinwoo Shin' 'Jaehyung Kim']"
] |