Schema (11 columns): categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)

One record block per paper follows; fields that are null in every record of this section (categories, doi, year, venue) are omitted.
---
id: 2402.07543
link: http://arxiv.org/pdf/2402.07543v1
published: 2024-02-12T10:11:50Z
updated: 2024-02-12T10:11:50Z
title: Show Me How It's Done: The Role of Explanations in Fine-Tuning Language Models
authors: Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kuehnberger
abstract: Our research demonstrates the significant benefits of using fine-tuning with explanations to enhance the performance of language models. Unlike prompting, which maintains the model's parameters, fine-tuning allows the model to learn and update its parameters during a training phase. In this study, we applied fine-tuning to various sized language models using data that contained explanations of the output rather than merely presenting the answers. We found that even smaller language models with as few as 60 million parameters benefited substantially from this approach. Interestingly, our results indicated that the detailed explanations were more beneficial to smaller models than larger ones, with the latter gaining nearly the same advantage from any form of explanation, irrespective of its length. Additionally, we demonstrate that the inclusion of explanations enables the models to solve tasks that they were not able to solve without explanations. Lastly, we argue that despite the challenging nature of adding explanations, samples that contain explanations not only reduce the volume of data required for training but also promote a more effective generalization by the model. In essence, our findings suggest that fine-tuning with explanations significantly bolsters the performance of large language models.
---
id: 2402.07545
link: http://arxiv.org/pdf/2402.07545v1
published: 2024-02-12T10:16:05Z
updated: 2024-02-12T10:16:05Z
title: TransAxx: Efficient Transformers with Approximate Computing
authors: Dimitrios Danopoulos, Georgios Zervakis, Dimitrios Soudris, Jörg Henkel
abstract: Vision Transformer (ViT) models, recently introduced alongside the transformer architecture, have proven highly competitive and have become a popular alternative to Convolutional Neural Networks (CNNs). However, the high computational requirements of these models limit their practical applicability, especially on low-power devices. The current state of the art employs approximate multipliers to address the greatly increased compute demands of DNN accelerators, but no prior research has explored their use on ViT models. In this work we propose TransAxx, a framework based on the popular PyTorch library that enables fast, inherent support for approximate arithmetic, allowing the impact of approximate computing on DNNs such as ViT models to be evaluated seamlessly. Using TransAxx, we analyze the sensitivity of transformer models on the ImageNet dataset to approximate multiplications and perform approximate-aware finetuning to regain accuracy. Furthermore, we propose a methodology to generate approximate accelerators for ViT models. Our approach uses a Monte Carlo Tree Search (MCTS) algorithm to efficiently search the space of possible configurations using a hardware-driven, hand-crafted policy. Our evaluation demonstrates the efficacy of our methodology in achieving significant trade-offs between accuracy and power, resulting in substantial gains without compromising on performance.
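The abstract above centers on approximate multipliers, which trade exact arithmetic for hardware savings. As a purely illustrative sketch (not the TransAxx API), the snippet below implements one of the simplest approximate multipliers, operand truncation, and measures the mean relative error it introduces; this is the kind of accuracy/power trade-off a framework like TransAxx sweeps over. All names here are ours.

```python
def approx_mul(a: int, b: int, k: int = 2) -> int:
    """Multiply after zeroing the k least-significant bits of each operand."""
    mask = ~((1 << k) - 1)
    return (a & mask) * (b & mask)

def mean_relative_error(k: int) -> float:
    """Average relative error of approx_mul over a grid of 8-bit operands."""
    errs = []
    for a in range(1, 256, 7):
        for b in range(1, 256, 7):
            exact = a * b
            errs.append(abs(exact - approx_mul(a, b, k)) / exact)
    return sum(errs) / len(errs)

# Larger k drops more low-order bits: cheaper hardware, larger error.
```

Sweeping `k` and re-evaluating a model with the resulting arithmetic is, conceptually, the sensitivity analysis the abstract describes.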
---
id: 2402.07549
link: http://arxiv.org/abs/2402.07549v1
published: 2024-02-12T10:30:45Z
updated: 2024-02-12T10:30:45Z
title: A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing
authors: Elena Ferro, Athanasios Vasilopoulos, Corey Lammie, Manuel Le Gallo, Luca Benini, Irem Boybat, Abu Sebastian
abstract: Analog In-Memory Computing (AIMC) is an emerging technology for fast and energy-efficient Deep Learning (DL) inference. However, a certain amount of digital post-processing is required to deal with circuit mismatches and non-idealities associated with the memory devices. Efficient near-memory digital logic is critical to retain the high area/energy efficiency and low latency of AIMC. Existing systems adopt Floating Point 16 (FP16) arithmetic with limited parallelization capability and high latency. To overcome these limitations, we propose a Near-Memory digital Processing Unit (NMPU) based on fixed-point arithmetic. It achieves competitive accuracy and higher computing throughput than previous approaches while minimizing the area overhead. Moreover, the NMPU supports standard DL activation steps, such as ReLU and Batch Normalization. We perform a physical implementation of the NMPU design in a 14 nm CMOS technology and provide detailed performance, power, and area assessments. We validate the efficacy of the NMPU by using data from an AIMC chip and demonstrate that a simulated AIMC system with the proposed NMPU outperforms existing FP16-based implementations, providing a 139× speed-up, 7.8× smaller area, and competitive power consumption. Additionally, our approach achieves an inference accuracy of 86.65%/65.06%, with an accuracy drop of just 0.12%/0.4% compared to the FP16 baseline when benchmarked with ResNet9/ResNet32 networks trained on the CIFAR10/CIFAR100 datasets, respectively.
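The fixed-point arithmetic the NMPU substitutes for FP16 can be illustrated with a minimal sketch (our own, not the paper's implementation): values are scaled by 2^f, rounded to 16-bit integers, and the round-trip error is bounded by half a least-significant bit.

```python
import numpy as np

def to_fixed(x: np.ndarray, frac_bits: int, total_bits: int = 16) -> np.ndarray:
    """Quantize floats to signed fixed-point with frac_bits fractional bits."""
    scale = 1 << frac_bits
    qmin, qmax = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), qmin, qmax).astype(np.int32)

def from_fixed(q: np.ndarray, frac_bits: int) -> np.ndarray:
    """Map fixed-point integers back to floats."""
    return q.astype(np.float64) / (1 << frac_bits)

x = np.array([0.1, -1.5, 3.14159])
q = to_fixed(x, frac_bits=8)
err = np.max(np.abs(from_fixed(q, 8) - x))
# err is bounded by half an LSB, i.e. 2**-9 for frac_bits=8
```

Choosing `frac_bits` per tensor is the precision-optimization knob: more fractional bits shrink quantization error but narrow the representable range.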
---
id: 2402.07568
link: http://arxiv.org/pdf/2402.07568v2
published: 2024-02-12T11:03:52Z
updated: 2024-05-28T15:52:02Z
title: Weisfeiler-Leman at the margin: When more expressivity matters
authors: Billy J. Franks, Christopher Morris, Ameya Velingker, Floris Geerts
abstract: The Weisfeiler-Leman algorithm ($1$-WL) is a well-studied heuristic for the graph isomorphism problem. Recently, the algorithm has played a prominent role in understanding the expressive power of message-passing graph neural networks (MPNNs) and has proven effective as a graph kernel. Despite its success, $1$-WL faces challenges in distinguishing non-isomorphic graphs, leading to the development of more expressive MPNN and kernel architectures. However, the relationship between enhanced expressivity and improved generalization performance remains unclear. Here, we show that an architecture's expressivity offers limited insights into its generalization performance when viewed through graph isomorphism. Moreover, we focus on augmenting $1$-WL and MPNNs with subgraph information and employ classical margin theory to investigate the conditions under which an architecture's increased expressivity aligns with improved generalization performance. In addition, we show that gradient flow pushes the MPNN's weights toward the maximum margin solution. Further, we introduce variations of expressive $1$-WL-based kernel and MPNN architectures with provable generalization properties. Our empirical study confirms the validity of our theoretical findings.
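The $1$-WL heuristic the abstract builds on is compact enough to state in code: nodes iteratively receive colors derived from their own color and the multiset of neighbor colors, and two graphs with different final color histograms are certainly non-isomorphic (the converse need not hold, which is exactly the expressivity limit the paper probes). A minimal sketch:

```python
from collections import Counter

def wl_histogram(adj: dict, rounds: int = 3) -> Counter:
    """Run 1-WL color refinement and return the final color histogram."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        # signature = (own color, sorted multiset of neighbor colors)
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # relabel signatures with small integers to keep colors compact
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return Counter(colors.values())

# A triangle and a 3-node path are distinguished immediately:
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
# wl_histogram(triangle) != wl_histogram(path)
```

The classic failure case, two disjoint triangles versus a 6-cycle, yields identical histograms (every node has degree 2), which is why more expressive variants such as the subgraph-augmented ones in the abstract exist.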
---
id: 2402.07570
link: http://arxiv.org/pdf/2402.07570v2
published: 2024-02-12T11:04:14Z
updated: 2024-02-19T03:21:01Z
title: Only the Curve Shape Matters: Training Foundation Models for Zero-Shot Multivariate Time Series Forecasting through Next Curve Shape Prediction
authors: Cheng Feng, Long Huang, Denis Krompass
abstract: We present General Time Transformer (GTT), an encoder-only style foundation model for zero-shot multivariate time series forecasting. GTT is pretrained on a large dataset of 200M high-quality time series samples spanning diverse domains. In our proposed framework, the task of multivariate time series forecasting is formulated as a channel-wise next curve shape prediction problem, where each time series sample is represented as a sequence of non-overlapping curve shapes with a unified numerical magnitude. GTT is trained to predict the next curve shape based on a window of past curve shapes in a channel-wise manner. Experimental results demonstrate that GTT exhibits superior zero-shot multivariate forecasting capabilities on unseen time series datasets, even surpassing state-of-the-art supervised baselines. Additionally, we investigate the impact of varying GTT model parameters and training dataset scales, observing that the scaling law also holds in the context of zero-shot multivariate time series forecasting.
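The "sequence of non-overlapping curve shapes with a unified numerical magnitude" framing can be sketched in a few lines (names are ours, not GTT's API): each channel is cut into fixed-length windows and each window is rescaled so only its shape survives.

```python
import numpy as np

def curve_shapes(series: np.ndarray, window: int) -> list:
    """Split a 1-D series into non-overlapping windows, each rescaled to unit magnitude."""
    n = (len(series) // window) * window
    shapes = []
    for start in range(0, n, window):
        seg = series[start:start + window]
        scale = np.max(np.abs(seg))
        shapes.append(seg / scale if scale > 0 else seg)
    return shapes

ts = np.array([1.0, 2.0, 4.0, 8.0, 3.0, 3.0])
shapes = curve_shapes(ts, window=2)
# The segments [1, 2] and [4, 8] become the same shape [0.5, 1.0]:
# after rescaling, only the curve shape matters.
```

A model trained on such tokens predicts the next window's shape, with absolute magnitude handled separately, which is what makes the representation transfer across domains with very different scales.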
---
id: 2402.07585
link: http://arxiv.org/abs/2402.07585v1
published: 2024-02-12T11:35:04Z
updated: 2024-02-12T11:35:04Z
title: Identifying architectural design decisions for achieving green ML serving
authors: Francisco Durán, Silverio Martínez-Fernández, Matias Martinez, Patricia Lago
abstract: The growing use of large machine learning models highlights concerns about their increasing computational demands. While the energy consumption of their training phase has received attention, fewer works have considered the inference phase. For ML inference, the binding of ML models to the ML system for user access, known as ML serving, is a critical yet understudied step for achieving efficiency in ML applications. We examine the literature on ML architectural design decisions and Green AI, with a special focus on ML serving. The aim is to analyze ML serving architectural design decisions, and to understand and characterize them with respect to quality characteristics, from the point of view of researchers and practitioners in the ML serving literature. Our results (i) identify ML serving architectural design decisions along with their corresponding components and associated technological stack, and (ii) provide an overview of the quality characteristics studied in the literature, including energy efficiency. This preliminary study is the first step toward our goal of green ML serving. Our analysis may aid ML researchers and practitioners in making green-aware architecture design decisions when serving their models.
---
id: 2402.07586
link: http://arxiv.org/pdf/2402.07586v3
published: 2024-02-12T11:35:25Z
updated: 2024-06-13T14:37:15Z
title: Unveiling Group-Specific Distributed Concept Drift: A Fairness Imperative in Federated Learning
authors: Teresa Salazar, João Gama, Helder Araújo, Pedro Henriques Abreu
abstract: In the evolving field of machine learning, ensuring fairness has become a critical concern, prompting the development of algorithms designed to mitigate discriminatory outcomes in decision-making processes. However, achieving fairness in the presence of group-specific concept drift remains an unexplored frontier, and our research represents pioneering efforts in this regard. Group-specific concept drift refers to situations where one group experiences concept drift over time while another does not, leading to a decrease in fairness even if accuracy remains fairly stable. Within the framework of federated learning, where clients collaboratively train models, its distributed nature further amplifies these challenges since each client can experience group-specific concept drift independently while still sharing the same underlying concept, creating a complex and dynamic environment for maintaining fairness. One of the significant contributions of our research is the formalization and introduction of the problem of group-specific concept drift and its distributed counterpart, shedding light on its critical importance in the realm of fairness. In addition, leveraging insights from prior research, we adapt an existing distributed concept drift adaptation algorithm to tackle group-specific distributed concept drift, using a multi-model approach, a local group-specific drift detection mechanism, and continuous clustering of models over time. The findings from our experiments highlight the importance of addressing group-specific concept drift and its distributed counterpart to advance fairness in machine learning.
---
id: 2402.07588
link: http://arxiv.org/pdf/2402.07588v3
published: 2024-02-12T11:41:42Z
updated: 2024-06-01T23:16:05Z
title: Understanding Model Selection For Learning In Strategic Environments
authors: Tinashe Handina, Eric Mazumdar
abstract: The deployment of ever-larger machine learning models reflects a growing consensus that the more expressive the model class one optimizes over – and the more data one has access to – the more one can improve performance. As models get deployed in a variety of real-world scenarios, they inevitably face strategic environments. In this work, we consider the natural question of how the interplay of models and strategic interactions affects the relationship between performance at equilibrium and the expressivity of model classes. We find that strategic interactions can break the conventional view – meaning that performance does not necessarily monotonically improve as model classes get larger or more expressive (even with infinite data). We show the implications of this result in several contexts including strategic regression, strategic classification, and multi-agent reinforcement learning. In particular, we show that each of these settings admits a Braess' paradox-like phenomenon in which optimizing over less expressive model classes allows one to achieve strictly better equilibrium outcomes. Motivated by these examples, we then propose a new paradigm for model selection in games wherein an agent seeks to choose amongst different model classes to use as their action set in a game.
---
id: 2402.07594
link: http://arxiv.org/pdf/2402.07594v1
published: 2024-02-12T11:48:54Z
updated: 2024-02-12T11:48:54Z
title: Foundational Inference Models for Dynamical Systems
authors: Patrick Seifner, Kostadin Cvejoski, Ramses J. Sanchez
abstract: Ordinary differential equations (ODEs) underlie dynamical systems which serve as models for a vast number of natural and social phenomena. Yet inferring the ODE that best describes a set of noisy observations on one such phenomenon can be remarkably challenging, and the models available to achieve it tend to be highly specialized and complex. In this work we propose a novel supervised learning framework for zero-shot inference of ODEs from noisy data. We first generate large datasets of one-dimensional ODEs, by sampling distributions over the space of initial conditions, and the space of vector fields defining them. We then learn neural maps between noisy observations on the solutions of these equations, and their corresponding initial condition and vector fields. The resulting models, which we call foundational inference models (FIM), can be (i) copied and matched along the time dimension to increase their resolution; and (ii) copied and composed to build inference models of any dimensionality, without the need of any finetuning. We use FIM to model both ground-truth dynamical systems of different dimensionalities and empirical time series data in a zero-shot fashion, and outperform state-of-the-art models which are finetuned to these systems. Our (pretrained) FIMs are available online.
---
id: 2402.07595
link: http://arxiv.org/pdf/2402.07595v2
published: 2024-02-12T11:49:08Z
updated: 2024-02-13T15:39:11Z
title: Comparative Analysis of ImageNet Pre-Trained Deep Learning Models and DINOv2 in Medical Imaging Classification
authors: Yuning Huang, Jingchen Zou, Lanxi Meng, Xin Yue, Qing Zhao, Jianqiang Li, Changwei Song, Gabriel Jimenez, Shaowu Li, Guanghui Fu
abstract: Medical image analysis frequently encounters data scarcity challenges. Transfer learning has been effective in addressing this issue while conserving computational resources. The recent advent of foundational models like DINOv2, which uses the vision transformer architecture, has opened new opportunities in the field and gathered significant interest. However, DINOv2's performance on clinical data still needs to be verified. In this paper, we performed a glioma grading task using three clinical modalities of brain MRI data. We compared the performance of various pre-trained deep learning models, including those based on ImageNet and DINOv2, in a transfer learning context. Our focus was on understanding the impact of the freezing mechanism on performance. We also validated our findings on three other types of public datasets: chest radiography, fundus radiography, and dermoscopy. Our findings indicate that in our clinical dataset, DINOv2's performance was not as strong as ImageNet-based pre-trained models, whereas in public datasets, DINOv2 generally outperformed other models, especially when using the frozen mechanism. Similar performance was observed with various sizes of DINOv2 models across different tasks. In summary, DINOv2 is viable for medical image classification tasks, particularly with data resembling natural images. However, its effectiveness may vary with data that significantly differs from natural images, such as MRI. In addition, employing smaller versions of the model can be adequate for medical tasks, offering resource-saving benefits. Our codes are available at https://github.com/GuanghuiFU/medical_DINOv2_eval.
---
id: 2402.07598
link: http://arxiv.org/pdf/2402.07598v1
published: 2024-02-12T11:58:18Z
updated: 2024-02-12T11:58:18Z
title: Near-Minimax-Optimal Distributional Reinforcement Learning with a Generative Model
authors: Mark Rowland, Li Kevin Wenliang, Rémi Munos, Clare Lyle, Yunhao Tang, Will Dabney
abstract: We propose a new algorithm for model-based distributional reinforcement learning (RL), and prove that it is minimax-optimal for approximating return distributions with a generative model (up to logarithmic factors), resolving an open question of Zhang et al. (2023). Our analysis provides new theoretical results on categorical approaches to distributional RL, and also introduces a new distributional Bellman equation, the stochastic categorical CDF Bellman equation, which we expect to be of independent interest. We also provide an experimental study comparing several model-based distributional RL algorithms, with several takeaways for practitioners.
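The "categorical approaches" the abstract analyzes represent a return distribution as probability mass on a fixed grid of atoms. As a hedged sketch of that machinery (in the style of the standard C51-type projection, not the authors' exact operator), the snippet below projects a Bellman target r + γZ back onto the grid by splitting each shifted atom's mass between its two nearest grid neighbors.

```python
import numpy as np

def project_categorical(z: np.ndarray, probs: np.ndarray,
                        r: float, gamma: float) -> np.ndarray:
    """Project the shifted distribution (r + gamma*z_j, p_j) onto the grid z."""
    dz = z[1] - z[0]
    tz = np.clip(r + gamma * z, z[0], z[-1])   # shifted atoms, clipped to support
    out = np.zeros_like(probs, dtype=float)
    b = (tz - z[0]) / dz                       # fractional grid index of each atom
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    for j in range(len(z)):
        if lo[j] == hi[j]:                     # atom lands exactly on a grid point
            out[lo[j]] += probs[j]
        else:                                  # split mass between the neighbors
            out[lo[j]] += probs[j] * (hi[j] - b[j])
            out[hi[j]] += probs[j] * (b[j] - lo[j])
    return out

z = np.linspace(0.0, 10.0, 11)   # return atoms 0, 1, ..., 10
p = np.zeros(11); p[5] = 1.0     # all mass at return 5
shifted = project_categorical(z, p, r=0.5, gamma=1.0)
# mass at 5.5 splits evenly between atoms 5 and 6
```

The projection is mass-preserving, and iterating it inside a Bellman backup is what a model-based categorical algorithm does with samples from a generative model.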
---
id: 2402.07613
link: http://arxiv.org/pdf/2402.07613v1
published: 2024-02-12T12:38:20Z
updated: 2024-02-12T12:38:20Z
title: Global optimality under amenable symmetry constraints
authors: Peter Orbanz
abstract: We ask whether there exists a function or measure that (1) minimizes a given convex functional or risk and (2) satisfies a symmetry property specified by an amenable group of transformations. Examples of such symmetry properties are invariance, equivariance, or quasi-invariance. Our results draw on old ideas of Stein and Le Cam and on approximate group averages that appear in ergodic theorems for amenable groups. A class of convex sets known as orbitopes in convex analysis emerges as crucial, and we establish properties of such orbitopes in nonparametric settings. We also show how a simple device called a cocycle can be used to reduce different forms of symmetry to a single problem. As applications, we obtain results on invariant kernel mean embeddings and a Monge-Kantorovich theorem on optimality of transport plans under symmetry constraints. We also explain connections to the Hunt-Stein theorem on invariant tests.
---
id: 2402.07621
link: http://arxiv.org/pdf/2402.07621v1
published: 2024-02-12T12:55:35Z
updated: 2024-02-12T12:55:35Z
title: Correctness Verification of Neural Networks Approximating Differential Equations
authors: Petros Ellinas, Rahul Nellikath, Ignasi Ventura, Jochen Stiasny, Spyros Chatzivasileiadis
abstract: Verification of Neural Networks (NNs) that approximate the solution of Partial Differential Equations (PDEs) is a major milestone towards enhancing their trustworthiness and accelerating their deployment, especially for safety-critical systems. If successful, such NNs can become integral parts of simulation software tools which can accelerate the simulation of complex dynamic systems more than 100 times. However, the verification of these functions poses major challenges; it is not straightforward how to efficiently bound them or how to represent the derivative of the NN. This work addresses both these problems. First, we define the NN derivative as a finite difference approximation. Then, we formulate the PDE residual bounding problem alongside the Initial Value Problem's error propagation. Finally, for the first time, we tackle the problem of bounding an NN function without a priori knowledge of the output domain. For this, we build a parallel branching algorithm that combines the incomplete CROWN solver and Gradient Attack for termination and domain rejection conditions. We demonstrate the strengths and weaknesses of the proposed framework, and we suggest further work to enhance its efficiency.
---
id: 2402.07625
link: http://arxiv.org/pdf/2402.07625v2
published: 2024-02-12T13:09:21Z
updated: 2024-04-02T04:17:30Z
title: Autonomous Data Selection with Language Models for Mathematical Texts
authors: Yifan Zhang, Yifan Luo, Yang Yuan, Andrew Chi-Chih Yao
abstract: To improve language models' proficiency in mathematical reasoning via continual pretraining, we introduce a novel strategy that leverages base language models for autonomous data selection. Departing from conventional supervised fine-tuning or trained classifiers with human-annotated data, our approach, Autonomous Data Selection (AutoDS), utilizes meta-prompted language models as zero-shot verifiers to evaluate and select high-quality mathematical content autonomously. To demonstrate the efficacy of our method, we continually pretrained a 7B-parameter language model on our curated dataset, achieving substantial improvements in downstream performance on the MATH, GSM8K, and BIG-Bench Hard (BBH) tasks with a token amount reduced by orders of magnitude compared to previous continual pretraining works. Our method showcases a 2× increase in pretraining token efficiency compared to state-of-the-art baselines, underscoring the potential of our approach in enhancing models' mathematical reasoning capabilities. The AutoMathText dataset is available at https://huggingface.co/datasets/math-ai/AutoMathText. The code is available at https://github.com/yifanzhang-pro/AutoMathText.
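The "meta-prompted zero-shot verifier" idea reduces, operationally, to wrapping each candidate passage in a quality-judgment prompt and parsing the model's verdict. The sketch below is our paraphrase of that idea, not the actual AutoMathText prompt or pipeline; no LLM call is made, only the prompt construction and verdict parsing are shown.

```python
def autods_prompt(text: str) -> str:
    """Build a zero-shot quality-verification prompt for a candidate passage."""
    return (
        "You are a zero-shot data-quality verifier for pretraining corpora.\n"
        "Does the following passage contain high-quality mathematical content\n"
        "suitable for continual pretraining? Answer YES or NO.\n\n"
        f"<passage>\n{text}\n</passage>"
    )

def parse_verdict(model_output: str) -> bool:
    """Keep the passage only if the verifier answers YES."""
    return model_output.strip().upper().startswith("YES")
```

A selection loop would send `autods_prompt(passage)` to a base language model and retain passages where `parse_verdict` returns True; no human labels or trained classifier are involved, which is the point of the method.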
---
id: 2402.07626
link: http://arxiv.org/pdf/2402.07626v2
published: 2024-02-12T13:11:11Z
updated: 2024-06-10T10:25:14Z
title: Stochastic Gradient Flow Dynamics of Test Risk and its Exact Solution for Weak Features
authors: Rodrigo Veiga, Anastasia Remizova, Nicolas Macris
abstract: We investigate the test risk of continuous-time stochastic gradient flow dynamics in learning theory. Using a path integral formulation we provide, in the regime of a small learning rate, a general formula for computing the difference between test risk curves of pure gradient and stochastic gradient flows. We apply the general theory to a simple model of weak features, which displays the double descent phenomenon, and explicitly compute the corrections brought about by the added stochastic term in the dynamics, as a function of time and model parameters. The analytical results are compared to simulations of discrete-time stochastic gradient descent and show good agreement.
---
id: 2402.07630
link: http://arxiv.org/pdf/2402.07630v3
published: 2024-02-12T13:13:04Z
updated: 2024-05-27T04:04:40Z
title: G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering
authors: Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V. Chawla, Thomas Laurent, Yann LeCun, Xavier Bresson, Bryan Hooi
abstract: Given a graph with textual attributes, we enable users to "chat with their graph": that is, to ask questions about the graph using a conversational interface. In response to a user's questions, our method provides textual replies and highlights the relevant parts of the graph. While existing works integrate large language models (LLMs) and graph neural networks (GNNs) in various ways, they mostly focus on either conventional graph tasks (such as node, edge, and graph classification), or on answering simple graph queries on small or synthetic graphs. In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. Toward this goal, we first develop a Graph Question Answering (GraphQA) benchmark with data collected from different tasks. Then, we propose our G-Retriever method, introducing the first retrieval-augmented generation (RAG) approach for general textual graphs, which can be fine-tuned to enhance graph understanding via soft prompting. To resist hallucination and to allow for textual graphs that greatly exceed the LLM's context window size, G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem. Empirical evaluations show that our method outperforms baselines on textual graph tasks from multiple domains, scales well with larger graph sizes, and mitigates hallucination. Our codes and datasets are available at: https://github.com/XiaoxinHe/G-Retriever
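Before the Prize-Collecting Steiner Tree step, a RAG-over-graph pipeline needs a retrieval step that scores graph nodes against the question. As a hedged sketch of that first stage only (our own illustration, not G-Retriever's code), the snippet below ranks node-text embeddings by cosine similarity and keeps the top-k as candidate "prizes" for the subgraph-selection step.

```python
import numpy as np

def top_k_nodes(question_emb: np.ndarray, node_embs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k nodes whose embeddings are most cosine-similar
    to the question embedding, best first."""
    q = question_emb / np.linalg.norm(question_emb)
    n = node_embs / np.linalg.norm(node_embs, axis=1, keepdims=True)
    scores = n @ q
    return np.argsort(-scores)[:k]

q = np.array([1.0, 0.0])                            # toy question embedding
nodes = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])  # toy node embeddings
idx = top_k_nodes(q, nodes, k=2)
# nodes 0 and 2 are retrieved; node 1 (orthogonal to the question) is not
```

The retrieved nodes then seed the Steiner-tree optimization, which selects a small connected subgraph to hand to the LLM instead of the full graph, keeping the context within the window.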
---
id: 2402.07639
link: http://arxiv.org/pdf/2402.07639v1
published: 2024-02-12T13:24:32Z
updated: 2024-02-12T13:24:32Z
title: Tighter Bounds on the Information Bottleneck with Application to Deep Learning
authors: Nir Weingarten, Zohar Yakhini, Moshe Butman, Ran Gilad-Bachrach
abstract: Deep Neural Nets (DNNs) learn latent representations induced by their downstream task, objective function, and other parameters. The quality of the learned representations impacts the DNN's generalization ability and the coherence of the emerging latent space. The Information Bottleneck (IB) provides a hypothetically optimal framework for data modeling, yet it is often intractable. Recent efforts combined DNNs with the IB by applying VAE-inspired variational methods to approximate bounds on mutual information, resulting in improved robustness to adversarial attacks. This work introduces a new and tighter variational bound for the IB, improving performance of previous IB-inspired DNNs. These advancements strengthen the case for the IB and its variational approximations as a data modeling framework, and provide a simple method to significantly enhance the adversarial robustness of classifier DNNs.
---
id: 2402.07642
link: http://arxiv.org/pdf/2402.07642v1
published: 2024-02-12T13:30:34Z
updated: 2024-02-12T13:30:34Z
title: A Flow-based Credibility Metric for Safety-critical Pedestrian Detection
authors: Maria Lyssenko, Christoph Gladisch, Christian Heinzemann, Matthias Woehrle, Rudolph Triebel
abstract: Safety is of utmost importance for perception in automated driving (AD). However, a prime safety concern in state-of-the-art object detection is that standard evaluation schemes utilize safety-agnostic metrics to argue sufficient detection performance. Hence, it is imperative to leverage supplementary domain knowledge to accentuate safety-critical misdetections during evaluation tasks. To tackle the underspecification, this paper introduces a novel credibility metric, called c-flow, for pedestrian bounding boxes. To this end, c-flow relies on a complementary optical flow signal from image sequences and enhances the analyses of safety-critical misdetections without requiring additional labels. We implement and evaluate c-flow with a state-of-the-art pedestrian detector on a large AD dataset. Our analysis demonstrates that c-flow allows developers to identify safety-critical misdetections.
---
id: 2402.07684
link: http://arxiv.org/pdf/2402.07684v1
published: 2024-02-12T14:46:31Z
updated: 2024-02-12T14:46:31Z
title: Towards a Foundation Model for Brain Age Prediction using coVariance Neural Networks
authors: Saurabh Sihag, Gonzalo Mateos, Alejandro Ribeiro
abstract: Brain age is the estimate of biological age derived from neuroimaging datasets using machine learning algorithms. Increasing brain age with respect to chronological age can reflect increased vulnerability to neurodegeneration and cognitive decline. In this paper, we study NeuroVNN, based on coVariance neural networks, as a paradigm for a foundation model for brain age prediction. NeuroVNN is pre-trained as a regression model on a healthy population to predict chronological age using cortical thickness features, and fine-tuned to estimate brain age in different neurological contexts. Importantly, NeuroVNN adds anatomical interpretability to brain age and has a "scale-free" characteristic that allows its transfer to datasets curated according to any arbitrary brain atlas. Our results demonstrate that NeuroVNN can extract biologically plausible brain age estimates in different populations, as well as transfer successfully to datasets of dimensionalities distinct from that of the dataset used to train NeuroVNN.
---
id: 2402.07685
link: http://arxiv.org/pdf/2402.07685v1
published: 2024-02-12T14:48:31Z
updated: 2024-02-12T14:48:31Z
title: Contrastive Multiple Instance Learning for Weakly Supervised Person ReID
authors: Jacob Tyo, Zachary C. Lipton
abstract: The acquisition of large-scale, precisely labeled datasets for person re-identification (ReID) poses a significant challenge. Weakly supervised ReID has begun to address this issue, although its performance lags behind fully supervised methods. In response, we introduce Contrastive Multiple Instance Learning (CMIL), a novel framework tailored for more effective weakly supervised ReID. CMIL distinguishes itself by requiring only a single model and no pseudo labels while leveraging contrastive losses -- a technique that has significantly enhanced traditional ReID performance yet is absent in all prior MIL-based approaches. Through extensive experiments and analysis across three datasets, CMIL not only matches state-of-the-art performance on the large-scale SYSU-30k dataset with fewer assumptions but also consistently outperforms all baselines on the WL-market1501 and Weakly Labeled MUddy racer re-iDentification (WL-MUDD) datasets. We introduce and release the WL-MUDD dataset, an extension of the MUDD dataset featuring naturally occurring weak labels from the real-world application at PerformancePhoto.co. All our code and data are accessible at https://drive.google.com/file/d/1rjMbWB6m-apHF3Wg_cfqc8QqKgQ21AsT/view?usp=drive_link.
---
id: 2402.07692
link: http://arxiv.org/pdf/2402.07692v2
published: 2024-02-12T14:59:40Z
updated: 2024-05-21T16:08:22Z
title: Boundary Exploration for Bayesian Optimization With Unknown Physical Constraints
authors: Yunsheng Tian, Ane Zuniga, Xinwei Zhang, Johannes P. Dürholt, Payel Das, Jie Chen, Wojciech Matusik, Mina Konaković Luković
abstract: Bayesian optimization has been successfully applied to optimize black-box functions where the number of evaluations is severely limited. However, in many real-world applications, it is hard or impossible to know in advance which designs are feasible due to some physical or system limitations. These issues lead to an even more challenging problem of optimizing an unknown function with unknown constraints. In this paper, we observe that in such scenarios the optimal solution typically lies on the boundary between the feasible and infeasible regions of the design space, making the problem considerably harder than one with interior optima. Inspired by this observation, we propose BE-CBO, a new Bayesian optimization method that efficiently explores the boundary between feasible and infeasible designs. To identify the boundary, we learn the constraints with an ensemble of neural networks that outperform standard Gaussian Processes for capturing complex boundaries. Our method demonstrates superior performance against state-of-the-art methods through comprehensive experiments on synthetic and real-world benchmarks. Code available at: https://github.com/yunshengtian/BE-CBO
---
id: 2402.07703
link: http://arxiv.org/pdf/2402.07703v3
published: 2024-02-12T15:17:31Z
updated: 2024-02-23T06:05:19Z
title: Online Sequential Decision-Making with Unknown Delays
authors: Ping Wu, Heyan Huang, Zhengyang Liu
abstract: In the field of online sequential decision-making, we address the problem of delays using the framework of online convex optimization (OCO), where the feedback for a decision can arrive with an unknown delay. Unlike previous research that is limited to the Euclidean norm and gradient information, we propose three families of delayed algorithms based on approximate solutions to handle different types of received feedback. Our proposed algorithms are versatile and applicable to universal norms. Specifically, we introduce a family of Follow the Delayed Regularized Leader algorithms for feedback with full information on the loss function, a family of Delayed Mirror Descent algorithms for feedback with gradient information on the loss function, and a family of Simplified Delayed Mirror Descent algorithms for feedback with the value information of the loss function's gradients at corresponding decision points. For each type of algorithm, we provide corresponding regret bounds under cases of general convexity and relative strong convexity, respectively. We also demonstrate the efficiency of each algorithm under different norms through concrete examples. Furthermore, our theoretical results are consistent with the current best bounds when degenerated to standard settings.
null | null |
2402.07710
| null | null |
http://arxiv.org/pdf/2402.07710v3
|
2024-04-06T12:49:43Z
|
2024-02-12T15:23:19Z
|
Optimizing Sparse Convolution on GPUs with CUDA for 3D Point Cloud
Processing in Embedded Systems
|
In recent years, there has been a significant increase in the utilization of deep learning methods, particularly convolutional neural networks (CNNs), which have emerged as the dominant approach in various domains that involve structured grid data, such as picture analysis and processing. Nevertheless, the exponential growth in the utilization of LiDAR and 3D sensors across many domains has resulted in an increased need for the analysis of 3D point clouds. The utilization of 3D point clouds is crucial in various applications, including object recognition and segmentation, as they offer a spatial depiction of things within a three-dimensional environment. In contrast to photos, point clouds exhibit sparsity and lack a regular grid, hence posing distinct processing and computational issues.
|
[
"['Chester Luo' 'Kevin Lai']"
] |
null | null |
2402.07712
| null | null |
http://arxiv.org/pdf/2402.07712v2
|
2024-04-30T18:03:13Z
|
2024-02-12T15:26:01Z
|
Model Collapse Demystified: The Case of Regression
|
In the era of proliferation of large language and image generation models, the phenomenon of "model collapse" refers to the situation whereby, as a model is trained recursively on data generated by previous generations of itself, its performance degrades over time until the model eventually becomes completely useless, i.e., the model collapses. In this work, we study this phenomenon in the setting of high-dimensional regression and obtain analytic formulae which quantitatively outline this phenomenon in a broad range of regimes. In the special case of polynomially decaying spectral and source conditions, we obtain modified scaling laws which exhibit new crossover phenomena from fast to slow rates. We also propose a simple strategy based on adaptive regularization to mitigate model collapse. Our theoretical results are validated with experiments.
|
[
"['Elvis Dohmatob' 'Yunzhen Feng' 'Julia Kempe']"
] |
null | null |
2402.07721
| null | null |
http://arxiv.org/pdf/2402.07721v2
|
2024-06-18T15:13:12Z
|
2024-02-12T15:34:56Z
|
LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation
|
Low-Rank Adaptation (LoRA) is currently the most commonly used parameter-efficient fine-tuning (PEFT) method; it introduces auxiliary parameters for each layer to fine-tune the pre-trained model under limited computing resources. However, it still faces resource consumption challenges during training when scaling up to larger models. Most previous studies have tackled this issue by using pruning techniques, which involve removing LoRA parameters deemed unimportant. Nonetheless, these efforts only analyze features of the LoRA parameters themselves, such as parameter count, size, and gradient, to evaluate their importance. In fact, the output of LoRA (the product of the LoRA parameters and the hidden state) directly impacts the final results. Preliminary experiments indicate that a fraction of LoRA elements possesses significantly high output values, substantially influencing the layer output. Motivated by this observation, we propose LoRA-drop. Concretely, LoRA-drop evaluates the importance of LoRA based on the LoRA output. We then retain LoRA for important layers, while the remaining layers share the same LoRA. We conduct abundant experiments with models of different scales on NLU and NLG tasks. Results demonstrate that LoRA-drop can achieve performance comparable to full fine-tuning and LoRA, while retaining 50% of the LoRA parameters on average.
|
[
"['Hongyun Zhou' 'Xiangyu Lu' 'Wang Xu' 'Conghui Zhu' 'Tiejun Zhao'\n 'Muyun Yang']"
] |
null | null |
2402.07723
| null | null |
http://arxiv.org/pdf/2402.07723v2
|
2024-06-03T14:20:34Z
|
2024-02-12T15:35:32Z
|
Generalization Bounds for Heavy-Tailed SDEs through the Fractional
Fokker-Planck Equation
|
Understanding the generalization properties of heavy-tailed stochastic optimization algorithms has attracted increasing attention over the past years. While illuminating interesting aspects of stochastic optimizers by using heavy-tailed stochastic differential equations as proxies, prior works either provided expected generalization bounds, or introduced non-computable information theoretic terms. Addressing these drawbacks, in this work, we prove high-probability generalization bounds for heavy-tailed SDEs which do not contain any nontrivial information theoretic terms. To achieve this goal, we develop new proof techniques based on estimating the entropy flows associated with the so-called fractional Fokker-Planck equation (a partial differential equation that governs the evolution of the distribution of the corresponding heavy-tailed SDE). In addition to obtaining high-probability bounds, we show that our bounds have a better dependence on the dimension of parameters as compared to prior art. Our results further identify a phase transition phenomenon, which suggests that heavy tails can be either beneficial or harmful depending on the problem structure. We support our theory with experiments conducted in a variety of settings.
|
[
"['Benjamin Dupuis' 'Umut Şimşekli']"
] |
null | null |
2402.07729
| null | null |
http://arxiv.org/pdf/2402.07729v1
|
2024-02-12T15:41:22Z
|
2024-02-12T15:41:22Z
|
AIR-Bench: Benchmarking Large Audio-Language Models via Generative
Comprehension
|
Recently, instruction-following audio-language models have received broad attention for human-audio interaction. However, the absence of benchmarks capable of evaluating audio-centric interaction capabilities has impeded advancements in this field. Previous benchmarks primarily focus on assessing different fundamental tasks, such as Automatic Speech Recognition (ASR), and lack an assessment of the open-ended generative capabilities centered around audio. Thus, it is challenging to track the progression in the Large Audio-Language Models (LALMs) domain and to provide guidance for future improvement. In this paper, we introduce AIR-Bench (\textbf{A}udio \textbf{I}nst\textbf{R}uction \textbf{Bench}mark), the first benchmark designed to evaluate the ability of LALMs to understand various types of audio signals (including human speech, natural sounds, and music), and furthermore, to interact with humans in the textual format. AIR-Bench encompasses two dimensions: \textit{foundation} and \textit{chat} benchmarks. The former consists of 19 tasks with approximately 19k single-choice questions, intending to inspect the basic single-task ability of LALMs. The latter contains 2k instances of open-ended question-and-answer data, directly assessing the comprehension of the model on complex audio and its capacity to follow instructions. Both benchmarks require the model to generate hypotheses directly. We design a unified framework that leverages advanced language models, such as GPT-4, to evaluate the scores of generated hypotheses given the meta-information of the audio. Experimental results demonstrate a high level of consistency between GPT-4-based evaluation and human evaluation. By revealing the limitations of existing LALMs through evaluation results, AIR-Bench can provide insights into the direction of future research.
|
[
"['Qian Yang' 'Jin Xu' 'Wenrui Liu' 'Yunfei Chu' 'Ziyue Jiang'\n 'Xiaohuan Zhou' 'Yichong Leng' 'Yuanjun Lv' 'Zhou Zhao' 'Chang Zhou'\n 'Jingren Zhou']"
] |
null | null |
2402.07735
| null | null |
http://arxiv.org/pdf/2402.07735v2
|
2024-02-13T09:48:47Z
|
2024-02-12T15:48:58Z
|
Graph Structure Inference with BAM: Introducing the Bilinear Attention
Mechanism
|
In statistics and machine learning, detecting dependencies in datasets is a central challenge. We propose a novel neural network model for supervised graph structure learning, i.e., the process of learning a mapping between observational data and their underlying dependence structure. The model is trained with variably shaped and coupled simulated input data and requires only a single forward pass through the trained network for inference. By leveraging structural equation models and employing randomly generated multivariate Chebyshev polynomials for the simulation of training data, our method demonstrates robust generalizability across both linear and various types of non-linear dependencies. We introduce a novel bilinear attention mechanism (BAM) for explicit processing of dependency information, which operates on the level of covariance matrices of transformed data and respects the geometry of the manifold of symmetric positive definite matrices. Empirical evaluation demonstrates the robustness of our method in detecting a wide range of dependencies, excelling in undirected graph estimation and proving competitive in completed partially directed acyclic graph estimation through a novel two-step approach.
|
[
"['Philipp Froehlich' 'Heinz Koeppl']"
] |
null | null |
2402.07738
| null | null |
http://arxiv.org/pdf/2402.07738v2
|
2024-02-15T15:19:30Z
|
2024-02-12T15:52:27Z
|
Universal Link Predictor By In-Context Learning on Graphs
|
Link prediction is a crucial task in graph machine learning, where the goal is to infer missing or future links within a graph. Traditional approaches leverage heuristic methods based on widely observed connectivity patterns, offering broad applicability and generalizability without the need for model training. Despite their utility, these methods are limited by their reliance on human-derived heuristics and lack the adaptability of data-driven approaches. Conversely, parametric link predictors excel at automatically learning connectivity patterns from data and achieve state-of-the-art performance, but fall short of transferring directly across different graphs; instead, they require extensive training and hyperparameter optimization to adapt to each target graph. In this work, we introduce the Universal Link Predictor (UniLP), a novel model that combines the generalizability of heuristic approaches with the pattern-learning capabilities of parametric models. UniLP is designed to autonomously identify connectivity patterns across diverse graphs, ready for immediate application to any unseen graph dataset without targeted training. We address the challenge of conflicting connectivity patterns, arising from the unique distributions of different graphs, through In-context Learning (ICL). This approach allows UniLP to dynamically adjust to various target graphs based on contextual demonstrations, thereby avoiding negative transfer. Through rigorous experimentation, we demonstrate UniLP's effectiveness in adapting to new, unseen graphs at test time, showcasing its ability to perform comparably to, or even outperform, parametric models that have been fine-tuned for specific datasets. Our findings highlight UniLP's potential to set a new standard in link prediction, combining the strengths of heuristic and parametric methods in a single, versatile framework.
|
[
"['Kaiwen Dong' 'Haitao Mao' 'Zhichun Guo' 'Nitesh V. Chawla']"
] |
null | null |
2402.07739
| null | null |
http://arxiv.org/pdf/2402.07739v4
|
2024-05-06T09:50:22Z
|
2024-02-12T15:57:31Z
|
Task-conditioned adaptation of visual features in multi-task policy
learning
|
Successfully addressing a wide variety of tasks is a core ability of autonomous agents, requiring flexibly adapting the underlying decision-making strategies and, as we argue in this work, also adapting the perception modules. An analogous example is the human visual system, which uses top-down signals to focus attention determined by the current task. Similarly, we adapt pre-trained large vision models conditioned on specific downstream tasks in the context of multi-task policy learning. We introduce task-conditioned adapters that do not require finetuning any pre-trained weights, combined with a single policy trained with behavior cloning and capable of addressing multiple tasks. We condition the visual adapters on task embeddings, which can be selected at inference if the task is known, or alternatively inferred from a set of example demonstrations. To this end, we propose a new optimization-based estimator. We evaluate the method on a wide variety of tasks from the CortexBench benchmark and show that, compared to existing work, it can be addressed with a single policy. In particular, we demonstrate that adapting visual features is a key design choice and that the method generalizes to unseen tasks given a few demonstrations.
|
[
"['Pierre Marza' 'Laetitia Matignon' 'Olivier Simonin' 'Christian Wolf']"
] |
null | null |
2402.07744
| null | null |
http://arxiv.org/pdf/2402.07744v2
|
2024-02-14T18:43:54Z
|
2024-02-12T16:14:22Z
|
Towards Unified Alignment Between Agents, Humans, and Environment
|
The rapid progress of foundation models has led to the prosperity of autonomous agents, which leverage the universal capabilities of foundation models to conduct reasoning, decision-making, and environmental interaction. However, the efficacy of agents remains limited when operating in intricate, realistic environments. In this work, we introduce the principles of $\mathbf{U}$nified $\mathbf{A}$lignment for $\mathbf{A}$gents ($\mathbf{UA}^2$), which advocate for the simultaneous alignment of agents with human intentions, environmental dynamics, and self-constraints such as the limitation of monetary budgets. From the perspective of $\mathbf{UA}^2$, we review the current agent research and highlight the neglected factors in existing agent benchmarks and method candidates. We also conduct proof-of-concept studies by introducing realistic features to WebShop, including user profiles to demonstrate intentions, personalized reranking for complex environmental dynamics, and runtime cost statistics to reflect self-constraints. We then follow the principles of $\mathbf{UA}^2$ to propose an initial design of our agent, and benchmark its performance with several candidate baselines in the retrofitted WebShop. The extensive experimental results further prove the importance of the principles of $\mathbf{UA}^2$. Our research sheds light on the next steps of autonomous agent research with improved general problem-solving abilities.
|
[
"['Zonghan Yang' 'An Liu' 'Zijun Liu' 'Kaiming Liu' 'Fangzhou Xiong'\n 'Yile Wang' 'Zeyuan Yang' 'Qingyuan Hu' 'Xinrui Chen' 'Zhenhe Zhang'\n 'Fuwen Luo' 'Zhicheng Guo' 'Peng Li' 'Yang Liu']"
] |
null | null |
2402.07745
| null | null |
http://arxiv.org/pdf/2402.07745v1
|
2024-02-12T16:15:25Z
|
2024-02-12T16:15:25Z
|
Predictive Churn with the Set of Good Models
|
Machine learning models in modern mass-market applications are often updated over time. One of the foremost challenges faced is that, despite increasing overall performance, these updates may flip specific model predictions in unpredictable ways. In practice, researchers quantify the number of unstable predictions between models pre- and post-update, i.e., predictive churn. In this paper, we study this effect through the lens of predictive multiplicity, i.e., the prevalence of conflicting predictions over the set of near-optimal models (the Rashomon set). We show how traditional measures of predictive multiplicity can be used to examine expected churn over this set of prospective models, i.e., the set of models that may be used to replace a baseline model in deployment. We present theoretical results on the expected churn between models within the Rashomon set from different perspectives. We then characterize expected churn over model updates via the Rashomon set, pairing our analysis with empirical results on real-world datasets, showing how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications. Further, we show that our approach is useful even for models enhanced with uncertainty awareness.
|
[
"['Jamelle Watson-Daniels' 'Flavio du Pin Calmon' \"Alexander D'Amour\"\n 'Carol Long' 'David C. Parkes' 'Berk Ustun']"
] |
null | null |
2402.07752
| null | null |
http://arxiv.org/pdf/2402.07752v1
|
2024-02-12T16:21:50Z
|
2024-02-12T16:21:50Z
|
Mixed Q-Functionals: Advancing Value-Based Methods in Cooperative MARL
with Continuous Action Domains
|
Tackling multi-agent learning problems efficiently is a challenging task in continuous action domains. While value-based algorithms excel in sample efficiency when applied to discrete action domains, they are usually inefficient when dealing with continuous actions. Policy-based algorithms, on the other hand, attempt to address this challenge by leveraging critic networks for guiding the learning process and stabilizing the gradient estimation. The limitations in the estimation of true return and falling into local optima in these methods result in inefficient and often sub-optimal policies. In this paper, we diverge from the trend of further enhancing critic networks, and focus on improving the effectiveness of value-based methods in multi-agent continuous domains by concurrently evaluating numerous actions. We propose a novel multi-agent value-based algorithm, Mixed Q-Functionals (MQF), inspired by the idea of Q-Functionals, that enables agents to transform their states into basis functions. Our algorithm fosters collaboration among agents by mixing their action-values. We evaluate the efficacy of our algorithm in six cooperative multi-agent scenarios. Our empirical findings reveal that MQF outperforms four variants of Deep Deterministic Policy Gradient through rapid action evaluation and increased sample efficiency.
|
[
"['Yasin Findik' 'S. Reza Ahmadzadeh']"
] |
null | null |
2402.07754
| null | null |
http://arxiv.org/pdf/2402.07754v2
|
2024-07-15T10:03:59Z
|
2024-02-12T16:23:28Z
|
Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language
Models
|
Recently, diffusion models have garnered significant interest in the field of text processing due to their many potential advantages compared to conventional autoregressive models. In this work, we propose Diffusion-of-Thought (DoT), a novel approach that integrates diffusion models with Chain-of-Thought, a well-established technique for improving the reasoning ability of autoregressive language models. In contrast to autoregressive language models that make decisions in a left-to-right, token-by-token manner, DoT allows reasoning steps to diffuse over time through a diffusion language model and offers greater flexibility in trading-off computation for reasoning performance. Our experimental results demonstrate the effectiveness of DoT in multi-digit multiplication, boolean logic, and grade school math problems, with a small diffusion model outperforming a much larger autoregressive model in both efficiency and accuracy. In addition to that, DoT showcases promising self-correction abilities and benefits from existing reasoning-enhancing techniques like self-consistency decoding. Our findings contribute to the understanding and development of reasoning with diffusion language models.
|
[
"['Jiacheng Ye' 'Shansan Gong' 'Liheng Chen' 'Lin Zheng' 'Jiahui Gao'\n 'Han Shi' 'Chuan Wu' 'Xin Jiang' 'Zhenguo Li' 'Wei Bi' 'Lingpeng Kong']"
] |
null | null |
2402.07757
| null | null |
http://arxiv.org/pdf/2402.07757v1
|
2024-02-12T16:25:47Z
|
2024-02-12T16:25:47Z
|
Towards an Understanding of Stepwise Inference in Transformers: A
Synthetic Graph Navigation Model
|
Stepwise inference protocols, such as scratchpads and chain-of-thought, help language models solve complex problems by decomposing them into a sequence of simpler subproblems. Despite the significant gain in performance achieved via these protocols, the underlying mechanisms of stepwise inference have remained elusive. To address this, we propose to study autoregressive Transformer models on a synthetic task that embodies the multi-step nature of problems where stepwise inference is generally most useful. Specifically, we define a graph navigation problem wherein a model is tasked with traversing a path from a start to a goal node on the graph. Despite its simplicity, we find that we can empirically reproduce and analyze several phenomena observed at scale: (i) the stepwise inference reasoning gap, the cause of which we find in the structure of the training data; (ii) a diversity-accuracy tradeoff in model generations as sampling temperature varies; (iii) a simplicity bias in the model's output; and (iv) compositional generalization and a primacy bias with in-context exemplars. Overall, our work introduces a grounded, synthetic framework for studying stepwise inference and offers mechanistic hypotheses that can lay the foundation for a deeper understanding of this phenomenon.
|
[
"['Mikail Khona' 'Maya Okawa' 'Jan Hula' 'Rahul Ramesh' 'Kento Nishi'\n 'Robert Dick' 'Ekdeep Singh Lubana' 'Hidenori Tanaka']"
] |
null | null |
2402.07762
| null | null |
http://arxiv.org/pdf/2402.07762v1
|
2024-02-12T16:28:52Z
|
2024-02-12T16:28:52Z
|
Scalable Structure Learning for Sparse Context-Specific Causal Systems
|
Several approaches to graphically representing context-specific relations among jointly distributed categorical variables have been proposed, along with structure learning algorithms. While existing optimization-based methods have limited scalability due to the large number of context-specific models, the constraint-based methods are more prone to error than even constraint-based DAG learning algorithms since more relations must be tested. We present a hybrid algorithm for learning context-specific models that scales to hundreds of variables while testing no more constraints than standard DAG learning algorithms. Scalable learning is achieved through a combination of an order-based MCMC algorithm and sparsity assumptions analogous to those typically invoked for DAG models. To implement the method, we solve a special case of an open problem recently posed by Alon and Balogh. The method is shown to perform well on synthetic data and real world examples, in terms of both accuracy and scalability.
|
[
"['Felix Leopoldo Rios' 'Alex Markham' 'Liam Solus']"
] |
null | null |
2402.07763
| null | null |
http://arxiv.org/pdf/2402.07763v1
|
2024-02-12T16:28:57Z
|
2024-02-12T16:28:57Z
|
Multi-level Optimal Control with Neural Surrogate Models
|
Optimal actuator and control design is studied as a multi-level optimisation problem, where the actuator design is evaluated based on the performance of the associated optimal closed loop. The evaluation of the optimal closed loop for a given actuator realisation is a computationally demanding task, for which the use of a neural network surrogate is proposed. The use of neural network surrogates to replace the lower level of the optimisation hierarchy enables the use of fast gradient-based and gradient-free consensus-based optimisation methods to determine the optimal actuator design. The effectiveness of the proposed surrogate models and optimisation methods is assessed in a test related to optimal actuator location for heat control.
|
[
"['Dante Kalise' 'Estefanía Loayza-Romero' 'Kirsten A. Morris'\n 'Zhengang Zhong']"
] |
null | null |
2402.07781
| null | null |
http://arxiv.org/pdf/2402.07781v1
|
2024-02-12T16:47:08Z
|
2024-02-12T16:47:08Z
|
IR-Aware ECO Timing Optimization Using Reinforcement Learning
|
Engineering change orders (ECOs) in late stages make minimal design fixes to recover from timing shifts due to excessive IR drops. This paper integrates IR-drop-aware timing analysis and ECO timing optimization using reinforcement learning (RL). The method operates after physical design and power grid synthesis, and rectifies IR-drop-induced timing degradation through gate sizing. It incorporates the Lagrangian relaxation (LR) technique into a novel RL framework, which trains a relational graph convolutional network (R-GCN) agent to sequentially size gates to fix timing violations. The R-GCN agent outperforms a classical LR-only algorithm: in an open 45nm technology, it (a) moves the Pareto front of the delay-area tradeoff curve to the left and (b) saves runtime over the classical method by running fast inference using trained models at iso-quality. The RL model is transferable across timing specifications, and transferable to unseen designs with zero-shot learning or fine tuning.
|
[
"['Vidya A. Chhabria' 'Wenjing Jiang' 'Sachin S. Sapatnekar']"
] |
null | null |
2402.07785
| null | null |
http://arxiv.org/pdf/2402.07785v2
|
2024-03-19T20:17:10Z
|
2024-02-12T16:50:07Z
|
HYPO: Hyperspherical Out-of-Distribution Generalization
|
Out-of-distribution (OOD) generalization is critical for machine learning models deployed in the real world. However, achieving this can be fundamentally challenging, as it requires the ability to learn invariant features across different domains or environments. In this paper, we propose a novel framework HYPO (HYPerspherical OOD generalization) that provably learns domain-invariant representations in a hyperspherical space. In particular, our hyperspherical learning algorithm is guided by intra-class variation and inter-class separation principles -- ensuring that features from the same class (across different training domains) are closely aligned with their class prototypes, while different class prototypes are maximally separated. We further provide theoretical justifications on how our prototypical learning objective improves the OOD generalization bound. Through extensive experiments on challenging OOD benchmarks, we demonstrate that our approach outperforms competitive baselines and achieves superior performance. Code is available at https://github.com/deeplearning-wisc/hypo.
|
[
"['Yifei Ming' 'Haoyue Bai' 'Julian Katz-Samuels' 'Yixuan Li']"
] |
null | null |
2402.07790
| null | null |
http://arxiv.org/pdf/2402.07790v1
|
2024-02-12T16:55:19Z
|
2024-02-12T16:55:19Z
|
From Uncertainty to Precision: Enhancing Binary Classifier Performance
through Calibration
|
The assessment of binary classifier performance traditionally centers on discriminative ability using metrics such as accuracy. However, these metrics often disregard the model's inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario, using a Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.
|
[
"['Agathe Fernandes Machado' 'Arthur Charpentier' 'Emmanuel Flachaire'\n 'Ewen Gallic' 'François Hu']"
] |
null | null |
2402.07792
| null | null |
http://arxiv.org/pdf/2402.07792v1
|
2024-02-12T16:59:05Z
|
2024-02-12T16:59:05Z
|
Empowering Federated Learning for Massive Models with NVIDIA FLARE
|
In the ever-evolving landscape of artificial intelligence (AI) and large language models (LLMs), handling and leveraging data effectively has become a critical challenge. Most state-of-the-art machine learning algorithms are data-centric. However, as the lifeblood of model performance, necessary data cannot always be centralized due to various factors such as privacy, regulation, geopolitics, copyright issues, and the sheer effort required to move vast datasets. In this paper, we explore how federated learning enabled by NVIDIA FLARE can address these challenges with easy and scalable integration capabilities, enabling parameter-efficient and full supervised fine-tuning of LLMs for natural language processing and biopharmaceutical applications to enhance their accuracy and robustness.
|
[
"['Holger R. Roth' 'Ziyue Xu' 'Yuan-Ting Hsieh' 'Adithya Renduchintala'\n 'Isaac Yang' 'Zhihong Zhang' 'Yuhong Wen' 'Sean Yang' 'Kevin Lu'\n 'Kristopher Kersten' 'Camir Ricketts' 'Daguang Xu' 'Chester Chen'\n 'Yan Cheng' 'Andrew Feng']"
] |
null | null |
2402.07793
| null | null |
http://arxiv.org/pdf/2402.07793v2
|
2024-03-18T20:19:43Z
|
2024-02-12T16:59:06Z
|
Tuning-Free Stochastic Optimization
|
Large-scale machine learning problems make the cost of hyperparameter tuning ever more prohibitive. This creates a need for algorithms that can tune themselves on-the-fly. We formalize the notion of "tuning-free" algorithms that can match the performance of optimally-tuned optimization algorithms up to polylogarithmic factors given only loose hints on the relevant problem parameters. We consider in particular algorithms that can match optimally-tuned Stochastic Gradient Descent (SGD). When the domain of optimization is bounded, we show tuning-free matching of SGD is possible and achieved by several existing algorithms. We prove that for the task of minimizing a convex and smooth or Lipschitz function over an unbounded domain, tuning-free optimization is impossible. We discuss conditions under which tuning-free optimization is possible even over unbounded domains. In particular, we show that the recently proposed DoG and DoWG algorithms are tuning-free when the noise distribution is sufficiently well-behaved. For the task of finding a stationary point of a smooth and potentially nonconvex function, we give a variant of SGD that matches the best-known high-probability convergence rate for tuned SGD at only an additional polylogarithmic cost. However, we also give an impossibility result that shows no algorithm can hope to match the optimal expected convergence rate for tuned SGD with high probability.
|
[
"['Ahmed Khaled' 'Chi Jin']"
] |
null | null |
2402.07802
| null | null |
http://arxiv.org/pdf/2402.07802v1
|
2024-02-12T17:07:02Z
|
2024-02-12T17:07:02Z
|
Towards a mathematical theory for consistency training in diffusion
models
|
Consistency models, which were proposed to mitigate the high computational overhead during the sampling phase of diffusion models, facilitate single-step sampling while attaining state-of-the-art empirical performance. When integrated into the training phase, consistency models attempt to train a sequence of consistency functions capable of mapping any point at any time step of the diffusion process to its starting point. Despite the empirical success, a comprehensive theoretical understanding of consistency training remains elusive. This paper takes a first step towards establishing theoretical underpinnings for consistency models. We demonstrate that, in order to generate samples within $\varepsilon$ proximity to the target in distribution (measured by some Wasserstein metric), it suffices for the number of steps in consistency learning to exceed the order of $d^{5/2}/\varepsilon$, with $d$ the data dimension. Our theory offers rigorous insights into the validity and efficacy of consistency models, illuminating their utility in downstream inference tasks.
|
[
"['Gen Li' 'Zhihan Huang' 'Yuting Wei']"
] |
null | null |
2402.07808
| null | null |
http://arxiv.org/pdf/2402.07808v2
|
2024-05-15T14:32:14Z
|
2024-02-12T17:13:02Z
|
Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation
|
Scientific modeling applications often require estimating a distribution of parameters consistent with a dataset of observations - an inference task also known as source distribution estimation. This problem can be ill-posed, however, since many different source distributions might produce the same distribution of data-consistent simulations. To make a principled choice among many equally valid sources, we propose an approach which targets the maximum entropy distribution, i.e., prioritizes retaining as much uncertainty as possible. Our method is purely sample-based - leveraging the Sliced-Wasserstein distance to measure the discrepancy between the dataset and simulations - and thus suitable for simulators with intractable likelihoods. We benchmark our method on several tasks, and show that it can recover source distributions with substantially higher entropy than recent source estimation methods, without sacrificing the fidelity of the simulations. Finally, to demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley model from experimental datasets with thousands of single-neuron measurements. In summary, we propose a principled method for inferring source distributions of scientific simulator parameters while retaining as much uncertainty as possible.
|
[
"['Julius Vetter' 'Guy Moss' 'Cornelius Schröder' 'Richard Gao'\n 'Jakob H. Macke']"
] |
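The Sourcerer abstract above relies on the Sliced-Wasserstein distance as a purely sample-based discrepancy. A minimal sketch of that quantity (not the authors' implementation; the function name and defaults are illustrative): project both point clouds onto random directions, then average the closed-form 1-D Wasserstein-1 distances between sorted projections.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, seed=0):
    """Monte-Carlo estimate of the sliced 1-Wasserstein distance
    between two equal-sized point clouds x, y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # random directions drawn uniformly on the unit sphere
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    total = 0.0
    for t in theta:
        # in 1-D, Wasserstein-1 between equal-sized empirical measures
        # is the mean absolute difference of the sorted projections
        px = np.sort(x @ t)
        py = np.sort(y @ t)
        total += np.mean(np.abs(px - py))
    return total / n_proj
```

Because each slice reduces to sorting, the estimate needs only samples from the simulator, which is what makes it usable with intractable likelihoods.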
null | null |
2402.07812
| null | null |
http://arxiv.org/pdf/2402.07812v1
|
2024-02-12T17:17:50Z
|
2024-02-12T17:17:50Z
|
Retrieval-Augmented Thought Process as Sequential Decision Making
|
Large Language Models (LLMs) have demonstrated their strong ability to assist people and show "sparks of intelligence". However, several open challenges hinder their wider application, such as concerns over privacy, tendencies to produce hallucinations, and difficulties in handling long contexts. In this work, we address those challenges by introducing the Retrieval-Augmented Thought Process (RATP). Given access to external knowledge, RATP formulates the thought generation of LLMs as a multiple-step decision process. To optimize such a thought process, RATP leverages Monte-Carlo Tree Search, and learns a Q-value estimator that permits cost-efficient inference. In addressing the task of question-answering with private data, where ethical and security concerns limit LLM training methods, RATP achieves a 50% improvement over existing in-context retrieval-augmented language models.
|
[
"['Thomas Pouplin' 'Hao Sun' 'Samuel Holt' 'Mihaela van der Schaar']"
] |
null | null |
2402.07818
| null | null |
http://arxiv.org/pdf/2402.07818v4
|
2024-05-09T09:41:23Z
|
2024-02-12T17:24:15Z
|
Differentially Private Zeroth-Order Methods for Scalable Large Language
Model Finetuning
|
Fine-tuning on task-specific datasets is a widely-embraced paradigm of harnessing the powerful capability of pretrained LLMs for various downstream tasks. Due to the popularity of LLMs fine-tuning and its accompanying privacy concerns, differentially private (DP) fine-tuning of pretrained LLMs has been widely used to safeguard the privacy of task-specific datasets. Lying at the design core of DP LLM fine-tuning methods is the satisfactory tradeoff among privacy, utility, and scalability. Most existing methods build upon the seminal work of DP-SGD. Despite pushing the scalability of DP-SGD to its limit, DP-SGD-based fine-tuning methods are unfortunately limited by the inherent inefficiency of SGD. In this paper, we investigate the potential of DP zeroth-order methods for LLM pretraining, which avoids the scalability bottleneck of SGD by approximating the gradient with the more efficient zeroth-order gradient. Rather than treating the zeroth-order method as a drop-in replacement for SGD, this paper presents a comprehensive study both theoretically and empirically. First, we propose the stagewise DP zeroth-order method (DP-ZOSO) that dynamically schedules key hyperparameters. This design is grounded on the synergy between DP random perturbation and the gradient approximation error of the zeroth-order method, and its effect on fine-tuning trajectory. We provide theoretical analysis for both proposed methods. We conduct extensive empirical analysis on both encoder-only masked language model and decoder-only autoregressive language model, achieving impressive results in terms of scalability and utility (compared with DPZero, DP-ZOPO improves 4.5% on SST-5, 5.5% on MNLI with RoBERTa-Large and 9.2% on CB, 3.9% on BoolQ with OPT-2.7B when $\epsilon=4$).
|
[
"['Z Liu' 'J Lou' 'W Bao' 'Y Hu' 'B Li' 'Z Qin' 'K Ren']"
] |
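The zeroth-order approach described in the abstract above replaces backpropagated gradients with estimates built from function evaluations alone. A minimal sketch of the classic two-point estimator (an illustration of the general technique, not the paper's DP-ZOSO/DP-ZOPO methods; names and defaults are assumptions): perturb the parameters along random Gaussian directions and use finite differences of the loss.

```python
import numpy as np

def zo_gradient(f, theta, eps=1e-4, n_samples=50, seed=0):
    """Two-point zeroth-order gradient estimate of f at theta,
    averaged over random Gaussian perturbation directions z:
    g ~ E[ (f(theta + eps*z) - f(theta - eps*z)) / (2*eps) * z ]."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(theta)
    for _ in range(n_samples):
        z = rng.normal(size=theta.shape)
        g += (f(theta + eps * z) - f(theta - eps * z)) / (2 * eps) * z
    return g / n_samples
```

Only forward passes are needed, which is what sidesteps the memory and scalability costs of SGD-style backpropagation in this setting.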
null | null |
2402.07821
| null | null |
http://arxiv.org/pdf/2402.07821v2
|
2024-06-08T04:27:46Z
|
2024-02-12T17:25:23Z
|
On Computationally Efficient Multi-Class Calibration
|
Consider a multi-class labelling problem, where the labels can take values in $[k]$, and a predictor predicts a distribution over the labels. In this work, we study the following foundational question: Are there notions of multi-class calibration that give strong guarantees of meaningful predictions and can be achieved in time and sample complexities polynomial in $k$? Prior notions of calibration exhibit a tradeoff between computational efficiency and expressivity: they either suffer from having sample complexity exponential in $k$, or needing to solve computationally intractable problems, or give rather weak guarantees. Our main contribution is a notion of calibration that achieves all these desiderata: we formulate a robust notion of projected smooth calibration for multi-class predictions, and give new recalibration algorithms for efficiently calibrating predictors under this definition with complexity polynomial in $k$. Projected smooth calibration gives strong guarantees for all downstream decision makers who want to use the predictor for binary classification problems of the form: does the label belong to a subset $T \subseteq [k]$: e.g. is this an image of an animal? It ensures that the probabilities predicted by summing the probabilities assigned to labels in $T$ are close to some perfectly calibrated binary predictor for that task. We also show that natural strengthenings of our definition are computationally hard to achieve: they run into information theoretic barriers or computational intractability. Underlying both our upper and lower bounds is a tight connection that we prove between multi-class calibration and the well-studied problem of agnostic learning in the (standard) binary prediction setting.
|
[
"['Parikshit Gopalan' 'Lunjia Hu' 'Guy N. Rothblum']"
] |
null | null |
2402.07834
| null | null |
http://arxiv.org/pdf/2402.07834v2
|
2024-02-15T18:28:51Z
|
2024-02-12T17:45:40Z
|
Generalizing across Temporal Domains with Koopman Operators
|
In the field of domain generalization, the task of constructing a predictive model capable of generalizing to a target domain without access to target data remains challenging. This problem becomes further complicated when considering evolving dynamics between domains. While various approaches have been proposed to address this issue, a comprehensive understanding of the underlying generalization theory is still lacking. In this study, we contribute novel theoretical results showing that aligning conditional distributions leads to a reduction of generalization bounds. Our analysis serves as a key motivation for solving the Temporal Domain Generalization (TDG) problem through the application of Koopman Neural Operators, resulting in Temporal Koopman Networks (TKNets). By employing Koopman Operators, we effectively address the time-evolving distributions encountered in TDG using the principles of Koopman theory, where measurement functions are sought to establish linear transition relations between evolving domains. Through empirical evaluations conducted on synthetic and real-world datasets, we validate the effectiveness of our proposed approach.
|
[
"['Qiuhao Zeng' 'Wei Wang' 'Fan Zhou' 'Gezheng Xu' 'Ruizhi Pu'\n 'Changjian Shui' 'Christian Gagne' 'Shichun Yang' 'Boyu Wang'\n 'Charles X. Ling']"
] |
null | null |
2402.07839
| null | null |
http://arxiv.org/pdf/2402.07839v2
|
2024-02-13T13:19:54Z
|
2024-02-12T17:50:56Z
|
Towards Meta-Pruning via Optimal Transport
|
Structural pruning of neural networks conventionally relies on identifying and discarding less important neurons, a practice often resulting in significant accuracy loss that necessitates subsequent fine-tuning efforts. This paper introduces a novel approach named Intra-Fusion, challenging this prevailing pruning paradigm. Unlike existing methods that focus on designing meaningful neuron importance metrics, Intra-Fusion redefines the overlying pruning procedure. Through utilizing the concepts of model fusion and Optimal Transport, we leverage an agnostically given importance metric to arrive at a more effective sparse model representation. Notably, our approach achieves substantial accuracy recovery without the need for resource-intensive fine-tuning, making it an efficient and promising tool for neural network compression. Additionally, we explore how fusion can be added to the pruning process to significantly decrease the training time while maintaining competitive performance. We benchmark our results for various networks on commonly used datasets such as CIFAR-10, CIFAR-100, and ImageNet. More broadly, we hope that the proposed Intra-Fusion approach invigorates exploration into a fresh alternative to the predominant compression approaches. Our code is available here: https://github.com/alexandertheus/Intra-Fusion.
|
[
"['Alexander Theus' 'Olin Geimer' 'Friedrich Wicke' 'Thomas Hofmann'\n 'Sotiris Anagnostidis' 'Sidak Pal Singh']"
] |
null | null |
2402.07845
| null | null |
http://arxiv.org/pdf/2402.07845v2
|
2024-02-20T18:46:04Z
|
2024-02-12T17:53:43Z
|
Unsupervised Optimisation of GNNs for Node Clustering
|
Graph Neural Networks (GNNs) can be trained to detect communities within a graph by learning from the duality of feature and connectivity information. Currently, the common approach for optimisation of GNNs is to use comparisons to ground-truth for hyperparameter tuning and model selection. In this work, we show that nodes can be clustered into communities with GNNs by solely optimising for modularity, without any comparison to ground-truth. Although modularity is a graph partitioning quality metric, we show that this can be used to optimise GNNs that also encode features without a drop in performance. We take it a step further and also study whether the unsupervised metric performance can predict ground-truth performance. To investigate why modularity can be used to optimise GNNs, we design synthetic experiments that show the limitations of this approach. The synthetic graphs are created to highlight current capabilities in distinct, random and zero information space partitions in attributed graphs. We conclude that modularity can be used for hyperparameter optimisation and model selection on real-world datasets as well as being a suitable proxy for predicting ground-truth performance, however, GNNs fail to balance the information duality when the spaces contain conflicting signals.
|
[
"['William Leeney' 'Ryan McConville']"
] |
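The abstract above optimises GNNs for node clustering using only modularity, with no ground-truth labels. A minimal sketch of Newman's modularity for a hard partition (an illustration of the metric itself, not the paper's GNN training loop; names are illustrative): compare the observed intra-community edge weight against the expectation under a degree-preserving random graph.

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q of a hard partition of an undirected graph.
    adj: symmetric (n, n) adjacency matrix; labels: community id per node.
    Q = (1 / 2m) * sum_ij [A_ij - k_i * k_j / 2m] * 1[c_i == c_j]."""
    adj = np.asarray(adj, dtype=float)
    labels = np.asarray(labels)
    two_m = adj.sum()                  # equals 2m for an undirected graph
    k = adj.sum(axis=1)                # node degrees
    same = labels[:, None] == labels[None, :]
    q = (adj - np.outer(k, k) / two_m) * same
    return q.sum() / two_m
```

For two triangles joined by a single bridge edge, splitting at the bridge gives Q = 5/14, while lumping every node into one community gives Q = 0, which is the signal the unsupervised objective exploits.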
null | null |
2402.07846
| null | null |
http://arxiv.org/pdf/2402.07846v1
|
2024-02-12T17:56:52Z
|
2024-02-12T17:56:52Z
|
Generative Modeling of Discrete Joint Distributions by E-Geodesic Flow
Matching on Assignment Manifolds
|
This paper introduces a novel generative model for discrete distributions based on continuous normalizing flows on the submanifold of factorizing discrete measures. Integration of the flow gradually assigns categories and avoids issues of discretizing the latent continuous model like rounding, sample truncation etc. General non-factorizing discrete distributions capable of representing complex statistical dependencies of structured discrete data, can be approximated by embedding the submanifold into the meta-simplex of all joint discrete distributions and data-driven averaging. Efficient training of the generative model is demonstrated by matching the flow of geodesics of factorizing discrete distributions. Various experiments underline the approach's broad applicability.
|
[
"['Bastian Boll' 'Daniel Gonzalez-Alvarado' 'Christoph Schnörr']"
] |
null | null |
2402.07851
| null | null |
http://arxiv.org/pdf/2402.07851v1
|
2024-02-12T17:59:20Z
|
2024-02-12T17:59:20Z
|
Comparing skill of historical rainfall data based monsoon rainfall
prediction in India with NCEP-NWP forecasts
|
In this draft we consider the problem of forecasting rainfall across India during the four monsoon months, one day as well as three days in advance. We train neural networks using historical daily gridded precipitation data for India obtained from IMD for the time period $1901-2022$, at a spatial resolution of $1^{\circ} \times 1^{\circ}$. This is compared with the numerical weather prediction (NWP) forecasts obtained from NCEP (National Centre for Environmental Prediction) available for the period 2011-2022. We conduct a detailed country wide analysis and separately analyze some of the most populated cities in India. Our conclusion is that forecasts obtained by applying deep learning to historical rainfall data are more accurate compared to NWP forecasts as well as predictions based on persistence. On average, compared to our predictions, forecasts from NCEP-NWP model have about 34% higher error for a single day prediction, and over 68% higher error for a three day prediction. Similarly, persistence estimates report a 29% higher error in a single day forecast, and over 54% error in a three day forecast. We further observe that data up to 20 days in the past is useful in reducing errors of one and three day forecasts, when a transformer based learning architecture, and to a lesser extent when an LSTM is used. A key conclusion suggested by our preliminary analysis is that NWP forecasts can be substantially improved upon through more and diverse data relevant to monsoon prediction combined with carefully selected neural network architecture.
|
[
"['Apoorva Narula' 'Aastha Jain' 'Jatin Batra' 'Sandeep Juneja']"
] |
null | null |
2402.07858
| null | null |
http://arxiv.org/pdf/2402.07858v1
|
2024-02-12T18:05:03Z
|
2024-02-12T18:05:03Z
|
Multiscale Neuroimaging Features for the Identification of Medication
Class and Non-Responders in Mood Disorder Treatment
|
In the clinical treatment of mood disorders, the complex behavioral symptoms presented by patients and variability of patient response to particular medication classes can create difficulties in providing fast and reliable treatment when standard diagnostic and prescription methods are used. Increasingly, the incorporation of physiological information such as neuroimaging scans and derivatives into the clinical process promises to alleviate some of the uncertainty surrounding this process. Particularly, if neural features can help to identify patients who may not respond to standard courses of anti-depressants or mood stabilizers, clinicians may elect to avoid lengthy and side-effect-laden treatments and seek out a different, more effective course that might otherwise not have been under consideration. Previously, approaches for the derivation of relevant neuroimaging features work at only one scale in the data, potentially limiting the depth of information available for clinical decision support. In this work, we show that the utilization of multi spatial scale neuroimaging features - particularly resting state functional networks and functional network connectivity measures - provide a rich and robust basis for the identification of relevant medication class and non-responders in the treatment of mood disorders. We demonstrate that the generated features, along with a novel approach for fast and automated feature selection, can support high accuracy rates in the identification of medication class and non-responders as well as the identification of novel, multi-scale biomarkers.
|
[
"['Bradley T. Baker' 'Mustafa S. Salman' 'Zening Fu' 'Armin Iraji'\n 'Elizabeth Osuch' 'Jeremy Bockholt' 'Vince D. Calhoun']"
] |
null | null |
2402.07862
| null | null |
http://arxiv.org/pdf/2402.07862v1
|
2024-02-12T18:14:43Z
|
2024-02-12T18:14:43Z
|
AI-Augmented Predictions: LLM Assistants Improve Human Forecasting
Accuracy
|
Large language models (LLMs) show impressive capabilities, matching and sometimes exceeding human performance in many domains. This study explores the potential of LLMs to augment judgement in forecasting tasks. We evaluated the impact on forecasting accuracy of two GPT-4-Turbo assistants: one designed to provide high-quality advice ('superforecasting'), and the other designed to be overconfident and base-rate-neglecting. Participants (N = 991) had the option to consult their assigned LLM assistant throughout the study, in contrast to a control group that used a less advanced model (DaVinci-003) without direct forecasting support. Our preregistered analyses reveal that LLM augmentation significantly enhances forecasting accuracy by 23% across both types of assistants, compared to the control group. This improvement occurs despite the superforecasting assistant's higher accuracy in predictions, indicating the augmentation's benefit is not solely due to model prediction accuracy. Exploratory analyses showed a pronounced effect in one forecasting item, without which we find that the superforecasting assistant increased accuracy by 43%, compared with 28% for the biased assistant. We further examine whether LLM augmentation disproportionately benefits less skilled forecasters, degrades the wisdom-of-the-crowd by reducing prediction diversity, or varies in effectiveness with question difficulty. Our findings do not consistently support these hypotheses. Our results suggest that access to an LLM assistant, even a biased one, can be a helpful decision aid in cognitively demanding tasks where the answer is not known at the time of interaction.
|
[
"['Philipp Schoenegger' 'Peter S. Park' 'Ezra Karger' 'Philip E. Tetlock']"
] |
null | null |
2402.07865
| null | null |
http://arxiv.org/pdf/2402.07865v2
|
2024-05-30T13:08:48Z
|
2024-02-12T18:21:14Z
|
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned
Language Models
|
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning; adoption that has fueled a wealth of new models such as LLaVa, InstructBLIP, and PaLI-3. Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored, making it challenging to understand what factors account for model performance $-$ a challenge further complicated by the lack of objective, consistent evaluations. To address these gaps, we first compile a suite of standardized evaluations spanning visual question answering, object localization, and challenge sets that probe properties such as hallucination; evaluations that provide fine-grained insight into VLM capabilities. Second, we rigorously investigate VLMs along key design axes, including pretrained visual representations and training from base vs. instruct-tuned language models, amongst others. We couple our analysis with three resource contributions: (1) a unified framework for evaluating VLMs, (2) optimized, flexible training code, and (3) checkpoints for all models, including a family of VLMs at the 7-13B scale that strictly outperform InstructBLIP and LLaVa v1.5, the state-of-the-art in open VLMs.
|
[
"['Siddharth Karamcheti' 'Suraj Nair' 'Ashwin Balakrishna' 'Percy Liang'\n 'Thomas Kollar' 'Dorsa Sadigh']"
] |
null | null |
2402.07867
| null | null |
http://arxiv.org/pdf/2402.07867v1
|
2024-02-12T18:28:36Z
|
2024-02-12T18:28:36Z
|
PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented
Generation of Large Language Models
|
Large language models (LLMs) have achieved remarkable success due to their exceptional generative capabilities. Despite their success, they also have inherent limitations such as a lack of up-to-date knowledge and hallucination. Retrieval-Augmented Generation (RAG) is a state-of-the-art technique to mitigate those limitations. In particular, given a question, RAG retrieves relevant knowledge from a knowledge database to augment the input of the LLM. For instance, the retrieved knowledge could be a set of top-k texts that are most semantically similar to the given question when the knowledge database contains millions of texts collected from Wikipedia. As a result, the LLM could utilize the retrieved knowledge as the context to generate an answer for the given question. Existing studies mainly focus on improving the accuracy or efficiency of RAG, leaving its security largely unexplored. We aim to bridge the gap in this work. Particularly, we propose PoisonedRAG, a set of knowledge poisoning attacks to RAG, where an attacker could inject a few poisoned texts into the knowledge database such that the LLM generates an attacker-chosen target answer for an attacker-chosen target question. We formulate knowledge poisoning attacks as an optimization problem, whose solution is a set of poisoned texts. Depending on the background knowledge (e.g., black-box and white-box settings) of an attacker on the RAG, we propose two solutions to solve the optimization problem, respectively. Our results on multiple benchmark datasets and LLMs show our attacks could achieve 90% attack success rates when injecting 5 poisoned texts for each target question into a database with millions of texts. We also evaluate recent defenses and our results show they are insufficient to defend against our attacks, highlighting the need for new defenses.
|
[
"['Wei Zou' 'Runpeng Geng' 'Binghui Wang' 'Jinyuan Jia']"
] |
null | null |
2402.07868
| null | null |
http://arxiv.org/pdf/2402.07868v4
|
2024-05-29T12:15:40Z
|
2024-02-12T18:29:17Z
|
Nesting Particle Filters for Experimental Design in Dynamical Systems
|
In this paper, we propose a novel approach to Bayesian experimental design for non-exchangeable data that formulates it as risk-sensitive policy optimization. We develop the Inside-Out SMC$^2$ algorithm, a nested sequential Monte Carlo technique to infer optimal designs, and embed it into a particle Markov chain Monte Carlo framework to perform gradient-based policy amortization. Our approach is distinct from other amortized experimental design techniques, as it does not rely on contrastive estimators. Numerical validation on a set of dynamical systems showcases the efficacy of our method in comparison to other state-of-the-art strategies.
|
[
"['Sahel Iqbal' 'Adrien Corenflos' 'Simo Särkkä' 'Hany Abdulsamad']"
] |
null | null |
2402.07871
| null | null |
http://arxiv.org/pdf/2402.07871v1
|
2024-02-12T18:33:47Z
|
2024-02-12T18:33:47Z
|
Scaling Laws for Fine-Grained Mixture of Experts
|
Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, incorporating an expanded range of variables. Specifically, we introduce a new hyperparameter, granularity, whose adjustment enables precise control over the size of the experts. Building on this, we establish scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Leveraging these laws, we derive the optimal training configuration for a given computational budget. Our findings not only show that MoE models consistently outperform dense Transformers but also highlight that the efficiency gap between dense and MoE models widens as we scale up the model size and training budget. Furthermore, we demonstrate that the common practice of setting the size of experts in MoE to mirror the feed-forward layer is not optimal at almost any computational budget.
|
[
"['Jakub Krajewski' 'Jan Ludziejewski' 'Kamil Adamczewski' 'Maciej Pióro'\n 'Michał Krutul' 'Szymon Antoniak' 'Kamil Ciebiera' 'Krystian Król'\n 'Tomasz Odrzygóźdź' 'Piotr Sankowski' 'Marek Cygan' 'Sebastian Jaszczur']"
] |
null | null |
2402.07872
| null | null |
http://arxiv.org/pdf/2402.07872v1
|
2024-02-12T18:33:47Z
|
2024-02-12T18:33:47Z
|
PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs
|
Vision language models (VLMs) have shown impressive capabilities across a variety of tasks, from logical reasoning to visual understanding. This opens the door to richer interaction with the world, for example robotic control. However, VLMs produce only textual outputs, while robotic control and other spatial tasks require outputting continuous coordinates, actions, or trajectories. How can we enable VLMs to handle such settings without fine-tuning on task-specific data? In this paper, we propose a novel visual prompting approach for VLMs that we call Prompting with Iterative Visual Optimization (PIVOT), which casts tasks as iterative visual question answering. In each iteration, the image is annotated with a visual representation of proposals that the VLM can refer to (e.g., candidate robot actions, localizations, or trajectories). The VLM then selects the best ones for the task. These proposals are iteratively refined, allowing the VLM to eventually zero in on the best available answer. We investigate PIVOT on real-world robotic navigation, real-world manipulation from images, instruction following in simulation, and additional spatial inference tasks such as localization. We find, perhaps surprisingly, that our approach enables zero-shot control of robotic systems without any robot training data, navigation in a variety of environments, and other capabilities. Although current performance is far from perfect, our work highlights potentials and limitations of this new regime and shows a promising approach for Internet-Scale VLMs in robotic and spatial reasoning domains. Website: pivot-prompt.github.io and HuggingFace: https://huggingface.co/spaces/pivot-prompt/pivot-prompt-demo.
|
[
"['Soroush Nasiriany' 'Fei Xia' 'Wenhao Yu' 'Ted Xiao' 'Jacky Liang'\n 'Ishita Dasgupta' 'Annie Xie' 'Danny Driess' 'Ayzaan Wahid' 'Zhuo Xu'\n 'Quan Vuong' 'Tingnan Zhang' 'Tsang-Wei Edward Lee' 'Kuang-Huei Lee'\n 'Peng Xu' 'Sean Kirmani' 'Yuke Zhu' 'Andy Zeng' 'Karol Hausman'\n 'Nicolas Heess' 'Chelsea Finn' 'Sergey Levine' 'Brian Ichter']"
] |
null | null |
2402.07875
| null | null |
http://arxiv.org/pdf/2402.07875v2
|
2024-06-01T18:17:12Z
|
2024-02-12T18:41:31Z
|
Implicit Bias of Policy Gradient in Linear Quadratic Control:
Extrapolation to Unseen Initial States
|
In modern machine learning, models can often fit training data in numerous ways, some of which perform well on unseen (test) data, while others do not. Remarkably, in such cases gradient descent frequently exhibits an implicit bias that leads to excellent performance on unseen data. This implicit bias was extensively studied in supervised learning, but is far less understood in optimal control (reinforcement learning). There, learning a controller applied to a system via gradient descent is known as policy gradient, and a question of prime importance is the extent to which a learned controller extrapolates to unseen initial states. This paper theoretically studies the implicit bias of policy gradient in terms of extrapolation to unseen initial states. Focusing on the fundamental Linear Quadratic Regulator (LQR) problem, we establish that the extent of extrapolation depends on the degree of exploration induced by the system when commencing from initial states included in training. Experiments corroborate our theory, and demonstrate its conclusions on problems beyond LQR, where systems are non-linear and controllers are neural networks. We hypothesize that real-world optimal control may be greatly improved by developing methods for informed selection of initial states to train on.
|
[
"['Noam Razin' 'Yotam Alexander' 'Edo Cohen-Karlik' 'Raja Giryes'\n 'Amir Globerson' 'Nadav Cohen']"
] |
null | null |
2402.07876
| null | null |
http://arxiv.org/pdf/2402.07876v4
|
2024-04-18T20:35:32Z
|
2024-02-12T18:41:34Z
|
Policy Improvement using Language Feedback Models
|
We introduce Language Feedback Models (LFMs) that identify desirable behaviour - actions that help achieve tasks specified in the instruction - for imitation learning in instruction following. To train LFMs, we obtain feedback from Large Language Models (LLMs) on visual trajectories verbalized to language descriptions. First, by using LFMs to identify desirable behaviour to imitate, we improve in task-completion rate over strong behavioural cloning baselines on three distinct language grounding environments (Touchdown, ScienceWorld, and ALFWorld). Second, LFMs outperform using LLMs as experts to directly predict actions, when controlling for the number of LLM output tokens. Third, LFMs generalize to unseen environments, improving task-completion rate by 3.5-12.0% through one round of adaptation. Finally, LFM can be modified to provide human-interpretable feedback without performance loss, allowing human verification of desirable behaviour for imitation learning.
|
[
"['Victor Zhong' 'Dipendra Misra' 'Xingdi Yuan' 'Marc-Alexandre Côté']"
] |
null | null |
2402.07878
| null | null |
http://arxiv.org/pdf/2402.07878v1
|
2024-02-12T18:44:02Z
|
2024-02-12T18:44:02Z
|
Using Graph Theory for Improving Machine Learning-based Detection of
Cyber Attacks
|
Early detection of network intrusions and cyber threats is one of the main pillars of cybersecurity. One of the most effective approaches for this purpose is to analyze network traffic with the help of artificial intelligence algorithms, with the aim of detecting the possible presence of an attacker by distinguishing it from a legitimate user. This is commonly done by collecting the traffic exchanged between terminals in a network and analyzing it on a per-packet or per-connection basis. In this paper, we propose instead to perform pre-processing of network traffic under analysis with the aim of extracting some new metrics on which we can perform more efficient detection and overcome some limitations of classical approaches. These new metrics are based on graph theory, and consider the network as a whole, rather than focusing on individual packets or connections. Our approach is validated through experiments performed on publicly available data sets, from which it results that it can not only overcome some of the limitations of classical approaches, but also achieve a better detection capability of cyber threats.
|
[
"['Giacomo Zonneveld' 'Lorenzo Principi' 'Marco Baldi']"
] |
null | null |
2402.07890
| null | null |
http://arxiv.org/abs/2402.07890v1
|
2024-02-12T18:53:20Z
|
2024-02-12T18:53:20Z
|
MAIDCRL: Semi-centralized Multi-Agent Influence Dense-CNN Reinforcement
Learning
|
Distributed decision-making in multi-agent systems presents difficult challenges for interactive behavior learning in both cooperative and competitive systems. To mitigate this complexity, MAIDRL presents a semi-centralized Dense Reinforcement Learning algorithm enhanced by agent influence maps (AIMs), for learning effective multi-agent control on StarCraft Multi-Agent Challenge (SMAC) scenarios. In this paper, we extend the DenseNet in MAIDRL and introduce semi-centralized Multi-Agent Dense-CNN Reinforcement Learning, MAIDCRL, by incorporating convolutional layers into the deep model architecture, and evaluate the performance on both homogeneous and heterogeneous scenarios. The results show that the CNN-enabled MAIDCRL significantly improved the learning performance and achieved a faster learning rate compared to the existing MAIDRL, especially on more complicated heterogeneous SMAC scenarios. We further investigate the stability and robustness of our model. The statistics reflect that our model not only achieves higher winning rate in all the given scenarios but also boosts the agent's learning process in fine-grained decision-making.
|
[
"['Ayesha Siddika Nipu' 'Siming Liu' 'Anthony Harris']"
] |
null | null |
2402.07891
| null | null |
http://arxiv.org/pdf/2402.07891v3
|
2024-06-06T11:07:17Z
|
2024-02-12T18:54:02Z
|
Label-Efficient Model Selection for Text Generation
|
Model selection for a given target task can be costly, as it may entail extensive annotation of the quality of outputs of different models. We introduce DiffUse, an efficient method to make an informed decision between candidate text generation models based on preference annotations. DiffUse reduces the required amount of annotations, thus saving valuable time and resources in performing evaluation. DiffUse intelligently selects instances by clustering embeddings that represent the semantic differences between model outputs. Thus, it is able to identify a subset of examples that are more informative for preference decisions. Our method is model-agnostic, and can be applied to any text generation model for selecting between models, prompts and configurations. Moreover, we propose a practical iterative approach for dynamically determining how many instances to annotate. In a series of experiments over hundreds of model pairs, we demonstrate that DiffUse can dramatically reduce the required number of annotations -- by up to 75% -- while maintaining high evaluation reliability.
|
[
"['Shir Ashury-Tahan' 'Ariel Gera' 'Benjamin Sznajder' 'Leshem Choshen'\n 'Liat Ein-Dor' 'Eyal Shnarch']"
] |
null | null |
2402.07899
| null | null |
http://arxiv.org/pdf/2402.07899v2
|
2024-05-10T18:54:59Z
|
2024-02-12T18:58:58Z
|
A systematic investigation of learnability from single child linguistic
input
|
Language models (LMs) have demonstrated remarkable proficiency in generating linguistically coherent text, sparking discussions about their relevance to understanding human language learnability. However, a significant gap exists between the training data for these models and the linguistic input a child receives. LMs are typically trained on data that is orders of magnitude larger and fundamentally different from child-directed speech (Warstadt and Bowman, 2022; Warstadt et al., 2023; Frank, 2023a). Addressing this discrepancy, our research focuses on training LMs on subsets of a single child's linguistic input. Previously, Wang, Vong, Kim, and Lake (2023) found that LMs trained in this setting can form syntactic and semantic word clusters and develop sensitivity to certain linguistic phenomena, but they only considered LSTMs and simpler neural networks trained from just one single-child dataset. Here, to examine the robustness of learnability from single-child input, we systematically train six different model architectures on five datasets (3 single-child and 2 baselines). We find that the models trained on single-child datasets showed consistent results that matched with previous work, underscoring the robustness of forming meaningful syntactic and semantic representations from a subset of a child's linguistic input.
|
[
"['Yulu Qin' 'Wentao Wang' 'Brenden M. Lake']"
] |
null | null |
2402.07901
| null | null |
http://arxiv.org/pdf/2402.07901v1
|
2024-02-12T18:59:39Z
|
2024-02-12T18:59:39Z
|
FAST: Factorizable Attention for Speeding up Transformers
|
Motivated by the factorization inherent in the original fast multipole method and the improved fast Gauss transform we introduce a factorable form of attention that operates efficiently in high dimensions. This approach reduces the computational and memory complexity of the attention mechanism in transformers from $O(N^2)$ to $O(N)$. In comparison to previous attempts, our work presents a linearly scaled attention mechanism that maintains the full representation of the attention matrix without compromising on sparsification and incorporates the all-to-all relationship between tokens. We explore the properties of our new attention metric and conduct tests in various standard settings. Results indicate that our attention mechanism has a robust performance and holds significant promise for diverse applications where self-attention is used.
|
[
"['Armin Gerami' 'Monte Hoover' 'Pranav S. Dulepet' 'Ramani Duraiswami']"
] |
null | null |
2402.07915
| null | null |
http://arxiv.org/pdf/2402.07915v1
|
2024-02-01T16:39:02Z
|
2024-02-01T16:39:02Z
|
Research on Older Adults' Interaction with E-Health Interface Based on
Explainable Artificial Intelligence
|
This paper proposed a comprehensive mixed-methods framework with varied samples of older adults, including user experience, usability assessments, and in-depth interviews with the integration of Explainable Artificial Intelligence (XAI) methods. The experience of older adults' interaction with the E-health interface is collected through interviews and transformed into operable databases whereas XAI methods are utilized to explain the collected interview results in this research work. The results show that XAI-infused e-health interfaces could play an important role in bridging the age-related digital divide by investigating elders' preferences when interacting with E-health interfaces. Furthermore, the study identifies important design factors, such as intuitive visualization and straightforward explanations, that are critical for creating efficient Human Computer Interaction (HCI) tools among older users. Moreover, this study emphasizes the revolutionary potential of XAI in e-health interfaces for older users, emphasizing the importance of transparency and understandability in HCI-driven healthcare solutions. This study's findings have far-reaching implications for the design and development of user-centric e-health technologies, intending to increase the overall well-being of older adults.
|
[
"['Xueting Huang' 'Zhibo Zhang' 'Fusen Guo' 'Xianghao Wang' 'Kun Chi'\n 'Kexin Wu']"
] |
null | null |
2402.07928
| null | null |
http://arxiv.org/pdf/2402.07928v1
|
2024-02-05T21:17:44Z
|
2024-02-05T21:17:44Z
|
Abstracted Trajectory Visualization for Explainability in Reinforcement
Learning
|
Explainable AI (XAI) has demonstrated the potential to help reinforcement learning (RL) practitioners to understand how RL models work. However, XAI for users who do not have RL expertise (non-RL experts) has not been studied sufficiently. This makes it difficult for non-RL experts to participate in the fundamental discussion of how RL models should be designed for an incoming society where humans and AI coexist. Solving such a problem would enable RL experts to communicate with the non-RL experts in producing machine learning solutions that better fit our society. We argue that abstracted trajectories, which depict transitions between the major states of the RL model, will be useful for non-RL experts to build a mental model of the agents. Our early results suggest that by leveraging a visualization of the abstracted trajectories, users without RL expertise are able to infer the behavior patterns of RL agents.
|
[
"['Yoshiki Takagi' 'Roderick Tabalba' 'Nurit Kirshenbaum' 'Jason Leigh']"
] |
null | null |
2402.07933
| null | null |
http://arxiv.org/pdf/2402.07933v2
|
2024-06-07T11:09:17Z
|
2024-02-06T16:00:32Z
|
Human-Centered AI Product Prototyping with No-Code AutoML: Conceptual
Framework, Potentials and Limitations
|
This paper addresses the complexities inherent in AI product prototyping, focusing on the challenges posed by the probabilistic nature of AI behavior and the limited accessibility of prototyping tools to non-experts. A Design Science Research (DSR) approach is presented which culminates in a conceptual framework aimed at improving the AI prototyping process. Through a comprehensive literature review, key challenges were identified and no-code AutoML was analyzed as a solution. The framework describes the seamless incorporation of non-expert input and evaluation during prototyping, leveraging the potential of no-code AutoML to enhance accessibility and interpretability. A hybrid approach of combining naturalistic (case study) and artificial evaluation methods (criteria-based analysis) validated the utility of our approach, highlighting its efficacy in supporting AI non-experts and streamlining decision-making and its limitations. Implications for academia and industry, emphasizing the strategic integration of no-code AutoML to enhance AI product development processes, mitigate risks, and foster innovation, are discussed.
|
[
"['Mario Truss' 'Marc Schmitt']"
] |
null | null |
2402.07937
| null | null |
http://arxiv.org/abs/2402.07937v1
|
2024-02-04T18:55:24Z
|
2024-02-04T18:55:24Z
|
A Physiological Sensor-Based Android Application Synchronized with a
Driving Simulator for Driver Monitoring
|
In this paper, we present an Android application to control and monitor the physiological sensors from the Shimmer platform and its synchronized working with a driving simulator. The Android app can monitor drivers and their parameters can be used to analyze the relation between their physiological states and driving performance. The app can configure, select, receive, process, represent graphically, and store the signals from electrocardiogram (ECG), electromyogram (EMG) and galvanic skin response (GSR) modules and accelerometers, a magnetometer and a gyroscope. The Android app is synchronized in two steps with a driving simulator that we previously developed using the Unity game engine to analyze driving security and efficiency. The Android app was tested with different sensors working simultaneously at various sampling rates and in different Android devices. We also tested the synchronized working of the driving simulator and the Android app with 25 people and analyzed the relation between data from the ECG, EMG, GSR, and gyroscope sensors and from the simulator. Among others, some significant correlations between a gyroscope-based feature calculated by the Android app and vehicle data and particular traffic offences were found. The Android app can be applied with minor adaptations to other different users such as patients with chronic diseases or athletes.
|
[
"['David González-Ortega' 'Francisco Javier Díaz-Pernas'\n 'Mario Martínez-Zarzuela' 'Míriam Antón-Rodríguez']"
] |
null | null |
2402.07938
| null | null |
http://arxiv.org/pdf/2402.07938v2
|
2024-04-16T07:39:05Z
|
2024-02-07T21:08:49Z
|
Large Language User Interfaces: Voice Interactive User Interfaces
powered by LLMs
|
The evolution of Large Language Models (LLMs) has showcased remarkable capacities for logical reasoning and natural language comprehension. These capabilities can be leveraged in solutions that semantically and textually model complex problems. In this paper, we present our efforts toward constructing a framework that can serve as an intermediary between a user and their user interface (UI), enabling dynamic and real-time interactions. We employ a system that stands upon textual semantic mappings of UI components, in the form of annotations. These mappings are stored, parsed, and scaled in a custom data structure, supplementary to an agent-based prompting backend engine. Employing textual semantic mappings allows each component to not only explain its role to the engine but also provide expectations. By comprehending the needs of both the user and the components, our LLM engine can classify the most appropriate application, extract relevant parameters, and subsequently execute precise predictions of the user's expected actions. Such an integration evolves static user interfaces into highly dynamic and adaptable solutions, introducing a new frontier of intelligent and responsive user experiences.
|
[
"['Syed Mekael Wasti' 'Ken Q. Pu' 'Ali Neshati']"
] |
null | null |
2402.07946
| null | null |
http://arxiv.org/pdf/2402.07946v2
|
2024-03-28T15:17:30Z
|
2024-02-09T16:10:29Z
|
Re-Envisioning Command and Control
|
Future warfare will require Command and Control (C2) decision-making to occur in more complex, fast-paced, ill-structured, and demanding conditions. C2 will be further complicated by operational challenges such as Denied, Degraded, Intermittent, and Limited (DDIL) communications and the need to account for many data streams, potentially across multiple domains of operation. Yet, current C2 practices -- which stem from the industrial era rather than the emerging intelligence era -- are linear and time-consuming. Critically, these approaches may fail to maintain overmatch against adversaries on the future battlefield. To address these challenges, we propose a vision for future C2 based on robust partnerships between humans and artificial intelligence (AI) systems. This future vision is encapsulated in three operational impacts: streamlining the C2 operations process, maintaining unity of effort, and developing adaptive collective knowledge systems. This paper illustrates the envisaged future C2 capabilities, discusses the assumptions that shaped them, and describes how the proposed developments could transform C2 in future warfare.
|
[
"['Kaleb McDowell' 'Ellen Novoseller' 'Anna Madison' 'Vinicius G. Goecks'\n 'Christopher Kelshaw']"
] |
null | null |
2402.07948
| null | null |
http://arxiv.org/pdf/2402.07948v1
|
2024-02-09T20:33:48Z
|
2024-02-09T20:33:48Z
|
evolSOM: an R Package for evolutionary conservation analysis with SOMs
|
Motivation: Unraveling the connection between genes and traits is crucial for solving many biological puzzles. Genes provide instructions for building cellular machinery, directing the processes that sustain life. RNA molecules and proteins, derived from these genetic instructions, play crucial roles in shaping cell structures, influencing reactions, and guiding behavior. This fundamental biological principle links genetic makeup to observable traits, but integrating and extracting meaningful relationships from this complex, multimodal data presents a significant challenge. Results: We introduce evolSOM, a novel R package that utilizes Self-Organizing Maps (SOMs) to explore and visualize the conservation of biological variables, easing the integration of phenotypic and genotypic attributes. By constructing species-specific or condition-specific SOMs that capture non-redundant patterns, evolSOM allows the analysis of displacement of biological variables between species or conditions. Variables displaced together suggest membership in the same regulatory network, and the nature of the displacement may hold biological significance. The package automatically calculates and graphically presents these displacements, enabling efficient comparison and revealing conserved and displaced variables. The package facilitates the integration of diverse phenotypic data types, enabling the exploration of potential gene drivers underlying observed phenotypic changes. Its user-friendly interface and visualization capabilities enhance the accessibility of complex network analyses. Illustratively, we employed evolSOM to study the displacement of genes and phenotypic traits, successfully identifying potential drivers of phenotypic differentiation in grass leaves. Availability: The package is open-source and is available at https://github.com/sanprochetto/evolSOM.
|
[
"['Santiago Prochetto' 'Renata Reinheimer' 'Georgina Stegmayer']"
] |
null | null |
2402.07949
| null | null |
http://arxiv.org/pdf/2402.07949v1
|
2024-02-10T00:49:46Z
|
2024-02-10T00:49:46Z
|
Optimizing the Design of an Artificial Pancreas to Improve Diabetes
Management
|
Diabetes, a chronic condition that impairs how the body turns food into energy, i.e. blood glucose, affects 38 million people in the US alone. The standard treatment is to supplement carbohydrate intake with an artificial pancreas, i.e. a continuous insulin pump (basal shots), as well as occasional insulin injections (bolus shots). The goal of the treatment is to keep blood glucose at the center of an acceptable range, as measured through a continuous glucose meter. A secondary goal is to minimize injections, which are unpleasant and difficult for some patients to implement. In this study, neuroevolution was used to discover an optimal strategy for the treatment. Based on a dataset of 30 days of treatment and measurements of a single patient, a random forest was first trained to predict future glucose levels. A neural network was then evolved to prescribe carbohydrates, basal pumping levels, and bolus injections. Evolution discovered a Pareto front that reduced deviation from the target and number of injections compared to the original data, thus improving patients' quality of life. To make the system easier to adopt, a language interface was developed with a large language model. Thus, these technologies not only improve patient care but also adoption in a broader population.
|
[
"['Ashok Khanna' 'Olivier Francon' 'Risto Miikkulainen']"
] |
null | null |
2402.07955
| null | null |
http://arxiv.org/pdf/2402.07955v1
|
2024-02-10T17:31:46Z
|
2024-02-10T17:31:46Z
|
ProtIR: Iterative Refinement between Retrievers and Predictors for
Protein Function Annotation
|
Protein function annotation is an important yet challenging task in biology. Recent deep learning advancements show significant potential for accurate function prediction by learning from protein sequences and structures. Nevertheless, these predictor-based methods often overlook the modeling of protein similarity, an idea commonly employed in traditional approaches using sequence or structure retrieval tools. To fill this gap, we first study the effect of inter-protein similarity modeling by benchmarking retriever-based methods against predictors on protein function annotation tasks. Our results show that retrievers can match or outperform predictors without large-scale pre-training. Building on these insights, we introduce a novel variational pseudo-likelihood framework, ProtIR, designed to improve function predictors by incorporating inter-protein similarity modeling. This framework iteratively refines knowledge between a function predictor and retriever, thereby combining the strengths of both predictors and retrievers. ProtIR showcases around 10% improvement over vanilla predictor-based methods. Besides, it achieves performance on par with protein language model-based methods, yet without the need for massive pre-training, highlighting the efficacy of our framework. Code will be released upon acceptance.
|
[
"['Zuobai Zhang' 'Jiarui Lu' 'Vijil Chenthamarakshan' 'Aurélie Lozano'\n 'Payel Das' 'Jian Tang']"
] |
null | null |
2402.07963
| null | null |
http://arxiv.org/pdf/2402.07963v2
|
2024-07-07T09:48:13Z
|
2024-02-12T10:32:47Z
|
SPO: Sequential Monte Carlo Policy Optimisation
|
Leveraging planning during learning and decision-making is central to the long-term development of intelligent agents. Recent works have successfully combined tree-based search methods and self-play learning mechanisms to this end. However, these methods typically face scaling challenges due to the sequential nature of their search. While practical engineering solutions can partly overcome this, they often result in a negative impact on performance. In this paper, we introduce SPO: Sequential Monte Carlo Policy Optimisation, a model-based reinforcement learning algorithm grounded within the Expectation Maximisation (EM) framework. We show that SPO provides robust policy improvement and efficient scaling properties. The sample-based search makes it directly applicable to both discrete and continuous action spaces without modifications. We demonstrate statistically significant improvements in performance relative to model-free and model-based baselines across both continuous and discrete environments. Furthermore, the parallel nature of SPO's search enables effective utilisation of hardware accelerators, yielding favourable scaling laws.
|
[
"['Matthew V Macfarlane' 'Edan Toledo' 'Donal Byrne' 'Paul Duckworth'\n 'Alexandre Laterre']"
] |
null | null |
2402.07970
| null | null |
http://arxiv.org/pdf/2402.07970v1
|
2024-02-12T18:24:32Z
|
2024-02-12T18:24:32Z
|
Utilizing Low-Dimensional Molecular Embeddings for Rapid Chemical
Similarity Search
|
Nearest neighbor-based similarity searching is a common task in chemistry, with notable use cases in drug discovery. Yet, some of the most commonly used approaches for this task still leverage a brute-force approach. In practice this can be computationally costly and overly time-consuming, due in part to the sheer size of modern chemical databases. Previous computational advancements for this task have generally relied on improvements to hardware or dataset-specific tricks that lack generalizability. Approaches that leverage lower-complexity searching algorithms remain relatively underexplored. However, many of these algorithms are approximate solutions and/or struggle with typical high-dimensional chemical embeddings. Here we evaluate whether a combination of low-dimensional chemical embeddings and a k-d tree data structure can achieve fast nearest neighbor queries while maintaining performance on standard chemical similarity search benchmarks. We examine different dimensionality reductions of standard chemical embeddings as well as a learned, structurally-aware embedding -- SmallSA -- for this task. With this framework, searches on over one billion chemicals execute in less than a second on a single CPU core, five orders of magnitude faster than the brute-force approach. We also demonstrate that SmallSA achieves competitive performance on chemical similarity benchmarks.
|
[
"['Kathryn E. Kirchoff' 'James Wellnitz' 'Joshua E. Hochuli'\n 'Travis Maxfield' 'Konstantin I. Popov' 'Shawn Gomez' 'Alexander Tropsha']"
] |
null | null |
2402.07999
| null | null |
http://arxiv.org/pdf/2402.07999v3
|
2024-03-20T14:30:41Z
|
2024-02-12T19:04:32Z
|
NetInfoF Framework: Measuring and Exploiting Network Usable Information
|
Given a node-attributed graph and a graph task (link prediction or node classification), can we tell if a graph neural network (GNN) will perform well? More specifically, do the graph structure and the node features carry enough usable information for the task? Our goals are (1) to develop a fast tool to measure how much information is in the graph structure and in the node features, and (2) to exploit the information to solve the task, if there is enough. We propose NetInfoF, a framework including NetInfoF_Probe and NetInfoF_Act, for the measurement and the exploitation of network usable information (NUI), respectively. Given graph data, NetInfoF_Probe measures NUI without any model training, and NetInfoF_Act solves link prediction and node classification, while the two modules share the same backbone. In summary, NetInfoF has the following notable advantages: (a) General, handling both link prediction and node classification; (b) Principled, with theoretical guarantee and closed-form solution; (c) Effective, thanks to the proposed adjustment to node similarity; (d) Scalable, scaling linearly with the input size. In our carefully designed synthetic datasets, NetInfoF correctly identifies the ground truth of NUI and is the only method being robust to all graph scenarios. Applied to real-world datasets, NetInfoF wins in 11 out of 12 times on link prediction compared to general GNN baselines.
|
[
"['Meng-Chieh Lee' 'Haiyang Yu' 'Jian Zhang' 'Vassilis N. Ioannidis'\n 'Xiang Song' 'Soji Adeshina' 'Da Zheng' 'Christos Faloutsos']"
] |
null | null |
2402.08001
| null | null |
http://arxiv.org/pdf/2402.08001v1
|
2024-02-12T19:05:27Z
|
2024-02-12T19:05:27Z
|
Improvement and generalization of ABCD method with Bayesian inference
|
To find New Physics or to refine our knowledge of the Standard Model at the LHC is an enterprise that involves many factors. We focus on taking advantage of available information and pour our effort in re-thinking the usual data-driven ABCD method to improve it and to generalize it using Bayesian Machine Learning tools. We propose that a dataset consisting of a signal and many backgrounds is well described through a mixture model. Signal, backgrounds and their relative fractions in the sample can be well extracted by exploiting the prior knowledge and the dependence between the different observables at the event-by-event level with Bayesian tools. We show how, in contrast to the ABCD method, one can take advantage of understanding some properties of the different backgrounds and of having more than two independent observables to measure in each event. In addition, instead of regions defined through hard cuts, the Bayesian framework uses the information of continuous distribution to obtain soft-assignments of the events which are statistically more robust. To compare both methods we use a toy problem inspired by $pp\to hh\to b\bar{b}b\bar{b}$, selecting a reduced and simplified number of processes and analysing the flavor of the four jets and the invariant mass of the jet-pairs, modeled with simplified distributions. Taking advantage of all this information, and starting from a combination of biased and agnostic priors, leads us to a very good posterior once we use the Bayesian framework to exploit the data and the mutual information of the observables at the event-by-event level. We show how, in this simplified model, the Bayesian framework outperforms the ABCD method sensitivity in obtaining the signal fraction in scenarios with $1\%$ and $0.5\%$ true signal fractions in the dataset. We also show that the method is robust against the absence of signal.
|
[
"['Ezequiel Alvarez' 'Leandro Da Rold' 'Manuel Szewc' 'Alejandro Szynkman'\n 'Santiago A. Tanco' 'Tatiana Tarutina']"
] |
null | null |
2402.08005
| null | null |
http://arxiv.org/pdf/2402.08005v1
|
2024-02-12T19:10:13Z
|
2024-02-12T19:10:13Z
|
Refined Direct Preference Optimization with Synthetic Data for
Behavioral Alignment of LLMs
|
In this paper, we introduce \emph{refined Direct Preference Optimization} (rDPO), a method for improving the behavioral alignment of Large Language Models (LLMs) without the need for human-annotated data. The method involves creating synthetic data using self-critique prompting by a teacher LLM and then utilising a generalized DPO loss function to distil to a student LLM. The loss function incorporates an additional external reward model to improve the quality of synthetic data, making rDPO robust to potential noise in the synthetic dataset. rDPO is shown to be effective in a diverse set of behavioural alignment tasks, such as improved safety, robustness against role-playing, and reduced sycophancy. Code to be released at https://github.com/vicgalle/refined-dpo.
|
[
"['Víctor Gallego']"
] |
null | null |
2402.08010
| null | null |
http://arxiv.org/pdf/2402.08010v1
|
2024-02-12T19:18:50Z
|
2024-02-12T19:18:50Z
|
Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature
Learning
|
We describe the emergence of a Convolution Bottleneck (CBN) structure in CNNs, where the network uses its first few layers to transform the input representation into a representation that is supported only along a few frequencies and channels, before using the last few layers to map back to the outputs. We define the CBN rank, which describes the number and type of frequencies that are kept inside the bottleneck, and partially prove that the parameter norm required to represent a function $f$ scales as depth times the CBN rank of $f$. We also show that the parameter norm depends at next order on the regularity of $f$. We show that any network with almost optimal parameter norm will exhibit a CBN structure in both the weights and - under the assumption that the network is stable under large learning rate - the activations, which motivates the common practice of down-sampling; and we verify that the CBN results still hold with down-sampling. Finally we use the CBN structure to interpret the functions learned by CNNs on a number of tasks.
|
[
"['Yuxiao Wen' 'Arthur Jacot']"
] |
null | null |
2402.08012
| null | null |
http://arxiv.org/pdf/2402.08012v1
|
2024-02-12T19:21:14Z
|
2024-02-12T19:21:14Z
|
Online Differentially Private Synthetic Data Generation
|
We present a polynomial-time algorithm for online differentially private synthetic data generation. For a data stream within the hypercube $[0,1]^d$ and an infinite time horizon, we develop an online algorithm that generates a differentially private synthetic dataset at each time $t$. This algorithm achieves a near-optimal accuracy bound of $O(t^{-1/d}\log(t))$ for $d\geq 2$ and $O(t^{-1}\log^{4.5}(t))$ for $d=1$ in the 1-Wasserstein distance. This result generalizes the previous work on the continual release model for counting queries to include Lipschitz queries. Compared to the offline case, where the entire dataset is available at once, our approach requires only an extra polylog factor in the accuracy bound.
|
[
"['Yiyun He' 'Roman Vershynin' 'Yizhe Zhu']"
] |
null | null |
2402.08017
| null | null |
http://arxiv.org/pdf/2402.08017v2
|
2024-06-01T21:46:50Z
|
2024-02-12T19:27:26Z
|
Lumos : Empowering Multimodal LLMs with Scene Text Recognition
|
We introduce Lumos, the first end-to-end multimodal question-answering system with text understanding capabilities. At the core of Lumos is a Scene Text Recognition (STR) component that extracts text from first person point-of-view images, the output of which is used to augment input to a Multimodal Large Language Model (MM-LLM). While building Lumos, we encountered numerous challenges related to STR quality, overall latency, and model inference. In this paper, we delve into those challenges, and discuss the system architecture, design choices, and modeling techniques employed to overcome these obstacles. We also provide a comprehensive evaluation for each component, showcasing high quality and efficiency.
|
[
"['Ashish Shenoy' 'Yichao Lu' 'Srihari Jayakumar' 'Debojeet Chatterjee'\n 'Mohsen Moslehpour' 'Pierce Chuang' 'Abhay Harpale' 'Vikas Bhardwaj'\n 'Di Xu' 'Shicong Zhao' 'Longfang Zhao' 'Ankit Ramchandani'\n 'Xin Luna Dong' 'Anuj Kumar']"
] |
null | null |
2402.08018
| null | null |
http://arxiv.org/pdf/2402.08018v1
|
2024-02-12T19:27:30Z
|
2024-02-12T19:27:30Z
|
Nearest Neighbour Score Estimators for Diffusion Generative Models
|
Score function estimation is the cornerstone of both training and sampling from diffusion generative models. Despite this fact, the most commonly used estimators are either biased neural network approximations or high variance Monte Carlo estimators based on the conditional score. We introduce a novel nearest neighbour score function estimator which utilizes multiple samples from the training set to dramatically decrease estimator variance. We leverage our low variance estimator in two compelling applications. Training consistency models with our estimator, we report a significant increase in both convergence speed and sample quality. In diffusion models, we show that our estimator can replace a learned network for probability-flow ODE integration, opening promising new avenues of future research.
|
[
"['Matthew Niedoba' 'Dylan Green' 'Saeid Naderiparizi' 'Vasileios Lioutas'\n 'Jonathan Wilder Lavington' 'Xiaoxuan Liang' 'Yunpeng Liu' 'Ke Zhang'\n 'Setareh Dabiri' 'Adam Ścibior' 'Berend Zwartsenberg' 'Frank Wood']"
] |
null | null |
2402.08022
| null | null |
http://arxiv.org/pdf/2402.08022v1
|
2024-02-12T19:39:07Z
|
2024-02-12T19:39:07Z
|
Leveraging Digital Cousins for Ensemble Q-Learning in Large-Scale
Wireless Networks
|
Optimizing large-scale wireless networks, including optimal resource management, power allocation, and throughput maximization, is inherently challenging due to their non-observable system dynamics and heterogeneous and complex nature. Herein, a novel ensemble Q-learning algorithm that addresses the performance and complexity challenges of the traditional Q-learning algorithm for optimizing wireless networks is presented. Ensemble learning with synthetic Markov Decision Processes is tailored to wireless networks via new models for approximating large state-space observable wireless networks. In particular, digital cousins are proposed as an extension of the traditional digital twin concept wherein multiple Q-learning algorithms on multiple synthetic Markovian environments are run in parallel and their outputs are fused into a single Q-function. Convergence analyses of key statistics and Q-functions and derivations of upper bounds on the estimation bias and variance are provided. Numerical results across a variety of real-world wireless networks show that the proposed algorithm can achieve up to 50% less average policy error with up to 40% less runtime complexity than the state-of-the-art reinforcement learning algorithms. It is also shown that theoretical results properly predict trends in the experimental results.
|
[
"['Talha Bozkus' 'Urbashi Mitra']"
] |
null | null |
2402.08023
| null | null |
http://arxiv.org/pdf/2402.08023v1
|
2024-02-12T19:39:26Z
|
2024-02-12T19:39:26Z
|
UGMAE: A Unified Framework for Graph Masked Autoencoders
|
Generative self-supervised learning on graphs, particularly graph masked autoencoders, has emerged as a popular learning paradigm and demonstrated its efficacy in handling non-Euclidean data. However, several remaining issues limit the capability of existing methods: 1) the disregard of uneven node significance in masking, 2) the underutilization of holistic graph information, 3) the ignorance of semantic knowledge in the representation space due to the exclusive use of reconstruction loss in the output space, and 4) the unstable reconstructions caused by the large volume of masked contents. In light of this, we propose UGMAE, a unified framework for graph masked autoencoders to address these issues from the perspectives of adaptivity, integrity, complementarity, and consistency. Specifically, we first develop an adaptive feature mask generator to account for the unique significance of nodes and sample informative masks (adaptivity). We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information and emphasize the topological proximity between neighbors (integrity). After that, we present a bootstrapping-based similarity module to encode the high-level semantic knowledge in the representation space, complementary to the low-level reconstruction in the output space (complementarity). Finally, we build a consistency assurance module to provide reconstruction objectives with extra stabilized consistency targets (consistency). Extensive experiments demonstrate that UGMAE outperforms both contrastive and generative state-of-the-art baselines on several tasks across multiple datasets.
|
[
"['Yijun Tian' 'Chuxu Zhang' 'Ziyi Kou' 'Zheyuan Liu' 'Xiangliang Zhang'\n 'Nitesh V. Chawla']"
] |
null | null |
2402.08030
| null | null |
http://arxiv.org/abs/2402.08030v1
|
2024-02-12T19:49:58Z
|
2024-02-12T19:49:58Z
|
Why and When LLM-Based Assistants Can Go Wrong: Investigating the
Effectiveness of Prompt-Based Interactions for Software Help-Seeking
|
Large Language Model (LLM) assistants, such as ChatGPT, have emerged as potential alternatives to search methods for helping users navigate complex, feature-rich software. LLMs use vast training data from domain-specific texts, software manuals, and code repositories to mimic human-like interactions, offering tailored assistance, including step-by-step instructions. In this work, we investigated LLM-generated software guidance through a within-subject experiment with 16 participants and follow-up interviews. We compared a baseline LLM assistant with an LLM optimized for particular software contexts, SoftAIBot, which also offered guidelines for constructing appropriate prompts. We assessed task completion, perceived accuracy, relevance, and trust. Surprisingly, although SoftAIBot outperformed the baseline LLM, our results revealed no significant difference in LLM usage and user perceptions with or without prompt guidelines and the integration of domain context. Most users struggled to understand how the prompt's text related to the LLM's responses and often followed the LLM's suggestions verbatim, even if they were incorrect. This resulted in difficulties when using the LLM's advice for software tasks, leading to low task completion rates. Our detailed analysis also revealed that users remained unaware of inaccuracies in the LLM's responses, indicating a gap between their lack of software expertise and their ability to evaluate the LLM's assistance. With the growing push for designing domain-specific LLM assistants, we emphasize the importance of incorporating explainable, context-aware cues into LLMs to help users understand prompt-based interactions, identify biases, and maximize the utility of LLM assistants.
|
[
"['Anjali Khurana' 'Hari Subramonyam' 'Parmit K Chilana']"
] |
null | null |
2402.08056
| null | null |
http://arxiv.org/abs/2402.08056v1
|
2024-02-12T20:46:47Z
|
2024-02-12T20:46:47Z
|
MIML library: a Modular and Flexible Library for Multi-instance
Multi-label Learning
|
MIML library is a Java software tool to develop, test, and compare classification algorithms for multi-instance multi-label (MIML) learning. The library includes 43 algorithms and provides a specific format and facilities for data managing and partitioning, holdout and cross-validation methods, standard metrics for performance evaluation, and generation of reports. In addition, algorithms can be executed through XML configuration files without needing to program. It is platform-independent, extensible, free, open-source, and available on GitHub under the GNU General Public License.
|
[
"['Álvaro Belmonte' 'Amelia Zafra' 'Eva Gibaja']"
] |
null | null |
2402.08062
| null | null |
http://arxiv.org/pdf/2402.08062v2
|
2024-05-26T16:55:07Z
|
2024-02-12T21:12:11Z
|
Avoiding Catastrophe in Continuous Spaces by Asking for Help
|
Most reinforcement learning algorithms with formal regret guarantees assume all mistakes are reversible and essentially rely on trying all possible behaviors. This approach leads to poor outcomes when some mistakes are irreparable or even catastrophic. We propose a variant of the contextual bandit problem where the goal is to minimize the chance of catastrophe. Specifically, we assume that the payoff each round represents the chance of avoiding catastrophe that round, and try to maximize the product of payoffs (the overall chance of avoiding catastrophe). We allow a limited number of queries to a mentor and assume a Lipschitz continuous payoff function. We first show that in general, any algorithm either constantly queries the mentor or is nearly guaranteed to cause catastrophe. However, when the mentor policy class has bounded Natarajan dimension and contains at least some "reasonable" policies, we provide an algorithm whose regret and rate of querying the mentor both approach 0 as the time horizon grows. We also present an alternative algorithm which provides the same regret and query guarantees when the mentor's action changes a constant number of times in a 1D state space, and can handle adversarially chosen states.
|
[
"['Benjamin Plaut' 'Hanlin Zhu' 'Stuart Russell']"
] |
null | null |
2402.08063
| null | null |
http://arxiv.org/abs/2402.08063v1
|
2024-02-12T21:14:37Z
|
2024-02-12T21:14:37Z
|
Locality Sensitive Hashing for Network Traffic Fingerprinting
|
The advent of the Internet of Things (IoT) has brought forth additional intricacies and difficulties to computer networks. These gadgets are particularly susceptible to cyber-attacks because of their simplistic design. Therefore, it is crucial to recognise these devices inside a network for the purpose of network administration and to identify any harmful actions. Network traffic fingerprinting is a crucial technique for identifying devices and detecting anomalies. Currently, the predominant methods for this depend heavily on machine learning (ML). Nevertheless, ML methods need the selection of features, adjustment of hyperparameters, and retraining of models to attain optimal outcomes and provide resilience to concept drifts detected in a network. In this research, we suggest using locality-sensitive hashing (LSH) for network traffic fingerprinting as a solution to these difficulties. Our study focuses on examining several design options for the Nilsimsa LSH function. We then use this function to create unique fingerprints for network data, which may be used to identify devices. We also compared it with ML-based traffic fingerprinting and observed that our method increases the accuracy of the state-of-the-art by 12%, achieving around 94% accuracy in identifying devices in a network.
|
[
"['Nowfel Mashnoor' 'Jay Thom' 'Abdur Rouf' 'Shamik Sengupta'\n 'Batyr Charyyev']"
] |
null | null |
2402.08073
| null | null |
http://arxiv.org/pdf/2402.08073v2
|
2024-03-15T01:18:45Z
|
2024-02-12T21:32:49Z
|
Grounding Data Science Code Generation with Input-Output Specifications
|
Large language models (LLMs) have recently demonstrated a remarkable ability to generate code from natural language (NL) prompts. However, in the real world, NL is often too ambiguous to capture the true intent behind programming problems, requiring additional input-output (I/O) specifications. Unfortunately, LLMs can have difficulty aligning their outputs with both the NL prompt and the I/O specification. In this paper, we give a way to mitigate this issue in the context of data science programming, where tasks require explicit I/O specifications for clarity. Specifically, we propose GIFT4Code, a novel approach for the instruction fine-tuning of LLMs with respect to I/O specifications. Our method leverages synthetic data produced by the LLM itself and utilizes execution-derived feedback as a key learning signal. This feedback, in the form of program I/O specifications, is provided to the LLM to facilitate instruction fine-tuning. We evaluated our approach on two challenging data science benchmarks, Arcade and DS-1000. The results demonstrate a significant improvement in the LLM's ability to generate code that is not only executable but also accurately aligned with user specifications, substantially improving the quality of code generation for complex data science tasks.
|
[
"['Yeming Wen' 'Pengcheng Yin' 'Kensen Shi' 'Henryk Michalewski'\n 'Swarat Chaudhuri' 'Alex Polozov']"
] |
null | null |
2402.08075
| null | null |
http://arxiv.org/pdf/2402.08075v1
|
2024-02-12T21:40:45Z
|
2024-02-12T21:40:45Z
|
Efficient and Scalable Fine-Tune of Language Models for Genome
Understanding
|
Although DNA foundation models have advanced the understanding of genomes, they still face significant challenges in the limited scale and diversity of genomic data. This limitation starkly contrasts with the success of natural language foundation models, which thrive on substantially larger scales. Furthermore, genome understanding involves numerous downstream genome annotation tasks with inherent data heterogeneity, thereby necessitating more efficient and robust fine-tuning methods tailored for genomics. Here, we present \textsc{Lingo}: \textsc{L}anguage prefix f\textsc{In}e-tuning for \textsc{G}en\textsc{O}mes. Unlike DNA foundation models, \textsc{Lingo} strategically leverages natural language foundation models' contextual cues, recalibrating their linguistic knowledge to genomic sequences. \textsc{Lingo} further accommodates numerous, heterogeneous downstream fine-tune tasks by an adaptive rank sampling method that prunes and stochastically reintroduces pruned singular vectors within small computational budgets. Adaptive rank sampling outperformed existing fine-tuning methods on all benchmarked 14 genome understanding tasks, while requiring fewer than 2% of trainable parameters as genomic-specific adapters. Impressively, applying these adapters on natural language foundation models matched or even exceeded the performance of DNA foundation models. \textsc{Lingo} presents a new paradigm of efficient and scalable genome understanding via genomic-specific adapters on language models.
|
[
"['Huixin Zhan' 'Ying Nian Wu' 'Zijun Zhang']"
] |
null | null |
2402.08077
| null | null |
http://arxiv.org/pdf/2402.08077v1
|
2024-02-12T21:44:20Z
|
2024-02-12T21:44:20Z
|
Diffeomorphic Measure Matching with Kernels for Generative Modeling
|
This article presents a general framework for the transport of probability measures towards minimum divergence generative modeling and sampling using ordinary differential equations (ODEs) and Reproducing Kernel Hilbert Spaces (RKHSs), inspired by ideas from diffeomorphic matching and image registration. A theoretical analysis of the proposed method is presented, giving a priori error bounds in terms of the complexity of the model, the number of samples in the training set, and model misspecification. An extensive suite of numerical experiments further highlights the properties, strengths, and weaknesses of the method and extends its applicability to other tasks, such as conditional simulation and inference.
|
[
"['Biraj Pandey' 'Bamdad Hosseini' 'Pau Batlle' 'Houman Owhadi']"
] |
null | null |
2402.08078
| null | null |
http://arxiv.org/pdf/2402.08078v1
|
2024-02-12T21:44:32Z
|
2024-02-12T21:44:32Z
|
Large Language Models as Agents in Two-Player Games
|
By formally defining the training processes of large language models (LLMs), which usually encompasses pre-training, supervised fine-tuning, and reinforcement learning with human feedback, within a single and unified machine learning paradigm, we can glean pivotal insights for advancing LLM technologies. This position paper delineates the parallels between the training methods of LLMs and the strategies employed for the development of agents in two-player games, as studied in game theory, reinforcement learning, and multi-agent systems. We propose a re-conceptualization of LLM learning processes in terms of agent learning in language-based games. This framework unveils innovative perspectives on the successes and challenges in LLM development, offering a fresh understanding of addressing alignment issues among other strategic considerations. Furthermore, our two-player game approach sheds light on novel data preparation and machine learning techniques for training LLMs.
|
[
"['Yang Liu' 'Peng Sun' 'Hang Li']"
] |
null | null |
2402.08082
| null | null |
http://arxiv.org/pdf/2402.08082v3
|
2024-02-23T17:51:20Z
|
2024-02-12T22:02:23Z
|
Score-based generative models break the curse of dimensionality in
learning a family of sub-Gaussian probability distributions
|
While score-based generative models (SGMs) have achieved remarkable success in enormous image generation tasks, their mathematical foundations are still limited. In this paper, we analyze the approximation and generalization of SGMs in learning a family of sub-Gaussian probability distributions. We introduce a notion of complexity for probability distributions in terms of their relative density with respect to the standard Gaussian measure. We prove that if the log-relative density can be locally approximated by a neural network whose parameters can be suitably bounded, then the distribution generated by empirical score matching approximates the target distribution in total variation with a dimension-independent rate. We illustrate our theory through examples, which include certain mixtures of Gaussians. An essential ingredient of our proof is to derive a dimension-free deep neural network approximation rate for the true score function associated with the forward process, which is interesting in its own right.
|
[
"['Frank Cole' 'Yulong Lu']"
] |
null | null |
2402.08085
| null | null |
http://arxiv.org/pdf/2402.08085v1
|
2024-02-12T22:06:37Z
|
2024-02-12T22:06:37Z
|
Message Detouring: A Simple Yet Effective Cycle Representation for
Expressive Graph Learning
|
Graph learning is crucial in the fields of bioinformatics, social networks, and chemicals. Although high-order graphlets, such as cycles, are critical to achieving an informative graph representation for node classification, edge prediction, and graph recognition, modeling high-order topological characteristics poses significant computational challenges, restricting its widespread applications in machine learning. To address this limitation, we introduce the concept of \textit{message detouring} to hierarchically characterize cycle representation throughout the entire graph, which capitalizes on the contrast between the shortest and longest pathways within a range of local topologies associated with each graph node. The topological feature representations derived from our message detouring landscape demonstrate comparable expressive power to high-order \textit{Weisfeiler-Lehman} (WL) tests but much less computational demands. In addition to the integration with graph kernel and message passing neural networks, we present a novel message detouring neural network, which uses a Transformer backbone to integrate cycle representations across nodes and edges. Aside from theoretical results, experimental results on expressiveness, graph classification, and node classification show message detouring can significantly outperform current counterpart approaches on various benchmark datasets.
|
[
"['Ziquan Wei' 'Tingting Dan' 'Guorong Wu']"
] |
null | null |
2402.08086
| null | null |
http://arxiv.org/pdf/2402.08086v2
|
2024-05-20T19:18:52Z
|
2024-02-12T22:07:43Z
|
Text-centric Alignment for Multi-Modality Learning
|
This research paper addresses the challenge of modality mismatch in multimodal learning, where the modalities available during inference differ from those available at training. We propose the Text-centric Alignment for Multi-Modality Learning (TAMML) approach, an innovative method that utilizes Large Language Models (LLMs) with in-context learning and foundation models to enhance the generalizability of multimodal systems under these conditions. By leveraging the unique properties of text as a unified semantic space, TAMML demonstrates significant improvements in handling unseen, diverse, and unpredictable modality combinations. TAMML not only adapts to varying modalities but also maintains robust performance, showcasing the potential of foundation models in overcoming the limitations of traditional fixed-modality frameworks in embedding representations. This study contributes to the field by offering a flexible, effective solution for real-world applications where modality availability is dynamic and uncertain.
|
[
"['Yun-Da Tsai' 'Ting-Yu Yen' 'Pei-Fu Guo' 'Zhe-Yan Li' 'Shou-De Lin']"
] |
null | null |
2402.08088
| null | null |
http://arxiv.org/pdf/2402.08088v1
|
2024-02-12T22:10:06Z
|
2024-02-12T22:10:06Z
|
Out-of-Distribution Detection and Data Drift Monitoring using
Statistical Process Control
|
Background: Machine learning (ML) methods often fail with data that deviates from their training distribution. This is a significant concern for ML-enabled devices in clinical settings, where data drift may cause unexpected performance that jeopardizes patient safety. Method: We propose a ML-enabled Statistical Process Control (SPC) framework for out-of-distribution (OOD) detection and drift monitoring. SPC is advantageous as it visually and statistically highlights deviations from the expected distribution. To demonstrate the utility of the proposed framework for monitoring data drift in radiological images, we investigated different design choices, including methods for extracting feature representations, drift quantification, and SPC parameter selection. Results: We demonstrate the effectiveness of our framework for two tasks: 1) differentiating axial vs. non-axial computed tomography (CT) images and 2) separating chest x-ray (CXR) from other modalities. For both tasks, we achieved high accuracy in detecting OOD inputs, with 0.913 in CT and 0.995 in CXR, and sensitivity of 0.980 in CT and 0.984 in CXR. Our framework was also adept at monitoring data streams and identifying the time a drift occurred. In a simulation with 100 daily CXR cases, we detected a drift in OOD input percentage from 0-1% to 3-5% within two days, maintaining a low false-positive rate. Through additional experimental results, we demonstrate the framework's data-agnostic nature and independence from the underlying model's structure. Conclusion: We propose a framework for OOD detection and drift monitoring that is agnostic to data, modality, and model. The framework is customizable and can be adapted for specific applications.
|
[
"['Ghada Zamzmi' 'Kesavan Venkatesh' 'Brandon Nelson' 'Smriti Prathapan'\n 'Paul H. Yi' 'Berkman Sahiner' 'Jana G. Delfino']"
] |
null | null |
2402.08090
| null | null |
http://arxiv.org/pdf/2402.08090v3
|
2024-05-29T23:05:07Z
|
2024-02-12T22:17:28Z
|
Learning Neural Contracting Dynamics: Extended Linearization and Global
Guarantees
|
Global stability and robustness guarantees in learned dynamical systems are essential to ensure well-behavedness of the systems in the face of uncertainty. We present Extended Linearized Contracting Dynamics (ELCD), the first neural network-based dynamical system with global contractivity guarantees in arbitrary metrics. The key feature of ELCD is a parametrization of the extended linearization of the nonlinear vector field. In its most basic form, ELCD is guaranteed to be (i) globally exponentially stable, (ii) equilibrium contracting, and (iii) globally contracting with respect to some metric. To allow for contraction with respect to more general metrics in the data space, we train diffeomorphisms between the data space and a latent space and enforce contractivity in the latent space, which ensures global contractivity in the data space. We demonstrate the performance of ELCD on the high dimensional LASA, multi-link pendulum, and Rosenbrock datasets.
|
[
"['Sean Jaffe' 'Alexander Davydov' 'Deniz Lapsekili' 'Ambuj Singh'\n 'Francesco Bullo']"
] |
null | null |
2402.08093
| null | null |
http://arxiv.org/pdf/2402.08093v2
|
2024-02-15T18:57:26Z
|
2024-02-12T22:21:30Z
|
BASE TTS: Lessons from building a billion-parameter Text-to-Speech model
on 100K hours of data
|
We introduce a text-to-speech (TTS) model called BASE TTS, which stands for $\textbf{B}$ig $\textbf{A}$daptive $\textbf{S}$treamable TTS with $\textbf{E}$mergent abilities. BASE TTS is the largest TTS model to-date, trained on 100K hours of public domain speech data, achieving a new state-of-the-art in speech naturalness. It deploys a 1-billion-parameter autoregressive Transformer that converts raw texts into discrete codes ("speechcodes") followed by a convolution-based decoder which converts these speechcodes into waveforms in an incremental, streamable manner. Further, our speechcodes are built using a novel speech tokenization technique that features speaker ID disentanglement and compression with byte-pair encoding. Echoing the widely-reported "emergent abilities" of large language models when trained on increasing volume of data, we show that BASE TTS variants built with 10K+ hours and 500M+ parameters begin to demonstrate natural prosody on textually complex sentences. We design and share a specialized dataset to measure these emergent abilities for text-to-speech. We showcase state-of-the-art naturalness of BASE TTS by evaluating against baselines that include publicly available large-scale text-to-speech systems: YourTTS, Bark and TortoiseTTS. Audio samples generated by the model can be heard at https://amazon-ltts-paper.com/.
|
[
"['Mateusz Łajszczak' 'Guillermo Cámbara' 'Yang Li' 'Fatih Beyhan'\n 'Arent van Korlaar' 'Fan Yang' 'Arnaud Joly' 'Álvaro Martín-Cortinas'\n 'Ammar Abbas' 'Adam Michalski' 'Alexis Moinet' 'Sri Karlapati'\n 'Ewa Muszyńska' 'Haohan Guo' 'Bartosz Putrycz' 'Soledad López Gambino'\n 'Kayeon Yoo' 'Elena Sokolova' 'Thomas Drugman']"
] |