categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2406.05986
null
null
http://arxiv.org/pdf/2406.05986v1
2024-06-10T03:00:28Z
2024-06-10T03:00:28Z
Neural-g: A Deep Learning Framework for Mixing Density Estimation
Mixing (or prior) density estimation is an important problem in machine learning and statistics, especially in empirical Bayes $g$-modeling where accurately estimating the prior is necessary for making good posterior inferences. In this paper, we propose neural-$g$, a new neural network-based estimator for $g$-modeling. Neural-$g$ uses a softmax output layer to ensure that the estimated prior is a valid probability density. Under default hyperparameters, we show that neural-$g$ is very flexible and capable of capturing many unknown densities, including those with flat regions, heavy tails, and/or discontinuities. In contrast, existing methods struggle to capture all of these prior shapes. We provide justification for neural-$g$ by establishing a new universal approximation theorem regarding the capability of neural networks to learn arbitrary probability mass functions. To accelerate convergence of our numerical implementation, we utilize a weighted average gradient descent approach to update the network parameters. Finally, we extend neural-$g$ to multivariate prior density estimation. We illustrate the efficacy of our approach through simulations and analyses of real datasets. A software package to implement neural-$g$ is publicly available at https://github.com/shijiew97/neuralG.
[ "['Shijie Wang' 'Saptarshi Chakraborty' 'Qian Qin' 'Ray Bai']" ]
null
null
2406.05993
null
null
http://arxiv.org/pdf/2406.05993v1
2024-06-10T03:25:49Z
2024-06-10T03:25:49Z
Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning
Recent studies on online reinforcement learning (RL) have demonstrated the advantages of learning multiple behaviors from a single task, as in the case of few-shot adaptation to a new environment. Although this approach is expected to yield similar benefits in offline RL, appropriate methods for learning multiple solutions have not been fully investigated in previous studies. In this study, we therefore address the problem of finding multiple solutions from a single task in offline RL. We propose algorithms that can learn multiple solutions in offline RL and empirically investigate their performance. Our experimental results show that the proposed algorithms learn multiple qualitatively and quantitatively distinct solutions in offline RL.
[ "['Takayuki Osa' 'Tatsuya Harada']" ]
null
null
2406.05995
null
null
http://arxiv.org/pdf/2406.05995v1
2024-06-10T03:29:23Z
2024-06-10T03:29:23Z
A Dual-View Approach to Classifying Radiology Reports by Co-Training
Radiology report analysis provides valuable information that can aid with public health initiatives, and has been attracting increasing attention from the research community. In this work, we present a novel insight that the structure of a radiology report (namely, the Findings and Impression sections) offers different views of a radiology scan. Based on this intuition, we further propose a co-training approach, where two machine learning models are built upon the Findings and Impression sections, respectively, and use each other's information to boost performance with massive unlabeled data in a semi-supervised manner. We conducted experiments in a public health surveillance study, and results show that our co-training approach is able to improve performance using the dual views and surpass competing supervised and semi-supervised methods.
[ "['Yutong Han' 'Yan Yuan' 'Lili Mou']" ]
null
null
2406.05999
null
null
http://arxiv.org/abs/2406.05999v1
2024-06-10T03:38:35Z
2024-06-10T03:38:35Z
fSEAD: a Composable FPGA-based Streaming Ensemble Anomaly Detection Library
Machine learning ensembles combine multiple base models to produce a more accurate output. They can be applied to a range of machine learning problems, including anomaly detection. In this paper, we investigate how to maximize the composability and scalability of an FPGA-based streaming ensemble anomaly detector (fSEAD). To achieve this, we propose a flexible computing architecture consisting of multiple partially reconfigurable regions, pblocks, which each implement anomaly detectors. Our proof-of-concept design supports three state-of-the-art anomaly detection algorithms: Loda, RS-Hash and xStream. Each algorithm is scalable, meaning multiple instances can be placed within a pblock to improve performance. Moreover, fSEAD is implemented using High-level synthesis (HLS), meaning further custom anomaly detectors can be supported. Pblocks are interconnected via an AXI-switch, enabling them to be composed in an arbitrary fashion before combining and merging results at run-time to create an ensemble that maximizes the use of FPGA resources and accuracy. Through utilizing reconfigurable Dynamic Function eXchange (DFX), the detector can be modified at run-time to adapt to changing environmental conditions. We compare fSEAD to an equivalent central processing unit (CPU) implementation using four standard datasets, with speed-ups ranging from $3\times$ to $8\times$.
[ "['Binglei Lou' 'David Boland' 'Philip H. W. Leong']" ]
null
null
2406.06002
null
null
http://arxiv.org/pdf/2406.06002v1
2024-06-10T03:51:38Z
2024-06-10T03:51:38Z
Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition
Recently, a tensor-on-tensor (ToT) regression model has been proposed to generalize tensor recovery, encompassing scenarios like scalar-on-tensor regression and tensor-on-vector regression. However, the exponential growth in tensor complexity poses challenges for storage and computation in ToT regression. To overcome this hurdle, tensor decompositions have been introduced, with the tensor train (TT)-based ToT model proving efficient in practice due to reduced memory requirements, enhanced computational efficiency, and decreased sampling complexity. Despite these practical benefits, a disparity exists between theoretical analysis and real-world performance. In this paper, we delve into the theoretical and algorithmic aspects of the TT-based ToT regression model. Assuming the regression operator satisfies the restricted isometry property (RIP), we conduct an error analysis for the solution to a constrained least-squares optimization problem. This analysis includes an upper error bound and a minimax lower bound, revealing that such error bounds depend polynomially on the order $N+M$. To efficiently find solutions meeting such error bounds, we propose two optimization algorithms: the iterative hard thresholding (IHT) algorithm (employing gradient descent with TT-singular value decomposition (TT-SVD)) and the factorization approach using the Riemannian gradient descent (RGD) algorithm. When RIP is satisfied, spectral initialization facilitates proper initialization, and we establish the linear convergence rate of both IHT and RGD.
[ "['Zhen Qin' 'Zhihui Zhu']" ]
null
null
2406.06007
null
null
http://arxiv.org/pdf/2406.06007v1
2024-06-10T04:07:09Z
2024-06-10T04:07:09Z
CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models
Artificial intelligence has significantly impacted medical applications, particularly with the advent of Medical Large Vision Language Models (Med-LVLMs), sparking optimism for the future of automated and personalized healthcare. However, the trustworthiness of Med-LVLMs remains unverified, posing significant risks for future model deployment. In this paper, we introduce CARES and aim to comprehensively evaluate the trustworthiness of Med-LVLMs across the medical domain. We assess the trustworthiness of Med-LVLMs across five dimensions, including trustfulness, fairness, safety, privacy, and robustness. CARES comprises about 41K question-answer pairs in both closed and open-ended formats, covering 16 medical image modalities and 27 anatomical regions. Our analysis reveals that the models consistently exhibit concerns regarding trustworthiness, often displaying factual inaccuracies and failing to maintain fairness across different demographic groups. Furthermore, they are vulnerable to attacks and demonstrate a lack of privacy awareness. We publicly release our benchmark and code at https://github.com/richard-peng-xia/CARES.
[ "['Peng Xia' 'Ze Chen' 'Juanxi Tian' 'Yangrui Gong' 'Ruibo Hou' 'Yue Xu'\n 'Zhenbang Wu' 'Zhiyuan Fan' 'Yiyang Zhou' 'Kangyu Zhu' 'Wenhao Zheng'\n 'Zhaoyang Wang' 'Xiao Wang' 'Xuchao Zhang' 'Chetan Bansal'\n 'Marc Niethammer' 'Junzhou Huang' 'Hongtu Zhu' 'Yun Li' 'Jimeng Sun'\n 'Zongyuan Ge' 'Gang Li' 'James Zou' 'Huaxiu Yao']" ]
null
null
2406.06016
null
null
http://arxiv.org/pdf/2406.06016v1
2024-06-10T04:35:14Z
2024-06-10T04:35:14Z
EpiLearn: A Python Library for Machine Learning in Epidemic Modeling
EpiLearn is a Python toolkit developed for modeling, simulating, and analyzing epidemic data. Although there exist several packages that also deal with epidemic modeling, they are often restricted to mechanistic models or traditional statistical tools. As machine learning continues to shape the world, the gap between these packages and the latest models has become larger. To bridge the gap and inspire innovative research in epidemic modeling, EpiLearn not only provides support for evaluating epidemic models based on machine learning, but also incorporates comprehensive tools for analyzing epidemic data, such as simulation, visualization, transformations, etc. For the convenience of both epidemiologists and data scientists, we provide a unified framework for training and evaluation of epidemic models on two tasks: Forecasting and Source Detection. To facilitate the development of new models, EpiLearn follows a modular design, making it flexible and easy to use. In addition, an interactive web application is also developed to visualize the real-world or simulated epidemic data. Our package is available at https://github.com/Emory-Melody/EpiLearn.
[ "['Zewen Liu' 'Yunxiao Li' 'Mingyang Wei' 'Guancheng Wan' 'Max S. Y. Lau'\n 'Wei Jin']" ]
null
null
2406.06022
null
null
http://arxiv.org/pdf/2406.06022v1
2024-06-10T04:56:16Z
2024-06-10T04:56:16Z
GraphStorm: all-in-one graph machine learning framework for industry applications
Graph machine learning (GML) is effective in many business applications. However, making GML easy to use and applicable to industry applications with massive datasets remains challenging. We developed GraphStorm, which provides an end-to-end solution for scalable graph construction, graph model training and inference. GraphStorm has the following desirable properties: (a) Easy to use: it can perform graph construction and model training and inference with just a single command; (b) Expert-friendly: GraphStorm contains many advanced GML modeling techniques to handle complex graph data and improve model performance; (c) Scalable: every component in GraphStorm can operate on graphs with billions of nodes and can scale model training and inference to different hardware without changing any code. GraphStorm has been used and deployed for over a dozen billion-scale industry applications after its release in May 2023. It is open-sourced on GitHub: https://github.com/awslabs/graphstorm.
[ "['Da Zheng' 'Xiang Song' 'Qi Zhu' 'Jian Zhang' 'Theodore Vasiloudis'\n 'Runjie Ma' 'Houyu Zhang' 'Zichen Wang' 'Soji Adeshina' 'Israt Nisa'\n 'Alejandro Mottini' 'Qingjun Cui' 'Huzefa Rangwala' 'Belinda Zeng'\n 'Christos Faloutsos' 'George Karypis']" ]
null
null
2406.06025
null
null
http://arxiv.org/pdf/2406.06025v1
2024-06-10T05:15:30Z
2024-06-10T05:15:30Z
RepoQA: Evaluating Long Context Code Understanding
Recent advances have been improving the context windows of Large Language Models (LLMs). To quantify the real long-context capabilities of LLMs, evaluators such as the popular Needle in a Haystack have been developed to test LLMs over a large chunk of raw texts. While effective, current evaluations overlook how LLMs work with long-context code, i.e., repositories. To this end, we introduce the RepoQA benchmark to evaluate LLMs on long-context code understanding. Traditional needle testers ask LLMs to directly retrieve the answer from the context without necessarily requiring deep understanding. In RepoQA, we built our initial task, namely Searching Needle Function (SNF), which requires LLMs to search for functions given their natural-language descriptions, i.e., LLMs cannot find the desired function if they cannot understand the description and code. RepoQA is multilingual and comprehensive: it includes 500 code search tasks gathered from 50 popular repositories across 5 modern programming languages. By evaluating 26 general and code-specific LLMs on RepoQA, we show (i) there is still a small gap between the best open and proprietary models; (ii) different models are good at different languages; and (iii) models may understand code better without comments.
[ "['Jiawei Liu' 'Jia Le Tian' 'Vijay Daita' 'Yuxiang Wei' 'Yifeng Ding'\n 'Yuhan Katherine Wang' 'Jun Yang' 'Lingming Zhang']" ]
null
null
2406.06037
null
null
http://arxiv.org/pdf/2406.06037v1
2024-06-10T06:06:38Z
2024-06-10T06:06:38Z
Investigating Pre-Training Objectives for Generalization in Vision-Based Reinforcement Learning
Recently, various pre-training methods have been introduced in vision-based Reinforcement Learning (RL). However, their generalization ability remains unclear due to evaluations being limited to in-distribution environments and non-unified experimental setups. To address this, we introduce the Atari Pre-training Benchmark (Atari-PB), which pre-trains a ResNet-50 model on 10 million transitions from 50 Atari games and evaluates it across diverse environment distributions. Our experiments show that pre-training objectives focused on learning task-agnostic features (e.g., identifying objects and understanding temporal dynamics) enhance generalization across different environments. In contrast, objectives focused on learning task-specific knowledge (e.g., identifying agents and fitting reward functions) improve performance in environments similar to the pre-training dataset but not in varied ones. We release our code, datasets, and model checkpoints at https://github.com/dojeon-ai/Atari-PB.
[ "['Donghu Kim' 'Hojoon Lee' 'Kyungmin Lee' 'Dongyoon Hwang' 'Jaegul Choo']" ]
null
null
2406.06041
null
null
http://arxiv.org/pdf/2406.06041v1
2024-06-10T06:19:33Z
2024-06-10T06:19:33Z
Risk Sensitivity in Markov Games and Multi-Agent Reinforcement Learning: A Systematic Review
Markov games (MGs) and multi-agent reinforcement learning (MARL) are studied to model decision making in multi-agent systems. Traditionally, the objective in MG and MARL has been risk-neutral, i.e., agents are assumed to optimize a performance metric such as expected return, without taking into account subjective or cognitive preferences of themselves or of other agents. However, ignoring such preferences leads to inaccurate models of decision making in many real-world scenarios in finance, operations research, and behavioral economics. Therefore, when these preferences are present, it is necessary to incorporate a suitable measure of risk into the optimization objective of agents, which opens the door to risk-sensitive MG and MARL. In this paper, we systematically review the literature on risk sensitivity in MG and MARL that has been growing in recent years alongside other areas of reinforcement learning and game theory. We define and mathematically describe the different risk measures used in MG and MARL and, for each measure, discuss the articles that incorporate it. Finally, we identify recent trends in theoretical and applied works in the field and discuss possible directions of future research.
[ "['Hafez Ghaemi' 'Shirin Jamshidi' 'Mohammad Mashreghi'\n 'Majid Nili Ahmadabadi' 'Hamed Kebriaei']" ]
null
null
2406.06046
null
null
http://arxiv.org/pdf/2406.06046v1
2024-06-10T06:27:42Z
2024-06-10T06:27:42Z
MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models
Pretraining data selection has the potential to improve language model pretraining efficiency by utilizing higher-quality data from massive web data corpora. Current data selection methods, which rely on either hand-crafted rules or larger reference models, are conducted statically and do not capture the evolving data preferences during pretraining. In this paper, we introduce model-aware data selection with data influence models (MATES), where a data influence model continuously adapts to the evolving data preferences of the pretraining model and then selects the data most effective for the current pretraining progress. Specifically, we fine-tune a small data influence model to approximate oracle data preference signals collected by locally probing the pretraining model and to select data accordingly for the next pretraining stage. Experiments on Pythia and the C4 dataset demonstrate that MATES significantly outperforms random data selection on extensive downstream tasks in both zero- and few-shot settings. It doubles the gains achieved by recent data selection approaches that leverage larger reference models and reduces the total FLOPs required to reach certain performances by half. Further analysis validates the ever-changing data preferences of pretraining models and the effectiveness of our data influence models to capture them. Our code is open-sourced at https://github.com/cxcscmu/MATES.
[ "['Zichun Yu' 'Spandan Das' 'Chenyan Xiong']" ]
null
null
2406.06051
null
null
http://arxiv.org/pdf/2406.06051v1
2024-06-10T06:39:37Z
2024-06-10T06:39:37Z
On the Utility of Accounting for Human Beliefs about AI Behavior in Human-AI Collaboration
To enable effective human-AI collaboration, merely optimizing AI performance while ignoring humans is not sufficient. Recent research has demonstrated that designing AI agents to account for human behavior leads to improved performance in human-AI collaboration. However, a limitation of most existing approaches is their assumption that human behavior is static, irrespective of AI behavior. In reality, humans may adjust their action plans based on their observations of AI behavior. In this paper, we address this limitation by enabling a collaborative AI agent to consider the beliefs of its human partner, i.e., what the human partner thinks the AI agent is doing, and design its action plan to facilitate easier collaboration with its human partner. Specifically, we developed a model of human beliefs that accounts for how humans reason about the behavior of their AI partners. Based on this belief model, we then developed an AI agent that considers both human behavior and human beliefs in devising its strategy for working with humans. Through extensive real-world human-subject experiments, we demonstrated that our belief model more accurately predicts humans' beliefs about AI behavior. Moreover, we showed that our design of AI agents that accounts for human beliefs enhances performance in human-AI collaboration.
[ "['Guanghui Yu' 'Robert Kasumba' 'Chien-Ju Ho' 'William Yeoh']" ]
null
null
2406.06060
null
null
http://arxiv.org/pdf/2406.06060v1
2024-06-10T07:14:56Z
2024-06-10T07:14:56Z
Learning Physical Simulation with Message Passing Transformer
Machine learning methods for physical simulation have achieved significant success in recent years. We propose a new universal architecture based on Graph Neural Networks, the Message Passing Transformer, which incorporates a Message Passing framework, employs an Encoder-Processor-Decoder structure, and applies Graph Fourier Loss as the loss function for model optimization. To take advantage of past message passing state information, we propose Hadamard-Product Attention to update the node attributes in the Processor. Hadamard-Product Attention is a variant of Dot-Product Attention that focuses on finer-grained semantics, assigning attention weights over each feature dimension rather than over each position in the sequence relative to others. We further introduce Graph Fourier Loss (GFL) to balance high-energy and low-energy components. To improve time performance, we precompute the graph's Laplacian eigenvectors before the training process. Our architecture achieves significant accuracy improvements in long-term rollouts for both Lagrangian and Eulerian dynamical systems over current methods.
[ "['Zeyi Xu' 'Yifei Li']" ]
null
null
2406.06072
null
null
http://arxiv.org/pdf/2406.06072v1
2024-06-10T07:36:24Z
2024-06-10T07:36:24Z
Adapting Pretrained ViTs with Convolution Injector for Visuo-Motor Control
Vision Transformers (ViT), when paired with large-scale pretraining, have shown remarkable performance across various computer vision tasks, primarily due to their weak inductive bias. However, while such weak inductive bias aids in pretraining scalability, this may hinder the effective adaptation of ViTs for visuo-motor control tasks as a result of the absence of control-centric inductive biases. Such absent inductive biases include spatial locality and translation equivariance bias which convolutions naturally offer. To this end, we introduce Convolution Injector (CoIn), an add-on module that injects convolutions which are rich in locality and equivariance biases into a pretrained ViT for effective adaptation in visuo-motor control. We evaluate CoIn with three distinct types of pretrained ViTs (CLIP, MVP, VC-1) across 12 varied control tasks within three separate domains (Adroit, MetaWorld, DMC), and demonstrate that CoIn consistently enhances control task performance across all experimented environments and models, validating the effectiveness of providing pretrained ViTs with control-centric biases.
[ "['Dongyoon Hwang' 'Byungkun Lee' 'Hojoon Lee' 'Hyunseung Kim'\n 'Jaegul Choo']" ]
null
null
2406.06081
null
null
http://arxiv.org/abs/2406.06081v2
2024-06-17T07:35:12Z
2024-06-10T07:54:56Z
An Open and Large-Scale Dataset for Multi-Modal Climate Change-aware Crop Yield Predictions
Precise crop yield predictions are of national importance for ensuring food security and sustainable agricultural practices. While AI-for-science approaches have exhibited promising achievements in solving many scientific problems such as drug discovery, precipitation nowcasting, etc., the development of deep learning models for predicting crop yields is constantly hindered by the lack of an open and large-scale deep learning-ready dataset with multiple modalities to accommodate sufficient information. To remedy this, we introduce the CropNet dataset, the first terabyte-sized, publicly available, and multi-modal dataset specifically targeting climate change-aware crop yield predictions for the contiguous United States (U.S.) at the county level. Our CropNet dataset is composed of three modalities of data, i.e., Sentinel-2 Imagery, WRF-HRRR Computed Dataset, and USDA Crop Dataset, for over 2200 U.S. counties spanning 6 years (2017-2022). It is expected to facilitate researchers in developing versatile deep learning models for timely and precise crop yield prediction at the county level, by accounting for the effects of both short-term growing season weather variations and long-term climate change on crop yields. In addition, we develop the CropNet package, offering three types of APIs, to facilitate researchers in downloading the CropNet data on the fly over the time and region of interest, and flexibly building their deep learning models for accurate crop yield predictions. Extensive experiments have been conducted on our CropNet dataset via employing various types of deep learning solutions, with the results validating the general applicability and the efficacy of the CropNet dataset in climate change-aware crop yield predictions.
[ "['Fudong Lin' 'Kaleb Guillot' 'Summer Crawford' 'Yihe Zhang' 'Xu Yuan'\n 'Nian-Feng Tzeng']" ]
null
null
2406.06099
null
null
http://arxiv.org/pdf/2406.06099v1
2024-06-10T08:34:13Z
2024-06-10T08:34:13Z
Sequential Binary Classification for Intrusion Detection in Software Defined Networks
Software-Defined Networks (SDN) are the standard architecture for network deployment. Intrusion Detection Systems (IDS) are a pivotal part of this technology as networks become more vulnerable to new and sophisticated attacks. Machine Learning (ML)-based IDS are increasingly seen as the most effective approach to handle this issue. However, IDS datasets suffer from high class imbalance, which impacts the performance of standard ML models. We propose Sequential Binary Classification (SBC) - an algorithm for multi-class classification to address this issue. SBC is a hierarchical cascade of base classifiers, each of which can be modelled on any general binary classifier. Extensive experiments are reported on benchmark datasets that evaluate the performance of SBC under different scenarios.
[ "['Ishan Chokshi' 'Shrihari Vasudevan' 'Nachiappan Sundaram'\n 'Raaghul Ranganathan']" ]
null
null
2406.06101
null
null
http://arxiv.org/pdf/2406.06101v1
2024-06-10T08:35:01Z
2024-06-10T08:35:01Z
On the Consistency of Kernel Methods with Dependent Observations
The consistency of a learning method is usually established under the assumption that the observations are a realization of an independent and identically distributed (i.i.d.) or mixing process. Yet, kernel methods such as support vector machines (SVMs), Gaussian processes, or conditional kernel mean embeddings (CKMEs) all give excellent performance under sampling schemes that are obviously non-i.i.d., such as when data comes from a dynamical system. We propose the new notion of empirical weak convergence (EWC) as a general assumption explaining such phenomena for kernel methods. It assumes the existence of a random asymptotic data distribution and is a strict weakening of previous assumptions in the field. Our main results then establish consistency of SVMs, kernel mean embeddings, and general Hilbert-space valued empirical expectations with EWC data. Our analysis holds for both finite- and infinite-dimensional outputs, as we extend classical results of statistical learning to the latter case. In particular, it is also applicable to CKMEs. Overall, our results open new classes of processes to statistical learning and can serve as a foundation for a theory of learning beyond i.i.d. and mixing.
[ "['Pierre-François Massiani' 'Sebastian Trimpe' 'Friedrich Solowjow']" ]
null
null
2406.06106
null
null
http://arxiv.org/pdf/2406.06106v1
2024-06-10T08:42:48Z
2024-06-10T08:42:48Z
Testably Learning Polynomial Threshold Functions
Rubinfeld & Vasilyan recently introduced the framework of testable learning as an extension of the classical agnostic model. It relaxes distributional assumptions which are difficult to verify by conditions that can be checked efficiently by a tester. The tester has to accept whenever the data truly satisfies the original assumptions, and the learner has to succeed whenever the tester accepts. We focus on the setting where the tester has to accept standard Gaussian data. There, it is known that basic concept classes such as halfspaces can be learned testably with the same time complexity as in the (distribution-specific) agnostic model. In this work, we ask whether there is a price to pay for testably learning more complex concept classes. In particular, we consider polynomial threshold functions (PTFs), which naturally generalize halfspaces. We show that PTFs of arbitrary constant degree can be testably learned up to excess error $varepsilon > 0$ in time $n^{mathrm{poly}(1/varepsilon)}$. This qualitatively matches the best known guarantees in the agnostic model. Our results build on a connection between testable learning and fooling. In particular, we show that distributions that approximately match at least $mathrm{poly}(1/varepsilon)$ moments of the standard Gaussian fool constant-degree PTFs (up to error $varepsilon$). As a secondary result, we prove that a direct approach to show testable learning (without fooling), which was successfully used for halfspaces, cannot work for PTFs.
[ "['Lucas Slot' 'Stefan Tiegel' 'Manuel Wiedmer']" ]
null
null
2406.06119
null
null
http://arxiv.org/pdf/2406.06119v1
2024-06-10T09:11:30Z
2024-06-10T09:11:30Z
A Survey on Incomplete Multi-label Learning: Recent Advances and Future Trends
In reality, data often exhibit associations with multiple labels, making multi-label learning (MLL) a prominent research topic. The last two decades have witnessed the success of MLL, a success that is inseparable from complete and accurate supervised information. However, obtaining such information in practice is always laborious and sometimes even impossible. To circumvent this dilemma, incomplete multi-label learning (InMLL) has emerged, aiming to learn from incompletely labeled data. To date, numerous InMLL works have been proposed to narrow the performance gap with complete MLL, whereas a systematic review of InMLL is still absent. In this paper, we not only attempt to fill this lacuna but also strive to pave the way for innovative research. Specifically, we revisit the origin of InMLL, analyze its challenges, and propose a taxonomy of InMLL from data-oriented and algorithm-oriented perspectives. In addition, we present real applications of InMLL in various domains. More importantly, we highlight several potential future trends, including four open problems that are more in line with practice and three under-explored/unexplored techniques for addressing the challenges of InMLL, which may shed new light on developing novel research directions in the field of InMLL.
[ "['Xiang Li' 'Jiexi Liu' 'Xinrui Wang' 'Songcan Chen']" ]
null
null
2406.06134
null
null
http://arxiv.org/pdf/2406.06134v1
2024-06-10T09:45:38Z
2024-06-10T09:45:38Z
DiffInject: Revisiting Debias via Synthetic Data Generation using Diffusion-based Style Injection
Dataset bias is a significant challenge in machine learning, where specific attributes, such as the texture or color of images, are unintentionally learned, resulting in degraded performance. To address this, previous efforts have focused on debiasing models either by developing novel debiasing algorithms or by generating synthetic data to mitigate prevalent dataset biases. However, generative approaches to date have largely relied on bias-specific samples from the dataset, which are typically too scarce. In this work, we propose DiffInject, a straightforward yet powerful method to augment synthetic bias-conflict samples using a pretrained diffusion model. This approach significantly advances the use of diffusion models for debiasing purposes by manipulating the latent space. Our framework does not require any explicit knowledge of the bias types or labelling, making it a fully unsupervised setting for debiasing. Our methodology demonstrates substantial results in effectively reducing dataset bias.
[ "['Donggeun Ko' 'Sangwoo Jo' 'Dongjun Lee' 'Namjun Park' 'Jaekwang Kim']" ]
null
null
2406.06136
null
null
http://arxiv.org/pdf/2406.06136v1
2024-06-10T09:48:13Z
2024-06-10T09:48:13Z
A Comparative Survey of Vision Transformers for Feature Extraction in Texture Analysis
Texture, a significant visual attribute in images, has been extensively investigated across various image recognition applications. Convolutional Neural Networks (CNNs), which have been successful in many computer vision tasks, are currently among the best texture analysis approaches. On the other hand, Vision Transformers (ViTs) have been surpassing the performance of CNNs on tasks such as object recognition, causing a paradigm shift in the field. However, ViTs have so far not been scrutinized for texture recognition, hindering a proper appreciation of their potential in this specific setting. For this reason, this work explores various pre-trained ViT architectures when transferred to tasks that rely on textures. We review 21 different ViT variants and perform an extensive evaluation and comparison with CNNs and hand-engineered models on several tasks, such as assessing robustness to changes in texture rotation, scale, and illumination, and distinguishing color textures, material textures, and texture attributes. The goal is to understand the potential and differences among these models when directly applied to texture recognition, using pre-trained ViTs primarily for feature extraction and employing linear classifiers for evaluation. We also evaluate their efficiency, which is one of the main drawbacks in contrast to other methods. Our results show that ViTs generally outperform both CNNs and hand-engineered models, especially when using stronger pre-training and tasks involving in-the-wild textures (images from the internet). We highlight the following promising models: ViT-B with DINO pre-training, BeiTv2, and the Swin architecture, as well as the EfficientFormer as a low-cost alternative. In terms of efficiency, although having a higher number of GFLOPs and parameters, ViT-B and BeiT(v2) can achieve a lower feature extraction time on GPUs compared to ResNet50.
[ "['Leonardo Scabini' 'Andre Sacilotti' 'Kallil M. Zielinski'\n 'Lucas C. Ribas' 'Bernard De Baets' 'Odemir M. Bruno']" ]
null
null
2406.06140
null
null
http://arxiv.org/pdf/2406.06140v1
2024-06-10T09:53:54Z
2024-06-10T09:53:54Z
Can I understand what I create? Self-Knowledge Evaluation of Large Language Models
Large language models (LLMs) have achieved remarkable progress in linguistic tasks, necessitating robust evaluation frameworks to understand their capabilities and limitations. Inspired by Feynman's principle of understanding through creation, we introduce a self-knowledge evaluation framework that is easy to implement, evaluating models on their ability to comprehend and respond to self-generated questions. Our findings, based on testing multiple models across diverse tasks, reveal significant gaps in models' self-knowledge ability. Further analysis indicates these gaps may be due to misalignment with human attention mechanisms. Additionally, fine-tuning on self-generated math tasks may enhance the model's math performance, highlighting the potential of the framework for efficient and insightful model evaluation, and suggesting that it may also contribute to the improvement of LLMs.
[ "['Zhiquan Tan' 'Lai Wei' 'Jindong Wang' 'Xing Xie' 'Weiran Huang']" ]
null
null
2406.06149
null
null
http://arxiv.org/pdf/2406.06149v1
2024-06-10T10:15:32Z
2024-06-10T10:15:32Z
Decoupled Marked Temporal Point Process using Neural Ordinary Differential Equations
A Marked Temporal Point Process (MTPP) is a stochastic process whose realization is a set of event-time data. MTPPs are often used to understand complex dynamics of asynchronous temporal events such as money transactions, social media activity, and healthcare. Recent studies have utilized deep neural networks to capture complex temporal dependencies of events and generate embeddings that aptly represent the observed events. While most previous studies focus on the inter-event dependencies and their representations, how individual events influence the overall dynamics over time has been under-explored. In this context, we propose a Decoupled MTPP framework that disentangles characterization of a stochastic process into a set of evolving influences from different events. Our approach employs Neural Ordinary Differential Equations (Neural ODEs) to learn flexible continuous dynamics of these influences while simultaneously addressing multiple inference problems, such as density estimation and survival rate computation. We emphasize the significance of disentangling the influences by comparing our framework with state-of-the-art methods on real-life datasets, and provide analysis on the model behavior for potential applications.
[ "['Yujee Song' 'Donghyun Lee' 'Rui Meng' 'Won Hwa Kim']" ]
null
null
2406.06150
null
null
http://arxiv.org/pdf/2406.06150v1
2024-06-10T10:17:06Z
2024-06-10T10:17:06Z
Physics-Informed Bayesian Optimization of Variational Quantum Circuits
In this paper, we propose a novel and powerful method to harness Bayesian optimization for Variational Quantum Eigensolvers (VQEs) -- a hybrid quantum-classical protocol used to approximate the ground state of a quantum Hamiltonian. Specifically, we derive a VQE-kernel which incorporates important prior information about quantum circuits: the kernel feature map of the VQE-kernel exactly matches the known functional form of the VQE's objective function and thereby significantly reduces the posterior uncertainty. Moreover, we propose a novel acquisition function for Bayesian optimization called Expected Maximum Improvement over Confident Regions (EMICoRe) which can actively exploit the inductive bias of the VQE-kernel by treating regions with low predictive uncertainty as indirectly "observed". As a result, observations at as few as three points in the search domain are sufficient to determine the complete objective function along an entire one-dimensional subspace of the optimization landscape. Our numerical experiments demonstrate that our approach improves over state-of-the-art baselines.
[ "['Kim A. Nicoli' 'Christopher J. Anders' 'Lena Funcke' 'Tobias Hartung'\n 'Karl Jansen' 'Stefan Kühn' 'Klaus-Robert Müller' 'Paolo Stornati'\n 'Pan Kessel' 'Shinichi Nakajima']" ]
null
null
2406.06158
null
null
http://arxiv.org/pdf/2406.06158v1
2024-06-10T10:42:37Z
2024-06-10T10:42:37Z
Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning
While the impressive performance of modern neural networks is often attributed to their capacity to efficiently extract task-relevant features from data, the mechanisms underlying this rich feature learning regime remain elusive, with much of our theoretical understanding stemming from the opposing lazy regime. In this work, we derive exact solutions to a minimal model that transitions between lazy and rich learning, precisely elucidating how unbalanced layer-specific initialization variances and learning rates determine the degree of feature learning. Our analysis reveals that they conspire to influence the learning regime through a set of conserved quantities that constrain and modify the geometry of learning trajectories in parameter and function space. We extend our analysis to more complex linear models with multiple neurons, outputs, and layers and to shallow nonlinear networks with piecewise linear activation functions. In linear networks, rapid feature learning only occurs with balanced initializations, where all layers learn at similar speeds. In nonlinear networks, by contrast, unbalanced initializations that promote faster learning in earlier layers can accelerate rich learning. Through a series of experiments, we provide evidence that this unbalanced rich regime drives feature learning in deep finite-width networks, promotes interpretability of early layers in CNNs, reduces the sample complexity of learning hierarchical data, and decreases the time to grokking in modular arithmetic. Our theory motivates further exploration of unbalanced initializations to enhance efficient feature learning.
[ "['Daniel Kunin' 'Allan Raventós' 'Clémentine Dominé' 'Feng Chen'\n 'David Klindt' 'Andrew Saxe' 'Surya Ganguli']" ]
null
null
2406.06165
null
null
http://arxiv.org/pdf/2406.06165v1
2024-06-10T11:00:26Z
2024-06-10T11:00:26Z
Generalized Nested Latent Variable Models for Lossy Coding applied to Wind Turbine Scenarios
Rate-distortion optimization through neural networks has accomplished competitive results in compression efficiency and image quality. This learning-based approach seeks to minimize the compromise between compression rate and reconstructed image quality by automatically extracting and retaining crucial information, while discarding less critical details. A successful technique consists in introducing a deep hyperprior that operates within a 2-level nested latent variable model, enhancing compression by capturing complex data dependencies. This paper extends this concept by designing a generalized L-level nested generative model with a Markov chain structure. We demonstrate that, as L increases, a trainable prior becomes detrimental, and we explore a common dimensionality along the distinct latent variables to boost compression performance. As this structured framework can represent autoregressive coders, we outperform the hyperprior model and achieve state-of-the-art performance while substantially reducing the computational cost. Our experimental evaluation is performed on wind turbine scenarios to study its application to visual inspections.
[ "['Raül Pérez-Gonzalo' 'Andreas Espersen' 'Antonio Agudo']" ]
null
null
2406.06184
null
null
http://arxiv.org/pdf/2406.06184v1
2024-06-10T11:28:25Z
2024-06-10T11:28:25Z
Deep Multi-Objective Reinforcement Learning for Utility-Based Infrastructural Maintenance Optimization
In this paper, we introduce Multi-Objective Deep Centralized Multi-Agent Actor-Critic (MO-DCMAC), a multi-objective reinforcement learning (MORL) method for infrastructural maintenance optimization, an area traditionally dominated by single-objective reinforcement learning (RL) approaches. Previous single-objective RL methods combine multiple objectives, such as probability of collapse and cost, into a singular reward signal through reward-shaping. In contrast, MO-DCMAC can optimize a policy for multiple objectives directly, even when the utility function is non-linear. We evaluated MO-DCMAC using two utility functions, which take the probability of collapse and cost as input. The first utility function is the Threshold utility, in which MO-DCMAC should minimize cost so that the probability of collapse is never above the threshold. The second is based on the Failure Mode, Effects, and Criticality Analysis (FMECA) methodology used by asset managers to assess maintenance plans. We evaluated MO-DCMAC, with both utility functions, in multiple maintenance environments, including ones based on a case study of the historical quay walls of Amsterdam. The performance of MO-DCMAC was compared against multiple rule-based policies based on heuristics currently used for constructing maintenance plans. Our results demonstrate that MO-DCMAC outperforms traditional rule-based policies across various environments and utility functions.
[ "['Jesse van Remmerden' 'Maurice Kenter' 'Diederik M. Roijers'\n 'Charalampos Andriotis' 'Yingqian Zhang' 'Zaharah Bukhsh']" ]
null
null
2406.06185
null
null
http://arxiv.org/pdf/2406.06185v2
2024-06-11T21:18:14Z
2024-06-10T11:28:29Z
EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation
We release the EARS (Expressive Anechoic Recordings of Speech) dataset, a high-quality speech dataset comprising 107 speakers from diverse backgrounds, totaling 100 hours of clean, anechoic speech data. The dataset covers a large range of different speaking styles, including emotional speech, different reading styles, non-verbal sounds, and conversational freeform speech. We benchmark various methods for speech enhancement and dereverberation on the dataset and evaluate their performance through a set of instrumental metrics. In addition, we conduct a listening test with 20 participants for the speech enhancement task, where a generative method is preferred. We introduce a blind test set that allows for automatic online evaluation of uploaded data. Dataset download links and the automatic evaluation server can be found online.
[ "['Julius Richter' 'Yi-Chiao Wu' 'Steven Krenn' 'Simon Welker'\n 'Bunlong Lay' 'Shinji Watanabe' 'Alexander Richard' 'Timo Gerkmann']" ]
null
null
2406.06202
null
null
http://arxiv.org/pdf/2406.06202v1
2024-06-10T11:58:11Z
2024-06-10T11:58:11Z
Federated learning in food research
Research in the food domain is at times limited due to data sharing obstacles, such as data ownership, privacy requirements, and regulations. While important, these obstacles can restrict data-driven methods such as machine learning. Federated learning, the approach of training models on locally kept data and only sharing the learned parameters, is a potential technique to alleviate data sharing obstacles. This systematic review investigates the use of federated learning within the food domain, situates the included papers within a federated learning framework, highlights knowledge gaps, and discusses potential applications. A total of 41 papers were included in the review. The current applications include solutions to water and milk quality assessment, cybersecurity of water processing, pesticide residue risk analysis, weed detection, and fraud detection, focusing on centralized horizontal federated learning. One of the gaps found was the lack of vertical or transfer federated learning and decentralized architectures.
[ "['Zuzanna Fendor' 'Bas H. M. van der Velden' 'Xinxin Wang'\n 'Andrea Jr. Carnoli' 'Osman Mutlu' 'Ali Hürriyetoğlu']" ]
null
null
2406.06207
null
null
http://arxiv.org/pdf/2406.06207v1
2024-06-10T12:14:05Z
2024-06-10T12:14:05Z
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning
Federated Learning (FL) is a collaborative machine learning technique where multiple clients work together with a central server to train a global model without sharing their private data. However, the distribution shift across non-IID datasets of clients poses a challenge to this one-model-fits-all method, hindering the ability of the global model to effectively adapt to each client's unique local data. To address this challenge, personalized FL (PFL) is designed to allow each client to create personalized local models tailored to their private data. While extensive research has scrutinized backdoor risks in FL, it has remained underexplored in PFL applications. In this study, we delve deep into the vulnerabilities of PFL to backdoor attacks. Our analysis showcases a tale of two cities. On the one hand, the personalization process in PFL can dilute the backdoor poisoning effects injected into the personalized local models. Furthermore, PFL systems can also deploy both server-end and client-end defense mechanisms to strengthen the barrier against backdoor attacks. On the other hand, our study shows that PFL fortified with these defense methods may offer a false sense of security. We propose PFedBA, a stealthy and effective backdoor attack strategy applicable to PFL systems. PFedBA ingeniously aligns the backdoor learning task with the main learning task of PFL by optimizing the trigger generation process. Our comprehensive experiments demonstrate the effectiveness of PFedBA in seamlessly embedding triggers into personalized local models. PFedBA yields outstanding attack performance across 10 state-of-the-art PFL algorithms, defeating 6 existing defense mechanisms. Our study sheds light on the subtle yet potent backdoor threats to PFL systems, urging the community to bolster defenses against emerging backdoor challenges.
[ "['Xiaoting Lyu' 'Yufei Han' 'Wei Wang' 'Jingkai Liu' 'Yongsheng Zhu'\n 'Guangquan Xu' 'Jiqiang Liu' 'Xiangliang Zhang']" ]
null
null
2406.06213
null
null
http://arxiv.org/pdf/2406.06213v1
2024-06-10T12:25:13Z
2024-06-10T12:25:13Z
A Statistical Theory of Regularization-Based Continual Learning
We provide a statistical analysis of regularization-based continual learning on a sequence of linear regression tasks, with emphasis on how different regularization terms affect the model performance. We first derive the convergence rate for the oracle estimator obtained as if all data were available simultaneously. Next, we consider a family of generalized $\ell_2$-regularization algorithms indexed by matrix-valued hyperparameters, which includes the minimum norm estimator and continual ridge regression as special cases. As more tasks are introduced, we derive an iterative update formula for the estimation error of generalized $\ell_2$-regularized estimators, from which we determine the hyperparameters resulting in the optimal algorithm. Interestingly, the choice of hyperparameters can effectively balance the trade-off between forward and backward knowledge transfer and adjust for data heterogeneity. Moreover, the estimation error of the optimal algorithm is derived explicitly, which is of the same order as that of the oracle estimator. In contrast, our lower bounds for the minimum norm estimator and continual ridge regression show their suboptimality. A byproduct of our theoretical analysis is the equivalence between early stopping and generalized $\ell_2$-regularization in continual learning, which may be of independent interest. Finally, we conduct experiments to complement our theory.
[ "['Xuyang Zhao' 'Huiyuan Wang' 'Weiran Huang' 'Wei Lin']" ]
null
null
2406.06220
null
null
http://arxiv.org/pdf/2406.06220v1
2024-06-10T12:34:38Z
2024-06-10T12:34:38Z
Label-Looping: Highly Efficient Decoding for Transducers
This paper introduces a highly efficient greedy decoding algorithm for Transducer inference. We propose a novel data structure using CUDA tensors to represent partial hypotheses in a batch that supports parallelized hypothesis manipulations. During decoding, our algorithm maximizes GPU parallelism by adopting a nested-loop design, where the inner loop consumes all blank predictions, while non-blank predictions are handled in the outer loop. Our algorithm is general-purpose and can work with both conventional Transducers and Token-and-Duration Transducers. Experiments show that the label-looping algorithm can bring a speedup of up to 2.0X compared to conventional batched decoding algorithms at batch size 32, and it can be combined with other compiler- or GPU-call-related techniques for further speedups. We will open-source our implementation to benefit the research community.
[ "['Vladimir Bataev' 'Hainan Xu' 'Daniel Galvez' 'Vitaly Lavrukhin'\n 'Boris Ginsburg']" ]
null
null
2406.06225
null
null
http://arxiv.org/pdf/2406.06225v1
2024-06-10T12:47:49Z
2024-06-10T12:47:49Z
Siren -- Advancing Cybersecurity through Deception and Adaptive Analysis
Siren represents a pioneering research effort aimed at fortifying cybersecurity through strategic integration of deception, machine learning, and proactive threat analysis. Drawing inspiration from mythical sirens, this project employs sophisticated methods to lure potential threats into controlled environments. The system features a dynamic machine learning model for real-time analysis and classification, ensuring continuous adaptability to emerging cyber threats. The architectural framework includes a link monitoring proxy, a purpose-built machine learning model for dynamic link analysis, and a honeypot enriched with simulated user interactions to intensify threat engagement. Data protection within the honeypot is fortified with probabilistic encryption. Additionally, the incorporation of simulated user activity extends the system's capacity to capture and learn from potential attackers even after user disengagement. Siren introduces a paradigm shift in cybersecurity, transforming traditional defense mechanisms into proactive systems that actively engage and learn from potential adversaries. The research strives to enhance user protection while yielding valuable insights for ongoing refinement in response to the evolving landscape of cybersecurity threats.
[ "['Girish Kulathumani' 'Samruth Ananthanarayanan' 'Ganesh Narayanan']" ]
null
null
2406.06227
null
null
http://arxiv.org/pdf/2406.06227v1
2024-06-10T12:53:13Z
2024-06-10T12:53:13Z
PAC-Bayes Analysis for Recalibration in Classification
Nonparametric estimation with binning is widely employed in the calibration error evaluation and the recalibration of machine learning models. Recently, theoretical analyses of the bias induced by this estimation approach have been actively pursued; however, the understanding of the generalization of the calibration error to unknown data remains limited. In addition, although many recalibration algorithms have been proposed, their generalization performance lacks theoretical guarantees. To address this problem, we conduct a generalization analysis of the calibration error under the probably approximately correct (PAC) Bayes framework. This approach enables us to derive a first optimizable upper bound for the generalization error in the calibration context. We then propose a generalization-aware recalibration algorithm based on our generalization theory. Numerical experiments show that our algorithm improves the Gaussian-process-based recalibration performance on various benchmark datasets and models.
[ "['Masahiro Fujisawa' 'Futoshi Futami']" ]
null
null
2406.06237
null
null
http://arxiv.org/pdf/2406.06237v1
2024-06-10T13:07:13Z
2024-06-10T13:07:13Z
Efficient Neural Compression with Inference-time Decoding
This paper explores the combination of neural network quantization and entropy coding for memory footprint minimization. Edge deployment of quantized models is hampered by the harsh Pareto frontier of the accuracy-to-bitwidth tradeoff, causing dramatic accuracy loss below a certain bitwidth. This accuracy loss can be alleviated thanks to mixed precision quantization, allowing for more flexible bitwidth allocation. However, standard mixed precision benefits remain limited due to the 1-bit frontier, which forces each parameter to be encoded on at least 1 bit of data. This paper introduces an approach that combines mixed precision, zero-point quantization and entropy coding to push the compression boundary of ResNets beyond the 1-bit frontier with an accuracy drop below 1% on the ImageNet benchmark. From an implementation standpoint, a compact decoder architecture features reduced latency, thus allowing for inference-compatible decoding.
[ "['C. Metz' 'O. Bichler' 'A. Dupret']" ]
null
null
2406.06246
null
null
http://arxiv.org/pdf/2406.06246v1
2024-06-10T13:23:00Z
2024-06-10T13:23:00Z
Data-Efficient Learning with Neural Programs
Many computational tasks can be naturally expressed as a composition of a DNN followed by a program written in a traditional programming language or an API call to an LLM. We call such composites "neural programs" and focus on the problem of learning the DNN parameters when the training data consist of end-to-end input-output labels for the composite. When the program is written in a differentiable logic programming language, techniques from neurosymbolic learning are applicable, but in general, the learning for neural programs requires estimating the gradients of black-box components. We present an algorithm for learning neural programs, called ISED, that only relies on input-output samples of black-box components. For evaluation, we introduce new benchmarks that involve calls to modern LLMs such as GPT-4 and also consider benchmarks from the neurosymbolic learning literature. Our evaluation shows that for the latter benchmarks, ISED has comparable performance to state-of-the-art neurosymbolic frameworks. For the former, we use adaptations of prior work on gradient approximations of black-box components as a baseline, and show that ISED achieves comparable accuracy but in a more data- and sample-efficient manner.
[ "['Alaia Solko-Breslin' 'Seewon Choi' 'Ziyang Li' 'Neelay Velingker'\n 'Rajeev Alur' 'Mayur Naik' 'Eric Wong']" ]
null
null
2406.06248
null
null
http://arxiv.org/pdf/2406.06248v1
2024-06-10T13:25:43Z
2024-06-10T13:25:43Z
Compute Better Spent: Replacing Dense Layers with Structured Matrices
Dense linear layers are the dominant computational bottleneck in foundation models. Identifying more efficient alternatives to dense matrices has enormous potential for building more compute-efficient models, as exemplified by the success of convolutional networks in the image domain. In this work, we systematically explore structured matrices as replacements for dense matrices. We show that different structures often require drastically different initialization scales and learning rates, which are crucial to performance, especially as models scale. Using insights from the Maximal Update Parameterization, we determine the optimal scaling for initialization and learning rates of these unconventional layers. Finally, we measure the scaling laws of different structures to compare how quickly their performance improves with compute. We propose a novel matrix family containing Monarch matrices, the Block Tensor-Train (BTT), which we show performs better than dense matrices for the same compute on multiple tasks. On CIFAR-10/100 with augmentation, BTT achieves exponentially lower training loss than dense when training MLPs and ViTs. BTT matches dense ViT-S/32 performance on ImageNet-1k with 3.8 times less compute and is more efficient than dense for training small GPT-2 language models.
[ "['Shikai Qiu' 'Andres Potapczynski' 'Marc Finzi' 'Micah Goldblum'\n 'Andrew Gordon Wilson']" ]
null
null
2406.06282
null
null
http://arxiv.org/pdf/2406.06282v2
2024-06-12T09:39:41Z
2024-06-10T14:01:21Z
PowerInfer-2: Fast Large Language Model Inference on a Smartphone
This paper introduces PowerInfer-2, a framework designed for high-speed inference of Large Language Models (LLMs) on smartphones, particularly effective for models whose sizes exceed the device's memory capacity. The key insight of PowerInfer-2 is to utilize the heterogeneous computation, memory, and I/O resources in smartphones by decomposing traditional matrix computations into fine-grained neuron cluster computations. Specifically, PowerInfer-2 features a polymorphic neuron engine that adapts computational strategies for various stages of LLM inference. Additionally, it introduces segmented neuron caching and fine-grained neuron-cluster-level pipelining, which effectively minimize and conceal the overhead caused by I/O operations. The implementation and evaluation of PowerInfer-2 demonstrate its capability to support a wide array of LLM models on two smartphones, achieving up to a 29.2x speed increase compared with state-of-the-art frameworks. Notably, PowerInfer-2 is the first system to serve the TurboSparse-Mixtral-47B model with a generation rate of 11.68 tokens per second on a smartphone. For models that fit entirely within memory, PowerInfer-2 can achieve approximately a 40% reduction in memory usage while maintaining inference speeds comparable to llama.cpp and MLC-LLM. For more details, including a demonstration video, please visit the project site at www.powerinfer.ai/v2.
[ "['Zhenliang Xue' 'Yixin Song' 'Zeyu Mi' 'Le Chen' 'Yubin Xia' 'Haibo Chen']" ]
null
null
2406.06287
null
null
http://arxiv.org/pdf/2406.06287v2
2024-07-12T06:08:09Z
2024-06-10T14:11:15Z
VS-PINN: A fast and efficient training of physics-informed neural networks using variable-scaling methods for solving PDEs with stiff behavior
Physics-informed neural networks (PINNs) have recently emerged as a promising way to compute the solutions of partial differential equations (PDEs) using deep neural networks. However, despite their significant success in various fields, it remains unclear in many aspects how to effectively train PINNs if the solutions of PDEs exhibit stiff behaviors or high frequencies. In this paper, we propose a new method for training PINNs using variable-scaling techniques. This method is simple and it can be applied to a wide range of problems including PDEs with rapidly-varying solutions. Throughout various numerical experiments, we will demonstrate the effectiveness of the proposed method for these problems and confirm that it can significantly improve the training efficiency and performance of PINNs. Furthermore, based on the analysis of the neural tangent kernel (NTK), we will provide theoretical evidence for this phenomenon and show that our methods can indeed improve the performance of PINNs.
[ "['Seungchan Ko' 'Sang Hyeon Park']" ]
null
null
2406.06290
null
null
http://arxiv.org/pdf/2406.06290v1
2024-06-10T14:12:33Z
2024-06-10T14:12:33Z
Geometric sparsification in recurrent neural networks
A common technique for ameliorating the computational costs of running large neural models is sparsification, or the removal of neural connections during training. Sparse models are capable of maintaining the high accuracy of state-of-the-art models while incurring the reduced costs of more parsimonious models. The structures which underlie sparse architectures are, however, poorly understood and not consistent between differently trained models and sparsification schemes. In this paper, we propose a new technique for sparsification of recurrent neural nets (RNNs), called moduli regularization, in combination with magnitude pruning. Moduli regularization leverages the dynamical system induced by the recurrent structure to induce a geometric relationship between neurons in the hidden state of the RNN. By making our regularizing term explicitly geometric, we provide the first, to our knowledge, a priori description of the desired sparse architecture of our neural net. We verify the effectiveness of our scheme for navigation and natural language processing RNNs. Navigation is a structurally geometric task, for which there are known moduli spaces, and we show that regularization can be used to reach 90% sparsity while maintaining model performance only when coefficients are chosen in accordance with a suitable moduli space. Natural language processing, however, has no known moduli space in which computations are performed. Nevertheless, we show that moduli regularization induces more stable recurrent neural nets with a variety of moduli regularizers, and achieves high-fidelity models at 98% sparsity.
[ "['Wyatt Mackey' 'Ioannis Schizas' 'Jared Deighton' 'David L. Boothe, Jr.'\n 'Vasileios Maroulas']" ]
null
null
2406.06307
null
null
http://arxiv.org/abs/2406.06307v1
2024-06-10T14:23:25Z
2024-06-10T14:23:25Z
Building Continuous Quantum-Classical Bayesian Neural Networks for a Classical Clinical Dataset
In this work, we introduce a Quantum-Classical Bayesian Neural Network (QCBNN) capable of performing uncertainty-aware classification of a classical medical dataset. The model is a symbiosis of a classical convolutional NN that performs ultrasound image processing and a quantum circuit that generates its stochastic weights, within a Bayesian learning framework. To test the utility of this idea for possible future deployment in the medical sector, we track multiple behavioral metrics that capture both predictive performance and model uncertainty. Our ambition is to create a hybrid model that classifies samples in a more uncertainty-aware fashion, which would advance the trustworthiness of these models and thus bring us a step closer to utilizing them in industry. We test multiple setups of the quantum circuit for this task, and our best architectures display a bigger uncertainty gap between correctly and incorrectly identified samples than their classical benchmark, at the expense of a slight drop in predictive performance. The innovation of this paper is two-fold: (1) combining different approaches that allow the stochastic weights from the quantum circuit to be continuous, thus allowing the model to classify an application-driven dataset; (2) studying architectural features of the quantum circuit that make or break these models, paving the way for further investigation of more informed architectural designs.
[ "['Alona Sakhnenko' 'Julian Sikora' 'Jeanette Miriam Lorenz']" ]
null
null
2406.06309
null
null
http://arxiv.org/pdf/2406.06309v1
2024-06-10T14:25:11Z
2024-06-10T14:25:11Z
Is Value Function Estimation with Classification Plug-and-Play for Offline Reinforcement Learning?
In deep Reinforcement Learning (RL), value functions are typically approximated using deep neural networks and trained via mean squared error regression objectives to fit the true value functions. Recent research has proposed an alternative approach, utilizing the cross-entropy classification objective, which has demonstrated improved performance and scalability of RL algorithms. However, existing studies have not extensively benchmarked the effects of this replacement across various domains, as the primary objective was to demonstrate the efficacy of the concept across a broad spectrum of tasks, without delving into in-depth analysis. Our work seeks to empirically investigate the impact of such a replacement in an offline RL setup and analyze the effects of different aspects on performance. Through large-scale experiments conducted across a diverse range of tasks using different algorithms, we aim to gain deeper insights into the implications of this approach. Our results reveal that incorporating this change can lead to superior performance over state-of-the-art solutions for some algorithms in certain tasks, while maintaining comparable performance levels in other tasks; however, for other algorithms this modification might lead to a dramatic performance drop. These findings are crucial for further application of the classification approach in research and practical tasks.
[ "['Denis Tarasov' 'Kirill Brilliantov' 'Dmitrii Kharlapenko']" ]
null
null
2406.06313
null
null
http://arxiv.org/pdf/2406.06313v1
2024-06-10T14:31:38Z
2024-06-10T14:31:38Z
ProAct: Progressive Training for Hybrid Clipped Activation Function to Enhance Resilience of DNNs
Deep Neural Networks (DNNs) are extensively employed in safety-critical applications where ensuring hardware reliability is a primary concern. To enhance the reliability of DNNs against hardware faults, activation restriction techniques significantly mitigate the fault effects at the DNN structure level, irrespective of accelerator architectures. State-of-the-art methods offer either neuron-wise or layer-wise clipping activation functions and attempt to determine optimal clipping thresholds using heuristic and learning-based approaches. Layer-wise clipped activation functions cannot preserve DNN resilience at high bit error rates, while neuron-wise clipping activation functions introduce considerable memory overhead due to the addition of parameters, which increases their vulnerability to faults. Moreover, the heuristic-based optimization approach demands numerous fault injections during the search process, resulting in time-consuming threshold identification, whereas learning-based techniques that train thresholds for entire layers concurrently often yield sub-optimal results. In this work, we first demonstrate that it is not essential to incorporate neuron-wise activation functions throughout all layers in DNNs. Then, we propose a hybrid clipped activation function that integrates neuron-wise and layer-wise methods, applying neuron-wise clipping only in the last layer of DNNs. Additionally, to attain optimal thresholds in the clipping activation function, we introduce ProAct, a progressive training methodology. This approach iteratively trains the thresholds on a layer-by-layer basis, aiming to obtain optimal threshold values in each layer separately.
[ "['Seyedhamidreza Mousavi' 'Mohammad Hasan Ahmadilivani' 'Jaan Raik'\n 'Maksim Jenihhin' 'Masoud Daneshtalab']" ]
null
null
2406.06316
null
null
http://arxiv.org/pdf/2406.06316v1
2024-06-10T14:33:02Z
2024-06-10T14:33:02Z
Tx-LLM: A Large Language Model for Therapeutics
Developing therapeutics is a lengthy and expensive process that requires the satisfaction of many different criteria, and AI models capable of expediting the process would be invaluable. However, the majority of current AI approaches address only a narrowly defined set of tasks, often circumscribed within a particular domain. To bridge this gap, we introduce Tx-LLM, a generalist large language model (LLM) fine-tuned from PaLM-2 which encodes knowledge about diverse therapeutic modalities. Tx-LLM is trained using a collection of 709 datasets that target 66 tasks spanning various stages of the drug discovery pipeline. Using a single set of weights, Tx-LLM simultaneously processes a wide variety of chemical or biological entities (small molecules, proteins, nucleic acids, cell lines, diseases) interleaved with free-text, allowing it to predict a broad range of associated properties, achieving performance competitive with state-of-the-art (SOTA) on 43 out of 66 tasks and exceeding SOTA on 22. Among these, Tx-LLM is particularly powerful and exceeds best-in-class performance on average for tasks combining molecular SMILES representations with text such as cell line names or disease names, likely due to context learned during pretraining. We observe evidence of positive transfer between tasks with diverse drug types (e.g., tasks involving small molecules and tasks involving proteins), and we study the impact of model size, domain finetuning, and prompting strategies on performance. We believe Tx-LLM represents an important step towards LLMs encoding biochemical knowledge and could have a future role as an end-to-end tool across the drug discovery and development pipeline.
[ "['Juan Manuel Zambrano Chaves' 'Eric Wang' 'Tao Tu'\n 'Eeshit Dhaval Vaishnav' 'Byron Lee' 'S. Sara Mahdavi'\n 'Christopher Semturs' 'David Fleet' 'Vivek Natarajan' 'Shekoofeh Azizi']" ]
null
null
2406.06340
null
null
http://arxiv.org/pdf/2406.06340v1
2024-06-10T15:01:03Z
2024-06-10T15:01:03Z
Optimisation of federated learning settings under statistical heterogeneity variations
Federated Learning (FL) enables local devices to collaboratively learn a shared predictive model by only periodically sharing model parameters with a central aggregator. However, FL can be disadvantaged by statistical heterogeneity produced by the diversity in each local device's data distribution, which creates different levels of Independent and Identically Distributed (IID) data. Furthermore, this can become more complex when optimising different combinations of FL parameters and choosing an optimal aggregation method. In this paper, we present an empirical analysis of different FL training parameters and aggregators over various levels of statistical heterogeneity on three datasets. We propose a systematic data partition strategy to simulate different levels of statistical heterogeneity and a metric to measure the level of IID. Additionally, we empirically identify the best FL model and key parameters for datasets of different characteristics. On the basis of these, we present recommended guidelines for FL parameters and aggregators to optimise model performance under different levels of IID and with different datasets.
[ "['Basem Suleiman' 'Muhammad Johan Alibasa' 'Rizka Widyarini Purwanto'\n 'Lewis Jeffries' 'Ali Anaissi' 'Jacky Song']" ]
null
null
2406.06348
null
null
http://arxiv.org/pdf/2406.06348v1
2024-06-10T15:08:14Z
2024-06-10T15:08:14Z
Causal Discovery over High-Dimensional Structured Hypothesis Spaces with Causal Graph Partitioning
The aim in many sciences is to understand the mechanisms that underlie the observed distribution of variables, starting from a set of initial hypotheses. Causal discovery allows us to infer mechanisms as sets of cause and effect relationships in a generalized way -- without necessarily tailoring to a specific domain. Causal discovery algorithms search over a structured hypothesis space, defined by the set of directed acyclic graphs, to find the graph that best explains the data. For high-dimensional problems, however, this search becomes intractable and scalable algorithms for causal discovery are needed to bridge the gap. In this paper, we define a novel causal graph partition that allows for divide-and-conquer causal discovery with theoretical guarantees. We leverage the idea of a superstructure -- a set of learned or existing candidate hypotheses -- to partition the search space. We prove under certain assumptions that learning with a causal graph partition always yields the Markov Equivalence Class of the true causal graph. We show our algorithm achieves comparable accuracy and a faster time to solution for biologically-tuned synthetic networks and networks up to ${10^4}$ variables. This makes our method applicable to gene regulatory network inference and other domains with high-dimensional structured hypothesis spaces.
[ "['Ashka Shah' 'Adela DePavia' 'Nathaniel Hudson' 'Ian Foster'\n 'Rick Stevens']" ]
null
null
2406.06350
null
null
http://arxiv.org/pdf/2406.06350v1
2024-06-10T15:12:53Z
2024-06-10T15:12:53Z
Error Analysis and Numerical Algorithm for PDE Approximation with Hidden-Layer Concatenated Physics Informed Neural Networks
We present the hidden-layer concatenated physics informed neural network (HLConcPINN) method, which combines hidden-layer concatenated feed-forward neural networks, a modified block time marching strategy, and a physics informed approach for approximating partial differential equations (PDEs). We analyze the convergence properties and establish the error bounds of this method for two types of PDEs: parabolic (exemplified by the heat and Burgers' equations) and hyperbolic (exemplified by the wave and nonlinear Klein-Gordon equations). We show that its approximation error of the solution can be effectively controlled by the training loss for dynamic simulations with long time horizons. The HLConcPINN method in principle allows an arbitrary number of hidden layers not smaller than two and any of the commonly-used smooth activation functions for the hidden layers beyond the first two, with theoretical guarantees. This generalizes several recent neural-network techniques, which have theoretical guarantees but are confined to two hidden layers in the network architecture and the $\tanh$ activation function. Our theoretical analyses subsequently inform the formulation of appropriate training loss functions for these PDEs, leading to physics informed neural network (PINN) type computational algorithms that differ from the standard PINN formulation. Ample numerical experiments are presented based on the proposed algorithm to validate the effectiveness of this method and confirm aspects of the theoretical analyses.
[ "['Yianxia Qian' 'Yongchao Zhang' 'Suchuan Dong']" ]
null
null
2406.06351
null
null
http://arxiv.org/pdf/2406.06351v1
2024-06-10T15:13:07Z
2024-06-10T15:13:07Z
Cascading Unknown Detection with Known Classification for Open Set Recognition
Deep learners tend to perform well when trained under the closed set assumption but struggle when deployed under open set conditions. This motivates the field of Open Set Recognition in which we seek to give deep learners the ability to recognize whether a data sample belongs to the known classes trained on or comes from the surrounding infinite world. Existing open set recognition methods typically rely upon a single function for the dual task of distinguishing between knowns and unknowns as well as making distinctions among the known classes. This dual process leaves performance on the table as the function is not specialized for either task. In this work, we introduce Cascading Unknown Detection with Known Classification (Cas-DC), where we instead learn specialized functions in a cascading fashion for both known/unknown detection and fine class classification amongst the world of knowns. Our experiments and analysis demonstrate that Cas-DC handily outperforms modern methods in open set recognition when compared using AUROC scores and correct classification rate at various true positive rates.
[ "['Daniel Brignac' 'Abhijit Mahalanobis']" ]
null
null
2406.06354
null
null
http://arxiv.org/pdf/2406.06354v1
2024-06-10T15:14:33Z
2024-06-10T15:14:33Z
On the Minimal Degree Bias in Generalization on the Unseen for non-Boolean Functions
We investigate the out-of-domain generalization of random feature (RF) models and Transformers. We first prove that in the `generalization on the unseen (GOTU)' setting, where training data is fully seen in some part of the domain but testing is made on another part, and for RF models in the small feature regime, the convergence takes place to interpolators of minimal degree as in the Boolean case (Abbe et al., 2023). We then consider the sparse target regime and explain how this regime relates to the small feature regime, but with a different regularization term that can alter the picture in the non-Boolean case. We show two different outcomes for the sparse regime with q-ary data tokens: (1) if the data is embedded with roots of unities, then a min-degree interpolator is learned like in the Boolean case for RF models, (2) if the data is not embedded as such, e.g., simply as integers, then RF models and Transformers may not learn minimal degree interpolators. This shows that the Boolean setting and its roots of unities generalization are special cases where the minimal degree interpolator offers a rare characterization of how learning takes place. For more general integer and real-valued settings, a more nuanced picture remains to be fully characterized.
[ "['Denys Pushkin' 'Raphaël Berthier' 'Emmanuel Abbe']" ]
null
null
2406.06382
null
null
http://arxiv.org/pdf/2406.06382v1
2024-06-10T15:42:03Z
2024-06-10T15:42:03Z
Diffusion-RPO: Aligning Diffusion Models through Relative Preference Optimization
Aligning large language models with human preferences has emerged as a critical focus in language modeling research. Yet, integrating preference learning into Text-to-Image (T2I) generative models is still relatively uncharted territory. The Diffusion-DPO technique made initial strides by employing pairwise preference learning in diffusion models tailored for specific text prompts. We introduce Diffusion-RPO, a new method designed to align diffusion-based T2I models with human preferences more effectively. This approach leverages both prompt-image pairs with identical prompts and those with semantically related content across various modalities. Furthermore, we have developed a new evaluation metric, style alignment, aimed at overcoming the challenges of high costs, low reproducibility, and limited interpretability prevalent in current evaluations of human preference alignment. Our findings demonstrate that Diffusion-RPO outperforms established methods such as Supervised Fine-Tuning and Diffusion-DPO in tuning Stable Diffusion versions 1.5 and XL-1.0, achieving superior results in both automated evaluations of human preferences and style alignment. Our code is available at https://github.com/yigu1008/Diffusion-RPO
[ "['Yi Gu' 'Zhendong Wang' 'Yueqin Yin' 'Yujia Xie' 'Mingyuan Zhou']" ]
null
null
2406.06385
null
null
http://arxiv.org/pdf/2406.06385v2
2024-06-20T15:18:50Z
2024-06-10T15:44:22Z
Low-Rank Quantization-Aware Training for LLMs
Large language models (LLMs) are omnipresent; however, their practical deployment is challenging due to their ever-increasing computational and memory demands. Quantization is one of the most effective ways to make them more compute- and memory-efficient. Quantization-aware training (QAT) methods generally produce the best quantized performance, but this comes at the cost of potentially long training time and excessive memory usage, making it impractical to apply to LLMs. Inspired by the parameter-efficient fine-tuning (PEFT) and low-rank adaptation (LoRA) literature, we propose LR-QAT -- a lightweight and memory-efficient QAT algorithm for LLMs. LR-QAT employs several components to save memory without sacrificing predictive performance: (a) low-rank auxiliary weights that are aware of the quantization grid; (b) a downcasting operator using fixed-point or double-packed integers and (c) checkpointing. Unlike most related work, our method (i) is inference-efficient, incurring no additional overhead compared to traditional PTQ; (ii) can be seen as a general extended pretraining framework, meaning that the resulting model can still be utilized for any downstream task afterwards; (iii) can be applied across a wide range of quantization settings, such as different choices of quantization granularity and activation quantization, and can be seamlessly combined with many PTQ techniques. We apply LR-QAT to the LLaMA-2/3 and Mistral model families and validate its effectiveness on several downstream tasks. Our method outperforms common post-training quantization (PTQ) approaches and reaches the same model performance as full-model QAT at a fraction of its memory usage. Specifically, we can train a 7B LLM on a single consumer-grade GPU with 24GB of memory.
[ "['Yelysei Bondarenko' 'Riccardo Del Chiaro' 'Markus Nagel']" ]
null
null
2406.06391
null
null
http://arxiv.org/pdf/2406.06391v1
2024-06-10T15:46:25Z
2024-06-10T15:46:25Z
Towards Lifelong Learning of Large Language Models: A Survey
As the applications of large language models (LLMs) expand across diverse fields, the ability of these models to adapt to ongoing changes in data, tasks, and user preferences becomes crucial. Traditional training methods, relying on static datasets, are increasingly inadequate for coping with the dynamic nature of real-world information. Lifelong learning, also known as continual or incremental learning, addresses this challenge by enabling LLMs to learn continuously and adaptively over their operational lifetime, integrating new knowledge while retaining previously learned information and preventing catastrophic forgetting. This survey delves into the sophisticated landscape of lifelong learning, categorizing strategies into two primary groups: Internal Knowledge and External Knowledge. Internal Knowledge includes continual pretraining and continual finetuning, each enhancing the adaptability of LLMs in various scenarios. External Knowledge encompasses retrieval-based and tool-based lifelong learning, leveraging external data sources and computational tools to extend the model's capabilities without modifying core parameters. The key contributions of our survey are: (1) Introducing a novel taxonomy categorizing the extensive literature of lifelong learning into 12 scenarios; (2) Identifying common techniques across all lifelong learning scenarios and classifying existing literature into various technique groups within each scenario; (3) Highlighting emerging techniques such as model expansion and data selection, which were less explored in the pre-LLM era. Through a detailed examination of these groups and their respective categories, this survey aims to enhance the adaptability, reliability, and overall performance of LLMs in real-world applications.
[ "['Junhao Zheng' 'Shengjie Qiu' 'Chengming Shi' 'Qianli Ma']" ]
null
null
2406.06397
null
null
http://arxiv.org/pdf/2406.06397v1
2024-06-10T15:50:45Z
2024-06-10T15:50:45Z
Contrastive learning of T cell receptor representations
Computational prediction of the interaction of T cell receptors (TCRs) and their ligands is a grand challenge in immunology. Despite advances in high-throughput assays, specificity-labelled TCR data remains sparse. In other domains, the pre-training of language models on unlabelled data has been successfully used to address data bottlenecks. However, it is unclear how to best pre-train protein language models for TCR specificity prediction. Here we introduce a TCR language model called SCEPTR (Simple Contrastive Embedding of the Primary sequence of T cell Receptors), capable of data-efficient transfer learning. Through our model, we introduce a novel pre-training strategy combining autocontrastive learning and masked-language modelling, which enables SCEPTR to achieve its state-of-the-art performance. In contrast, existing protein language models and a variant of SCEPTR pre-trained without autocontrastive learning are outperformed by sequence alignment-based methods. We anticipate that contrastive learning will be a useful paradigm to decode the rules of TCR specificity.
[ "['Yuta Nagano' 'Andrew Pyo' 'Martina Milighetti' 'James Henderson'\n 'John Shawe-Taylor' 'Benny Chain' 'Andreas Tiffeau-Mayer']" ]
null
null
2406.06403
null
null
http://arxiv.org/pdf/2406.06403v1
2024-06-10T15:56:52Z
2024-06-10T15:56:52Z
Meta Learning Text-to-Speech Synthesis in over 7000 Languages
In this work, we take on the challenging task of building a single text-to-speech synthesis system that is capable of generating speech in over 7000 languages, many of which lack sufficient data for traditional TTS development. By leveraging a novel integration of massively multilingual pretraining and meta learning to approximate language representations, our approach enables zero-shot speech synthesis in languages without any available data. We validate our system's performance through objective measures and human evaluation across a diverse linguistic landscape. By releasing our code and models publicly, we aim to empower communities with limited linguistic resources and foster further innovation in the field of speech technology.
[ "['Florian Lux' 'Sarina Meyer' 'Lyonel Behringer' 'Frank Zalkow' 'Phat Do'\n 'Matt Coler' 'Emanuël A. P. Habets' 'Ngoc Thang Vu']" ]
null
null
2406.06407
null
null
http://arxiv.org/pdf/2406.06407v1
2024-06-10T15:59:08Z
2024-06-10T15:59:08Z
A Taxonomy of Challenges to Curating Fair Datasets
Despite extensive efforts to create fairer machine learning (ML) datasets, there remains a limited understanding of the practical aspects of dataset curation. Drawing from interviews with 30 ML dataset curators, we present a comprehensive taxonomy of the challenges and trade-offs encountered throughout the dataset curation lifecycle. Our findings underscore overarching issues within the broader fairness landscape that impact data curation. We conclude with recommendations aimed at fostering systemic changes to better facilitate fair dataset curation practices.
[ "['Dora Zhao' 'Morgan Klaus Scheuerman' 'Pooja Chitre'\n 'Jerone T. A. Andrews' 'Georgia Panagiotidou' 'Shawn Walker'\n 'Kathleen H. Pine' 'Alice Xiang']" ]
null
null
2406.06408
null
null
http://arxiv.org/pdf/2406.06408v1
2024-06-10T16:02:48Z
2024-06-10T16:02:48Z
Differentially Private Best-Arm Identification
Best Arm Identification (BAI) problems are progressively used for data-sensitive applications, such as designing adaptive clinical trials, tuning hyper-parameters, and conducting user studies. Motivated by the data privacy concerns invoked by these applications, we study the problem of BAI with fixed confidence in both the local and central models, i.e. $\epsilon$-local and $\epsilon$-global Differential Privacy (DP). First, to quantify the cost of privacy, we derive lower bounds on the sample complexity of any $\delta$-correct BAI algorithm satisfying $\epsilon$-global DP or $\epsilon$-local DP. Our lower bounds suggest the existence of two privacy regimes. In the high-privacy regime, the hardness depends on a coupled effect of privacy and novel information-theoretic quantities involving the Total Variation. In the low-privacy regime, the lower bounds reduce to the non-private lower bounds. We propose $\epsilon$-local DP and $\epsilon$-global DP variants of a Top Two algorithm, namely CTB-TT and AdaP-TT*, respectively. For $\epsilon$-local DP, CTB-TT is asymptotically optimal by plugging in a private estimator of the means based on Randomised Response. For $\epsilon$-global DP, our private estimator of the mean runs in arm-dependent adaptive episodes and adds Laplace noise to ensure a good privacy-utility trade-off. By adapting the transportation costs, the expected sample complexity of AdaP-TT* reaches the asymptotic lower bound up to multiplicative constants.
[ "['Achraf Azize' 'Marc Jourdan' 'Aymen Al Marjani' 'Debabrota Basu']" ]
null
null
2406.06417
null
null
http://arxiv.org/pdf/2406.06417v1
2024-06-10T16:09:16Z
2024-06-10T16:09:16Z
Explainable Graph Neural Networks Under Fire
Predictions made by graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs. In an attempt to tackle this, many GNN explanation methods have emerged. Their goal is to explain a model's predictions and thereby obtain trust when GNN models are deployed in decision critical applications. Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes. In this paper we demonstrate that these explanations can unfortunately not be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations. That is, even small perturbations of the original graph structure that preserve the model's predictions may yield drastically different explanations. This calls into question the trustworthiness and practical utility of post-hoc explanation methods for GNNs. To be able to attack GNN explanation models, we devise a novel attack method dubbed GXAttack, the first optimization-based adversarial attack method for post-hoc GNN explanations under such settings. Due to the devastating effectiveness of our attack, we call for an adversarial evaluation of future GNN explainers to demonstrate their robustness.
[ "['Zhong Li' 'Simon Geisler' 'Yuhang Wang' 'Stephan Günnemann'\n 'Matthijs van Leeuwen']" ]
null
null
2406.06419
null
null
http://arxiv.org/pdf/2406.06419v1
2024-06-10T16:12:00Z
2024-06-10T16:12:00Z
Foundation Inference Models for Markov Jump Processes
Markov jump processes are continuous-time stochastic processes which describe dynamical systems evolving in discrete state spaces. These processes find wide application in the natural sciences and machine learning, but their inference is known to be far from trivial. In this work we introduce a methodology for zero-shot inference of Markov jump processes (MJPs), on bounded state spaces, from noisy and sparse observations, which consists of two components. First, a broad probability distribution over families of MJPs, as well as over possible observation times and noise mechanisms, with which we simulate a synthetic dataset of hidden MJPs and their noisy observation process. Second, a neural network model that processes subsets of the simulated observations, and that is trained to output the initial condition and rate matrix of the target MJP in a supervised way. We empirically demonstrate that one and the same (pretrained) model can infer, in a zero-shot fashion, hidden MJPs evolving in state spaces of different dimensionalities. Specifically, we infer MJPs which describe (i) discrete flashing ratchet systems, which are a type of Brownian motor, and the conformational dynamics in (ii) molecular simulations, (iii) experimental ion channel data and (iv) simple protein folding models. What is more, we show that our model performs on par with state-of-the-art models which are finetuned to the target datasets.
[ "['David Berghaus' 'Kostadin Cvejoski' 'Patrick Seifner' 'Cesar Ojeda'\n 'Ramses J. Sanchez']" ]
null
null
2406.06420
null
null
http://arxiv.org/pdf/2406.06420v1
2024-06-10T16:12:32Z
2024-06-10T16:12:32Z
An Improved Empirical Fisher Approximation for Natural Gradient Descent
Approximate Natural Gradient Descent (NGD) methods are an important family of optimisers for deep learning models, which use approximate Fisher information matrices to pre-condition gradients during training. The empirical Fisher (EF) method approximates the Fisher information matrix empirically by reusing the per-sample gradients collected during back-propagation. Despite its ease of implementation, the EF approximation has its theoretical and practical limitations. This paper first investigates the inversely-scaled projection issue of EF, which is shown to be a major cause of the poor empirical approximation quality. An improved empirical Fisher (iEF) method, motivated as a generalised NGD method from a loss reduction perspective, is proposed to address this issue, while retaining the practical convenience of EF. The exact iEF and EF methods are experimentally evaluated using practical deep learning setups, including widely-used setups for parameter-efficient fine-tuning of pre-trained models (T5-base with LoRA and Prompt-Tuning on GLUE tasks, and ViT with LoRA for CIFAR100). Optimisation experiments show that applying exact iEF as an optimiser provides strong convergence and generalisation. It achieves the best test performance and the lowest training loss for the majority of tasks, even when compared with well-tuned AdamW/Adafactor baselines. Additionally, under a novel empirical evaluation framework, the proposed iEF method shows consistently better approximation quality to the exact Natural Gradient updates than both EF and the more expensive sampled Fisher (SF). Further investigation also shows that the superior approximation quality of iEF is robust to damping across tasks and training stages. Improving existing approximate NGD optimisers with iEF is expected to lead to better convergence ability and stronger robustness to the choice of damping.
[ "['Xiaodong Wu' 'Wenyi Yu' 'Chao Zhang' 'Philip Woodland']" ]
null
null
2406.06425
null
null
http://arxiv.org/pdf/2406.06425v1
2024-06-10T16:14:50Z
2024-06-10T16:14:50Z
Multivariate Stochastic Dominance via Optimal Transport and Applications to Models Benchmarking
Stochastic dominance is an important concept in probability theory, econometrics and social choice theory for robustly modeling agents' preferences between random outcomes. While many works have been dedicated to the univariate case, little has been done in the multivariate scenario, wherein an agent has to decide between different multivariate outcomes. By exploiting a characterization of multivariate first stochastic dominance in terms of couplings, we introduce a statistic that assesses multivariate almost stochastic dominance under the framework of Optimal Transport with a smooth cost. Further, we introduce an entropic regularization of this statistic, and establish a central limit theorem (CLT) and consistency of the bootstrap procedure for the empirical statistic. Armed with this CLT, we propose a hypothesis testing framework as well as an efficient implementation using the Sinkhorn algorithm. We showcase our method in comparing and benchmarking Large Language Models that are evaluated on multiple metrics. Our multivariate stochastic dominance test allows us to capture the dependencies between the metrics in order to make an informed and statistically significant decision on the relative performance of the models.
[ "['Gabriel Rioux' 'Apoorva Nitsure' 'Mattia Rigotti' 'Kristjan Greenewald'\n 'Youssef Mroueh']" ]
null
null
2406.06433
null
null
http://arxiv.org/pdf/2406.06433v3
2024-06-12T21:51:22Z
2024-06-10T16:24:35Z
DISCO: An End-to-End Bandit Framework for Personalised Discount Allocation
Personalised discount codes provide a powerful mechanism for managing customer relationships and operational spend in e-commerce. Bandits are well suited for this product area, given the partial information nature of the problem, as well as the need for adaptation to the changing business environment. Here, we introduce DISCO, an end-to-end contextual bandit framework for personalised discount code allocation at ASOS. DISCO adapts the traditional Thompson Sampling algorithm by integrating it within an integer program, thereby allowing for operational cost control. Because bandit learning is often worse with high-dimensional actions, we focused on building low dimensional action and context representations that were nonetheless capable of good accuracy. Additionally, we sought to build a model that preserved the relationship between price and sales, in which customers increase their purchasing in response to lower prices ("negative price elasticity"). These aims were achieved by using radial basis functions to represent the continuous (i.e. infinite armed) action space, in combination with context embeddings extracted from a neural network. These feature representations were used within a Thompson Sampling framework to facilitate exploration, and further integrated with an integer program to allocate discount codes across ASOS's customer base. These modelling decisions result in a reward model that (a) enables pooled learning across similar actions, (b) is highly accurate, including in extrapolation, and (c) preserves the expected negative price elasticity. Through offline analysis, we show that DISCO is able to effectively enact exploration and improves its performance over time, despite the global constraint. Finally, we subjected DISCO to a rigorous online A/B test, and find that it achieves a significant improvement of >1% in average basket value, relative to the legacy systems.
[ "['Jason Shuo Zhang' 'Benjamin Howson' 'Panayiota Savva' 'Eleanor Loh']" ]
null
null
2406.06438
null
null
http://arxiv.org/pdf/2406.06438v1
2024-06-10T16:31:34Z
2024-06-10T16:31:34Z
Multimodal Contextualized Semantic Parsing from Speech
We introduce Semantic Parsing in Contextual Environments (SPICE), a task designed to enhance artificial agents' contextual awareness by integrating multimodal inputs with prior contexts. SPICE goes beyond traditional semantic parsing by offering a structured, interpretable framework for dynamically updating an agent's knowledge with new information, mirroring the complexity of human communication. We develop the VG-SPICE dataset, crafted to challenge agents with visual scene graph construction from spoken conversational exchanges, highlighting speech and visual data integration. We also present the Audio-Vision Dialogue Scene Parser (AViD-SP) developed for use on VG-SPICE. These innovations aim to improve multimodal information processing and integration. Both the VG-SPICE dataset and the AViD-SP model are publicly available.
[ "['Jordan Voas' 'Raymond Mooney' 'David Harwath']" ]
null
null
2406.06443
null
null
http://arxiv.org/pdf/2406.06443v1
2024-06-10T16:34:43Z
2024-06-10T16:34:43Z
LLM Dataset Inference: Did you train on my dataset?
The proliferation of large language models (LLMs) in the real world has come with a rise in copyright cases against companies for training their models on unlicensed data from the internet. Recent works have presented methods to identify if individual text sequences were members of the model's training data, known as membership inference attacks (MIAs). We demonstrate that the apparent success of these MIAs is confounded by selecting non-members (text sequences not used for training) belonging to a different distribution from the members (e.g., temporally shifted recent Wikipedia articles compared with ones used to train the model). This distribution shift makes membership inference appear successful. However, most MIA methods perform no better than random guessing when discriminating between members and non-members from the same distribution (e.g., in this case, the same period of time). Even when MIAs work, we find that different MIAs succeed at inferring membership of samples from different distributions. Instead, we propose a new dataset inference method to accurately identify the datasets used to train large language models. This paradigm sits realistically in the modern-day copyright landscape, where authors claim that an LLM is trained over multiple documents (such as a book) written by them, rather than one particular paragraph. While dataset inference shares many of the challenges of membership inference, we solve it by selectively combining the MIAs that provide positive signal for a given distribution, and aggregating them to perform a statistical test on a given dataset. Our approach successfully distinguishes the train and test sets of different subsets of the Pile with statistically significant p-values < 0.1, without any false positives.
[ "['Pratyush Maini' 'Hengrui Jia' 'Nicolas Papernot' 'Adam Dziedzic']" ]
null
null
2406.06446
null
null
http://arxiv.org/pdf/2406.06446v1
2024-06-10T16:36:02Z
2024-06-10T16:36:02Z
Deep Generative Modeling Reshapes Compression and Transmission: From Efficiency to Resiliency
Information theory and machine learning are inextricably linked and have even been referred to as "two sides of the same coin". One particularly elegant connection is the essential equivalence between probabilistic generative modeling and data compression or transmission. In this article, we reveal the dual-functionality of deep generative models that reshapes both data compression for efficiency and transmission error concealment for resiliency. We present how the contextual predictive capabilities of powerful generative models position them to be strong compressors and estimators. In this sense, we advocate for viewing the deep generative modeling problem through the lens of end-to-end communications, and evaluate the compression and error restoration capabilities of foundation generative models. We show that the kernel of many large generative models is a powerful predictor that can capture complex relationships among semantic latent variables, and the communication viewpoints provide novel insights into semantic feature tokenization, contextual learning, and the usage of deep generative models. In summary, our article highlights the essential connections of generative AI to source and channel coding techniques, and motivates researchers to make further explorations in this emerging topic.
[ "['Jincheng Dai' 'Xiaoqi Qin' 'Sixian Wang' 'Lexi Xu' 'Kai Niu'\n 'Ping Zhang']" ]
null
null
2406.06449
null
null
http://arxiv.org/pdf/2406.06449v1
2024-06-10T16:39:39Z
2024-06-10T16:39:39Z
Cometh: A continuous-time discrete-state graph diffusion model
Discrete-state denoising diffusion models led to state-of-the-art performance in graph generation, especially in the molecular domain. Recently, they have been transposed to continuous time, allowing more flexibility in the reverse process and a better trade-off between sampling efficiency and quality. Here, to leverage the benefits of both approaches, we propose Cometh, a continuous-time discrete-state graph diffusion model, integrating graph data into a continuous-time diffusion model framework. Empirically, we show that integrating continuous time leads to significant improvements across various metrics over state-of-the-art discrete-state diffusion models on a large set of molecular and non-molecular benchmark datasets.
[ "['Antoine Siraudin' 'Fragkiskos D. Malliaros' 'Christopher Morris']" ]
null
null
2406.06452
null
null
http://arxiv.org/pdf/2406.06452v1
2024-06-10T16:40:55Z
2024-06-10T16:40:55Z
Estimating Heterogeneous Treatment Effects by Combining Weak Instruments and Observational Data
Accurately predicting conditional average treatment effects (CATEs) is crucial in personalized medicine and digital platform analytics. Since often the treatments of interest cannot be directly randomized, observational data is leveraged to learn CATEs, but this approach can incur significant bias from unobserved confounding. One strategy to overcome these limitations is to seek latent quasi-experiments in instrumental variables (IVs) for the treatment, for example, a randomized intent to treat or a randomized product recommendation. This approach, on the other hand, can suffer from low compliance, i.e., IV weakness. Some subgroups may even exhibit zero compliance meaning we cannot instrument for their CATEs at all. In this paper we develop a novel approach to combine IV and observational data to enable reliable CATE estimation in the presence of unobserved confounding in the observational data and low compliance in the IV data, including no compliance for some subgroups. We propose a two-stage framework that first learns biased CATEs from the observational data, and then applies a compliance-weighted correction using IV data, effectively leveraging IV strength variability across covariates. We characterize the convergence rates of our method and validate its effectiveness through a simulation study. Additionally, we demonstrate its utility with real data by analyzing the heterogeneous effects of 401(k) plan participation on wealth.
[ "['Miruna Oprescu' 'Nathan Kallus']" ]
null
null
2406.06459
null
null
http://arxiv.org/pdf/2406.06459v1
2024-06-10T16:53:58Z
2024-06-10T16:53:58Z
How Useful is Intermittent, Asynchronous Expert Feedback for Bayesian Optimization?
Bayesian optimization (BO) is an integral part of automated scientific discovery -- the so-called self-driving lab -- where human inputs are ideally minimal or at least non-blocking. However, scientists often have strong intuition, and thus human feedback is still useful. Nevertheless, prior works on enhancing BO with expert feedback, such as by incorporating it in an offline or online but blocking (arrives at each BO iteration) manner, are incompatible with the spirit of self-driving labs. In this work, we study whether a small amount of randomly arriving expert feedback that is incorporated in a non-blocking manner can improve a BO campaign. To this end, we run an additional, independent computing thread on top of the BO loop to handle the feedback-gathering process. The gathered feedback is used to learn a Bayesian preference model that can readily be incorporated into the BO thread, to steer its exploration-exploitation process. Experiments on toy and chemistry datasets suggest that even a few intermittent, asynchronous pieces of expert feedback can be useful for improving or constraining BO. This is especially promising for self-driving labs, e.g. by making them more data-efficient and less costly.
[ "['Agustinus Kristiadi' 'Felix Strieth-Kalthoff'\n 'Sriram Ganapathi Subramanian' 'Vincent Fortuin' 'Pascal Poupart'\n 'Geoff Pleiss']" ]
null
null
2406.06462
null
null
http://arxiv.org/pdf/2406.06462v2
2024-06-24T07:05:01Z
2024-06-10T16:58:48Z
VCR: Visual Caption Restoration
We introduce Visual Caption Restoration (VCR), a novel vision-language task that challenges models to accurately restore partially obscured texts using pixel-level hints within images. This task stems from the observation that text embedded in images is intrinsically different from common visual elements and natural language due to the need to align the modalities of vision, text, and text embedded in images. While numerous works have integrated text embedded in images into visual question-answering tasks, approaches to these tasks generally rely on optical character recognition or masked language modeling, thus reducing the task to mainly text-based processing. However, text-based processing becomes ineffective in VCR as accurate text restoration depends on the combined information from provided images, context, and subtle cues from the tiny exposed areas of masked texts. We develop a pipeline to generate synthetic images for the VCR task using image-caption pairs, with adjustable caption visibility to control the task difficulty. With this pipeline, we construct a dataset for VCR called VCR-Wiki using images with captions from Wikipedia, comprising 2.11M English and 346K Chinese entities in both easy and hard split variants. Our results reveal that current vision language models significantly lag behind human performance in the VCR task, and merely fine-tuning the models on our dataset does not lead to notable improvements. We release VCR-Wiki and the data construction code to facilitate future research.
[ "['Tianyu Zhang' 'Suyuchen Wang' 'Lu Li' 'Ge Zhang' 'Perouz Taslakian'\n 'Sai Rajeswar' 'Jie Fu' 'Bang Liu' 'Yoshua Bengio']" ]
null
null
2406.06465
null
null
http://arxiv.org/pdf/2406.06465v1
2024-06-10T17:02:08Z
2024-06-10T17:02:08Z
AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction
Text-guided video prediction (TVP) involves predicting the motion of future frames from the initial frame according to an instruction, which has wide applications in virtual reality, robotics, and content creation. Previous TVP methods make significant breakthroughs by adapting Stable Diffusion for this task. However, they struggle with frame consistency and temporal stability primarily due to the limited scale of video datasets. We observe that pretrained Image2Video diffusion models possess good priors for video dynamics but they lack textual control. Hence, transferring Image2Video models to leverage their video dynamic priors while injecting instruction control to generate controllable videos is both a meaningful and challenging task. To achieve this, we introduce the Multi-Modal Large Language Model (MLLM) to predict future video states based on initial frames and text instructions. More specifically, we design a dual query transformer (DQFormer) architecture, which integrates the instructions and frames into the conditional embeddings for future frame prediction. Additionally, we develop Long-Short Term Temporal Adapters and Spatial Adapters that can quickly transfer general video diffusion models to specific scenarios with minimal training costs. Experimental results show that our method significantly outperforms state-of-the-art techniques on four datasets: Something Something V2, Epic Kitchen-100, Bridge Data, and UCF-101. Notably, AID achieves 91.2% and 55.5% FVD improvements on Bridge and SSv2 respectively, demonstrating its effectiveness in various domains. More examples can be found at our website https://chenhsing.github.io/AID.
[ "['Zhen Xing' 'Qi Dai' 'Zejia Weng' 'Zuxuan Wu' 'Yu-Gang Jiang']" ]
null
null
2406.06467
null
null
http://arxiv.org/pdf/2406.06467v1
2024-06-10T17:05:12Z
2024-06-10T17:05:12Z
How Far Can Transformers Reason? The Locality Barrier and Inductive Scratchpad
Can Transformers predict new syllogisms by composing established ones? More generally, what type of targets can be learned by such models from scratch? Recent works show that Transformers can be Turing-complete in terms of expressivity, but this does not address the learnability objective. This paper puts forward the notion of 'distribution locality' to capture when weak learning is efficiently achievable by regular Transformers, where the locality measures the least number of tokens required in addition to the tokens histogram to correlate nontrivially with the target. As shown experimentally and theoretically under additional assumptions, distributions with high locality cannot be learned efficiently. In particular, syllogisms cannot be composed on long chains. Furthermore, we show that (i) an agnostic scratchpad cannot help to break the locality barrier, (ii) an educated scratchpad can help if it breaks the locality at each step, (iii) a notion of 'inductive scratchpad' can both break the locality and improve the out-of-distribution generalization, e.g., generalizing to almost double input size for some arithmetic tasks.
[ "['Emmanuel Abbe' 'Samy Bengio' 'Aryo Lotfi' 'Colin Sandon' 'Omid Saremi']" ]
null
null
2406.06469
null
null
http://arxiv.org/pdf/2406.06469v1
2024-06-10T17:07:25Z
2024-06-10T17:07:25Z
Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning
Language agents perform complex tasks by using tools to execute each step precisely. However, most existing agents are based on proprietary models or designed to target specific tasks, such as mathematics or multi-hop question answering. We introduce Husky, a holistic, open-source language agent that learns to reason over a unified action space to address a diverse set of complex tasks involving numerical, tabular, and knowledge-based reasoning. Husky iterates between two stages: 1) generating the next action to take towards solving a given task and 2) executing the action using expert models and updating the current solution state. We identify a thorough ontology of actions for addressing complex tasks and curate high-quality data to train expert models for executing these actions. Our experiments show that Husky outperforms prior language agents across 14 evaluation datasets. Moreover, we introduce HuskyQA, a new evaluation set which stress tests language agents for mixed-tool reasoning, with a focus on retrieving missing knowledge and performing numerical reasoning. Despite using 7B models, Husky matches or even exceeds frontier LMs such as GPT-4 on these tasks, showcasing the efficacy of our holistic approach in addressing complex reasoning problems. Our code and models are available at https://github.com/agent-husky/Husky-v1.
[ "['Joongwon Kim' 'Bhargavi Paranjape' 'Tushar Khot' 'Hannaneh Hajishirzi']" ]
null
null
2406.06470
null
null
http://arxiv.org/pdf/2406.06470v1
2024-06-10T17:09:38Z
2024-06-10T17:09:38Z
GKAN: Graph Kolmogorov-Arnold Networks
We introduce Graph Kolmogorov-Arnold Networks (GKAN), an innovative neural network architecture that extends the principles of the recently proposed Kolmogorov-Arnold Networks (KAN) to graph-structured data. By adopting the unique characteristics of KANs, notably the use of learnable univariate functions instead of fixed linear weights, we develop a powerful model for graph-based learning tasks. Unlike traditional Graph Convolutional Networks (GCNs) that rely on a fixed convolutional architecture, GKANs implement learnable spline-based functions between layers, transforming the way information is processed across the graph structure. We present two different ways to incorporate KAN layers into GKAN: architecture 1 -- where the learnable functions are applied to input features after aggregation and architecture 2 -- where the learnable functions are applied to input features before aggregation. We evaluate GKAN empirically using a semi-supervised graph learning task on a real-world dataset (Cora). We find that one of the two architectures generally performs better. We find that GKANs achieve higher accuracy in semi-supervised learning tasks on graphs compared to the traditional GCN model. For example, when considering 100 features, GCN provides an accuracy of 53.5 while a GKAN with a comparable number of parameters gives an accuracy of 61.76; with 200 features, GCN provides an accuracy of 61.24 while a GKAN with a comparable number of parameters gives an accuracy of 67.66. We also present results on the impact of various parameters such as the number of hidden nodes, grid-size, and the polynomial-degree of the spline on the performance of GKAN.
[ "['Mehrdad Kiamari' 'Mohammad Kiamari' 'Bhaskar Krishnamachari']" ]
null
null
2406.06477
null
null
http://arxiv.org/pdf/2406.06477v1
2024-04-17T14:41:14Z
2024-04-17T14:41:14Z
OmniLytics+: A Secure, Efficient, and Affordable Blockchain Data Market for Machine Learning through Off-Chain Processing
The rapid development of large machine learning (ML) models requires a massive amount of training data, resulting in booming demand for data sharing and trading through data markets. Traditional centralized data markets suffer from a low level of security, and emerging decentralized platforms face efficiency and privacy challenges. In this paper, we propose OmniLytics+, the first decentralized data market, built upon blockchain and smart contract technologies, to simultaneously achieve 1) data (resp., model) privacy for the data (resp., model) owner; 2) robustness against malicious data owners; and 3) efficient data validation and aggregation. Specifically, adopting the zero-knowledge (ZK) rollup paradigm, OmniLytics+ secret-shares encrypted local gradients, computed from the encrypted global model, with a set of untrusted off-chain servers, who collaboratively generate a ZK proof of the validity of the gradient. In this way, the storage and processing overheads are securely offloaded from blockchain verifiers, significantly improving privacy, efficiency, and affordability over existing rollup solutions. We implement the proposed OmniLytics+ data market as an Ethereum smart contract [41]. Extensive experiments demonstrate the effectiveness of OmniLytics+ in training large ML models in the presence of malicious data owners, and its substantial advantages in gas cost and execution time over baselines.
[ "['Songze Li' 'Mingzhe Liu' 'Mengqi Chen']" ]
null
null
2406.06479
null
null
http://arxiv.org/pdf/2406.06479v2
2024-06-19T21:34:07Z
2024-06-10T17:20:13Z
Graph-Based Bidirectional Transformer Decision Threshold Adjustment Algorithm for Class-Imbalanced Molecular Data
Data sets with imbalanced class sizes, often with one class much smaller than the others, occur frequently in various applications, including those with biological foundations, such as drug discovery and disease diagnosis. Thus, it is important to be able to identify data elements of classes of various sizes, as a failure to detect them can result in heavy costs. However, many data classification algorithms do not perform well on imbalanced data sets, as they often fail to detect elements belonging to underrepresented classes. In this paper, we propose the BTDT-MBO algorithm, incorporating Merriman-Bence-Osher (MBO) techniques and a bidirectional transformer, as well as distance correlation and decision threshold adjustments, for data classification problems on highly imbalanced molecular data sets, where the sizes of the classes vary greatly. The proposed method not only integrates adjustments in the classification threshold for the MBO algorithm in order to help deal with the class imbalance, but also uses a bidirectional transformer model based on an attention mechanism for self-supervised learning. Additionally, the method implements distance correlation as a weight function for the similarity graph-based framework on which the adjusted MBO algorithm operates. The proposed model is validated using six molecular data sets, and we provide a thorough comparison to other competing algorithms. The computational experiments show that the proposed method performs better than competing techniques even when the class imbalance ratio is very high.
[ "['Nicole Hayes' 'Ekaterina Merkurjev' 'Guo-Wei Wei']" ]
null
null
2406.06482
null
null
http://arxiv.org/pdf/2406.06482v1
2024-06-10T17:22:09Z
2024-06-10T17:22:09Z
Quantum Equilibrium Propagation for efficient training of quantum systems based on Onsager reciprocity
The widespread adoption of machine learning and artificial intelligence in all branches of science and technology has created a need for energy-efficient, alternative hardware platforms. While such neuromorphic approaches have been proposed and realised for a wide range of platforms, physically extracting the gradients required for training remains challenging as generic approaches only exist in certain cases. Equilibrium propagation (EP) is such a procedure that has been introduced and applied to classical energy-based models which relax to an equilibrium. Here, we show a direct connection between EP and Onsager reciprocity and exploit this to derive a quantum version of EP. This can be used to optimize loss functions that depend on the expectation values of observables of an arbitrary quantum system. Specifically, we illustrate this new concept with supervised and unsupervised learning examples in which the input or the solvable task is of quantum mechanical nature, e.g., the recognition of quantum many-body ground states, quantum phase exploration, sensing and phase boundary exploration. We propose that in the future quantum EP may be used to solve tasks such as quantum phase discovery with a quantum simulator even for Hamiltonians which are numerically hard to simulate or even partially unknown. Our scheme is relevant for a variety of quantum simulation platforms such as ion chains, superconducting qubit arrays, neutral atom Rydberg tweezer arrays and strongly interacting atoms in optical lattices.
[ "['Clara C. Wanjura' 'Florian Marquardt']" ]
null
null
2406.06484
null
null
http://arxiv.org/pdf/2406.06484v1
2024-06-10T17:24:42Z
2024-06-10T17:24:42Z
Parallelizing Linear Transformers with the Delta Rule over Sequence Length
Transformers with linear attention (i.e., linear transformers) and state-space models have recently been suggested as a viable linear-time alternative to transformers with softmax attention. However, these models still underperform transformers especially on tasks that require in-context retrieval. While more expressive variants of linear transformers which replace the additive outer-product update in linear transformers with the delta rule have been found to be more effective at associative recall, existing algorithms for training such models do not parallelize over sequence length and are thus inefficient to train on modern hardware. This work describes a hardware-efficient algorithm for training linear transformers with the delta rule, which exploits a memory-efficient representation for computing products of Householder matrices. This algorithm allows us to scale up DeltaNet to standard language modeling settings. We train a 1.3B model for 100B tokens and find that it outperforms recent linear-time baselines such as Mamba and GLA in terms of perplexity and zero-shot performance on downstream tasks (including on tasks that focus on recall). We also experiment with two hybrid models which combine DeltaNet layers with (1) sliding-window attention layers every other layer or (2) two global attention layers, and find that these hybrid models outperform strong transformer baselines.
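The delta-rule update that replaces the additive outer-product update has a compact sequential form; the paper's contribution is making it parallel over sequence length via products of Householder-like matrices. A reference (deliberately sequential) sketch of the recurrence, with illustrative shapes:

import numpy as np

def delta_rule_attention(q, k, v, beta):
    # q, k, v: (T, d); beta: (T,) write strengths in [0, 1].
    # State update: S_t = S_{t-1} (I - beta_t k_t k_t^T) + beta_t v_t k_t^T,
    # i.e. each step multiplies the state by a Householder-like matrix.
    T, d = q.shape
    S = np.zeros((d, d))
    out = np.zeros((T, d))
    for t in range(T):
        kt = k[t]
        S = S - beta[t] * np.outer(S @ kt, kt) + beta[t] * np.outer(v[t], kt)
        out[t] = S @ q[t]
    return out

rng = np.random.default_rng(1)
T, d = 8, 4
o = delta_rule_attention(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
                         rng.normal(size=(T, d)), rng.uniform(0, 1, size=T))
print(o.shape)  # (8, 4)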
[ "['Songlin Yang' 'Bailin Wang' 'Yu Zhang' 'Yikang Shen' 'Yoon Kim']" ]
null
null
2406.06486
null
null
http://arxiv.org/pdf/2406.06486v1
2024-06-10T17:25:46Z
2024-06-10T17:25:46Z
Continuum Attention for Neural Operators
Transformers, and the attention mechanism in particular, have become ubiquitous in machine learning. Their success in modeling nonlocal, long-range correlations has led to their widespread adoption in natural language processing, computer vision, and time-series problems. Neural operators, which map spaces of functions into spaces of functions, are necessarily both nonlinear and nonlocal if they are universal; it is thus natural to ask whether the attention mechanism can be used in the design of neural operators. Motivated by this, we study transformers in the function space setting. We formulate attention as a map between infinite dimensional function spaces and prove that the attention mechanism as implemented in practice is a Monte Carlo or finite difference approximation of this operator. The function space formulation allows for the design of transformer neural operators, a class of architectures designed to learn mappings between function spaces, for which we prove a universal approximation result. The prohibitive cost of applying the attention operator to functions defined on multi-dimensional domains leads to the need for more efficient attention-based architectures. For this reason we also introduce a function space generalization of the patching strategy from computer vision, and introduce a class of associated neural operators. Numerical results, on an array of operator learning problems, demonstrate the promise of our approaches to function space formulations of attention and their use in neural operators.
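One natural way to write the function-space attention operator described above, with notation assumed here rather than taken from the paper, is as a normalized integral operator over the domain $D$ with measure $\mu$:

\[
  (\mathcal{A}u)(x) \;=\; \int_{D}
    \frac{\exp\big(\langle q(u(x)),\, k(u(y)) \rangle\big)}
         {\int_{D} \exp\big(\langle q(u(x)),\, k(u(z)) \rangle\big)\,\mathrm{d}\mu(z)}
    \, v(u(y)) \,\mathrm{d}\mu(y).
\]

Replacing $\mu$ by the empirical measure of $n$ sample points recovers standard softmax attention over $n$ tokens, which is the Monte Carlo approximation the abstract alludes to.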
[ "['Edoardo Calvello' 'Nikola B. Kovachki' 'Matthew E. Levine'\n 'Andrew M. Stuart']" ]
null
null
2406.06487
null
null
http://arxiv.org/pdf/2406.06487v1
2024-06-10T17:26:39Z
2024-06-10T17:26:39Z
When is Multicalibration Post-Processing Necessary?
Calibration is a well-studied property of predictors which guarantees meaningful uncertainty estimates. Multicalibration is a related notion -- originating in algorithmic fairness -- which requires predictors to be simultaneously calibrated over a potentially complex and overlapping collection of protected subpopulations (such as groups defined by ethnicity, race, or income). We conduct the first comprehensive study evaluating the usefulness of multicalibration post-processing across a broad set of tabular, image, and language datasets for models spanning from simple decision trees to 90 million parameter fine-tuned LLMs. Our findings can be summarized as follows: (1) models which are calibrated out of the box tend to be relatively multicalibrated without any additional post-processing; (2) multicalibration post-processing can help inherently uncalibrated models; and (3) traditional calibration measures may sometimes provide multicalibration implicitly. More generally, we also distill many independent observations which may be useful for practical and effective applications of multicalibration post-processing in real-world contexts.
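For concreteness, one round of a generic multicalibration post-processing step looks like the following sketch (the paper evaluates established algorithms; this simplified patch-by-bucket form, and all names in it, are assumptions):

import numpy as np

def multicalibrate_step(p, y, groups, n_bins=10, alpha=0.05):
    # p: (n,) predicted probabilities; y: (n,) binary labels;
    # groups: boolean masks, one per (possibly overlapping) subpopulation.
    p = p.copy()
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    for g in groups:
        for b in range(n_bins):
            idx = g & (bins == b)
            if idx.sum() == 0:
                continue
            gap = y[idx].mean() - p[idx].mean()   # calibration violation
            if abs(gap) > alpha:                  # patch only meaningful gaps
                p[idx] = np.clip(p[idx] + gap, 0.0, 1.0)
    return p

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=1000)
p = np.clip(0.6 * y + rng.uniform(0, 0.4, size=1000), 0, 1)  # miscalibrated
groups = [rng.random(1000) < 0.5, rng.random(1000) < 0.3]
p_mc = multicalibrate_step(p, y, groups)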
[ "['Dutch Hansen' 'Siddartha Devic' 'Preetum Nakkiran' 'Vatsal Sharan']" ]
null
null
2406.06494
null
null
http://arxiv.org/pdf/2406.06494v1
2024-06-10T17:30:17Z
2024-06-10T17:30:17Z
Scaling Continuous Latent Variable Models as Probabilistic Integral Circuits
Probabilistic integral circuits (PICs) have been recently introduced as probabilistic models enjoying the key ingredient behind expressive generative models: continuous latent variables (LVs). PICs are symbolic computational graphs defining continuous LV models as hierarchies of functions that are summed and multiplied together, or integrated over some LVs. They are tractable if LVs can be analytically integrated out, otherwise they can be approximated by tractable probabilistic circuits (PC) encoding a hierarchical numerical quadrature process, called QPCs. So far, only tree-shaped PICs have been explored, and training them via numerical quadrature requires memory-intensive processing at scale. In this paper, we address these issues, and present: (i) a pipeline for building DAG-shaped PICs out of arbitrary variable decompositions, (ii) a procedure for training PICs using tensorized circuit architectures, and (iii) neural functional sharing techniques to allow scalable training. In extensive experiments, we showcase the effectiveness of functional sharing and the superiority of QPCs over traditional PCs.
[ "['Gennaro Gala' 'Cassio de Campos' 'Antonio Vergari' 'Erik Quaeghebeur']" ]
null
null
2406.06495
null
null
http://arxiv.org/pdf/2406.06495v1
2024-06-10T17:31:07Z
2024-06-10T17:31:07Z
Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity
For autonomous agents to successfully integrate into human-centered environments, agents should be able to learn from and adapt to humans in their native settings. Preference-based reinforcement learning (PbRL) is a promising approach that learns reward functions from human preferences. This enables RL agents to adapt their behavior based on human desires. However, humans live in a world full of diverse information, most of which is not relevant to completing a particular task. It becomes essential that agents learn to focus on the subset of task-relevant environment features. Unfortunately, prior work has largely ignored this aspect; primarily focusing on improving PbRL algorithms in standard RL environments that are carefully constructed to contain only task-relevant features. This can result in algorithms that may not effectively transfer to a more noisy real-world setting. To that end, this work proposes R2N (Robust-to-Noise), the first PbRL algorithm that leverages principles of dynamic sparse training to learn robust reward models that can focus on task-relevant features. We study the effectiveness of R2N in the Extremely Noisy Environment setting, an RL problem setting where up to 95% of the state features are irrelevant distractions. In experiments with a simulated teacher, we demonstrate that R2N can adapt the sparse connectivity of its neural networks to focus on task-relevant features, enabling R2N to significantly outperform several state-of-the-art PbRL algorithms in multiple locomotion and control environments.
[ "['Calarina Muslimani' 'Bram Grooten'\n 'Deepak Ranganatha Sastry Mamillapalli' 'Mykola Pechenizkiy'\n 'Decebal Constantin Mocanu' 'Matthew E. Taylor']" ]
null
null
2406.06496
null
null
http://arxiv.org/pdf/2406.06496v2
2024-06-14T19:47:20Z
2024-06-10T17:31:36Z
Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation
Recent advances in generative vision-language models (VLMs) have exciting potential implications for AI in radiology, yet VLMs are also known to produce hallucinations, nonsensical text, and other unwanted behaviors that can waste clinicians' time and cause patient harm. Drawing on recent work on direct preference optimization (DPO), we propose a simple method for modifying the behavior of pretrained VLMs performing radiology report generation by suppressing unwanted types of generations. We apply our method to the prevention of hallucinations of prior exams, addressing a long-established problem behavior in models performing chest X-ray report generation. Across our experiments, we find that DPO fine-tuning achieves a 3.2-4.8x reduction in lines hallucinating prior exams while maintaining model performance on clinical accuracy metrics. Our work is, to the best of our knowledge, the first to apply DPO to medical VLMs, providing a data- and compute-efficient way to suppress problem behaviors while maintaining overall clinical accuracy.
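The DPO objective this method builds on is standard; a minimal numpy sketch (the log-probabilities would come from the VLM being fine-tuned and a frozen reference copy, and the numbers below are placeholders, not results):

import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # logp_w / logp_l: policy log-probs of the preferred / dispreferred report;
    # ref_*: the same quantities under the frozen reference model.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))   # -log(sigmoid(margin))

# e.g. a report without hallucinated priors (preferred) vs. one with them:
print(dpo_loss(logp_w=-42.0, logp_l=-40.0, ref_logp_w=-41.5, ref_logp_l=-40.5))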
[ "['Oishi Banerjee' 'Hong-Yu Zhou' 'Subathra Adithan' 'Stephen Kwak'\n 'Kay Wu' 'Pranav Rajpurkar']" ]
null
null
2406.06500
null
null
http://arxiv.org/pdf/2406.06500v1
2024-06-10T17:34:44Z
2024-06-10T17:34:44Z
Adaptive Opponent Policy Detection in Multi-Agent MDPs: Real-Time Strategy Switch Identification Using Running Error Estimation
In Multi-agent Reinforcement Learning (MARL), accurately perceiving opponents' strategies is essential for both cooperative and adversarial contexts, particularly within dynamic environments. While Proximal Policy Optimization (PPO) and related algorithms such as Actor-Critic with Experience Replay (ACER), Trust Region Policy Optimization (TRPO), and Deep Deterministic Policy Gradient (DDPG) perform well in single-agent, stationary environments, they suffer from high variance in MARL due to non-stationary and hidden policies of opponents, leading to diminished reward performance. Additionally, existing methods in MARL face significant challenges, including the need for inter-agent communication, reliance on explicit reward information, high computational demands, and sampling inefficiencies. These issues render them less effective in continuous environments where opponents may abruptly change their policies without prior notice. Against this background, we present OPS-DeMo (Online Policy Switch-Detection Model), an online algorithm that employs dynamic error decay to detect changes in opponents' policies. OPS-DeMo continuously updates its beliefs using an Assumed Opponent Policy (AOP) Bank and selects corresponding responses from a pre-trained Response Policy Bank. Each response policy is trained against consistently strategizing opponents, reducing training uncertainty and enabling the effective use of algorithms like PPO in multi-agent environments. Comparative assessments show that our approach outperforms PPO-trained models in dynamic scenarios like the Predator-Prey setting, providing greater robustness to sudden policy shifts and enabling more informed decision-making through precise opponent policy insights.
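A sketch of the running-error idea as we read it from the abstract (the decay schedule, surprise measure, and threshold below are assumptions, not the paper's exact formulation):

def detect_switch(action_stream, assumed_policy, decay=0.95, threshold=0.5):
    # action_stream: iterable of (state, action) pairs observed from the opponent;
    # assumed_policy(state) -> action-probability vector from the AOP Bank.
    err = 0.0
    for t, (s, a) in enumerate(action_stream):
        surprise = 1.0 - assumed_policy(s)[a]   # low prob under AOP => surprising
        err = decay * err + (1.0 - decay) * surprise
        if err > threshold:
            return t                            # declare a policy switch here
    return None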
[ "['Mohidul Haque Mridul' 'Mohammad Foysal Khan' 'Redwan Ahmed Rizvee'\n 'Md Mosaddek Khan']" ]
null
null
2406.06504
null
null
http://arxiv.org/pdf/2406.06504v1
2024-06-10T17:43:13Z
2024-06-10T17:43:13Z
Equivariant Neural Tangent Kernels
Equivariant neural networks have in recent years become an important technique for guiding architecture selection for neural networks with many applications in domains ranging from medical image analysis to quantum chemistry. In particular, as the most general linear equivariant layers with respect to the regular representation, group convolutions have been highly impactful in numerous applications. Although equivariant architectures have been studied extensively, much less is known about the training dynamics of equivariant neural networks. Concurrently, neural tangent kernels (NTKs) have emerged as a powerful tool to analytically understand the training dynamics of wide neural networks. In this work, we combine these two fields for the first time by giving explicit expressions for NTKs of group convolutional neural networks. In numerical experiments, we demonstrate superior performance for equivariant NTKs over non-equivariant NTKs on a classification task for medical images.
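For reference, the empirical neural tangent kernel underlying this analysis is the standard object

\[
  \Theta(x, x') \;=\; \Big\langle \frac{\partial f_\theta(x)}{\partial \theta},\;
                                  \frac{\partial f_\theta(x')}{\partial \theta} \Big\rangle,
\]

and equivariance of the architecture induces the kernel symmetry $\Theta(g \cdot x,\, g \cdot x') = \Theta(x, x')$ for all group elements $g$; the paper's contribution is the explicit form of $\Theta$ for group convolutional networks.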
[ "['Philipp Misof' 'Pan Kessel' 'Jan E. Gerken']" ]
null
null
2406.06506
null
null
http://arxiv.org/pdf/2406.06506v1
2024-06-10T17:44:11Z
2024-06-10T17:44:11Z
Online Newton Method for Bandit Convex Optimisation
We introduce a computationally efficient algorithm for zeroth-order bandit convex optimisation and prove that in the adversarial setting its regret is at most $d^{3.5} \sqrt{n}\, \mathrm{polylog}(n, d)$ with high probability, where $d$ is the dimension and $n$ is the time horizon. In the stochastic setting the bound improves to $M d^{2} \sqrt{n}\, \mathrm{polylog}(n, d)$, where $M \in [d^{-1/2}, d^{-1/4}]$ is a constant that depends on the geometry of the constraint set and the desired computational properties.
[ "['Hidde Fokkema' 'Dirk van der Hoeven' 'Tor Lattimore' 'Jack J. Mayo']" ]
null
null
2406.06507
null
null
http://arxiv.org/pdf/2406.06507v2
2024-06-20T23:07:12Z
2024-06-10T17:44:59Z
Verification-Guided Shielding for Deep Reinforcement Learning
In recent years, Deep Reinforcement Learning (DRL) has emerged as an effective approach to solving real-world tasks. However, despite their successes, DRL-based policies suffer from poor reliability, which limits their deployment in safety-critical domains. Various methods have been put forth to address this issue by providing formal safety guarantees. Two main approaches include shielding and verification. While shielding ensures the safe behavior of the policy by employing an external online component (i.e., a "shield") that overrides potentially dangerous actions, this approach has a significant computational cost, as the shield must be invoked at runtime to validate every decision. On the other hand, verification is an offline process that can identify unsafe policies prior to their deployment, yet without providing alternative actions when a policy is deemed unsafe. In this work, we present verification-guided shielding -- a novel approach that bridges the DRL reliability gap by integrating these two methods. Our approach combines formal and probabilistic verification tools to partition the input domain into safe and unsafe regions. In addition, we employ clustering and symbolic representation procedures that compress the unsafe regions into a compact representation. This, in turn, allows us to temporarily activate the shield solely in (potentially) unsafe regions, in an efficient manner. Our novel approach significantly reduces runtime overhead while still preserving formal safety guarantees. We extensively evaluate our approach on two benchmarks from the robotic navigation domain, and provide an in-depth analysis of its scalability and completeness.
[ "['Davide Corsi' 'Guy Amir' 'Andoni Rodriguez' 'Cesar Sanchez' 'Guy Katz'\n 'Roy Fox']" ]
null
null
2406.06509
null
null
http://arxiv.org/pdf/2406.06509v2
2024-06-24T17:53:33Z
2024-06-10T17:48:36Z
Robust Distribution Learning with Local and Global Adversarial Corruptions
We consider learning in an adversarial environment, where an $\varepsilon$-fraction of samples from a distribution $P$ are arbitrarily modified (global corruptions) and the remaining perturbations have average magnitude bounded by $\rho$ (local corruptions). Given access to $n$ such corrupted samples, we seek a computationally efficient estimator $\hat{P}_n$ that minimizes the Wasserstein distance $\mathsf{W}_1(\hat{P}_n, P)$. In fact, we attack the fine-grained task of minimizing $\mathsf{W}_1(\Pi_\# \hat{P}_n, \Pi_\# P)$ for all orthogonal projections $\Pi \in \mathbb{R}^{d \times d}$, with performance scaling with $\mathrm{rank}(\Pi) = k$. This allows us to account simultaneously for mean estimation ($k=1$), distribution estimation ($k=d$), as well as the settings interpolating between these two extremes. We characterize the optimal population-limit risk for this task and then develop an efficient finite-sample algorithm with error bounded by $\sqrt{\varepsilon k} + \rho + \tilde{O}(d\sqrt{k}\, n^{-1/(k \lor 2)})$ when $P$ has bounded covariance. This guarantee holds uniformly in $k$ and is minimax optimal up to the sub-optimality of the plug-in estimator when $\rho = \varepsilon = 0$. Our efficient procedure relies on a novel trace norm approximation of an ideal yet intractable 2-Wasserstein projection estimator. We apply this algorithm to robust stochastic optimization, and, in the process, uncover a new method for overcoming the curse of dimensionality in Wasserstein distributionally robust optimization.
[ "['Sloan Nietert' 'Ziv Goldfeld' 'Soroosh Shafiee']" ]
null
null
2406.06514
null
null
http://arxiv.org/pdf/2406.06514v2
2024-06-11T02:32:16Z
2024-06-10T17:54:57Z
Random Features Approximation for Control-Affine Systems
Modern data-driven control applications call for flexible nonlinear models that are amenable to principled controller synthesis and realtime feedback. Many nonlinear dynamical systems of interest are control affine. We propose two novel classes of nonlinear feature representations which capture control-affine structure while allowing for arbitrary complexity in the state dependence. Our methods make use of random features (RF) approximations, inheriting the expressiveness of kernel methods at a lower computational cost. We formalize the representational capabilities of our methods by showing their relationship to the Affine Dot Product (ADP) kernel proposed by Castañeda et al. (2021) and a novel Affine Dense (AD) kernel that we introduce. We further illustrate their utility by presenting a case study of data-driven optimization-based control using control certificate functions (CCF). Simulation experiments on a double pendulum empirically demonstrate the advantages of our methods.
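A sketch of a control-affine random-features model in the spirit of the ADP construction (the feature maps, dimensions, and weights below are illustrative assumptions): random Fourier features are drawn separately for the drift and the input gain, and the two are combined affinely in the control $u$.

import numpy as np

rng = np.random.default_rng(3)
d_x, d_u, D = 4, 2, 64               # state dim, input dim, number of features

Wf, bf = rng.normal(size=(D, d_x)), rng.uniform(0, 2 * np.pi, D)
Wg, bg = rng.normal(size=(D, d_x)), rng.uniform(0, 2 * np.pi, D)

def rff(x, W, b):
    # Random Fourier features approximating a Gaussian kernel.
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def model(x, u, a, B):
    # One coordinate of the dynamics: a^T phi_f(x) + (B phi_g(x)) . u,
    # which is affine in u by construction.
    return a @ rff(x, Wf, bf) + (B @ rff(x, Wg, bg)) @ u

a = rng.normal(size=D)               # placeholder weights (would be learned)
B = rng.normal(size=(d_u, D))
print(model(rng.normal(size=d_x), rng.normal(size=d_u), a, B))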
[ "['Kimia Kazemian' 'Yahya Sattar' 'Sarah Dean']" ]
null
null
2406.06516
null
null
http://arxiv.org/pdf/2406.06516v1
2024-06-10T17:55:43Z
2024-06-10T17:55:43Z
Distribution-Free Predictive Inference under Unknown Temporal Drift
Distribution-free prediction sets play a pivotal role in uncertainty quantification for complex statistical models. Their validity hinges on reliable calibration data, which may not be readily available as real-world environments often undergo unknown changes over time. In this paper, we propose a strategy for choosing an adaptive window and use the data therein to construct prediction sets. The window is selected by optimizing an estimated bias-variance tradeoff. We provide sharp coverage guarantees for our method, showing its adaptivity to the underlying temporal drift. We also illustrate its efficacy through numerical experiments on synthetic and real data.
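A sketch of the adaptive-window idea under an assumed form of the bias-variance tradeoff (the drift and variance proxies below are illustrative; the paper optimizes an estimated tradeoff):

import numpy as np

def choose_window(scores, candidates, c=1.0):
    # scores: nonconformity scores in time order (most recent last);
    # candidates: window sizes sorted from smallest to largest.
    best_k, best_obj = None, np.inf
    for k in candidates:
        bias = abs(np.mean(scores[-k:]) - np.mean(scores[-candidates[0]:]))
        var = c / np.sqrt(k)                    # shrinks as the window grows
        if bias + var < best_obj:
            best_k, best_obj = k, bias + var
    return best_k

def prediction_interval(y_hat, scores, k, alpha=0.1):
    q = np.quantile(scores[-k:], 1 - alpha)     # split-conformal quantile
    return y_hat - q, y_hat + q

rng = np.random.default_rng(4)
scores = np.abs(rng.normal(size=500) + np.linspace(0, 1, 500))  # drifting
k = choose_window(scores, candidates=[50, 100, 200, 400])
print(k, prediction_interval(0.0, scores, k))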
[ "['Elise Han' 'Chengpiao Huang' 'Kaizheng Wang']" ]
null
null
2406.06518
null
null
http://arxiv.org/pdf/2406.06518v1
2024-06-10T17:58:02Z
2024-06-10T17:58:02Z
Data Augmentation for Multivariate Time Series Classification: An Experimental Study
Our study investigates the impact of data augmentation on the performance of multivariate time series models, focusing on datasets from the UCR archive. Despite the limited size of these datasets, we achieved classification accuracy improvements in 10 out of 13 datasets using the Rocket and InceptionTime models. This highlights the essential role of sufficient data in training effective models, paralleling the advancements seen in computer vision. Our work delves into adapting and applying existing methods in innovative ways to the domain of multivariate time series classification. Our comprehensive exploration of these techniques sets a new standard for addressing data scarcity in time series analysis, emphasizing that diverse augmentation strategies are crucial for unlocking the potential of both traditional and deep learning models. Moreover, by meticulously analyzing and applying a variety of augmentation techniques, we demonstrate that strategic data enrichment can enhance model accuracy. This not only establishes a benchmark for future research in time series analysis but also underscores the importance of adopting varied augmentation approaches to improve model performance in the face of limited data availability.
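Typical augmentations of the kind explored in such studies include jittering, per-channel scaling, and window slicing; a generic sketch (the paper's exact transform set and parameters may differ):

import numpy as np

def jitter(x, sigma=0.03, rng=None):
    rng = rng or np.random.default_rng()
    return x + rng.normal(0, sigma, size=x.shape)            # additive noise

def scale(x, sigma=0.1, rng=None):
    rng = rng or np.random.default_rng()
    return x * rng.normal(1.0, sigma, size=(1, x.shape[1]))  # per-channel factor

def window_slice(x, ratio=0.9, rng=None):
    rng = rng or np.random.default_rng()
    L = int(x.shape[0] * ratio)
    start = rng.integers(0, x.shape[0] - L + 1)
    return x[start:start + L]                                # random contiguous crop

x = np.random.default_rng(5).normal(size=(128, 6))           # (time, channels)
augmented = [jitter(x), scale(x), window_slice(x)]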
[ "['Romain Ilbert' 'Thai V. Hoang' 'Zonghua Zhang']" ]
null
null
2406.06520
null
null
http://arxiv.org/pdf/2406.06520v1
2024-06-10T17:58:48Z
2024-06-10T17:58:48Z
Decentralized Personalized Federated Learning
This work tackles the challenges of data heterogeneity and communication limitations in decentralized federated learning. We focus on creating a collaboration graph that guides each client in selecting suitable collaborators for training personalized models that leverage their local data effectively. Our approach addresses these issues through a novel, communication-efficient strategy that enhances resource efficiency. Unlike traditional methods, our formulation identifies collaborators at a granular level by considering combinatorial relations of clients, enhancing personalization while minimizing communication overhead. We achieve this through a bi-level optimization framework that employs a constrained greedy algorithm, resulting in a resource-efficient collaboration graph for personalized learning. Extensive evaluation against various baselines across diverse datasets demonstrates the superiority of our method, named DPFL. DPFL consistently outperforms other approaches, showcasing its effectiveness in handling real-world data heterogeneity, minimizing communication overhead, enhancing resource efficiency, and building personalized models in decentralized federated learning scenarios.
[ "['Salma Kharrat' 'Marco Canini' 'Samuel Horvath']" ]
null
null
2406.06531
null
null
http://arxiv.org/pdf/2406.06531v1
2024-04-11T13:21:49Z
2024-04-11T13:21:49Z
Quantum Reinforcement Learning in Non-Abelian Environments: Unveiling Novel Formulations and Quantum Advantage Exploration
This paper delves into recent advancements in Quantum Reinforcement Learning (QRL), particularly focusing on non-commutative environments, which represent uncharted territory in this field. Our research endeavors to redefine the boundaries of decision-making by introducing formulations and strategies that harness the inherent properties of quantum systems. At the core of our investigation is the characterization of the agent's state space within a Hilbert space ($\mathcal{H}$). Here, quantum states emerge as complex superpositions of classical states. We introduce non-commutative quantum actions governed by unitary operators, necessitating a reimagining of state transitions. Complementing this framework is a refined reward function, rooted in quantum mechanics as a Hermitian operator on $\mathcal{H}$. This reward function serves as the foundation for the agent's decision-making process. By leveraging the quantum Bellman equation, we establish a methodology for maximizing expected cumulative reward over an infinite horizon, considering the entangled dynamics of quantum systems. We also connect the quantum Bellman equation to the degree of non-commutativity of the environment, as is evident in pure algebra. We design a quantum advantage function that exploits the latent quantum parallelism inherent in the system, enhancing the agent's decision-making capabilities and paving the way for the exploration of quantum advantage in uncharted territories. Furthermore, we directly address the significant challenge of quantum exploration, recognizing the limitations of traditional strategies in this complex environment.
[ "['Shubhayan Ghosal']" ]
null
null
2406.06537
null
null
http://arxiv.org/pdf/2406.06537v1
2024-04-23T12:36:07Z
2024-04-23T12:36:07Z
Interactive Generation of Laparoscopic Videos with Diffusion Models
Generative AI, in general, and synthetic visual data generation, in specific, hold much promise for benefiting surgical training by providing photorealism to simulation environments. Current training methods primarily rely on reading materials and observing live surgeries, which can be time-consuming and impractical. In this work, we take a significant step towards improving the training process. Specifically, we use diffusion models in combination with a zero-shot video diffusion method to interactively generate realistic laparoscopic images and videos by specifying a surgical action through text and guiding the generation with tool positions through segmentation masks. We demonstrate the performance of our approach using the publicly available Cholec dataset family and evaluate the fidelity and factual correctness of our generated images using a surgical action recognition model as well as the pixel-wise F1-score for the spatial control of tool generation. We achieve an FID of 38.097 and an F1-score of 0.71.
[ "['Ivan Iliash' 'Simeon Allmendinger' 'Felix Meissen' 'Niklas Kühl'\n 'Daniel Rückert']" ]
null
null
2406.06538
null
null
http://arxiv.org/abs/2406.06538v1
2024-04-23T16:23:18Z
2024-04-23T16:23:18Z
Understanding attention-based encoder-decoder networks: a case study with chess scoresheet recognition
Deep neural networks are widely used for complex prediction tasks. There is plenty of empirical evidence of their successful end-to-end training for a diversity of tasks. Success is often measured based solely on the final performance of the trained network, and explanations of when, why, and how they work are less emphasized. In this paper we study encoder-decoder recurrent neural networks with attention mechanisms for the task of reading handwritten chess scoresheets. Rather than prediction performance, our concern is to better understand how learning occurs in this type of network. We characterize the task in terms of three subtasks, namely input-output alignment, sequential pattern recognition, and handwriting recognition, and experimentally investigate which factors affect their learning. We identify competition, collaboration, and dependence relations between the subtasks, and argue that such knowledge might help one to better balance factors to properly train a network.
[ "['Sergio Y. Hayashi' 'Nina S. T. Hirata']" ]
null
null
2406.06542
null
null
http://arxiv.org/pdf/2406.06542v1
2024-05-01T16:24:53Z
2024-05-01T16:24:53Z
vMCU: Coordinated Memory Management and Kernel Optimization for DNN Inference on MCUs
IoT devices based on microcontroller units (MCUs) provide ultra-low power consumption and ubiquitous computation for near-sensor deep learning models (DNNs). However, the memory of an MCU is usually 2-3 orders of magnitude smaller than that of mobile devices, which makes it challenging to map DNNs onto MCUs. Previous work separates memory management and kernel implementation for MCUs and relies on coarse-grained memory management techniques such as in-place update to reduce memory consumption. In this paper, we propose to coordinate memory management and kernel optimization for DNN inference on MCUs to enable fine-grained memory management. The key idea is to virtualize the limited memory of the MCU as a large memory pool. Each kernel divides the memory pool into kernel-specific segments and handles segment loads and stores while computing DNN layers. Memory consumption can be reduced because, with fine-grained segment-level memory control, the memory footprints of different tensors can overlap whenever they need not be materialized at the same time. Following this idea, we implement vMCU for DNN inference on MCUs. Evaluation of single layers on ARM Cortex-M4 and Cortex-M7 processors shows that vMCU can reduce RAM usage by 12.0% to 49.5% and energy consumption by 20.6% to 53.0% compared to state-of-the-art work. For full DNN evaluation, vMCU can reduce the memory bottleneck by 61.5%, enabling more models to be deployed on low-end MCUs.
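A toy model of the segment-level overlap idea (our reading of the abstract; the actual kernel-coordinated implementation is far more involved): tensors reserve byte ranges in one shared pool, and two ranges may overlap whenever the tensors' lifetimes do not.

def lifetimes_overlap(a, b):
    return not (a[1] < b[0] or b[1] < a[0])

def assign_offsets(tensors):
    # tensors: list of (name, size_bytes, (first_use_layer, last_use_layer)).
    placed, peak = [], 0
    for name, size, life in sorted(tensors, key=lambda t: -t[1]):
        offset = 0
        for _, o, s, l in sorted(placed, key=lambda p: p[1]):
            clash = lifetimes_overlap(life, l) and offset < o + s and o < offset + size
            if clash:
                offset = o + s          # live ranges collide: slide past
        placed.append((name, offset, size, life))
        peak = max(peak, offset + size)
    return placed, peak

tensors = [("act0", 4096, (0, 1)), ("act1", 4096, (1, 2)), ("act2", 2048, (2, 3))]
plan, peak = assign_offsets(tensors)
print(peak)   # 8192 < 10240: act2 reuses act0's bytes (disjoint lifetimes)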
[ "['Size Zheng' 'Renze Chen' 'Meng Li' 'Zihao Ye' 'Luis Ceze' 'Yun Liang']" ]
null
null
2406.06543
null
null
http://arxiv.org/pdf/2406.06543v1
2024-05-06T10:30:05Z
2024-05-06T10:30:05Z
SparrowSNN: A Hardware/software Co-design for Energy Efficient ECG Classification
Heart disease is one of the leading causes of death worldwide. Given its high risk and often asymptomatic nature, real-time continuous monitoring is essential. Unlike traditional artificial neural networks (ANNs), spiking neural networks (SNNs) are well-known for their energy efficiency, making them ideal for wearable devices and energy-constrained edge computing platforms. However, current energy measurements of SNN implementations for detecting heart diseases typically rely on empirical values, often overlooking hardware overhead. Additionally, the integrate-and-fire activations in SNNs require multiple memory accesses and repeated computations, which can further compromise energy efficiency. In this paper, we propose sparrowSNN, a redesign of the standard SNN workflow from a hardware perspective, and present a dedicated ASIC design for SNNs, optimized for ultra-low-power wearable devices used in heartbeat classification. Using the MIT-BIH dataset, our SNN achieves a state-of-the-art accuracy of 98.29% for SNNs, with energy consumption of 31.39 nJ per inference and power usage of 6.1 uW, giving sparrowSNN the highest accuracy and the lowest energy use among comparable systems. We also compare the energy-to-accuracy trade-offs between SNNs and quantized ANNs, offering insights on how best to use SNNs.
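For context, a minimal leaky integrate-and-fire neuron of the kind SNN hardware implements (a generic sketch; sparrowSNN's exact neuron model and quantization are not specified here):

import numpy as np

def lif_forward(inputs, v_th=1.0, leak=0.9):
    # inputs: (T,) input current per timestep; returns a binary spike train.
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # leaky integration of membrane potential
        if v >= v_th:
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

print(lif_forward(np.array([0.3, 0.4, 0.5, 0.1, 0.9])))   # [0 0 1 0 0]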
[ "['Zhanglu Yan' 'Zhenyu Bai' 'Tulika Mitra' 'Weng-Fai Wong']" ]
null
null
2406.06552
null
null
http://arxiv.org/pdf/2406.06552v1
2024-05-28T14:24:36Z
2024-05-28T14:24:36Z
Optimizing Sharpe Ratio: Risk-Adjusted Decision-Making in Multi-Armed Bandits
Sharpe Ratio (SR) is a critical parameter in characterizing financial time series as it jointly considers the reward and the volatility of any stock/portfolio through its variance. Deriving online algorithms for optimizing the SR is particularly challenging since even offline policies experience constant regret with respect to the best expert (Even-Dar et al., 2006). Thus, instead of optimizing the usual definition of SR, we optimize the regularized squared SR (RSSR). We consider two settings for the RSSR: Regret Minimization (RM) and Best Arm Identification (BAI). In this regard, we propose a novel multi-armed bandit (MAB) algorithm for RM called UCB-RSSR for RSSR maximization. We derive a path-dependent concentration bound for the estimate of the RSSR. Based on that, we derive the regret guarantees of UCB-RSSR and show that it evolves as O(log n) for the two-armed bandit case played over a horizon n. We also consider a fixed-budget setting for well-known BAI algorithms, i.e., sequential halving and successive rejects, and propose the SHVV, SHSR, and SuRSR algorithms. We derive upper bounds for the error probability of all proposed BAI algorithms. We demonstrate that UCB-RSSR outperforms the only other known SR-optimizing bandit algorithm, U-UCB (Cassel et al., 2023). We also establish its efficacy with respect to other benchmarks derived from the GRA-UCB and MVTS algorithms. We further demonstrate the performance of the proposed BAI algorithms across multiple setups. Our research highlights that the proposed algorithms will find extensive applications in risk-aware portfolio management problems.
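A sketch of UCB-style arm selection on a regularized squared Sharpe ratio, assuming the form RSSR = mean^2 / (variance + lambda) and a generic optimism bonus (the paper's exact estimator, concentration bound, and bonus may differ):

import numpy as np

def ucb_rssr_pick(rewards_per_arm, t, lam=0.1, c=2.0):
    # rewards_per_arm: list of 1-D arrays of observed rewards per arm; t: round.
    scores = []
    for i, r in enumerate(rewards_per_arm):
        n = len(r)
        if n == 0:
            return i                                  # play each arm once first
        rssr = r.mean() ** 2 / (r.var() + lam)        # regularized squared SR
        scores.append(rssr + c * np.sqrt(np.log(t) / n))
    return int(np.argmax(scores))

rng = np.random.default_rng(6)
arms = [rng.normal(0.05, 0.10, size=20), rng.normal(0.02, 0.05, size=20)]
print(ucb_rssr_pick(arms, t=40))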
[ "['Sabrina Khurshid' 'Mohammed Shahid Abdulla' 'Gourab Ghatak']" ]
null
null
2406.06553
null
null
http://arxiv.org/pdf/2406.06553v1
2024-05-30T10:03:58Z
2024-05-30T10:03:58Z
Ensemble Model with BERT, RoBERTa and XLNet for Molecular Property Prediction
This paper presents a novel approach for predicting molecular properties with high accuracy without the need for extensive pre-training. Employing ensemble learning and supervised fine-tuning of BERT, RoBERTa, and XLNet, our method demonstrates significant effectiveness compared to existing advanced models. Crucially, it addresses the issue of limited computational resources faced by experimental groups, enabling them to accurately predict molecular properties. This innovation provides a cost-effective and resource-efficient solution, potentially advancing further research in the molecular domain.
[ "['Junling Hu']" ]
null
null
2406.06555
null
null
http://arxiv.org/pdf/2406.06555v1
2024-06-01T07:06:57Z
2024-06-01T07:06:57Z
An Evaluation Benchmark for Autoformalization in Lean4
Large Language Models (LLMs) hold the potential to revolutionize autoformalization. The introduction of Lean4, a mathematical programming language, presents an unprecedented opportunity to rigorously assess the autoformalization capabilities of LLMs. This paper introduces a novel evaluation benchmark designed for Lean4, applying it to test the abilities of state-of-the-art LLMs, including GPT-3.5, GPT-4, and Gemini Pro. Our comprehensive analysis reveals that, despite recent advancements, these LLMs still exhibit limitations in autoformalization, particularly in more complex areas of mathematics. These findings underscore the need for further development in LLMs to fully harness their potential in scientific research and development. This study not only benchmarks current LLM capabilities but also sets the stage for future enhancements in autoformalization.
[ "['Aryan Gulati' 'Devanshu Ladsaria' 'Shubhra Mishra' 'Jasdeep Sidhu'\n 'Brando Miranda']" ]
null
null
2406.06559
null
null
http://arxiv.org/pdf/2406.06559v1
2024-06-02T06:24:38Z
2024-06-02T06:24:38Z
Harnessing Business and Media Insights with Large Language Models
This paper introduces Fortune Analytics Language Model (FALM). FALM empowers users with direct access to comprehensive business analysis, including market trends, company performance metrics, and expert insights. Unlike generic LLMs, FALM leverages a curated knowledge base built from professional journalism, enabling it to deliver precise and in-depth answers to intricate business questions. Users can further leverage natural language queries to directly visualize financial data, generating insightful charts and graphs to understand trends across diverse business sectors clearly. FALM fosters user trust and ensures output accuracy through three novel methods: 1) Time-aware reasoning guarantees accurate event registration and prioritizes recent updates. 2) Thematic trend analysis explicitly examines topic evolution over time, providing insights into emerging business landscapes. 3) Content referencing and task decomposition enhance answer fidelity and data visualization accuracy. We conduct both automated and human evaluations, demonstrating FALM's significant performance improvements over baseline methods while prioritizing responsible AI practices. These benchmarks establish FALM as a cutting-edge LLM in the business and media domains, with exceptional accuracy and trustworthiness.
[ "['Yujia Bao' 'Ankit Parag Shah' 'Neeru Narang' 'Jonathan Rivers'\n 'Rajeev Maksey' 'Lan Guan' 'Louise N. Barrere' 'Shelley Evenson'\n 'Rahul Basole' 'Connie Miao' 'Ankit Mehta' 'Fabien Boulay' 'Su Min Park'\n 'Natalie E. Pearson' 'Eldhose Joy' 'Tiger He' 'Sumiran Thakur'\n 'Koustav Ghosal' 'Josh On' 'Phoebe Morrison' 'Tim Major' 'Eva Siqi Wang'\n 'Gina Escobar' 'Jiaheng Wei' 'Tharindu Cyril Weerasooriya' 'Queena Song'\n 'Daria Lashkevich' 'Clare Chen' 'Gyuhak Kim' 'Dengpan Yin' 'Don Hejna'\n 'Mo Nomeli' 'Wei Wei']" ]