categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2406.10867 | null | null | http://arxiv.org/pdf/2406.10867v1 | 2024-06-16T09:32:19Z | 2024-06-16T09:32:19Z | Geometric-informed GFlowNets for Structure-Based Drug Design | The rising cost of drug discovery and the current pace at which new drugs are discovered underscore the need for more efficient structure-based drug design (SBDD) methods. We employ Generative Flow Networks (GFlowNets) to effectively explore the vast combinatorial space of drug-like molecules, which traditional virtual screening methods fail to cover. We introduce a novel modification to the GFlowNet framework by incorporating trigonometrically consistent embeddings, previously utilized in tasks involving protein conformation and protein-ligand interactions, to enhance the model's ability to generate molecules tailored to specific protein pockets. We have modified the existing protein conditioning used by GFlowNets, blending geometric information from both protein and ligand embeddings to achieve more geometrically consistent embeddings. Experiments conducted using CrossDocked2020 demonstrated an improvement in the binding affinity between generated molecules and protein pockets for both single and multi-objective tasks, compared to previous work. Additionally, we propose future work aimed at further increasing the geometric information captured in protein-ligand interactions. | [
"['Grayson Lee' 'Tony Shen' 'Martin Ester']"
]
|
null | null | 2406.10871 | null | null | http://arxiv.org/pdf/2406.10871v1 | 2024-06-16T09:46:58Z | 2024-06-16T09:46:58Z | Graph Neural Reaction Diffusion Models | The integration of Graph Neural Networks (GNNs) and Neural Ordinary and Partial Differential Equations has been extensively studied in recent years. GNN architectures powered by neural differential equations allow us to reason about their behavior, and develop GNNs with desired properties such as controlled smoothing or energy conservation. In this paper we take inspiration from Turing instabilities in a Reaction Diffusion (RD) system of partial differential equations, and propose a novel family of GNNs based on neural RD systems. We demonstrate that our RDGNN is powerful for the modeling of various data types, from homophilic to heterophilic and spatio-temporal datasets. We discuss the theoretical properties of our RDGNN, its implementation, and show that it improves upon or offers performance competitive with state-of-the-art methods. | [
"['Moshe Eliasof' 'Eldad Haber' 'Eran Treister']"
]
|
null | null | 2406.10876 | null | null | http://arxiv.org/pdf/2406.10876v1 | 2024-06-16T09:59:29Z | 2024-06-16T09:59:29Z | Deep neural networks with ReLU, leaky ReLU, and softplus activation
provably overcome the curse of dimensionality for space-time solutions of
semilinear partial differential equations | It is a challenging topic in applied mathematics to solve high-dimensional nonlinear partial differential equations (PDEs). Standard approximation methods for nonlinear PDEs suffer under the curse of dimensionality (COD) in the sense that the number of computational operations of the approximation method grows at least exponentially in the PDE dimension and with such methods it is essentially impossible to approximately solve high-dimensional PDEs even when the fastest currently available computers are used. However, in recent years great progress has been made in this area of research through suitable deep learning (DL) based methods for PDEs in which deep neural networks (DNNs) are used to approximate solutions of PDEs. Despite the remarkable success of such DL methods in simulations, it remains a fundamental open problem of research to prove (or disprove) that such methods can overcome the COD in the approximation of PDEs. However, there are nowadays several partial error analysis results for DL methods for high-dimensional nonlinear PDEs in the literature which prove that DNNs can overcome the COD in the sense that the number of parameters of the approximating DNN grows at most polynomially in both the reciprocal of the prescribed approximation accuracy $\varepsilon>0$ and the PDE dimension $d\in\mathbb{N}$. In the main result of this article we prove that for all $T,p\in(0,\infty)$ it holds that solutions $u_d\colon[0,T]\times\mathbb{R}^d\to\mathbb{R}$, $d\in\mathbb{N}$, of semilinear heat equations with Lipschitz continuous nonlinearities can be approximated in the $L^p$-sense on space-time regions without the COD by DNNs with the rectified linear unit (ReLU), the leaky ReLU, or the softplus activation function. In previous articles similar results have been established not for space-time regions but for the solutions $u_d(T,\cdot)$, $d\in\mathbb{N}$, at the terminal time $T$. | [
"['Julia Ackermann' 'Arnulf Jentzen' 'Benno Kuckuck' 'Joshua Lee Padgett']"
]
|
null | null | 2406.10884 | null | null | http://arxiv.org/pdf/2406.10884v1 | 2024-06-16T10:31:45Z | 2024-06-16T10:31:45Z | Linkage on Security, Privacy and Fairness in Federated Learning: New
Balances and New Perspectives | Federated learning is fast becoming a popular paradigm for applications involving mobile devices, banking systems, healthcare, and IoT systems. Hence, over the past five years, researchers have undertaken extensive studies on the privacy leaks, security threats, and fairness associated with these emerging models. For the most part, these three critical concepts have been studied in isolation; however, recent research has revealed that there may be an intricate interplay between them. For instance, some researchers have discovered that pursuing fairness may compromise privacy, or that efforts to enhance security can impact fairness. These emerging insights shed light on the fundamental connections between privacy, security, and fairness within federated learning, and, by delving deeper into these interconnections, we may be able to significantly augment research and development across the field. Consequently, the aim of this survey is to offer comprehensive descriptions of the privacy, security, and fairness issues in federated learning. Moreover, we analyze the complex relationships between these three dimensions of cyber safety and pinpoint the fundamental elements that influence each of them. We contend that there exists a trade-off between privacy and fairness and between security and gradient sharing. On this basis, fairness can function as a bridge between privacy and security to build models that are either more secure or more private. Building upon our observations, we identify the trade-offs between privacy and fairness and between security and fairness within the context of federated learning. The survey then concludes with promising directions for future research in this vanguard field. | [
"['Linlin Wang' 'Tianqing Zhu' 'Wanlei Zhou' 'Philip S. Yu']"
]
|
null | null | 2406.10886 | null | null | http://arxiv.org/pdf/2406.10886v1 | 2024-06-16T10:36:41Z | 2024-06-16T10:36:41Z | Distilling Opinions at Scale: Incremental Opinion Summarization using
XL-OPSUMM | Opinion summarization in e-commerce encapsulates the collective views of numerous users about a product based on their reviews. Typically, a product on an e-commerce platform has thousands of reviews, each review comprising around 10-15 words. While Large Language Models (LLMs) have shown proficiency in summarization tasks, they struggle to handle such a large volume of reviews due to context limitations. To mitigate this, we propose a scalable framework called Xl-OpSumm that generates summaries incrementally. However, the existing test set, AMASUM, has only 560 reviews per product on average. Due to the lack of a test set with thousands of reviews, we created a new test set called Xl-Flipkart by gathering data from the Flipkart website and generating summaries using GPT-4. Through various automatic evaluations and extensive analysis, we evaluated the framework's efficiency on two datasets, AMASUM and Xl-Flipkart. Experimental results show that our framework, Xl-OpSumm powered by Llama-3-8B-8k, achieves an average ROUGE-1 F1 gain of 4.38% and a ROUGE-L F1 gain of 3.70% over the next best-performing model. | [
"['Sri Raghava Muddu' 'Rupasai Rangaraju' 'Tejpalsingh Siledar'\n 'Swaroop Nath' 'Pushpak Bhattacharyya' 'Swaprava Nath' 'Suman Banerjee'\n 'Amey Patil' 'Muthusamy Chelliah' 'Sudhanshu Shekhar Singh'\n 'Nikesh Garera']"
]
|
null | null | 2406.10889 | null | null | http://arxiv.org/pdf/2406.10889v1 | 2024-06-16T10:42:21Z | 2024-06-16T10:42:21Z | VELOCITI: Can Video-Language Models Bind Semantic Concepts through Time? | Compositionality is a fundamental aspect of vision-language understanding and is especially required for videos since they contain multiple entities (e.g. persons, actions, and scenes) interacting dynamically over time. Existing benchmarks focus primarily on perception capabilities. However, they do not study binding, the ability of a model to associate entities through appropriate relationships. To this end, we propose VELOCITI, a new benchmark building on complex movie clips and dense semantic role label annotations to test perception and binding in video language models (contrastive and Video-LLMs). Our perception-based tests require discriminating video-caption pairs that share similar entities, and the binding tests require models to associate the correct entity to a given situation while ignoring the different yet plausible entities that also appear in the same video. While current state-of-the-art models perform moderately well on perception tests, accuracy is near random when both entities are present in the same video, indicating that they fail at binding tests. Even the powerful Gemini 1.5 Flash has a substantial gap (16-28%) with respect to human accuracy in such binding tests. | [
"['Darshana Saravanan' 'Darshan Singh' 'Varun Gupta' 'Zeeshan Khan'\n 'Vineet Gandhi' 'Makarand Tapaswi']"
]
|
null | null | 2406.10890 | null | null | http://arxiv.org/pdf/2406.10890v1 | 2024-06-16T10:47:21Z | 2024-06-16T10:47:21Z | RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language
Models | Large language models (LLMs) inevitably memorize sensitive, copyrighted, and harmful knowledge from the training corpus; therefore, it is crucial to erase this knowledge from the models. Machine unlearning is a promising solution for efficiently removing specific knowledge by modifying models post hoc. In this paper, we propose a Real-World Knowledge Unlearning benchmark (RWKU) for LLM unlearning. RWKU is designed based on the following three key factors: (1) For the task setting, we consider a more practical and challenging unlearning setting, where neither the forget corpus nor the retain corpus is accessible. (2) For the knowledge source, we choose 200 real-world famous people as the unlearning targets and show that such popular knowledge is widely present in various LLMs. (3) For the evaluation framework, we design the forget set and the retain set to evaluate the model's capabilities across various real-world applications. Regarding the forget set, we provide four membership inference attack (MIA) methods and nine kinds of adversarial attack probes to rigorously test unlearning efficacy. Regarding the retain set, we assess locality and utility in terms of neighbor perturbation, general ability, reasoning ability, truthfulness, factuality, and fluency. We conduct extensive experiments across two unlearning scenarios, two models, and six baseline methods, and obtain some meaningful findings. We release our benchmark and code publicly at http://rwku-bench.github.io for future work. | [
"['Zhuoran Jin' 'Pengfei Cao' 'Chenhao Wang' 'Zhitao He' 'Hongbang Yuan'\n 'Jiachun Li' 'Yubo Chen' 'Kang Liu' 'Jun Zhao']"
]
|
null | null | 2406.10891 | null | null | http://arxiv.org/pdf/2406.10891v2 | 2024-06-18T12:54:48Z | 2024-06-16T10:49:23Z | Benchmarking Label Noise in Instance Segmentation: Spatial Noise Matters | Obtaining accurate labels for instance segmentation is particularly challenging due to the complex nature of the task. Each image necessitates multiple annotations, encompassing not only the object's class but also its precise spatial boundaries. These requirements elevate the likelihood of errors and inconsistencies in both manual and automated annotation processes. By simulating different noise conditions, we provide a realistic scenario for assessing the robustness and generalization capabilities of instance segmentation models in different segmentation tasks, introducing COCO-N and Cityscapes-N. We also propose a benchmark for weak annotation noise, dubbed COCO-WAN, which utilizes foundation models and weak annotations to simulate semi-automated annotation tools and their noisy labels. This study sheds light on the quality of segmentation masks produced by various models and challenges the efficacy of popular methods designed to address learning with label noise. | [
"['Eden Grad' 'Moshe Kimhi' 'Lion Halika' 'Chaim Baskin']"
]
|
null | null | 2406.10892 | null | null | http://arxiv.org/pdf/2406.10892v1 | 2024-06-16T10:49:41Z | 2024-06-16T10:49:41Z | DIPPER: Direct Preference Optimization to Accelerate Primitive-Enabled
Hierarchical Reinforcement Learning | Learning control policies to perform complex robotics tasks from human preference data presents significant challenges. On the one hand, the complexity of such tasks typically requires learning policies to perform a variety of subtasks, then combining them to achieve the overall goal. At the same time, comprehensive, well-engineered reward functions are typically unavailable in such problems, whereas limited human preference data often is available; making efficient use of such data to guide learning is therefore essential. Methods for learning to perform complex robotics tasks from human preference data must overcome both these challenges simultaneously. In this work, we introduce DIPPER: Direct Preference Optimization to Accelerate Primitive-Enabled Hierarchical Reinforcement Learning, an efficient hierarchical approach that leverages direct preference optimization to learn a higher-level policy and reinforcement learning to learn a lower-level policy. DIPPER enjoys improved computational efficiency due to its use of direct preference optimization instead of standard preference-based approaches such as reinforcement learning from human feedback, while it also mitigates the well-known hierarchical reinforcement learning issues of non-stationarity and infeasible subgoal generation due to our use of primitive-informed regularization inspired by a novel bi-level optimization formulation of the hierarchical reinforcement learning problem. To validate our approach, we perform extensive experimental analysis on a variety of challenging robotics tasks, demonstrating that DIPPER outperforms hierarchical and non-hierarchical baselines, while ameliorating the non-stationarity and infeasible subgoal generation issues of hierarchical reinforcement learning. | [
"['Utsav Singh' 'Souradip Chakraborty' 'Wesley A. Suttle' 'Brian M. Sadler'\n 'Vinay P Namboodiri' 'Amrit Singh Bedi']"
]
|
null | null | 2406.10903 | null | null | http://arxiv.org/pdf/2406.10903v1 | 2024-06-16T11:56:50Z | 2024-06-16T11:56:50Z | New Solutions on LLM Acceleration, Optimization, and Application | Large Language Models (LLMs) have become extremely potent instruments with exceptional capacities for comprehending and producing human-like text in a wide range of applications. However, the increasing size and complexity of LLMs present significant challenges in both training and deployment, leading to substantial computational and storage costs as well as heightened energy consumption. In this paper, we provide a review of recent advancements and research directions aimed at addressing these challenges and enhancing the efficiency of LLM-based systems. We begin by discussing algorithm-level acceleration techniques focused on optimizing LLM inference speed and resource utilization. We also explore LLM-hardware co-design strategies with a vision to improve system efficiency by tailoring hardware architectures to LLM requirements. Further, we delve into LLM-to-accelerator compilation approaches, which involve customizing hardware accelerators for efficient LLM deployment. Finally, as a case study to leverage LLMs for assisting circuit design, we examine LLM-aided design methodologies for an important task: High-Level Synthesis (HLS) functional verification, by creating a new dataset that contains a large number of buggy and bug-free codes, which can be essential for training LLMs to specialize on HLS verification and debugging. For each aspect mentioned above, we begin with a detailed background study, followed by the presentation of several novel solutions proposed to overcome specific challenges. We then outline future research directions to drive further advancements. Through these efforts, we aim to pave the way for more efficient and scalable deployment of LLMs across a diverse range of applications. | [
"['Yingbing Huang' 'Lily Jiaxin Wan' 'Hanchen Ye' 'Manvi Jha'\n 'Jinghua Wang' 'Yuhong Li' 'Xiaofan Zhang' 'Deming Chen']"
]
|
null | null | 2406.10906 | null | null | http://arxiv.org/pdf/2406.10906v1 | 2024-06-16T12:06:58Z | 2024-06-16T12:06:58Z | Breaking the Attention Bottleneck | Attention-based transformers have become the standard architecture in many deep learning fields, primarily due to their ability to model long-range dependencies and handle variable-length input sequences. However, the attention mechanism with its quadratic complexity is a significant bottleneck in the transformer architecture. This algorithm is only uni-directional in the decoder and converges to a static pattern in over-parametrized decoder-only models. I address this issue by developing a generative function as attention or activation replacement. It still has the auto-regressive character by comparing each token with the previous one. In my test setting with nanoGPT this yields a smaller loss while having a smaller model. The loss further drops by incorporating an average context vector. This concept of attention replacement is distributed under the GNU AGPL v3 license at https://gitlab.com/Bachstelze/causal_generation. | [
"['Kalle Hilsenbek']"
]
|
null | null | 2406.10914 | null | null | http://arxiv.org/pdf/2406.10914v1 | 2024-06-16T12:35:05Z | 2024-06-16T12:35:05Z | First-Order Manifold Data Augmentation for Regression Learning | Data augmentation (DA) methods tailored to specific domains generate synthetic samples by applying transformations that are appropriate for the characteristics of the underlying data domain, such as rotations on images and time warping on time series data. In contrast, domain-independent approaches, e.g. mixup, are applicable to various data modalities, and as such they are general and versatile. While regularizing classification tasks via DA is a well-explored research topic, the effect of DA on regression problems has received less attention. To bridge this gap, we study the problem of domain-independent augmentation for regression, and we introduce FOMA: a new data-driven domain-independent data augmentation method. Essentially, our approach samples new examples from the tangent planes of the train distribution. Augmenting data in this way aligns with the network's tendency towards capturing the dominant features of its input signals. We evaluate FOMA on in-distribution generalization and out-of-distribution robustness benchmarks, and we show that it improves the generalization of several neural architectures. We also find that strong baselines based on mixup are less effective in comparison to our approach. Our code is publicly available at https://github.com/azencot-group/FOMA. | [
"['Ilya Kaufman' 'Omri Azencot']"
]
|
null | null | 2406.10917 | null | null | http://arxiv.org/pdf/2406.10917v1 | 2024-06-16T12:45:44Z | 2024-06-16T12:45:44Z | Bayesian Intervention Optimization for Causal Discovery | Causal discovery is crucial for understanding complex systems and informing decisions. While observational data can uncover causal relationships under certain assumptions, it often falls short, making active interventions necessary. Current methods, such as Bayesian and graph-theoretical approaches, do not prioritize decision-making and often rely on ideal conditions or information gain, which is not directly related to hypothesis testing. We propose a novel Bayesian optimization-based method inspired by Bayes factors that aims to maximize the probability of obtaining decisive and correct evidence. Our approach uses observational data to estimate causal models under different hypotheses, evaluates potential interventions pre-experimentally, and iteratively updates priors to refine interventions. We demonstrate the effectiveness of our method through various experiments. Our contributions provide a robust framework for efficient causal discovery through active interventions, enhancing the practical application of theoretical advancements. | [
"['Yuxuan Wang' 'Mingzhou Liu' 'Xinwei Sun' 'Wei Wang' 'Yizhou Wang']"
]
|
null | null | 2406.10918 | null | null | http://arxiv.org/pdf/2406.10918v3 | 2024-06-25T10:50:09Z | 2024-06-16T12:46:40Z | Embodied Question Answering via Multi-LLM Systems | Embodied Question Answering (EQA) is an important problem, which involves an agent exploring the environment to answer user queries. In the existing literature, EQA has exclusively been studied in single-agent scenarios, where exploration can be time-consuming and costly. In this work, we consider EQA in a multi-agent framework involving multiple large language model (LLM) based agents independently answering queries about a household environment. To generate one answer for each query, we use the individual responses to train a Central Answer Model (CAM) that aggregates responses for a robust answer. Using CAM, we observe a $50\%$ higher EQA accuracy when compared against aggregation methods for ensemble LLM, such as voting schemes and debates. CAM does not require any form of agent communication, freeing it from the associated costs. We ablate CAM with various nonlinear (neural network, random forest, decision tree, XGBoost) and linear (logistic regression classifier, SVM) algorithms. Finally, we present a feature importance analysis for CAM via permutation feature importance (PFI), quantifying CAM's reliance on each independent agent and query context. | [
"['Bhrij Patel' 'Vishnu Sashank Dorbala' 'Dinesh Manocha'\n 'Amrit Singh Bedi']"
]
|
null | null | 2406.10920 | null | null | http://arxiv.org/pdf/2406.10920v1 | 2024-06-16T12:53:17Z | 2024-06-16T12:53:17Z | Hamilton-Jacobi Based Policy-Iteration via Deep Operator Learning | The framework of deep operator network (DeepONet) has been widely exploited thanks to its capability of solving high dimensional partial differential equations. In this paper, we incorporate DeepONet with a recently developed policy iteration scheme to numerically solve optimal control problems and the corresponding Hamilton--Jacobi--Bellman (HJB) equations. A notable feature of our approach is that once the neural network is trained, the solution to the optimal control problem and HJB equations with different terminal functions can be inferred quickly thanks to the unique feature of operator learning. Furthermore, a quantitative analysis of the accuracy of the algorithm is carried out via comparison principles of viscosity solutions. The effectiveness of the method is verified with various examples, including 10-dimensional linear quadratic regulator problems (LQRs). | [
"['Jae Yong Lee' 'Yeoneung Kim']"
]
|
null | null | 2406.10923 | null | null | http://arxiv.org/pdf/2406.10923v1 | 2024-06-16T12:58:31Z | 2024-06-16T12:58:31Z | Investigating Video Reasoning Capability of Large Language Models with
Tropes in Movies | Large Language Models (LLMs) have demonstrated effectiveness not only in language tasks but also in video reasoning. This paper introduces a novel dataset, Tropes in Movies (TiM), designed as a testbed for exploring two critical yet previously overlooked video reasoning skills: (1) Abstract Perception: understanding and tokenizing abstract concepts in videos, and (2) Long-range Compositional Reasoning: planning and integrating intermediate reasoning steps for understanding long-range videos with numerous frames. Utilizing tropes from movie storytelling, TiM evaluates the reasoning capabilities of state-of-the-art LLM-based approaches. Our experiments show that current methods, including Captioner-Reasoner, Large Multimodal Model Instruction Fine-tuning, and Visual Programming, only marginally outperform a random baseline when tackling the challenges of Abstract Perception and Long-range Compositional Reasoning. To address these deficiencies, we propose Face-Enhanced Viper of Role Interactions (FEVoRI) and Context Query Reduction (ConQueR), which enhance Visual Programming by fostering role interaction awareness and progressively refining movie contexts and trope queries during reasoning processes, significantly improving performance by 15 F1 points. However, this performance still lags behind human levels (40 vs. 65 F1). Additionally, we introduce a new protocol to evaluate the necessity of Abstract Perception and Long-range Compositional Reasoning for task resolution. This is done by analyzing the code generated through Visual Programming using an Abstract Syntax Tree (AST), thereby confirming the increased complexity of TiM. The dataset and code are available at: https://ander1119.github.io/TiM | [
"['Hung-Ting Su' 'Chun-Tong Chao' 'Ya-Ching Hsu' 'Xudong Lin' 'Yulei Niu'\n 'Hung-Yi Lee' 'Winston H. Hsu']"
]
|
null | null | 2406.10937 | null | null | http://arxiv.org/pdf/2406.10937v2 | 2024-06-19T08:34:21Z | 2024-06-16T13:37:08Z | Understanding Understanding: A Pragmatic Framework Motivated by Large
Language Models | Motivated by the rapid ascent of Large Language Models (LLMs) and debates about the extent to which they possess human-level qualities, we propose a framework for testing whether any agent (be it a machine or a human) understands a subject matter. In Turing-test fashion, the framework is based solely on the agent's performance, and specifically on how well it answers questions. Elements of the framework include circumscribing the set of questions (the "scope of understanding"), requiring general competence ("passing grade"), avoiding "ridiculous answers", but still allowing wrong and "I don't know" answers to some questions. Reaching certainty about these conditions requires exhaustive testing of the questions which is impossible for nontrivial scopes, but we show how high confidence can be achieved via random sampling and the application of probabilistic confidence bounds. We also show that accompanying answers with explanations can improve the sample complexity required to achieve acceptable bounds, because an explanation of an answer implies the ability to answer many similar questions. According to our framework, current LLMs cannot be said to understand nontrivial domains, but as the framework provides a practical recipe for testing understanding, it thus also constitutes a tool for building AI agents that do understand. | [
"['Kevin Leyton-Brown' 'Yoav Shoham']"
]
|
null | null | 2406.10942 | null | null | http://arxiv.org/pdf/2406.10942v1 | 2024-06-16T13:44:41Z | 2024-06-16T13:44:41Z | Effective Generative AI: The Human-Algorithm Centaur | Advanced analytics science methods have enabled combining the power of artificial and human intelligence, creating *centaurs* that allow superior decision-making. Centaurs are hybrid human-algorithm AI models that combine both formal analytics and human intuition in a symbiotic manner within their learning and reasoning process. We argue that the future of AI development and use in many domains needs to focus on centaurs as opposed to traditional AI approaches. This paradigm shift from traditional AI methods to centaur-based AI methods raises some fundamental questions: How are centaurs different from traditional human-in-the-loop methods? What are the most effective methods for creating centaurs? When should centaurs be used, and when should the lead be given to traditional AI models? Doesn't the incorporation of human intuition -- which at times can be misleading -- in centaurs' decision-making process degrade its performance compared to traditional AI methods? This work aims to address these fundamental questions, focusing on recent advancements in generative AI, and especially in Large Language Models (LLMs), as a main case study to illustrate centaurs' critical essentiality to future AI endeavors. | [
"['Soroush Saghafian' 'Lihi Idan']"
]
|
null | null | 2406.10948 | null | null | http://arxiv.org/pdf/2406.10948v1 | 2024-06-16T14:05:47Z | 2024-06-16T14:05:47Z | Incorporating uncertainty quantification into travel mode choice
modeling: a Bayesian neural network (BNN) approach and an uncertainty-guided
active survey framework | Existing deep learning approaches for travel mode choice modeling fail to inform modelers about their prediction uncertainty. Even when facing scenarios that are out of the distribution of training data, which implies high prediction uncertainty, these approaches still provide deterministic answers, potentially leading to misguidance. To address this limitation, this study introduces the concept of uncertainty from the field of explainable artificial intelligence into travel mode choice modeling. We propose a Bayesian neural network-based travel mode prediction model (BTMP) that quantifies the uncertainty of travel mode predictions, enabling the model itself to "know" and "tell" what it doesn't know. With BTMP, we further propose an uncertainty-guided active survey framework, which dynamically formulates survey questions representing travel mode choice scenarios with high prediction uncertainty. Through iterative collection of responses to these dynamically tailored survey questions, BTMP is iteratively trained to achieve the desired accuracy faster with fewer questions, thereby reducing survey costs. Experimental validation using synthetic datasets confirms the effectiveness of BTMP in quantifying prediction uncertainty. Furthermore, experiments, utilizing both synthetic and real-world data, demonstrate that the BTMP model, trained with the uncertainty-guided active survey framework, requires 20% to 50% fewer survey responses to match the performance of the model trained on randomly collected survey data. Overall, the proposed BTMP model and active survey framework innovatively incorporate uncertainty quantification into travel mode choice modeling, providing model users with essential insights into prediction reliability while optimizing data collection for deep learning model training in a cost-efficient manner. | [
"['Shuwen Zheng' 'Zhou Fang' 'Liang Zhao']"
]
|
null | null | 2406.10954 | null | null | http://arxiv.org/pdf/2406.10954v1 | 2024-06-16T14:17:13Z | 2024-06-16T14:17:13Z | Towards Efficient Target-Level Machine Unlearning Based on Essential
Graph | Machine unlearning is an emerging technology that has come to attract widespread attention. A number of factors, including regulations and laws, privacy, and usability concerns, have resulted in this need to allow a trained model to forget some of its training data. Existing studies of machine unlearning mainly focus on unlearning requests that forget a cluster of instances or all instances from one class. While these approaches are effective in removing instances, they do not scale to scenarios where partial targets within an instance need to be forgotten. For example, one would like to only unlearn a person from all instances that simultaneously contain the person and other targets. Directly migrating instance-level unlearning to target-level unlearning will reduce the performance of the model after the unlearning process, or fail to erase information completely. To address these concerns, we have proposed a more effective and efficient unlearning scheme that focuses on removing partial targets from the model, which we name "target unlearning". Specifically, we first construct an essential graph data structure to describe the relationships between all important parameters that are selected based on the model explanation method. After that, we simultaneously filter parameters that are also important for the remaining targets and use the pruning-based unlearning method, which is a simple but effective solution to remove information about the target that needs to be forgotten. Experiments with different training models on various datasets demonstrate the effectiveness of the proposed approach. | [
"['Heng Xu' 'Tianqing Zhu' 'Lefeng Zhang' 'Wanlei Zhou' 'Wei Zhao']"
]
|
null | null | 2406.10956 | null | null | http://arxiv.org/pdf/2406.10956v1 | 2024-06-16T14:17:57Z | 2024-06-16T14:17:57Z | Robust Channel Learning for Large-Scale Radio Speaker Verification | Recent research in speaker verification has increasingly focused on achieving robust and reliable recognition under challenging channel conditions and noisy environments. Identifying speakers in radio communications is particularly difficult due to inherent limitations such as constrained bandwidth and pervasive noise interference. To address this issue, we present a Channel Robust Speaker Learning (CRSL) framework that enhances the robustness of the current speaker verification pipeline, considering data source, data augmentation, and the efficiency of model transfer processes. Our framework introduces an augmentation module that mitigates bandwidth variations in radio speech datasets by manipulating the bandwidth of training inputs. It also addresses unknown noise by introducing noise within the manifold space. Additionally, we propose an efficient fine-tuning method that reduces the need for extensive additional training time and large amounts of data. Moreover, we develop a toolkit for assembling a large-scale radio speech corpus and establish a benchmark specifically tailored for radio scenario speaker verification studies. Experimental results demonstrate that our proposed methodology effectively enhances performance and mitigates degradation caused by radio transmission in speaker verification tasks. The code will be available on Github. | [
"['Wenhao Yang' 'Jianguo Wei' 'Wenhuan Lu' 'Lei Li' 'Xugang Lu']"
]
|
null | null | 2406.10959 | null | null | http://arxiv.org/pdf/2406.10959v2 | 2024-06-20T04:47:42Z | 2024-06-16T14:31:26Z | On Convergence and Rate of Convergence of Policy Improvement Algorithms | In this paper we provide a simple proof from scratch for the convergence of the Policy Improvement Algorithm (PIA) for a continuous time entropy-regularized stochastic control problem. Such convergence has been established by Huang-Wang-Zhou (2023) by using sophisticated PDE estimates for the iterative PDEs involved in the PIA. Our approach builds on some Feynman-Kac type probabilistic representation formulae for solutions of PDEs and their derivatives. Moreover, in the infinite horizon model with a large discount factor and in the finite horizon model, we obtain the exponential rate of convergence with similar arguments. Finally, in the one dimensional setting, we extend the convergence result to the diffusion control case. | [
"['Jin Ma' 'Gaozhan Wang' 'Jianfeng Zhang']"
]
|
null | null | 2406.10976 | null | null | http://arxiv.org/pdf/2406.10976v1 | 2024-06-16T15:23:07Z | 2024-06-16T15:23:07Z | Promoting Data and Model Privacy in Federated Learning through Quantized
LoRA | Conventional federated learning primarily aims to secure the privacy of data distributed across multiple edge devices, with the global model dispatched to edge devices for parameter updates during the learning process. However, the development of large language models (LLMs) requires substantial data and computational resources, rendering them valuable intellectual properties for their developers and owners. To establish a mechanism that protects both data and model privacy in a federated learning context, we introduce a method that just needs to distribute a quantized version of the model's parameters during training. This method enables accurate gradient estimations for parameter updates while preventing clients from accessing a model whose performance is comparable to the centrally hosted one. Moreover, we combine this quantization strategy with LoRA, a popular and parameter-efficient fine-tuning method, to significantly reduce communication costs in federated learning. The proposed framework, named FedLPP, successfully ensures both data and model privacy in the federated learning context. Additionally, the learned central model exhibits good generalization and can be trained in a resource-efficient manner. | [
"['JianHao Zhu' 'Changze Lv' 'Xiaohua Wang' 'Muling Wu' 'Wenhao Liu'\n 'Tianlong Li' 'Zixuan Ling' 'Cenyuan Zhang' 'Xiaoqing Zheng'\n 'Xuanjing Huang']"
]
|
null | null | 2406.10993 | null | null | http://arxiv.org/pdf/2406.10993v1 | 2024-06-16T16:10:51Z | 2024-06-16T16:10:51Z | CoSTA: Code-Switched Speech Translation using Aligned Speech-Text
Interleaving | Code-switching is a widely prevalent linguistic phenomenon in multilingual societies like India. Building speech-to-text models for code-switched speech is challenging due to the limited availability of datasets. In this work, we focus on the problem of spoken translation (ST) of code-switched speech in Indian languages to English text. We present a new end-to-end model architecture COSTA that scaffolds on pretrained automatic speech recognition (ASR) and machine translation (MT) modules (that are more widely available for many languages). Speech and ASR text representations are fused using an aligned interleaving scheme and are fed further as input to a pretrained MT module; the whole pipeline is then trained end-to-end for spoken translation using synthetically created ST data. We also release a new evaluation benchmark for code-switched Bengali-English, Hindi-English, Marathi-English and Telugu-English speech to English text. COSTA significantly outperforms many competitive cascaded and end-to-end multimodal baselines by up to 3.5 BLEU points. | [
"['Bhavani Shankar' 'Preethi Jyothi' 'Pushpak Bhattacharyya']"
]
|
null | null | 2406.10995 | null | null | http://arxiv.org/pdf/2406.10995v1 | 2024-06-16T16:15:20Z | 2024-06-16T16:15:20Z | Concept-skill Transferability-based Data Selection for Large
Vision-Language Models | Instruction tuning, or supervised finetuning on extensive task-specific data, is necessary for Large Vision-Language Models (LVLMs) to generalize well across a broad range of vision-language (VL) tasks. However, training on large VL datasets can become prohibitively expensive. In this work, we introduce COINCIDE, an effective and scalable data selection technique that uses a small model as a reference model to select visual instruction tuning data for efficient finetuning of a target LVLM, focusing on diversity and transferability. Specifically, we cluster the training data using internal activations from a small model, which identifies VL concept-skill compositions needed by a target LVLM. We then sample data from these diverse clusters by considering their density and transferability, or the ability to transfer well to other concept-skill compositions. This approach ensures the diversity of these compositions, which is vital for LVLM generalization. Extensive experiments demonstrate that COINCIDE achieves superior performance and data selection efficiency against 8 strong baselines on two distinct datasets: LLaVA-1.5 and Vision-Flan. Using only 20% of the LLaVA-1.5 dataset, COINCIDE achieves performance comparable to the LVLM finetuned on the whole dataset, with 70% reduction of the wall-clock running time. On the Vision-Flan dataset, our method achieves superior results with only 16.7% of the training data. | [
"['Jaewoo Lee' 'Boyang Li' 'Sung Ju Hwang']"
]
|
null | null | 2406.10997 | null | null | http://arxiv.org/pdf/2406.10997v1 | 2024-06-16T16:18:45Z | 2024-06-16T16:18:45Z | Two-level overlapping additive Schwarz preconditioner for training
scientific machine learning applications | We introduce a novel two-level overlapping additive Schwarz preconditioner for accelerating the training of scientific machine learning applications. The design of the proposed preconditioner is motivated by the nonlinear two-level overlapping additive Schwarz preconditioner. The neural network parameters are decomposed into groups (subdomains) with overlapping regions. In addition, the network's feed-forward structure is indirectly imposed through a novel subdomain-wise synchronization strategy and a coarse-level training step. Through a series of numerical experiments, which consider physics-informed neural networks and operator learning approaches, we demonstrate that the proposed two-level preconditioner significantly speeds up the convergence of the standard (LBFGS) optimizer while also yielding more accurate machine learning models. Moreover, the devised preconditioner is designed to take advantage of model-parallel computations, which can further reduce the training time. | [
"['Youngkyu Lee' 'Alena Kopaničáková' 'George Em Karniadakis']"
]
|
null | null | 2406.11010 | null | null | http://arxiv.org/pdf/2406.11010v1 | 2024-06-16T17:02:27Z | 2024-06-16T17:02:27Z | WeShap: Weak Supervision Source Evaluation with Shapley Values | Efficient data annotation stands as a significant bottleneck in training contemporary machine learning models. The Programmatic Weak Supervision (PWS) pipeline presents a solution by utilizing multiple weak supervision sources to automatically label data, thereby expediting the annotation process. Given the varied contributions of these weak supervision sources to the accuracy of PWS, it is imperative to employ a robust and efficient metric for their evaluation. This is crucial not only for understanding the behavior and performance of the PWS pipeline but also for facilitating corrective measures. In our study, we introduce WeShap values as an evaluation metric, which quantifies the average contribution of weak supervision sources within a proxy PWS pipeline, leveraging the theoretical underpinnings of Shapley values. We demonstrate efficient computation of WeShap values using dynamic programming, achieving quadratic computational complexity relative to the number of weak supervision sources. Our experiments demonstrate the versatility of WeShap values across various applications, including the identification of beneficial or detrimental labeling functions, refinement of the PWS pipeline, and rectification of mislabeled data. Furthermore, WeShap values aid in comprehending the behavior of the PWS pipeline and scrutinizing specific instances of mislabeled data. Although initially derived from a specific proxy PWS pipeline, we empirically demonstrate the generalizability of WeShap values to other PWS pipeline configurations. Our findings indicate a noteworthy average improvement of 4.8 points in downstream model accuracy through the revision of the PWS pipeline compared to previous state-of-the-art methods, underscoring the efficacy of WeShap values in enhancing data quality for training machine learning models. | [
"['Naiqing Guan' 'Nick Koudas']"
]
|
null | null | 2406.11011 | null | null | http://arxiv.org/pdf/2406.11011v2 | 2024-06-29T23:05:32Z | 2024-06-16T17:09:24Z | Data Shapley in One Training Run | Data Shapley provides a principled framework for attributing data's contribution within machine learning contexts. However, existing approaches require re-training models on different data subsets, which is computationally intensive, foreclosing their application to large-scale models. Furthermore, they produce the same attribution score for any models produced by running the learning algorithm, meaning they cannot perform targeted attribution towards a specific model obtained from a single run of the algorithm. This paper introduces In-Run Data Shapley, which addresses these limitations by offering scalable data attribution for a target model of interest. In its most efficient implementation, our technique incurs negligible additional runtime compared to standard model training. This dramatic efficiency improvement makes it possible to perform data attribution for the foundation model pretraining stage for the first time. We present several case studies that offer fresh insights into pretraining data's contribution and discuss their implications for copyright in generative AI and pretraining data curation. | [
"['Jiachen T. Wang' 'Prateek Mittal' 'Dawn Song' 'Ruoxi Jia']"
]
|
null | null | 2406.11014 | null | null | http://arxiv.org/pdf/2406.11014v1 | 2024-06-16T17:13:58Z | 2024-06-16T17:13:58Z | Latent Communication in Artificial Neural Networks | As NNs permeate various scientific and industrial domains, understanding the universality and reusability of their representations becomes crucial. At their core, these networks create intermediate neural representations, referred to as latent spaces, of the input data and subsequently leverage them to perform specific downstream tasks. This dissertation focuses on the universality and reusability of neural representations. Do the latent representations crafted by an NN remain exclusive to a particular trained instance, or can they generalize across models, adapting to factors such as randomness during training, model architecture, or even data domain? This adaptive quality introduces the notion of Latent Communication -- a phenomenon that describes when representations can be unified or reused across neural spaces. A salient observation from our research is the emergence of similarities in latent representations, even when these originate from distinct or seemingly unrelated NNs. By exploiting a partial correspondence between the two data distributions that establishes a semantic link, we found that these representations can either be projected into a universal representation, coined as Relative Representation, or be directly translated from one space to another. Latent Communication allows for a bridge between independently trained NNs, irrespective of their training regimen, architecture, or the data modality they were trained on -- as long as the data semantic content stays the same (e.g., images and their captions). This holds true for generation, classification, and retrieval downstream tasks; in supervised, weakly supervised, and unsupervised settings; and spans various data modalities including images, text, audio, and graphs -- showcasing the universality of the Latent Communication phenomenon. [...] | [
"['Luca Moschella']"
]
|
null | null | 2406.11016 | null | null | http://arxiv.org/pdf/2406.11016v1 | 2024-06-16T17:19:23Z | 2024-06-16T17:19:23Z | Optimized Speculative Sampling for GPU Hardware Accelerators | In this work, we optimize speculative sampling for parallel hardware accelerators to improve sampling speed. We notice that substantial portions of the intermediate matrices necessary for speculative sampling can be computed concurrently. This allows us to distribute the workload across multiple GPU threads, enabling simultaneous operations on matrix segments within thread blocks. Additionally, we use fast on-chip memory to store intermediate results, thereby minimizing the frequency of slow read and write operations across different types of memory. This results in profiling time improvements ranging from 6% to 13% relative to the baseline implementation, without compromising accuracy. To further accelerate speculative sampling, probability distributions parameterized by softmax are approximated by sigmoid. This approximation approach results in significantly greater relative improvements in profiling time, ranging from 37% to 94%, with a slight decline in accuracy. We conduct extensive experiments on both automatic speech recognition and summarization tasks to validate the effectiveness of our optimization methods. | [
"['Dominik Wagner' 'Seanie Lee' 'Ilja Baumann' 'Philipp Seeberger'\n 'Korbinian Riedhammer' 'Tobias Bocklet']"
]
|
null | null | 2406.11023 | null | null | http://arxiv.org/pdf/2406.11023v1 | 2024-06-16T17:36:53Z | 2024-06-16T17:36:53Z | Physics-Informed Deep Learning and Partial Transfer Learning for Bearing
Fault Diagnosis in the Presence of Highly Missing Data | One of the most significant obstacles in bearing fault diagnosis is a lack of labeled data for various fault types. Also, sensor-acquired data frequently lack labels and have a large amount of missing data. This paper tackles these issues by presenting the PTPAI method, which uses a physics-informed deep learning-based technique to generate synthetic labeled data. Labeled synthetic data makes up the source domain, whereas unlabeled data with missing data is present in the target domain. Consequently, imbalanced class problems and partial-set fault diagnosis hurdles emerge. To address these challenges, the RF-Mixup approach is used to handle imbalanced classes. As domain adaptation strategies, the MK-MMSD and CDAN are employed to mitigate the disparity in distribution between synthetic and actual data. Furthermore, the partial-set challenge is tackled by applying weighting methods at the class and instance levels. Experimental outcomes on the CWRU and JNU datasets indicate that the proposed approach effectively addresses these problems. | [
"['Mohammadreza Kavianpour' 'Parisa Kavianpour' 'Amin Ramezani']"
]
|
null | null | 2406.11028 | null | null | http://arxiv.org/abs/2406.11028v1 | 2024-06-16T17:58:29Z | 2024-06-16T17:58:29Z | Universal Cross-Lingual Text Classification | Text classification, an integral task in natural language processing, involves the automatic categorization of text into predefined classes. Creating supervised labeled datasets for low-resource languages poses a considerable challenge. Unlocking the language potential of low-resource languages requires robust datasets with supervised labels. However, such datasets are scarce, and the label space is often limited. In our pursuit to address this gap, we aim to optimize existing labels/datasets in different languages. This research proposes a novel perspective on Universal Cross-Lingual Text Classification, leveraging a unified model across languages. Our approach involves blending supervised data from different languages during training to create a universal model. The supervised data for a target classification task might come from different languages covering different labels. The primary goal is to enhance label and language coverage, aiming for a label set that represents a union of labels from various languages. We propose the usage of a strong multilingual SBERT as our base model, making our novel training strategy feasible. This strategy contributes to the adaptability and effectiveness of the model in cross-lingual language transfer scenarios, where it can categorize text in languages not encountered during training. Thus, the paper delves into the intricacies of cross-lingual text classification, with a particular focus on its application for low-resource languages, exploring methodologies and implications for the development of a robust and adaptable universal cross-lingual model. | [
"['Riya Savant' 'Anushka Shelke' 'Sakshi Todmal' 'Sanskruti Kanphade'\n 'Ananya Joshi' 'Raviraj Joshi']"
]
|
null | null | 2406.11029 | null | null | http://arxiv.org/abs/2406.11029v1 | 2024-06-16T17:59:05Z | 2024-06-16T17:59:05Z | Curating Stopwords in Marathi: A TF-IDF Approach for Improved Text
Analysis and Information Retrieval | Stopwords are commonly used words in a language that are often considered to be of little value in determining the meaning or significance of a document. These words occur frequently in most texts and don't provide much useful information for tasks like sentiment analysis and text classification. English, a high-resource language, benefits from readily available stopword lists, whereas for low-resource Indian languages like Marathi, stopword resources are very limited: standardized lists can be used through available packages, but the number of words in those packages is low. Our work targets the curation of stopwords in the Marathi language using the MahaCorpus, with 24.8 million sentences. We make use of the TF-IDF approach coupled with human evaluation to curate a strong stopword list of 400 words. We apply stopword removal to the text classification task and show its efficacy. The work also presents a simple recipe for stopword curation in a low-resource language. The stopwords are integrated into the mahaNLP library and publicly available at https://github.com/l3cube-pune/MarathiNLP. | [
"['Rohan Chavan' 'Gaurav Patil' 'Vishal Madle' 'Raviraj Joshi']"
]
|
null | null | 2406.11044 | null | null | http://arxiv.org/pdf/2406.11044v1 | 2024-06-16T19:02:31Z | 2024-06-16T19:02:31Z | Evaluating the Performance of Large Language Models via Debates | Large Language Models (LLMs) are rapidly evolving and impacting various fields, necessitating the development of effective methods to evaluate and compare their performance. Most current approaches for performance evaluation are either based on fixed, domain-specific questions that lack the flexibility required in many real-world applications where tasks are not always from a single domain, or rely on human input, making them unscalable. We propose an automated benchmarking framework based on debates between LLMs, judged by another LLM. This method assesses not only domain knowledge, but also skills such as problem definition and inconsistency recognition. We evaluate the performance of various state-of-the-art LLMs using the debate framework and achieve rankings that align closely with popular rankings based on human input, eliminating the need for costly human crowdsourcing. | [
"['Behrad Moniri' 'Hamed Hassani' 'Edgar Dobriban']"
]
|
null | null | 2406.11045 | null | null | http://arxiv.org/pdf/2406.11045v1 | 2024-06-16T19:07:06Z | 2024-06-16T19:07:06Z | Kolmogorov Arnold Informed neural network: A physics-informed deep
learning framework for solving PDEs based on Kolmogorov Arnold Networks | AI for partial differential equations (PDEs) has garnered significant attention, particularly with the emergence of Physics-informed neural networks (PINNs). The recent advent of Kolmogorov-Arnold Network (KAN) indicates that there is potential to revisit and enhance the previously MLP-based PINNs. Compared to MLPs, KANs offer interpretability and require fewer parameters. PDEs can be described in various forms, such as strong form, energy form, and inverse form. While mathematically equivalent, these forms are not computationally equivalent, making the exploration of different PDE formulations significant in computational physics. Thus, we propose different PDE forms based on KAN instead of MLP, termed Kolmogorov-Arnold-Informed Neural Network (KINN). We systematically compare MLP and KAN in various numerical examples of PDEs, including multi-scale, singularity, stress concentration, nonlinear hyperelasticity, heterogeneous, and complex geometry problems. Our results demonstrate that KINN significantly outperforms MLP in terms of accuracy and convergence speed for numerous PDEs in computational solid mechanics, except for the complex geometry problem. This highlights KINN's potential for more efficient and accurate PDE solutions in AI for PDEs. | [
"['Yizheng Wang' 'Jia Sun' 'Jinshuai Bai' 'Cosmin Anitescu'\n 'Mohammad Sadegh Eshaghi' 'Xiaoying Zhuang' 'Timon Rabczuk' 'Yinghua Liu']"
]
|
null | null | 2406.11048 | null | null | http://arxiv.org/pdf/2406.11048v1 | 2024-06-16T19:18:06Z | 2024-06-16T19:18:06Z | Leveraging Foundation Models for Multi-modal Federated Learning with
Incomplete Modality | Federated learning (FL) has obtained tremendous progress in providing collaborative training solutions for distributed data silos with privacy guarantees. However, few existing works explore a more realistic scenario where the clients hold multiple data modalities. In this paper, we aim to solve a novel challenge in multi-modal federated learning (MFL) -- modality missing -- the clients may lose part of the modalities in their local data sets. To tackle the problems, we propose a novel multi-modal federated learning method, Federated Multi-modal contrastiVe training with Pre-trained completion (FedMVP), which integrates the large-scale pre-trained models to enhance the federated training. In the proposed FedMVP framework, each client deploys a large-scale pre-trained model with frozen parameters for modality completion and representation knowledge transfer, enabling efficient and robust local training. On the server side, we utilize generated data to uniformly measure the representation similarity among the uploaded client models and construct a graph perspective to aggregate them according to their importance in the system. We demonstrate that the model achieves superior performance over two real-world image-text classification datasets and is robust to the performance degradation caused by missing modality. | [
"['Liwei Che' 'Jiaqi Wang' 'Xinyue Liu' 'Fenglong Ma']"
]
|
null | null | 2406.11061 | null | null | http://arxiv.org/pdf/2406.11061v1 | 2024-06-16T20:26:38Z | 2024-06-16T20:26:38Z | Generalization and Knowledge Transfer in Abstract Visual Reasoning
Models | We study generalization and knowledge reuse capabilities of deep neural networks in the domain of abstract visual reasoning (AVR), employing Raven's Progressive Matrices (RPMs), a recognized benchmark task for assessing AVR abilities. Two knowledge transfer scenarios referring to the I-RAVEN dataset are investigated. Firstly, inspired by the generalization assessment capabilities of the PGM dataset and the popularity of I-RAVEN, we introduce Attributeless-I-RAVEN, a benchmark with four generalization regimes that allow testing the generalization of abstract rules applied to held-out attributes. Secondly, we construct I-RAVEN-Mesh, a dataset that enriches RPMs with a novel component structure comprising line-based patterns, facilitating assessment of progressive knowledge acquisition in a transfer learning setting. The developed benchmarks reveal shortcomings of contemporary deep learning models, which we partly address with the Pathways of Normalized Group Convolution (PoNG) model, a novel neural architecture for solving AVR tasks. PoNG excels in both presented challenges, as well as the standard I-RAVEN and PGM setups. | [
"['Mikołaj Małkiński' 'Jacek Mańdziuk']"
]
|
null | null | 2406.11068 | null | null | http://arxiv.org/pdf/2406.11068v1 | 2024-06-16T20:52:44Z | 2024-06-16T20:52:44Z | A Unified View of Abstract Visual Reasoning Problems | The field of Abstract Visual Reasoning (AVR) encompasses a wide range of problems, many of which are inspired by human IQ tests. The variety of AVR tasks has resulted in state-of-the-art AVR methods being task-specific approaches. Furthermore, contemporary methods consider each AVR problem instance not as a whole, but in the form of a set of individual panels with particular locations and roles (context vs. answer panels) pre-assigned according to the task-specific arrangements. While these highly specialized approaches have recently led to significant progress in solving particular AVR tasks, considering each task in isolation hinders the development of universal learning systems in this domain. In this paper, we introduce a unified view of AVR tasks, where each problem instance is rendered as a single image, with no a priori assumptions about the number of panels, their location, or role. The main advantage of the proposed unified view is the ability to develop universal learning models applicable to various AVR tasks. What is more, the proposed approach inherently facilitates transfer learning in the AVR domain, as various types of problems share a common representation. The experiments conducted on four AVR datasets with Raven's Progressive Matrices and Visual Analogy Problems, and one real-world visual analogy dataset show that the proposed unified representation of AVR tasks poses a challenge to state-of-the-art Deep Learning (DL) AVR models and, more broadly, contemporary DL image recognition methods. In order to address this challenge, we introduce the Unified Model for Abstract Visual Reasoning (UMAVR) capable of dealing with various types of AVR problems in a unified manner. UMAVR outperforms existing AVR methods in selected single-task learning experiments, and demonstrates effective knowledge reuse in transfer learning and curriculum learning setups. | [
"['Mikołaj Małkiński' 'Jacek Mańdziuk']"
]
|
null | null | 2406.11070 | null | null | http://arxiv.org/pdf/2406.11070v1 | 2024-06-16T20:55:19Z | 2024-06-16T20:55:19Z | Fine-grained Classes and How to Find Them | In many practical applications, coarse-grained labels are readily available compared to fine-grained labels that reflect subtle differences between classes. However, existing methods cannot leverage coarse labels to infer fine-grained labels in an unsupervised manner. To bridge this gap, we propose FALCON, a method that discovers fine-grained classes from coarsely labeled data without any supervision at the fine-grained level. FALCON simultaneously infers unknown fine-grained classes and underlying relationships between coarse and fine-grained classes. Moreover, FALCON is a modular method that can effectively learn from multiple datasets labeled with different strategies. We evaluate FALCON on eight image classification tasks and a single-cell classification task. FALCON outperforms baselines by a large margin, achieving 22% improvement over the best baseline on the tieredImageNet dataset with over 600 fine-grained classes. | [
"['Matej Grcić' 'Artyom Gadetsky' 'Maria Brbić']"
]
|
null | null | 2406.11087 | null | null | http://arxiv.org/pdf/2406.11087v2 | 2024-06-20T05:43:50Z | 2024-06-16T22:11:41Z | MemDPT: Differential Privacy for Memory Efficient Language Models | Large language models have consistently demonstrated remarkable performance across a wide spectrum of applications. Nonetheless, the deployment of these models can inadvertently expose user privacy to potential risks. The substantial memory demands of these models during training represent a significant resource consumption challenge. The sheer size of these models imposes a considerable burden on memory resources, which is a matter of significant concern in practice. In this paper, we present an innovative training framework MemDPT that not only reduces the memory cost of large language models but also places a strong emphasis on safeguarding user data privacy. MemDPT provides edge network and reverse network designs to accommodate various differential privacy memory-efficient fine-tuning schemes. Our approach not only achieves $2\sim 3\times$ memory optimization but also provides robust privacy protection, ensuring that user data remains secure and confidential. Extensive experiments have demonstrated that MemDPT can effectively provide differential privacy efficient fine-tuning across various task scenarios. | [
"['Yanming Liu' 'Xinyue Peng' 'Jiannan Cao' 'Yuwei Zhang' 'Chen Ma'\n 'Songhang Deng' 'Mengchen Fu' 'Xuhong Zhang' 'Sheng Cheng' 'Xun Wang'\n 'Jianwei Yin' 'Tianyu Du']"
]
|
null | null | 2406.11092 | null | null | http://arxiv.org/pdf/2406.11092v1 | 2024-06-16T22:45:56Z | 2024-06-16T22:45:56Z | Guaranteed Sampling Flexibility for Low-tubal-rank Tensor Completion | While Bernoulli sampling is extensively studied in tensor completion, t-CUR sampling approximates low-tubal-rank tensors via lateral and horizontal subtensors. However, both methods lack sufficient flexibility for diverse practical applications. To address this, we introduce Tensor Cross-Concentrated Sampling (t-CCS), a novel and straightforward sampling model that advances the matrix cross-concentrated sampling concept within a tensor framework. t-CCS effectively bridges the gap between Bernoulli and t-CUR sampling, offering additional flexibility that can lead to computational savings in various contexts. A key aspect of our work is the comprehensive theoretical analysis provided. We establish a sufficient condition for the successful recovery of a low-rank tensor from its t-CCS samples. In support of this, we also develop a theoretical framework validating the feasibility of t-CUR via uniform random sampling and conduct a detailed theoretical sampling complexity analysis for tensor completion problems utilizing the general Bernoulli sampling model. Moreover, we introduce an efficient non-convex algorithm, the Iterative t-CUR Tensor Completion (ITCURTC) algorithm, specifically designed to tackle the t-CCS-based tensor completion. We have intensively tested and validated the effectiveness of the t-CCS model and the ITCURTC algorithm across both synthetic and real-world datasets. | [
"['Bowen Su' 'Juntao You' 'HanQin Cai' 'Longxiu Huang']"
]
|
null | null | 2406.11109 | null | null | http://arxiv.org/pdf/2406.11109v2 | 2024-06-18T06:21:16Z | 2024-06-17T00:18:31Z | Investigating Annotator Bias in Large Language Models for Hate Speech
Detection | Data annotation, the practice of assigning descriptive labels to raw data, is pivotal in optimizing the performance of machine learning models. However, it is a resource-intensive process susceptible to biases introduced by annotators. The emergence of sophisticated Large Language Models (LLMs), like ChatGPT, presents a unique opportunity to modernize and streamline this complex procedure. While existing research extensively evaluates the efficacy of LLMs as annotators, this paper delves into the biases present in LLMs, specifically GPT 3.5 and GPT 4o, when annotating hate speech data. Our research contributes to understanding biases in four key categories: gender, race, religion, and disability. Specifically targeting highly vulnerable groups within these categories, we analyze annotator biases. Furthermore, we conduct a comprehensive examination of potential factors contributing to these biases by scrutinizing the annotated data. We introduce our custom hate speech detection dataset, HateSpeechCorpus, to conduct this research. Additionally, we perform the same experiments on the ETHOS (Mollas et al., 2022) dataset for comparative analysis. This paper serves as a crucial resource, guiding researchers and practitioners in harnessing the potential of LLMs for data annotation, thereby fostering advancements in this critical field. The HateSpeechCorpus dataset is available here: https://github.com/AmitDasRup123/HateSpeechCorpus | [
"['Amit Das' 'Zheng Zhang' 'Fatemeh Jamshidi' 'Vinija Jain' 'Aman Chadha'\n 'Nilanjana Raychawdhary' 'Mary Sandage' 'Lauramarie Pope' 'Gerry Dozier'\n 'Cheryl Seals']"
]
|
null | null | 2406.11110 | null | null | http://arxiv.org/pdf/2406.11110v1 | 2024-06-17T00:19:16Z | 2024-06-17T00:19:16Z | How Neural Networks Learn the Support is an Implicit Regularization
Effect of SGD | We investigate the ability of deep neural networks to identify the support of the target function. Our findings reveal that mini-batch SGD effectively learns the support in the first layer of the network by shrinking to zero the weights associated with irrelevant components of input. In contrast, we demonstrate that while vanilla GD also approximates the target function, it requires an explicit regularization term to learn the support in the first layer. We prove that this property of mini-batch SGD is due to a second-order implicit regularization effect which is proportional to $\eta / b$ (step size / batch size). Our results are not only another proof that implicit regularization has a significant impact on training optimization dynamics but they also shed light on the structure of the features that are learned by the network. Additionally, they suggest that smaller batches enhance feature interpretability and reduce dependency on initialization. | [
"['Pierfrancesco Beneventano' 'Andrea Pinto' 'Tomaso Poggio']"
]
|
null | null | 2406.11118 | null | null | http://arxiv.org/pdf/2406.11118v1 | 2024-06-17T00:30:58Z | 2024-06-17T00:30:58Z | Incentivizing Quality Text Generation via Statistical Contracts | While the success of large language models (LLMs) increases demand for machine-generated text, current pay-per-token pricing schemes create a misalignment of incentives known in economics as moral hazard: Text-generating agents have strong incentive to cut costs by preferring a cheaper model over the cutting-edge one, and this can be done "behind the scenes" since the agent performs inference internally. In this work, we approach this issue from an economic perspective, by proposing a pay-for-performance, contract-based framework for incentivizing quality. We study a principal-agent game where the agent generates text using costly inference, and the contract determines the principal's payment for the text according to an automated quality evaluation. Since standard contract theory is inapplicable when internal inference costs are unknown, we introduce cost-robust contracts. As our main theoretical contribution, we characterize optimal cost-robust contracts through a direct correspondence to optimal composite hypothesis tests from statistics, generalizing a result of Saig et al. (NeurIPS'23). We evaluate our framework empirically by deriving contracts for a range of objectives and LLM evaluation benchmarks, and find that cost-robust contracts sacrifice only a marginal increase in objective value compared to their cost-aware counterparts. | [
"['Eden Saig' 'Ohad Einav' 'Inbal Talgam-Cohen']"
]
|
null | null | 2406.11128 | null | null | http://arxiv.org/pdf/2406.11128v1 | 2024-06-17T01:07:30Z | 2024-06-17T01:07:30Z | Model Adaptation for Time Constrained Embodied Control | When adopting a deep learning model for embodied agents, it is required that the model structure be optimized for specific tasks and operational conditions. Such optimization can be static, such as model compression, or dynamic, such as adaptive inference. Yet, these techniques have not been fully investigated for embodied control systems subject to time constraints, which necessitate sequential decision-making for multiple tasks, each with distinct inference latency limitations. In this paper, we present MoDeC, a time constraint-aware embodied control framework using modular model adaptation. We formulate model adaptation to varying operational conditions on resource and time restrictions as dynamic routing on a modular network, incorporating these conditions as part of multi-task objectives. Our evaluation across several vision-based embodied environments demonstrates the robustness of MoDeC, showing that it outperforms other model adaptation methods in both performance and adherence to time constraints in robotic manipulation and autonomous driving applications. | [
"['Jaehyun Song' 'Minjong Yoo' 'Honguk Woo']"
]
|
null | null | 2406.11132 | null | null | http://arxiv.org/pdf/2406.11132v1 | 2024-06-17T01:23:11Z | 2024-06-17T01:23:11Z | RePrompt: Planning by Automatic Prompt Engineering for Large Language
Models Agents | Over the past year, large language models (LLMs) have had remarkable success in domains outside traditional natural language processing, and people are starting to explore the usage of LLMs in more general, application-oriented domains such as code generation, travel planning, and robot control. Connecting these highly capable LLMs with external tools, people are building so-called LLM agents, which are intended to help people with all kinds of everyday tasks. In all these domains, the prompt given to the LLM has been shown to make a big difference in what the LLM generates and thus affects the performance of the LLM agents. Therefore, automatic prompt engineering has become an important question for many researchers and users of LLMs. In this paper, we propose a novel method, RePrompt, which performs "gradient descent" to optimize the step-by-step instructions in the prompt of the LLM agents based on the chat history obtained from interactions with LLM agents. By optimizing the prompt, the LLM learns how to plan in specific domains. Experiments on PDDL generation and travel planning show that our method generally improves performance on different reasoning tasks when the updated prompt is used as the initial prompt. | [
"['Weizhe Chen' 'Sven Koenig' 'Bistra Dilkina']"
]
|
null | null | 2406.11141 | null | null | http://arxiv.org/pdf/2406.11141v1 | 2024-06-17T02:01:17Z | 2024-06-17T02:01:17Z | Active search for Bifurcations | Bifurcations mark qualitative changes of long-term behavior in dynamical systems and can often signal sudden ("hard") transitions or catastrophic events (divergences). Accurately locating them is critical not just for deeper understanding of observed dynamic behavior, but also for designing efficient interventions. When the dynamical system at hand is complex, possibly noisy, and expensive to sample, standard (e.g. continuation based) numerical methods may become impractical. We propose an active learning framework, where Bayesian Optimization is leveraged to discover saddle-node or Hopf bifurcations, from a judiciously chosen small number of vector field observations. Such an approach becomes especially attractive in systems whose state x parameter space exploration is resource-limited. It also naturally provides a framework for uncertainty quantification (aleatoric and epistemic), useful in systems with inherent stochasticity. | [
"['Yorgos M. Psarellis' 'Themistoklis P. Sapsis' 'Ioannis G. Kevrekidis']"
]
|
null | null | 2406.11148 | null | null | http://arxiv.org/pdf/2406.11148v1 | 2024-06-17T02:27:14Z | 2024-06-17T02:27:14Z | Few-Shot Recognition via Stage-Wise Augmented Finetuning | Few-shot recognition aims to train a classification model with only a few labeled examples of pre-defined concepts, where annotation can be costly in a downstream task. In another related research area, zero-shot recognition, which assumes no access to any downstream-task data, has been greatly advanced by using pretrained Vision-Language Models (VLMs). In this area, retrieval-augmented learning (RAL) effectively boosts zero-shot accuracy by retrieving and learning from external data relevant to downstream concepts. Motivated by these advancements, our work explores RAL for few-shot recognition. While seemingly straightforward despite being under-explored in the literature (till now!), we present novel challenges and opportunities for applying RAL for few-shot recognition. First, perhaps surprisingly, simply finetuning the VLM on a large amount of retrieved data barely surpasses state-of-the-art zero-shot methods due to the imbalanced distribution of retrieved data and its domain gaps compared to few-shot annotated data. Second, finetuning a VLM on few-shot examples alone significantly outperforms prior methods, and finetuning on the mix of retrieved and few-shot data yields even better results. Third, to mitigate the imbalanced distribution and domain gap issue, we propose Stage-Wise Augmented fineTuning (SWAT) method, which involves end-to-end finetuning on mixed data for the first stage and retraining the classifier solely on the few-shot data in the second stage. Extensive experiments show that SWAT achieves the best performance on standard benchmark datasets, resoundingly outperforming prior works by ~10% in accuracy. Code is available at https://github.com/tian1327/SWAT. | [
"['Tian Liu' 'Huixin Zhang' 'Shubham Parashar' 'Shu Kong']"
]
|
null | null | 2406.11151 | null | null | http://arxiv.org/pdf/2406.11151v2 | 2024-06-19T02:43:48Z | 2024-06-17T02:30:55Z | Recent and Upcoming Developments in Randomized Numerical Linear Algebra
for Machine Learning | Large matrices arise in many machine learning and data analysis applications, including as representations of datasets, graphs, model weights, and first and second-order derivatives. Randomized Numerical Linear Algebra (RandNLA) is an area which uses randomness to develop improved algorithms for ubiquitous matrix problems. The area has reached a certain level of maturity, but recent hardware trends, efforts to incorporate RandNLA algorithms into core numerical libraries, and advances in machine learning, statistics, and random matrix theory have led to new theoretical and practical challenges. This article provides a self-contained overview of RandNLA, in light of these developments. | [
"['Michał Dereziński' 'Michael W. Mahoney']"
]
|
null | null | 2406.11159 | null | null | http://arxiv.org/pdf/2406.11159v1 | 2024-06-17T02:56:55Z | 2024-06-17T02:56:55Z | Distributed Stochastic Gradient Descent with Staleness: A Stochastic
Delay Differential Equation Based Framework | Distributed stochastic gradient descent (SGD) has attracted considerable recent attention due to its potential for scaling computational resources, reducing training time, and helping protect user privacy in machine learning. However, stragglers and limited bandwidth may induce random computational/communication delays, thereby severely hindering the learning process. Therefore, how to accelerate asynchronous SGD by efficiently scheduling multiple workers is an important issue. In this paper, a unified framework is presented to analyze and optimize the convergence of asynchronous SGD based on stochastic delay differential equations (SDDEs) and the Poisson approximation of aggregated gradient arrivals. In particular, we present the run time and staleness of distributed SGD without a memorylessness assumption on the computation times. Given the learning rate, we reveal the relevant SDDE's damping coefficient and its delay statistics, as functions of the number of activated clients, the staleness threshold, the eigenvalues of the Hessian matrix of the objective function, and the overall computational/communication delay. The formulated SDDE allows us to present both the distributed SGD's convergence condition and speed by calculating its characteristic roots, thereby optimizing the scheduling policies for asynchronous/event-triggered SGD. Interestingly, it is shown that increasing the number of activated workers does not necessarily accelerate distributed SGD due to staleness. Moreover, a small degree of staleness does not necessarily slow down the convergence, while a large degree of staleness will result in the divergence of distributed SGD. Numerical results demonstrate the potential of our SDDE framework, even in complex learning tasks with non-convex objective functions. | [
"['Siyuan Yu' 'Wei Chen' 'H. Vincent Poor']"
]
|
null | null | 2406.11168 | null | null | http://arxiv.org/pdf/2406.11168v1 | 2024-06-17T03:17:33Z | 2024-06-17T03:17:33Z | Two-Timescale Optimization Framework for Decentralized Linear-Quadratic
Optimal Control | This study investigates a decentralized linear-quadratic optimal control problem, and several approximate separable constrained optimization problems are formulated for the first time based on the selection of sparsity-promoting functions. First, for the optimization problem with the weighted $\ell_1$ sparsity-promoting function, a two-timescale algorithm is adopted that is based on the BSUM (Block Successive Upper-bound Minimization) framework and a differential equation solver. Second, a piecewise quadratic sparsity-promoting function is introduced, and the induced optimization problem demonstrates an accelerated convergence rate under the same two-timescale algorithm. Finally, the optimization problem with the $\ell_0$ sparsity-promoting function, which is nonconvex and discontinuous, is considered; it can be approximated by successive coordinatewise convex optimization problems. | [
"['Lechen Feng' 'Yuan-Hua Ni' 'Xuebo Zhang']"
]
|
null | null | 2406.11171 | null | null | http://arxiv.org/pdf/2406.11171v2 | 2024-06-19T00:03:42Z | 2024-06-17T03:22:20Z | SUGARCREPE++ Dataset: Vision-Language Model Sensitivity to Semantic and
Lexical Alterations | Despite their remarkable successes, state-of-the-art large language models (LLMs), including vision-and-language models (VLMs) and unimodal language models (ULMs), fail to understand precise semantics. For example, semantically equivalent sentences expressed using different lexical compositions elicit diverging representations. The degree of this divergence and its impact on encoded semantics is not very well understood. In this paper, we introduce the SUGARCREPE++ dataset to analyze the sensitivity of VLMs and ULMs to lexical and semantic alterations. Each sample in SUGARCREPE++ dataset consists of an image and a corresponding triplet of captions: a pair of semantically equivalent but lexically different positive captions and one hard negative caption. This poses a 3-way semantic (in)equivalence problem to the language models. We comprehensively evaluate VLMs and ULMs that differ in architecture, pre-training objectives and datasets to benchmark the performance of SUGARCREPE++ dataset. Experimental results highlight the difficulties of VLMs in distinguishing between lexical and semantic variations, particularly in object attributes and spatial relations. Although VLMs with larger pre-training datasets, model sizes, and multiple pre-training objectives achieve better performance on SUGARCREPE++, there is a significant opportunity for improvement. We show that all the models which achieve better performance on compositionality datasets need not perform equally well on SUGARCREPE++, signifying that compositionality alone may not be sufficient for understanding semantic and lexical alterations. Given the importance of the property that the SUGARCREPE++ dataset targets, it serves as a new challenge to the vision-and-language community. | [
"['Sri Harsha Dumpala' 'Aman Jaiswal' 'Chandramouli Sastry'\n 'Evangelos Milios' 'Sageev Oore' 'Hassan Sajjad']"
]
|
null | null | 2406.11179 | null | null | http://arxiv.org/pdf/2406.11179v1 | 2024-06-17T03:36:47Z | 2024-06-17T03:36:47Z | Learning Iterative Reasoning through Energy Diffusion | We introduce iterative reasoning through energy diffusion (IRED), a novel framework for learning to reason for a variety of tasks by formulating reasoning and decision-making problems with energy-based optimization. IRED learns energy functions to represent the constraints between input conditions and desired outputs. After training, IRED adapts the number of optimization steps during inference based on problem difficulty, enabling it to solve problems outside its training distribution -- such as more complex Sudoku puzzles, matrix completion with large value magnitudes, and pathfinding in larger graphs. Key to our method's success is two novel techniques: learning a sequence of annealed energy landscapes for easier inference and a combination of score function and energy landscape supervision for faster and more stable training. Our experiments show that IRED outperforms existing methods in continuous-space reasoning, discrete-space reasoning, and planning tasks, particularly in more challenging scenarios. Code and visualizations at https://energy-based-model.github.io/ired/ | [
"['Yilun Du' 'Jiayuan Mao' 'Joshua B. Tenenbaum']"
]
|
null | null | 2406.11187 | null | null | http://arxiv.org/pdf/2406.11187v1 | 2024-06-17T03:49:44Z | 2024-06-17T03:49:44Z | Save It All: Enabling Full Parameter Tuning for Federated Large Language
Models via Cycle Black Gradient Descent | The advent of large language models (LLMs) has revolutionized the deep learning paradigm, yielding impressive results across a wide array of tasks. However, the pre-training or fine-tuning of LLMs within a federated learning (FL) framework poses substantial challenges, including considerable computational and memory resource demands, as well as communication bottlenecks between servers and clients. Existing solutions either make the unrealistic assumption that the entire model is exchanged for training, or apply parameter-efficient fine-tuning methods from centralized learning to train LLMs in FL, which tend to underperform during training or fine-tuning stages due to the limited search subspace of parameter updating. In this paper, we introduce a novel method for the efficient training and fine-tuning of LLMs in FL, with minimal resource consumption. Our approach, termed FedCyBGD, utilizes Cycle Block Gradient Descent to periodically update the model. In particular, we design a compression scheme for FedCyBGD, aiming to further decrease the model download cost. It enables full parameter training in FL with only selected block updates and uploads, thereby reducing communication, computation, and memory costs. Our method achieves state-of-the-art performance for FL LLM training, while significantly reducing associated costs. Codes are provided here. | [
"['Lin Wang' 'Zhichao Wang' 'Xiaoying Tang']"
]
|
null | null | 2406.11200 | null | null | http://arxiv.org/pdf/2406.11200v2 | 2024-06-18T01:39:57Z | 2024-06-17T04:20:02Z | AvaTaR: Optimizing LLM Agents for Tool-Assisted Knowledge Retrieval | Large language model (LLM) agents have demonstrated impressive capability in utilizing external tools and knowledge to boost accuracy and reduce hallucinations. However, developing the prompting techniques that make LLM agents able to effectively use external tools and knowledge is a heuristic and laborious task. Here, we introduce AvaTaR, a novel and automatic framework that optimizes an LLM agent to effectively use the provided tools and improve its performance on a given task/domain. During optimization, we design a comparator module to iteratively provide insightful and holistic prompts to the LLM agent via reasoning between positive and negative examples sampled from training data. We demonstrate AvaTaR on four complex multimodal retrieval datasets featuring textual, visual, and relational information. We find AvaTaR consistently outperforms state-of-the-art approaches across all four challenging tasks and exhibits strong generalization ability when applied to novel cases, achieving an average relative improvement of 14% on the Hit@1 metric. Code and dataset are available at https://github.com/zou-group/avatar. | [
"['Shirley Wu' 'Shiyu Zhao' 'Qian Huang' 'Kexin Huang' 'Michihiro Yasunaga'\n 'Kaidi Cao' 'Vassilis N. Ioannidis' 'Karthik Subbian' 'Jure Leskovec'\n 'James Zou']"
]
|
null | null | 2406.11206 | null | null | http://arxiv.org/pdf/2406.11206v1 | 2024-06-17T04:53:47Z | 2024-06-17T04:53:47Z | Retraining with Predicted Hard Labels Provably Increases Model Accuracy | The performance of a model trained with noisy labels is often improved by simply retraining the model with its own predicted hard labels (i.e., $1$/$0$ labels). Yet, a detailed theoretical characterization of this phenomenon is lacking. In this paper, we theoretically analyze retraining in a linearly separable setting with randomly corrupted labels given to us and prove that retraining can improve the population accuracy obtained by initially training with the given (noisy) labels. To the best of our knowledge, this is the first such theoretical result. Retraining finds application in improving training with label differential privacy (DP), which involves training with noisy labels. We empirically show that retraining selectively on the samples for which the predicted label matches the given label significantly improves label DP training at no extra privacy cost; we call this consensus-based retraining. For example, when training ResNet-18 on CIFAR-100 with $\epsilon=3$ label DP, we obtain a $6.4\%$ improvement in accuracy with consensus-based retraining. | [
"['Rudrajit Das' 'Inderjit S. Dhillon' 'Alessandro Epasto' 'Adel Javanmard'\n 'Jieming Mao' 'Vahab Mirrokni' 'Sujay Sanghavi' 'Peilin Zhong']"
]
|
null | null | 2406.11209 | null | null | http://arxiv.org/abs/2406.11209v1 | 2024-06-17T05:01:09Z | 2024-06-17T05:01:09Z | What Operations can be Performed Directly on Compressed Arrays, and with
What Error? | In response to the rapidly escalating costs of computing with large matrices and tensors caused by data movement, several lossy compression methods have been developed to significantly reduce data volumes. Unfortunately, all these methods require the data to be decompressed before further computations are done. In this work, we develop a lossy compressor that allows a dozen fairly fundamental operations directly on compressed data while offering good compression ratios and modest errors. We implement a new compressor PyBlaz based on the familiar GPU-powered PyTorch framework, and evaluate it on three non-trivial applications, choosing different number systems for internal representation. Our results demonstrate that the compressed-domain operations achieve good scalability with problem sizes while incurring errors well within acceptable limits. To our best knowledge, this is the first such lossy compressor that supports compressed-domain operations while achieving acceptable performance as well as error. | [
"['Tripti Agarwal' 'Harvey Dam' 'Dorra Ben Khalifa' 'Matthieu Martel'\n 'P. Sadayappan' 'Ganesh Gopalakrishnan']"
]
|
null | null | 2406.11230 | null | null | http://arxiv.org/pdf/2406.11230v1 | 2024-06-17T05:54:06Z | 2024-06-17T05:54:06Z | Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of
Multimodal Large Language Models | Multimodal Large Language Models (MLLMs) have shown significant promise in various applications, leading to broad interest from researchers and practitioners alike. However, a comprehensive evaluation of their long-context capabilities remains underexplored. To address this gap, we introduce the MultiModal Needle-in-a-haystack (MMNeedle) benchmark, specifically designed to assess the long-context capabilities of MLLMs. Besides multi-image input, we employ image stitching to further increase the input context length, and develop a protocol to automatically generate labels for sub-image level retrieval. Essentially, MMNeedle evaluates MLLMs by stress-testing their capability to locate a target sub-image (needle) within a set of images (haystack) based on textual instructions and descriptions of image contents. This setup necessitates an advanced understanding of extensive visual contexts and effective information retrieval within long-context image inputs. With this benchmark, we evaluate state-of-the-art MLLMs, encompassing both API-based and open-source models. The findings reveal that GPT-4o consistently surpasses other models in long-context scenarios, but suffers from hallucination problems in negative samples, i.e., when needles are not in the haystacks. Our comprehensive long-context evaluation of MLLMs also sheds light on the considerable performance gap between API-based and open-source models. All the code, data, and instructions required to reproduce the main results are available at https://github.com/Wang-ML-Lab/multimodal-needle-in-a-haystack. | [
"['Hengyi Wang' 'Haizhou Shi' 'Shiwei Tan' 'Weiyi Qin' 'Wenyuan Wang'\n 'Tunyu Zhang' 'Akshay Nambi' 'Tanuja Ganu' 'Hao Wang']"
]
|
null | null | 2406.11231 | null | null | http://arxiv.org/pdf/2406.11231v1 | 2024-06-17T05:55:35Z | 2024-06-17T05:55:35Z | Enabling robots to follow abstract instructions and complete complex
dynamic tasks | Completing complex tasks in unpredictable settings like home kitchens challenges robotic systems. These challenges include interpreting high-level human commands, such as "make me a hot beverage" and performing actions like pouring a precise amount of water into a moving mug. To address these challenges, we present a novel framework that combines Large Language Models (LLMs), a curated Knowledge Base, and Integrated Force and Visual Feedback (IFVF). Our approach interprets abstract instructions, performs long-horizon tasks, and handles various uncertainties. It utilises GPT-4 to analyse the user's query and surroundings, then generates code that accesses a curated database of functions during execution. It translates abstract instructions into actionable steps. Each step involves generating custom code by employing retrieval-augmented generalisation to pull IFVF-relevant examples from the Knowledge Base. IFVF allows the robot to respond to noise and disturbances during execution. We use coffee making and plate decoration to demonstrate our approach, including components ranging from pouring to drawer opening, each benefiting from distinct feedback types and methods. This novel advancement marks significant progress toward a scalable, efficient robotic framework for completing complex tasks in uncertain environments. Our findings are illustrated in an accompanying video and supported by an open-source GitHub repository (released upon paper acceptance). | [
"['Ruaridh Mon-Williams' 'Gen Li' 'Ran Long' 'Wenqian Du' 'Chris Lucas']"
]
|
null | null | 2406.11233 | null | null | http://arxiv.org/pdf/2406.11233v1 | 2024-06-17T06:00:24Z | 2024-06-17T06:00:24Z | Probing the Decision Boundaries of In-context Learning in Large Language
Models | In-context learning is a key paradigm in large language models (LLMs) that enables them to generalize to new tasks and domains by simply prompting these models with a few exemplars without explicit parameter updates. Many attempts have been made to understand in-context learning in LLMs as a function of model scale, pretraining data, and other factors. In this work, we propose a new mechanism to probe and understand in-context learning from the lens of decision boundaries for in-context binary classification. Decision boundaries are straightforward to visualize and provide important information about the qualitative behavior of the inductive biases of standard classifiers. To our surprise, we find that the decision boundaries learned by current LLMs in simple binary classification tasks are often irregular and non-smooth, regardless of linear separability in the underlying task. This paper investigates the factors influencing these decision boundaries and explores methods to enhance their generalizability. We assess various approaches, including training-free and fine-tuning methods for LLMs, the impact of model architecture, and the effectiveness of active prompting techniques for smoothing decision boundaries in a data-efficient manner. Our findings provide a deeper understanding of in-context learning dynamics and offer practical improvements for enhancing robustness and generalizability of in-context learning. | [
"['Siyan Zhao' 'Tung Nguyen' 'Aditya Grover']"
]
|
null | null | 2406.11235 | null | null | http://arxiv.org/pdf/2406.11235v1 | 2024-06-17T06:03:13Z | 2024-06-17T06:03:13Z | QTIP: Quantization with Trellises and Incoherence Processing | Post-training quantization (PTQ) reduces the memory footprint of LLMs by quantizing weights to low-precision datatypes. Since LLM inference is usually memory-bound, PTQ methods can improve inference throughput. Recent state-of-the-art PTQ approaches have converged on using vector quantization (VQ) to quantize multiple weights at once, which improves information utilization through better shaping. However, VQ requires a codebook with size exponential in the dimension. This limits current VQ-based PTQ works to low VQ dimensions ($le 8$) that in turn limit quantization quality. Here, we introduce QTIP, which instead uses trellis coded quantization (TCQ) to achieve ultra-high-dimensional quantization. TCQ uses a stateful decoder that separates the codebook size from the bitrate and effective dimension. QTIP introduces a spectrum of lookup-only to computed lookup-free trellis codes designed for a hardware-efficient "bitshift" trellis structure; these codes achieve state-of-the-art results in both quantization quality and inference speed. | [
"['Albert Tseng' 'Qingyao Sun' 'David Hou' 'Christopher De Sa']"
]
|
null | null | 2406.11240 | null | null | http://arxiv.org/pdf/2406.11240v1 | 2024-06-17T06:10:37Z | 2024-06-17T06:10:37Z | The Benefits of Power Regularization in Cooperative Reinforcement
Learning | Cooperative Multi-Agent Reinforcement Learning (MARL) algorithms, trained only to optimize task reward, can lead to a concentration of power where the failure or adversarial intent of a single agent could decimate the reward of every agent in the system. In the context of teams of people, it is often useful to explicitly consider how power is distributed to ensure no person becomes a single point of failure. Here, we argue that explicitly regularizing the concentration of power in cooperative RL systems can result in systems which are more robust to single agent failure, adversarial attacks, and incentive changes of co-players. To this end, we define a practical pairwise measure of power that captures the ability of any co-player to influence the ego agent's reward, and then propose a power-regularized objective which balances task reward and power concentration. Given this new objective, we show that there always exists an equilibrium where every agent is playing a power-regularized best-response balancing power and task reward. Moreover, we present two algorithms for training agents towards this power-regularized objective: Sample Based Power Regularization (SBPR), which injects adversarial data during training; and Power Regularization via Intrinsic Motivation (PRIM), which adds an intrinsic motivation to regulate power to the training objective. Our experiments demonstrate that both algorithms successfully balance task reward and power, leading to lower power behavior than the baseline of task-only reward and avoid catastrophic events in case an agent in the system goes off-policy. | [
"['Michelle Li' 'Michael Dennis']"
]
|
null | null | 2406.11244 | null | null | http://arxiv.org/pdf/2406.11244v1 | 2024-06-17T06:15:31Z | 2024-06-17T06:15:31Z | SpoT-Mamba: Learning Long-Range Dependency on Spatio-Temporal Graphs
with Selective State Spaces | Spatio-temporal graph (STG) forecasting is a critical task with extensive applications in the real world, including traffic and weather forecasting. Although several recent methods have been proposed to model complex dynamics in STGs, addressing long-range spatio-temporal dependencies remains a significant challenge, leading to limited performance gains. Inspired by a recently proposed state space model named Mamba, which has shown remarkable capability of capturing long-range dependency, we propose a new STG forecasting framework named SpoT-Mamba. SpoT-Mamba generates node embeddings by scanning various node-specific walk sequences. Based on the node embeddings, it conducts temporal scans to capture long-range spatio-temporal dependencies. Experimental results on the real-world traffic forecasting dataset demonstrate the effectiveness of SpoT-Mamba. | [
"['Jinhyeok Choi' 'Heehyeon Kim' 'Minhyeong An' 'Joyce Jiyoung Whang']"
]
|
null | null | 2406.11245 | null | null | http://arxiv.org/pdf/2406.11245v1 | 2024-06-17T06:16:07Z | 2024-06-17T06:16:07Z | Deep-Reinforcement-Learning-Based AoI-Aware Resource Allocation for
RIS-Aided IoV Networks | Reconfigurable Intelligent Surface (RIS) is a pivotal technology in communication, offering an alternative path that significantly enhances the link quality in wireless communication environments. In this paper, we propose a RIS-assisted internet of vehicles (IoV) network, considering the vehicle-to-everything (V2X) communication method. In addition, in order to improve the timeliness of vehicle-to-infrastructure (V2I) links and the stability of vehicle-to-vehicle (V2V) links, we introduce the age of information (AoI) model and the payload transmission probability model. Therefore, with the objective of minimizing the AoI of V2I links and prioritizing the transmission of the V2V links' payload, we formulate this optimization problem as a Markov decision process (MDP) problem in which the BS serves as an agent to allocate resources and control the phase-shift for the vehicles using the soft actor-critic (SAC) algorithm, which converges gradually and maintains high stability. An AoI-aware joint vehicular resource allocation and RIS phase-shift control scheme based on the SAC algorithm is proposed, and simulation results show that its convergence speed, cumulative reward, AoI performance, and payload transmission probability outperform those of proximal policy optimization (PPO), deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3) and stochastic algorithms. | [
"['Kangwei Qi' 'Qiong Wu' 'Pingyi Fan' 'Nan Cheng' 'Wen Chen'\n 'Jiangzhou Wang' 'Khaled B. Letaief']"
]
|
null | null | 2406.11249 | null | null | http://arxiv.org/pdf/2406.11249v1 | 2024-06-17T06:20:39Z | 2024-06-17T06:20:39Z | Relational Learning in Pre-Trained Models: A Theory from Hypergraph
Recovery Perspective | Foundation Models (FMs) have demonstrated remarkable insights into the relational dynamics of the world, leading to the crucial question: how do these models acquire an understanding of world hybrid relations? Traditional statistical learning, particularly for prediction problems, may overlook the rich and inherently structured information from the data, especially regarding the relationships between objects. We introduce a mathematical model that formalizes relational learning as hypergraph recovery to study pre-training of FMs. In our framework, the world is represented as a hypergraph, with data abstracted as random samples from hyperedges. We theoretically examine the feasibility of a Pre-Trained Model (PTM) to recover this hypergraph and analyze the data efficiency in a minimax near-optimal style. By integrating rich graph theories into the realm of PTMs, our mathematical framework offers powerful tools for an in-depth understanding of pre-training from a unique perspective and can be used under various scenarios. As an example, we extend the framework to entity alignment in multimodal learning. | [
"['Yang Chen' 'Cong Fang' 'Zhouchen Lin' 'Bing Liu']"
]
|
null | null | 2406.11257 | null | null | http://arxiv.org/pdf/2406.11257v1 | 2024-06-17T06:47:29Z | 2024-06-17T06:47:29Z | ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint
Shrinking | Large language models (LLMs) have recently attracted significant attention in the field of artificial intelligence. However, the training process of these models poses significant challenges in terms of computational and storage capacities, thus compressing checkpoints has become an urgent problem. In this paper, we propose a novel Extreme Checkpoint Compression (ExCP) framework, which significantly reduces the required storage of training checkpoints while achieving nearly lossless performance. We first calculate the residuals of adjacent checkpoints to obtain the essential but sparse information for a higher compression ratio. To further exploit the redundant parameters in checkpoints, we then propose a weight-momentum joint shrinking method to utilize another important source of information during model optimization, i.e., momentum. In particular, we exploit the information of both the model and the optimizer to discard as many parameters as possible while preserving critical information to ensure optimal performance. Furthermore, we utilize non-uniform quantization to further compress the storage of checkpoints. We extensively evaluate our proposed ExCP framework on several models ranging from 410M to 7B parameters and demonstrate significant storage reduction while maintaining strong performance. For instance, we achieve approximately $70\times$ compression for the Pythia-410M model, with the final performance being as accurate as the original model on various downstream tasks. Codes will be available at https://github.com/Gaffey/ExCP. | [
"['Wenshuo Li' 'Xinghao Chen' 'Han Shu' 'Yehui Tang' 'Yunhe Wang']"
]
|
null | null | 2406.11271 | null | null | http://arxiv.org/pdf/2406.11271v1 | 2024-06-17T07:21:36Z | 2024-06-17T07:21:36Z | MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal
Dataset with One Trillion Tokens | Multimodal interleaved datasets featuring free-form interleaved sequences of images and text are crucial for training frontier large multimodal models (LMMs). Despite the rapid progression of open-source LMMs, there remains a pronounced scarcity of large-scale, diverse open-source multimodal interleaved datasets. In response, we introduce MINT-1T, the most extensive and diverse open-source Multimodal INTerleaved dataset to date. MINT-1T comprises one trillion text tokens and three billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. As scaling multimodal interleaved datasets requires substantial engineering effort, sharing the data curation process and releasing the dataset greatly benefits the community. Our experiments show that LMMs trained on MINT-1T rival the performance of models trained on the previous leading dataset, OBELICS. Our data and code will be released at https://github.com/mlfoundations/MINT-1T. | [
"['Anas Awadalla' 'Le Xue' 'Oscar Lo' 'Manli Shu' 'Hannah Lee'\n 'Etash Kumar Guha' 'Matt Jordan' 'Sheng Shen' 'Mohamed Awadalla'\n 'Silvio Savarese' 'Caiming Xiong' 'Ran Xu' 'Yejin Choi' 'Ludwig Schmidt']"
]
|
null | null | 2406.11281 | null | null | http://arxiv.org/pdf/2406.11281v1 | 2024-06-17T07:37:36Z | 2024-06-17T07:37:36Z | Statistical Learning of Distributionally Robust Stochastic Control in
Continuous State Spaces | We explore the control of stochastic systems with potentially continuous state and action spaces, characterized by the state dynamics $X_{t+1} = f(X_t, A_t, W_t)$. Here, $X$, $A$, and $W$ represent the state, action, and exogenous random noise processes, respectively, with $f$ denoting a known function that describes state transitions. Traditionally, the noise process $\{W_t, t \geq 0\}$ is assumed to be independent and identically distributed, with a distribution that is either fully known or can be consistently estimated. However, the occurrence of distributional shifts, typical in engineering settings, necessitates the consideration of the robustness of the policy. This paper introduces a distributionally robust stochastic control paradigm that accommodates possibly adaptive adversarial perturbation to the noise distribution within a prescribed ambiguity set. We examine two adversary models: current-action-aware and current-action-unaware, leading to different dynamic programming equations. Furthermore, we characterize the optimal finite sample minimax rates for achieving uniform learning of the robust value function across continuum states under both adversary types, considering ambiguity sets defined by $f_k$-divergence and Wasserstein distance. Finally, we demonstrate the applicability of our framework across various real-world settings. | [
"['Shengbo Wang' 'Nian Si' 'Jose Blanchet' 'Zhengyuan Zhou']"
]
|
null | null | 2406.11290 | null | null | http://arxiv.org/pdf/2406.11290v1 | 2024-06-17T07:52:42Z | 2024-06-17T07:52:42Z | Iterative Utility Judgment Framework via LLMs Inspired by Relevance in
Philosophy | Utility and topical relevance are critical measures in information retrieval (IR), reflecting system and user perspectives, respectively. While topical relevance has long been emphasized, utility is a higher standard of relevance and is more useful for facilitating downstream tasks, e.g., in Retrieval-Augmented Generation (RAG). When we incorporate utility judgments into RAG, we realize that the topical relevance, utility, and answering in RAG are closely related to the three types of relevance that Schutz discussed from a philosophical perspective. They are topical relevance, interpretational relevance, and motivational relevance, respectively. Inspired by the dynamic iterations of the three types of relevance, we propose an Iterative utiliTy judgmEnt fraMework (ITEM) to promote each step of the cycle of RAG. We conducted extensive experiments on multi-grade passage retrieval and factoid question-answering datasets (i.e., TREC DL, WebAP, and NQ). Experimental results demonstrate significant improvements in utility judgments, ranking of topical relevance, and answer generation upon representative baselines, including multiple single-shot utility judging approaches. Our code and benchmark can be found at https://anonymous.4open.science/r/ITEM-B486/. | [
"['Hengran Zhang' 'Keping Bi' 'Jiafeng Guo' 'Xueqi Cheng']"
]
|
null | null | 2406.11301 | null | null | http://arxiv.org/pdf/2406.11301v1 | 2024-06-17T08:08:11Z | 2024-06-17T08:08:11Z | Optimizing and Testing Instruction-Following: Analyzing the Impact of
Fine-Grained Instruction Variants on instruction-tuned LLMs | The effective alignment of Large Language Models (LLMs) with precise instructions is essential for their application in diverse real-world scenarios. Current methods focus on enhancing the diversity and complexity of training and evaluation samples, yet they fall short in accurately assessing LLMs' ability to follow similar instruction variants. We introduce an effective data augmentation technique that decomposes complex instructions into simpler sub-components, modifies these, and reconstructs them into new variants, thereby preserving the original instruction's context and complexity while introducing variability, which is critical for training and evaluating LLMs' instruction-following precision. We developed the DeMoRecon dataset using this method to both fine-tune and evaluate LLMs. Our findings show that LLMs fine-tuned with DeMoRecon gain a significant performance boost on both our benchmark and commonly used instruction-following benchmarks. | [
"['Jiuding Yang' 'Weidong Guo' 'Kaitong Yang' 'Xiangyang Li' 'Zhuwei Rao'\n 'Yu Xu' 'Di Niu']"
]
|
null | null | 2406.11308 | null | null | http://arxiv.org/pdf/2406.11308v1 | 2024-06-17T08:14:40Z | 2024-06-17T08:14:40Z | Management Decisions in Manufacturing using Causal Machine Learning --
To Rework, or not to Rework? | In this paper, we present a data-driven model for estimating optimal rework policies in manufacturing systems. We consider a single production stage within a multistage, lot-based system that allows for optional rework steps. While the rework decision depends on an intermediate state of the lot and system, the final product inspection, and thus the assessment of the actual yield, is delayed until production is complete. Repair steps are applied uniformly to the lot, potentially improving some of the individual items while degrading others. The challenge is thus to balance potential yield improvement with the rework costs incurred. Given the inherently causal nature of this decision problem, we propose a causal model to estimate yield improvement. We apply methods from causal machine learning, in particular double/debiased machine learning (DML) techniques, to estimate conditional treatment effects from data and derive policies for rework decisions. We validate our decision model using real-world data from opto-electronic semiconductor manufacturing, achieving a yield improvement of 2 - 3% during the color-conversion process of white light-emitting diodes (LEDs). | [
"['Philipp Schwarz' 'Oliver Schacht' 'Sven Klaassen' 'Daniel Grünbaum'\n 'Sebastian Imhof' 'Martin Spindler']"
]
|
null | null | 2406.11310 | null | null | http://arxiv.org/pdf/2406.11310v1 | 2024-06-17T08:16:28Z | 2024-06-17T08:16:28Z | Federated Active Learning Framework for Efficient Annotation Strategy in
Skin-lesion Classification | Federated Learning (FL) enables multiple institutes to train models collaboratively without sharing private data. Current FL research focuses on communication efficiency, privacy protection, and personalization, and assumes that the data of FL have already been ideally collected. In medical scenarios, however, data annotation demands both expertise and intensive labor, which is a critical problem in FL. Active learning (AL) has shown promising performance in reducing the number of data annotations in medical image analysis. We propose a federated AL (FedAL) framework in which AL is executed periodically and interactively under FL. We exploit a local model in each hospital and a global model acquired from FL to construct an ensemble. We use ensemble-entropy-based AL as an efficient data-annotation strategy in FL. Therefore, our FedAL framework can decrease the amount of annotated data and preserve patient privacy while maintaining the performance of FL. To our knowledge, this is the first FedAL framework applied to medical images. We validated our framework on real-world dermoscopic datasets. Using only 50% of the samples, our framework was able to achieve state-of-the-art performance on a skin-lesion classification task. Our framework performed better than several state-of-the-art AL methods under FL and achieved comparable performance to full-data FL. | [
"['Zhipeng Deng' 'Yuqiao Yang' 'Kenji Suzuki']"
]
|
null | null | 2406.11316 | null | null | http://arxiv.org/pdf/2406.11316v1 | 2024-06-17T08:26:51Z | 2024-06-17T08:26:51Z | Improved Algorithms for Contextual Dynamic Pricing | In contextual dynamic pricing, a seller sequentially prices goods based on contextual information. Buyers will purchase products only if the prices are below their valuations. The goal of the seller is to design a pricing strategy that collects as much revenue as possible. We focus on two different valuation models. The first assumes that valuations linearly depend on the context and are further distorted by noise. Under minor regularity assumptions, our algorithm achieves an optimal regret bound of $\tilde{\mathcal{O}}(T^{2/3})$, improving the existing results. The second model removes the linearity assumption, requiring only that the expected buyer valuation is $\beta$-Hölder in the context. For this model, our algorithm obtains a regret $\tilde{\mathcal{O}}(T^{(d+2\beta)/(d+3\beta)})$, where $d$ is the dimension of the context space. | [
"['Matilde Tullii' 'Solenne Gaucher' 'Nadav Merlis' 'Vianney Perchet']"
]
|
null | null | 2406.11318 | null | null | http://arxiv.org/pdf/2406.11318v1 | 2024-06-17T08:35:32Z | 2024-06-17T08:35:32Z | Reconfigurable Intelligent Surface Assisted VEC Based on Multi-Agent
Reinforcement Learning | Vehicular edge computing (VEC) is an emerging technology that enables vehicles to perform high-intensity tasks by executing tasks locally or offloading them to nearby edge devices. However, obstacles such as buildings may degrade the communications and incur communication interruptions, and thus the vehicle may not meet the requirement for task offloading. Reconfigurable intelligent surfaces (RIS) are introduced to support vehicle communication and provide an alternative communication path. The system performance can be improved by flexibly adjusting the phase-shift of the RIS. For an RIS-assisted VEC system where tasks arrive randomly, we design a control scheme that considers offloading power, local power allocation and phase-shift optimization. To solve this non-convex problem, we propose a new deep reinforcement learning (DRL) framework that employs a modified multi-agent deep deterministic policy gradient (MADDPG) approach to optimize the power allocation for vehicle users (VUs) and a block coordinate descent (BCD) algorithm to optimize the phase-shift of the RIS. Simulation results show that our proposed scheme outperforms the centralized deep deterministic policy gradient (DDPG) scheme and a random scheme. | [
"['Kangwei Qi' 'Qiong Wu' 'Pingyi Fan' 'Nan Cheng' 'Qiang Fan'\n 'Jiangzhou Wang']"
]
|
null | null | 2406.11325 | null | null | http://arxiv.org/pdf/2406.11325v2 | 2024-07-05T15:51:16Z | 2024-06-17T08:38:29Z | Deep-Learning-Based Channel Estimation for Distributed MIMO with 1-bit
Radio-Over-Fiber Fronthaul | We consider the problem of pilot-aided, uplink channel estimation in a distributed massive multiple-input multiple-output (MIMO) architecture, in which the access points are connected to a central processing unit via fiber-optical fronthaul links, carrying a two-level-quantized version of the received analog radio-frequency signal. We adapt to this architecture the deep-learning-based channel-estimation algorithm recently proposed by Nguyen et al. (2023), and explore its robustness to the additional signal distortions (beyond 1-bit quantization) introduced in the considered architecture by the automatic gain controllers (AGCs) and by the comparators. These components are used at the access points to generate the two-level analog waveform from the received signal. Via simulation results, we illustrate that the proposed channel-estimation method outperforms significantly the Bussgang linear minimum mean-square error channel estimator, and it is robust against the additional impairments introduced by the AGCs and the comparators. | [
"['Alireza Bordbar' 'Lise Aabel' 'Christian Häger' 'Christian Fager'\n 'Giuseppe Durisi']"
]
|
null | null | 2406.11331 | null | null | http://arxiv.org/pdf/2406.11331v1 | 2024-06-17T08:42:19Z | 2024-06-17T08:42:19Z | They're All Doctors: Synthesizing Diverse Counterfactuals to Mitigate
Associative Bias | Vision Language Models (VLMs) such as CLIP are powerful models; however they can exhibit unwanted biases, making them less safe when deployed directly in applications such as text-to-image, text-to-video retrievals, reverse search, or classification tasks. In this work, we propose a novel framework to generate synthetic counterfactual images to create a diverse and balanced dataset that can be used to fine-tune CLIP. Given a set of diverse synthetic base images from text-to-image models, we leverage off-the-shelf segmentation and inpainting models to place humans with diverse visual appearances in context. We show that CLIP trained on such datasets learns to disentangle the human appearance from the context of an image, i.e., what makes a doctor is not correlated to the person's visual appearance, like skin color or body type, but to the context, such as background, the attire they are wearing, or the objects they are holding. We demonstrate that our fine-tuned CLIP model, $CF_\alpha$, improves key fairness metrics such as MaxSkew, MinSkew, and NDKL by 40-66% for image retrieval tasks, while still achieving similar levels of performance in downstream tasks. We show that, by design, our model retains maximal compatibility with the original CLIP models, and can be easily controlled to support different accuracy versus fairness trade-offs in a plug-n-play fashion. | [
"['Salma Abdel Magid' 'Jui-Hsien Wang' 'Kushal Kafle' 'Hanspeter Pfister']"
]
|
null | null | 2406.11340 | null | null | http://arxiv.org/pdf/2406.11340v2 | 2024-06-18T08:10:58Z | 2024-06-17T08:57:00Z | CM2-Net: Continual Cross-Modal Mapping Network for Driver Action
Recognition | Driver action recognition has significantly advanced in enhancing driver-vehicle interactions and ensuring driving safety by integrating multiple modalities, such as infrared and depth. Nevertheless, compared to RGB modality only, it is always laborious and costly to collect extensive data for all types of non-RGB modalities in car cabin environments. Therefore, previous works have suggested independently learning each non-RGB modality by fine-tuning a model pre-trained on RGB videos, but these methods are less effective in extracting informative features when faced with newly-incoming modalities due to large domain gaps. In contrast, we propose a Continual Cross-Modal Mapping Network (CM2-Net) to continually learn each newly-incoming modality with instructive prompts from the previously-learned modalities. Specifically, we have developed Accumulative Cross-modal Mapping Prompting (ACMP), to map the discriminative and informative features learned from previous modalities into the feature space of newly-incoming modalities. Then, when faced with newly-incoming modalities, these mapped features are able to provide effective prompts for which features should be extracted and prioritized. These prompts are accumulating throughout the continual learning process, thereby boosting further recognition performances. Extensive experiments conducted on the Drive&Act dataset demonstrate the performance superiority of CM2-Net on both uni- and multi-modal driver action recognition. | [
"['Ruoyu Wang' 'Chen Cai' 'Wenqian Wang' 'Jianjun Gao' 'Dan Lin'\n 'Wenyang Liu' 'Kim-Hui Yap']"
]
|
null | null | 2406.11353 | null | null | http://arxiv.org/pdf/2406.11353v1 | 2024-06-17T09:17:05Z | 2024-06-17T09:17:05Z | $\texttt{MoE-RBench}$: Towards Building Reliable Language Models with
Sparse Mixture-of-Experts | Mixture-of-Experts (MoE) has gained increasing popularity as a promising framework for scaling up large language models (LLMs). However, the reliability assessment of MoE lags behind its surging applications. Moreover, when transferred to new domains, such as in fine-tuning, MoE models sometimes underperform their dense counterparts. Motivated by the research gap and counter-intuitive phenomenon, we propose $\texttt{MoE-RBench}$, the first comprehensive assessment of SMoE reliability from three aspects: $\textit{(i)}$ safety and hallucination, $\textit{(ii)}$ resilience to adversarial attacks, and $\textit{(iii)}$ out-of-distribution robustness. Extensive models and datasets are tested to compare the MoE to dense networks from these reliability dimensions. Our empirical observations suggest that with appropriate hyperparameters, training recipes, and inference techniques, we can build the MoE model more reliably than the dense LLM. In particular, we find that the robustness of SMoE is sensitive to the basic training settings. We hope that this study can provide deeper insights into how to adapt the pre-trained MoE model to other tasks with higher-generation security, quality, and stability. Codes are available at https://github.com/UNITES-Lab/MoE-RBench | [
"['Guanjie Chen' 'Xinyu Zhao' 'Tianlong Chen' 'Yu Cheng']"
]
|
null | null | 2406.11370 | null | null | http://arxiv.org/pdf/2406.11370v1 | 2024-06-17T09:48:53Z | 2024-06-17T09:48:53Z | Fairer Preferences Elicit Improved Human-Aligned Large Language Model
Judgments | Large language models (LLMs) have shown promising abilities as cost-effective and reference-free evaluators for assessing language generation quality. In particular, pairwise LLM evaluators, which compare two generated texts and determine the preferred one, have been employed in a wide range of applications. However, LLMs exhibit preference biases and worrying sensitivity to prompt designs. In this work, we first reveal that the predictive preference of LLMs can be highly brittle and skewed, even with semantically equivalent instructions. We find that fairer predictive preferences from LLMs consistently lead to judgments that are better aligned with humans. Motivated by this phenomenon, we propose an automatic Zero-shot Evaluation-oriented Prompt Optimization framework, ZEPO, which aims to produce fairer preference decisions and improve the alignment of LLM evaluators with human judgments. To this end, we propose a zero-shot learning objective based on the preference decision fairness. ZEPO demonstrates substantial performance improvements over state-of-the-art LLM evaluators, without requiring labeled data, on representative meta-evaluation benchmarks. Our findings underscore the critical correlation between preference fairness and human alignment, positioning ZEPO as an efficient prompt optimizer for bridging the gap between LLM evaluators and human judgments. | [
"['Han Zhou' 'Xingchen Wan' 'Yinhong Liu' 'Nigel Collier' 'Ivan Vulić'\n 'Anna Korhonen']"
]
|
null | null | 2406.11389 | null | null | http://arxiv.org/pdf/2406.11389v1 | 2024-06-17T10:18:53Z | 2024-06-17T10:18:53Z | SEFraud: Graph-based Self-Explainable Fraud Detection via Interpretative
Mask Learning | Graph-based fraud detection has widespread application in modern industry scenarios, such as spam review and malicious account detection. While considerable efforts have been devoted to designing adequate fraud detectors, the interpretability of their results has often been overlooked. Previous works have attempted to generate explanations for specific instances using post-hoc explaining methods such as GNNExplainer. However, post-hoc explanations cannot facilitate the model predictions, and the computational cost of these methods cannot meet practical requirements, thus limiting their application in real-world scenarios. To address these issues, we propose SEFraud, a novel graph-based self-explainable fraud detection framework that simultaneously tackles fraud detection and result interpretability. Concretely, SEFraud first leverages customized heterogeneous graph transformer networks with learnable feature masks and edge masks to learn expressive representations from the informative heterogeneously typed transactions. A new triplet loss is further designed to enhance the performance of mask learning. Empirical results on various datasets demonstrate the effectiveness of SEFraud as it shows considerable advantages in both the fraud detection performance and interpretability of prediction results. Moreover, SEFraud has been deployed and offers an explainable fraud detection service for the largest bank in China, Industrial and Commercial Bank of China Limited (ICBC). Results collected from the production environment of ICBC show that SEFraud can provide accurate detection results and comprehensive explanations that align with the expert business understanding, confirming its efficiency and applicability in large-scale online services. | [
"['Kaidi Li' 'Tianmeng Yang' 'Min Zhou' 'Jiahao Meng' 'Shendi Wang'\n 'Yihui Wu' 'Boshuai Tan' 'Hu Song' 'Lujia Pan' 'Fan Yu' 'Zhenli Sheng'\n 'Yunhai Tong']"
]
|
null | null | 2406.11390 | null | null | http://arxiv.org/pdf/2406.11390v1 | 2024-06-17T10:21:01Z | 2024-06-17T10:21:01Z | Unfolding Time: Generative Modeling for Turbulent Flows in 4D | A recent study in turbulent flow simulation demonstrated the potential of generative diffusion models for fast 3D surrogate modeling. This approach eliminates the need for specifying initial states or performing lengthy simulations, significantly accelerating the process. While adept at sampling individual frames from the learned manifold of turbulent flow states, the previous model lacks the capability to generate sequences, hindering analysis of dynamic phenomena. This work addresses this limitation by introducing a 4D generative diffusion model and a physics-informed guidance technique that enables the generation of realistic sequences of flow states. Our findings indicate that the proposed method can successfully sample entire subsequences from the turbulent manifold, even though generalizing from individual frames to sequences remains a challenging task. This advancement opens doors for the application of generative modeling in analyzing the temporal evolution of turbulent flows, providing valuable insights into their complex dynamics. | [
"['Abdullah Saydemir' 'Marten Lienen' 'Stephan Günnemann']"
]
|
null | null | 2406.11391 | null | null | http://arxiv.org/pdf/2406.11391v1 | 2024-06-17T10:22:00Z | 2024-06-17T10:22:00Z | P-TA: Using Proximal Policy Optimization to Enhance Tabular Data
Augmentation via Large Language Models | A multitude of industries depend on accurate and reasonable tabular data augmentation for their business processes. Contemporary methodologies in generating tabular data revolve around utilizing Generative Adversarial Networks (GAN) or fine-tuning Large Language Models (LLM). However, GAN-based approaches are documented to produce samples with common-sense errors attributed to the absence of external knowledge. On the other hand, LLM-based methods exhibit a limited capacity to capture the disparities between synthesized and actual data distribution due to the absence of feedback from a discriminator during training. Furthermore, the decoding of LLM-based generation introduces gradient breakpoints, impeding the backpropagation of loss from a discriminator, thereby complicating the integration of these two approaches. To solve this challenge, we propose using proximal policy optimization (PPO) to apply GANs, guiding LLMs to enhance the probability distribution of tabular features. This approach enables the utilization of LLMs as generators for GANs in synthesizing tabular data. Our experiments demonstrate that PPO leads to an approximately 4% improvement in the accuracy of models trained on synthetically generated data over state-of-the-art across three real-world datasets. | [
"['Shuo Yang' 'Chenchen Yuan' 'Yao Rong' 'Felix Steinbauer'\n 'Gjergji Kasneci']"
]
|
null | null | 2406.11397 | null | null | http://arxiv.org/pdf/2406.11397v1 | 2024-06-17T10:33:00Z | 2024-06-17T10:33:00Z | DistPred: A Distribution-Free Probabilistic Inference Method for
Regression and Forecasting | Traditional regression and prediction tasks often only provide deterministic point estimates. To estimate the uncertainty or distribution information of the response variable, methods such as Bayesian inference, model ensembling, or MC Dropout are typically used. These methods either assume that the posterior distribution of samples follows a Gaussian process or require thousands of forward passes for sample generation. We propose a novel approach called DistPred for regression and forecasting tasks, which overcomes the limitations of existing methods while remaining simple and powerful. Specifically, we transform proper scoring rules that measure the discrepancy between the predicted distribution and the target distribution into a differentiable discrete form and use it as a loss function to train the model end-to-end. This allows the model to sample numerous samples in a single forward pass to estimate the potential distribution of the response variable. We have compared our method with several existing approaches on multiple datasets and achieved state-of-the-art performance. Additionally, our method significantly improves computational efficiency. For example, compared to state-of-the-art models, DistPred has a 90x faster inference speed. Experimental results can be reproduced through https://github.com/Anoise/DistPred. | [
"['Daojun Liang' 'Haixia Zhang' 'Dongfeng Yuan']"
]
|
null | null | 2406.11402 | null | null | http://arxiv.org/pdf/2406.11402v1 | 2024-06-17T10:45:36Z | 2024-06-17T10:45:36Z | Evaluating Open Language Models Across Task Types, Application Domains,
and Reasoning Types: An In-Depth Experimental Analysis | The rapid rise of Language Models (LMs) has expanded their use in several applications. Yet, due to constraints of model size, associated cost, or proprietary restrictions, utilizing state-of-the-art (SOTA) LLMs is not always feasible. With open, smaller LMs emerging, more applications can leverage their capabilities, but selecting the right LM can be challenging. This work conducts an in-depth experimental analysis of the semantic correctness of outputs of 10 smaller, open LMs across three aspects: task types, application domains, and reasoning types, using diverse prompt styles. We demonstrate that the most effective models and prompt styles vary depending on the specific requirements. Our analysis provides a comparative assessment of LMs and prompt styles using a proposed three-tier schema of aspects for their strategic selection based on use-case and other constraints. We also show that, if utilized appropriately, these LMs can compete with, and sometimes outperform, SOTA LLMs like DeepSeek-v2, GPT-3.5-Turbo, and GPT-4o. | [
"['Neelabh Sinha' 'Vinija Jain' 'Aman Chadha']"
]
|
null | null | 2406.11422 | null | null | http://arxiv.org/pdf/2406.11422v1 | 2024-06-17T11:20:09Z | 2024-06-17T11:20:09Z | Cross-domain Open-world Discovery | In many real-world applications, test data may commonly exhibit categorical shifts, characterized by the emergence of novel classes, as well as distribution shifts arising from feature distributions different from the ones the model was trained on. However, existing methods either discover novel classes in the open-world setting or assume domain shifts without the ability to discover novel classes. In this work, we consider a cross-domain open-world discovery setting, where the goal is to assign samples to seen classes and discover unseen classes under a domain shift. To address this challenging problem, we present CROW, a prototype-based approach that introduces a cluster-then-match strategy enabled by a well-structured representation space of foundation models. In this way, CROW discovers novel classes by robustly matching clusters with previously seen classes, followed by fine-tuning the representation space using an objective designed for cross-domain open-world discovery. Extensive experimental results on image classification benchmark datasets demonstrate that CROW outperforms alternative baselines, achieving an 8% average performance improvement across 75 experimental settings. | [
"['Shuo Wen' 'Maria Brbic']"
]
|
null | null | 2406.11423 | null | null | http://arxiv.org/pdf/2406.11423v1 | 2024-06-17T11:22:04Z | 2024-06-17T11:22:04Z | Dredge Word, Social Media, and Webgraph Networks for Unreliable Website
Classification and Identification | In an attempt to mimic the complex paths through which unreliable content spreads between search engines and social media, we explore the impact of incorporating both webgraph and large-scale social media contexts into website credibility classification and discovery systems. We further explore the usage of what we define as \textit{dredge words} on social media -- terms or phrases for which unreliable domains rank highly. Through comprehensive graph neural network ablations, we demonstrate that curriculum-based heterogeneous graph models that leverage context from both webgraphs and social media data outperform homogeneous and single-mode approaches. We further demonstrate that the incorporation of dredge words into our model strongly associates unreliable websites with social media and online commerce platforms. Finally, we show our heterogeneous model greatly outperforms competing systems in the top-k identification of unlabeled unreliable websites. We demonstrate the strong unreliability signals present in the diverse paths that users follow to uncover unreliable content, and we release a novel dataset of dredge words. | [
"['Evan M. Williams' 'Peter Carragher' 'Kathleen M. Carley']"
]
|
null | null | 2406.11427 | null | null | http://arxiv.org/pdf/2406.11427v1 | 2024-06-17T11:25:57Z | 2024-06-17T11:25:57Z | DiTTo-TTS: Efficient and Scalable Zero-Shot Text-to-Speech with
Diffusion Transformer | Large-scale diffusion models have shown outstanding generative abilities across multiple modalities including images, videos, and audio. However, text-to-speech (TTS) systems typically involve domain-specific modeling factors (e.g., phonemes and phoneme-level durations) to ensure precise temporal alignments between text and speech, which hinders the efficiency and scalability of diffusion models for TTS. In this work, we present an efficient and scalable Diffusion Transformer (DiT) that utilizes off-the-shelf pre-trained text and speech encoders. Our approach addresses the challenge of text-speech alignment via cross-attention mechanisms with the prediction of the total length of speech representations. To achieve this, we enhance the DiT architecture to suit TTS and improve the alignment by incorporating semantic guidance into the latent space of speech. We scale the training dataset and the model size to 82K hours and 790M parameters, respectively. Our extensive experiments demonstrate that the large-scale diffusion model for TTS without domain-specific modeling not only simplifies the training pipeline but also yields superior or comparable zero-shot performance to state-of-the-art TTS models in terms of naturalness, intelligibility, and speaker similarity. Our speech samples are available at https://ditto-tts.github.io. | [
"['Keon Lee' 'Dong Won Kim' 'Jaehyeon Kim' 'Jaewoong Cho']"
]
|
null | null | 2406.11437 | null | null | http://arxiv.org/pdf/2406.11437v1 | 2024-06-17T11:47:14Z | 2024-06-17T11:47:14Z | Analysing the Behaviour of Tree-Based Neural Networks in Regression
Tasks | The landscape of deep learning has vastly expanded the frontiers of source code analysis, particularly through the utilization of structural representations such as Abstract Syntax Trees (ASTs). While these methodologies have demonstrated effectiveness in classification tasks, their efficacy in regression applications, such as execution time prediction from source code, remains underexplored. This paper endeavours to decode the behaviour of tree-based neural network models in the context of such regression challenges. We extend the application of established models--tree-based Convolutional Neural Networks (CNNs), Code2Vec, and Transformer-based methods--to predict the execution time of source code by parsing it to an AST. Our comparative analysis reveals that while these models are benchmarks in code representation, they exhibit limitations when tasked with regression. To address these deficiencies, we propose a novel dual-transformer approach that operates on both source code tokens and AST representations, employing cross-attention mechanisms to enhance interpretability between the two domains. Furthermore, we explore the adaptation of Graph Neural Networks (GNNs) to this tree-based problem, theorizing the inherent compatibility due to the graphical nature of ASTs. Empirical evaluations on real-world datasets showcase that our dual-transformer model outperforms all other tree-based neural networks and the GNN-based models. Moreover, our proposed dual transformer demonstrates remarkable adaptability and robust performance across diverse datasets. | [
"['Peter Samoaa' 'Mehrdad Farahani' 'Antonio Longa' 'Philipp Leitner'\n 'Morteza Haghir Chehreghani']"
]
|
null | null | 2406.11443 | null | null | http://arxiv.org/pdf/2406.11443v1 | 2024-06-17T11:56:15Z | 2024-06-17T11:56:15Z | PrAViC: Probabilistic Adaptation Framework for Real-Time Video
Classification | Video processing is generally divided into two main categories: processing of the entire video, which typically yields optimal classification outcomes, and real-time processing, where the objective is to make a decision as promptly as possible. The latter is often driven by the need to rapidly identify potential critical or dangerous situations. These could include machine failure, traffic accidents, heart problems, or dangerous behavior. Although the models dedicated to the processing of entire videos are typically well-defined and clearly presented in the literature, this is not the case for online processing, where a plethora of hand-devised methods exist. To address this, we present PrAViC, a novel, unified, and theoretically-based adaptation framework for dealing with the online classification problem for video data. The initial phase of our study is to establish a robust mathematical foundation for the theory of classification of sequential data, with the potential to make a decision at an early stage. This allows us to construct a natural function that encourages the model to return an outcome much faster. The subsequent phase is to demonstrate a straightforward and readily implementable method for adapting offline models to online and recurrent operations. Finally, by comparing the proposed approach to the non-online state-of-the-art baseline, it is demonstrated that the use of PrAViC encourages the network to make earlier classification decisions without compromising accuracy. | [
"['Magdalena Trędowicz' 'Łukasz Struski' 'Marcin Mazur' 'Szymon Janusz'\n 'Arkadiusz Lewicki' 'Jacek Tabor']"
]
|
null | null | 2406.11456 | null | null | http://arxiv.org/pdf/2406.11456v1 | 2024-06-17T12:14:31Z | 2024-06-17T12:14:31Z | Calibrating Where It Matters: Constrained Temperature Scaling | We consider calibration of convolutional classifiers for diagnostic decision making. Clinical decision makers can use calibrated classifiers to minimise expected costs given their own cost function. Such functions are usually unknown at training time. If minimising expected costs is the primary aim, algorithms should focus on tuning calibration in regions of the probability simplex likely to affect decisions. We give an example, modifying temperature scaling calibration, and demonstrate improved calibration where it matters using convnets trained to classify dermoscopy images. | [
"['Stephen McKenna' 'Jacob Carse']"
]
|
null | null | 2406.11458 | null | null | http://arxiv.org/pdf/2406.11458v1 | 2024-06-17T12:20:59Z | 2024-06-17T12:20:59Z | Adversaries With Incentives: A Strategic Alternative to Adversarial
Robustness | Adversarial training aims to defend against *adversaries*: malicious opponents whose sole aim is to harm predictive performance in any way possible - a rather harsh perspective, which we assert results in unnecessarily conservative models. Instead, we propose to model opponents as simply pursuing their own goals, rather than working directly against the classifier. Employing tools from strategic modeling, our approach uses knowledge or beliefs regarding the opponent's possible incentives as inductive bias for learning. Our method of *strategic training* is designed to defend against opponents within an *incentive uncertainty set*: this resorts to adversarial learning when the set is maximal, but offers potential gains when it can be appropriately reduced. We conduct a series of experiments that show how even mild knowledge regarding the adversary's incentives can be useful, and that the degree of potential gains depends on how incentives relate to the structure of the learning task. | [
"['Maayan Ehrenberg' 'Roy Ganz' 'Nir Rosenfeld']"
]
|
null | null | 2406.11463 | null | null | http://arxiv.org/pdf/2406.11463v1 | 2024-06-17T12:24:45Z | 2024-06-17T12:24:45Z | Just How Flexible are Neural Networks in Practice? | It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters, underpinning notions of overparameterized and underparameterized models. In practice, however, we only find solutions accessible via our training procedure, including the optimizer and regularizers, limiting flexibility. Moreover, the exact parameterization of the function class, built into an architecture, shapes its loss surface and impacts the minima we find. In this work, we examine the ability of neural networks to fit data in practice. Our findings indicate that: (1) standard optimizers find minima where the model can only fit training sets with significantly fewer samples than it has parameters; (2) convolutional networks are more parameter-efficient than MLPs and ViTs, even on randomly labeled data; (3) while stochastic training is thought to have a regularizing effect, SGD actually finds minima that fit more training data than full-batch gradient descent; (4) the difference in capacity to fit correctly labeled and incorrectly labeled samples can be predictive of generalization; (5) ReLU activation functions result in finding minima that fit more data despite being designed to avoid vanishing and exploding gradients in deep architectures. | [
"['Ravid Shwartz-Ziv' 'Micah Goldblum' 'Arpit Bansal' 'C. Bayan Bruss'\n 'Yann LeCun' 'Andrew Gordon Wilson']"
]
|
null | null | 2406.11481 | null | null | http://arxiv.org/pdf/2406.11481v2 | 2024-06-21T13:04:50Z | 2024-06-17T12:46:02Z | Constrained Reinforcement Learning with Average Reward Objective:
Model-Based and Model-Free Algorithms | Reinforcement Learning (RL) serves as a versatile framework for sequential decision-making, finding applications across diverse domains such as robotics, autonomous driving, recommendation systems, supply chain optimization, biology, mechanics, and finance. The primary objective in these applications is to maximize the average reward. Real-world scenarios often necessitate adherence to specific constraints during the learning process. This monograph focuses on the exploration of various model-based and model-free approaches for Constrained RL within the context of average reward Markov Decision Processes (MDPs). The investigation commences with an examination of model-based strategies, delving into two foundational methods - optimism in the face of uncertainty and posterior sampling. Subsequently, the discussion transitions to parametrized model-free approaches, where the primal-dual policy gradient-based algorithm is explored as a solution for constrained MDPs. The monograph provides regret guarantees and analyzes constraint violation for each of the discussed setups. For the above exploration, we assume the underlying MDP to be ergodic. Further, this monograph extends its discussion to encompass results tailored for weakly communicating MDPs, thereby broadening the scope of its findings and their relevance to a wider range of practical scenarios. | [
"['Vaneet Aggarwal' 'Washim Uddin Mondal' 'Qinbo Bai']"
]
|
null | null | 2406.11485 | null | null | http://arxiv.org/pdf/2406.11485v1 | 2024-06-17T12:52:19Z | 2024-06-17T12:52:19Z | Active clustering with bandit feedback | We investigate the Active Clustering Problem (ACP). A learner interacts with an $N$-armed stochastic bandit with $d$-dimensional subGaussian feedback. There exists a hidden partition of the arms into $K$ groups, such that arms within the same group share the same mean vector. The learner's task is to uncover this hidden partition with the smallest budget - i.e., the least number of observations - and with a probability of error smaller than a prescribed constant $\delta$. In this paper, (i) we derive a non-asymptotic lower bound for the budget, and (ii) we introduce the computationally efficient ACB algorithm, whose budget matches the lower bound in most regimes. We improve on the performance of a uniform sampling strategy. Importantly, contrary to the batch setting, we establish that there is no computation-information gap in the active setting. | [
"['Victor Thuot' 'Alexandra Carpentier' 'Christophe Giraud'\n 'Nicolas Verzelen']"
]
|
null | null | 2406.11486 | null | null | http://arxiv.org/pdf/2406.11486v1 | 2024-06-17T12:53:21Z | 2024-06-17T12:53:21Z | Analysing zero-shot temporal relation extraction on clinical notes using
temporal consistency | This paper presents the first study of temporal relation extraction in a zero-shot setting, focusing on biomedical text. We employ two types of prompts and five LLMs (GPT-3.5, Mixtral, Llama 2, Gemma, and PMC-LLaMA) to obtain responses about the temporal relations between two events. Our experiments demonstrate that LLMs struggle in the zero-shot setting, performing worse than fine-tuned specialized models in terms of F1 score, showing that this is a challenging task for LLMs. We further contribute a novel comprehensive temporal analysis by calculating consistency scores for each LLM. Our findings reveal that LLMs face challenges in providing responses consistent with the temporal properties of uniqueness and transitivity. Moreover, we study the relation between the temporal consistency of an LLM and its accuracy and whether the latter can be improved by solving temporal inconsistencies. Our analysis shows that even when temporal consistency is achieved, the predictions can remain inaccurate. | [
"['Vasiliki Kougia' 'Anastasiia Sedova' 'Andreas Stephan' 'Klim Zaporojets'\n 'Benjamin Roth']"
]
|
null | null | 2406.11490 | null | null | http://arxiv.org/pdf/2406.11490v1 | 2024-06-17T12:55:56Z | 2024-06-17T12:55:56Z | Interventional Imbalanced Multi-Modal Representation Learning via
$β$-Generalization Front-Door Criterion | Multi-modal methods establish comprehensive superiority over uni-modal methods. However, the imbalanced contributions of different modalities to task-dependent predictions constantly degrade the discriminative performance of canonical multi-modal methods. Based on the contribution to task-dependent predictions, modalities can be identified as predominant and auxiliary modalities. Benchmark methods offer a tractable solution: augmenting the auxiliary modality with a minor contribution during training. However, our empirical explorations challenge the fundamental idea behind such behavior, and we further conclude that benchmark approaches suffer from certain defects: insufficient theoretical interpretability and limited exploration capability of discriminative knowledge. To this end, we revisit multi-modal representation learning from a causal perspective and build the Structural Causal Model. Following the empirical explorations, we determine to capture the true causality between the discriminative knowledge of the predominant modality and the predictive label while considering the auxiliary modality. Thus, we introduce the $\beta$-generalization front-door criterion. Furthermore, we propose a novel network for sufficiently exploring multi-modal discriminative knowledge. Rigorous theoretical analyses and various empirical evaluations are provided to support the effectiveness of the innate mechanism behind our proposed method. | [
"['Yi Li' 'Jiangmeng Li' 'Fei Song' 'Qingmeng Zhu' 'Changwen Zheng'\n 'Wenwen Qiang']"
]
|
null | null | 2406.11501 | null | null | http://arxiv.org/pdf/2406.11501v2 | 2024-06-18T05:49:27Z | 2024-06-17T13:03:44Z | Teleporter Theory: A General and Simple Approach for Modeling
Cross-World Counterfactual Causality | Leveraging the development of the structural causal model (SCM), researchers can establish graphical models for exploring the causal mechanisms behind machine learning techniques. As the complexity of machine learning applications rises, single-world interventionism causal analysis encounters theoretical adaptation limitations. Accordingly, the cross-world counterfactual approach extends our understanding of causality beyond observed data, enabling hypothetical reasoning about alternative scenarios. However, the joint involvement of cross-world variables, encompassing counterfactual variables and real-world variables, challenges the construction of the graphical model. The twin network is a subtle attempt, establishing a symbiotic relationship, to bridge the gap between graphical modeling and the introduction of counterfactuals, albeit with room for improvement in generalization. In this regard, we demonstrate the theoretical breakdowns of twin networks in certain cross-world counterfactual scenarios. To this end, we propose a novel teleporter theory to establish a general and simple graphical representation of counterfactuals, which provides criteria for determining teleporter variables to connect multiple worlds. In theoretical application, we show that, by introducing the proposed teleporter theory, the conditional independence between counterfactual variables and real-world variables can be obtained directly from the cross-world SCM without requiring complex algebraic derivations. Accordingly, we can further identify counterfactual causal effects through cross-world symbolic derivation. We demonstrate the generality of the teleporter theory in practical applications. Adhering to the proposed theory, we build a plug-and-play module, the effectiveness of which is substantiated by experiments on benchmarks. | [
"['Jiangmeng Li' 'Bin Qin' 'Qirui Ji' 'Yi Li' 'Wenwen Qiang' 'Jianwen Cao'\n 'Fanjiang Xu']"
]
|
null | null | 2406.11504 | null | null | http://arxiv.org/pdf/2406.11504v1 | 2024-06-17T13:05:00Z | 2024-06-17T13:05:00Z | On the Feasibility of Fidelity$^-$ for Graph Pruning | As one of the popular quantitative metrics for assessing the quality of explanations of graph neural networks (GNNs), fidelity measures the output difference after removing unimportant parts of the input graph. Fidelity has been widely used due to its straightforward interpretation that the underlying model should produce similar predictions when features deemed unimportant from the explanation are removed. This raises a natural question: "Does fidelity induce a global (soft) mask for graph pruning?" To answer this question, we aim to explore the potential of the fidelity measure to be used for graph pruning, eventually enhancing the GNN models for better efficiency. To this end, we propose Fidelity$^-$-inspired Pruning (FiP), an effective framework to construct global edge masks from local explanations. Our empirical observations using 7 edge attribution methods demonstrate that, surprisingly, general eXplainable AI methods outperform methods tailored to GNNs in terms of graph pruning performance. | [
"['Yong-Min Shin' 'Won-Yong Shin']"
]
|
null | null | 2406.11517 | null | null | http://arxiv.org/pdf/2406.11517v1 | 2024-06-17T13:22:00Z | 2024-06-17T13:22:00Z | Revisiting Spurious Correlation in Domain Generalization | Without loss of generality, existing machine learning techniques may learn spurious correlation dependent on the domain, which degrades the generalization of models in out-of-distribution (OOD) scenarios. To address this issue, recent works build a structural causal model (SCM) to describe the causality within the data generation process, thereby motivating methods to avoid the learning of spurious correlation by models. However, from the machine learning viewpoint, such a theoretical analysis omits the nuanced difference between the data generation process and the representation learning process, with the result that the causal analysis based on the former cannot adapt well to the latter. To this end, we explore building an SCM for the representation learning process and further conduct a thorough analysis of the mechanisms underlying spurious correlation. We underscore that adjusting erroneous covariates introduces bias, thus necessitating the correct selection of spurious correlation mechanisms based on practical application scenarios. In this regard, we substantiate the correctness of the proposed SCM and further propose to control confounding bias in OOD generalization by introducing a propensity score weighted estimator, which can be integrated into any existing OOD method as a plug-and-play module. The empirical results comprehensively demonstrate the effectiveness of our method on synthetic and large-scale real OOD datasets. | [
"['Bin Qin' 'Jiangmeng Li' 'Yi Li' 'Xuesong Wu' 'Yupeng Wang'\n 'Wenwen Qiang' 'Jianwen Cao']"
]
|
null | null | 2406.11522 | null | null | http://arxiv.org/pdf/2406.11522v1 | 2024-06-17T13:23:52Z | 2024-06-17T13:23:52Z | FullCert: Deterministic End-to-End Certification for Training and
Inference of Neural Networks | Modern machine learning models are sensitive to the manipulation of both the training data (poisoning attacks) and inference data (adversarial examples). Recognizing this issue, the community has developed many empirical defenses against both attacks and, more recently, provable certification methods against inference-time attacks. However, such guarantees are still largely lacking for training-time attacks. In this work, we present FullCert, the first end-to-end certifier with sound, deterministic bounds, which proves robustness against both training-time and inference-time attacks. We first bound all possible perturbations an adversary can make to the training data under the considered threat model. Using these constraints, we bound the perturbations' influence on the model's parameters. Finally, we bound the impact of these parameter changes on the model's prediction, resulting in joint robustness guarantees against poisoning and adversarial examples. To facilitate this novel certification paradigm, we combine our theoretical work with a new open-source library BoundFlow, which enables model training on bounded datasets. We experimentally demonstrate FullCert's feasibility on two different datasets. | [
"['Tobias Lorenz' 'Marta Kwiatkowska' 'Mario Fritz']"
]
|