categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2406.11524 | null | null | http://arxiv.org/pdf/2406.11524v1 | 2024-06-17T13:26:53Z | 2024-06-17T13:26:53Z | Explainable Artificial Intelligence and Multicollinearity: A Mini
Review of Current Approaches | Explainable Artificial Intelligence (XAI) methods help to understand the internal mechanisms of machine learning models and how they reach a specific decision or take a specific action. The list of informative features is one of the most common outputs of XAI methods. Multicollinearity is one of the major issues that should be considered when XAI generates explanations in terms of the most informative features in an AI system. No review has been dedicated to investigating the current approaches for handling this significant issue. In this paper, we provide a review of the current state-of-the-art approaches to XAI in the context of recent advances in dealing with the multicollinearity issue. To do so, we searched three repositories, namely Web of Science, Scopus and IEEE Xplore, to find pertinent published papers. After excluding irrelevant papers, seven papers were considered in the review. In addition, we discuss the current XAI methods and their limitations in dealing with multicollinearity, and suggest future directions. | [
"['Ahmed M Salih']"
] |
null | null | 2406.11544 | null | null | http://arxiv.org/pdf/2406.11544v1 | 2024-06-17T13:42:28Z | 2024-06-17T13:42:28Z | Do Parameters Reveal More than Loss for Membership Inference? | Membership inference attacks aim to infer whether an individual record was used to train a model, serving as a key tool for disclosure auditing. While such evaluations are useful to demonstrate risk, they are computationally expensive and often make strong assumptions about potential adversaries' access to models and training environments, and thus do not provide very tight bounds on leakage from potential attacks. We show how prior claims around black-box access being sufficient for optimal membership inference do not hold for most useful settings such as stochastic gradient descent, and that optimal membership inference indeed requires white-box access. We validate our findings with a new white-box inference attack IHA (Inverse Hessian Attack) that explicitly uses model parameters by taking advantage of computing inverse-Hessian vector products. Our results show that both audits and adversaries may be able to benefit from access to model parameters, and we advocate for further research into white-box methods for membership privacy auditing. | [
"['Anshuman Suri' 'Xiao Zhang' 'David Evans']"
] |
null | null | 2406.11547 | null | null | http://arxiv.org/pdf/2406.11547v1 | 2024-06-17T13:44:37Z | 2024-06-17T13:44:37Z | GECOBench: A Gender-Controlled Text Dataset and Benchmark for
Quantifying Biases in Explanations | Large pre-trained language models have become popular for many applications and form an important backbone of many downstream tasks in natural language processing (NLP). Applying 'explainable artificial intelligence' (XAI) techniques to enrich such models' outputs is considered crucial for assuring their quality and shedding light on their inner workings. However, large language models are trained on a plethora of data containing a variety of biases, such as gender biases, affecting model weights and, potentially, behavior. Currently, it is unclear to what extent such biases also impact model explanations in possibly unfavorable ways. We create a gender-controlled text dataset, GECO, in which otherwise identical sentences appear in male and female forms. This gives rise to ground-truth 'world explanations' for gender classification tasks, enabling the objective evaluation of the correctness of XAI methods. We also provide GECOBench, a rigorous quantitative evaluation framework benchmarking popular XAI methods, applying them to pre-trained language models fine-tuned to different degrees. This allows us to investigate how pre-training induces undesirable bias in model explanations and to what extent fine-tuning can mitigate such explanation bias. We show a clear dependency between explanation performance and the number of fine-tuned layers, where XAI methods are observed to particularly benefit from fine-tuning or complete retraining of embedding layers. Remarkably, this relationship holds for models achieving similar classification performance on the same task. With that, we highlight the utility of the proposed gender-controlled dataset and novel benchmarking approach for research and development of novel XAI methods. All code including dataset generation, model training, evaluation and visualization is available at: https://github.com/braindatalab/gecobench | [
"['Rick Wilming' 'Artur Dox' 'Hjalmar Schulz' 'Marta Oliveira'\n 'Benedict Clark' 'Stefan Haufe']"
] |
null | null | 2406.11562 | null | null | http://arxiv.org/pdf/2406.11562v1 | 2024-06-17T13:59:52Z | 2024-06-17T13:59:52Z | An Imitative Reinforcement Learning Framework for Autonomous Dogfight | Unmanned Combat Aerial Vehicle (UCAV) dogfight, which refers to a fight between two or more UCAVs usually at close quarters, plays a decisive role on the aerial battlefield. With the evolution of artificial intelligence, dogfight is progressively transitioning towards intelligent and autonomous modes. However, the development of autonomous dogfight policy learning is hindered by challenges such as weak exploration capabilities, low learning efficiency, and unrealistic simulated environments. To overcome these challenges, this paper proposes a novel imitative reinforcement learning framework, which efficiently leverages expert data while enabling autonomous exploration. The proposed framework not only enhances learning efficiency through expert imitation, but also ensures adaptability to dynamic environments via autonomous exploration with reinforcement learning. Therefore, the proposed framework can learn a successful dogfight policy of 'pursuit-lock-launch' for UCAVs. To support data-driven learning, we establish a dogfight environment based on the Harfang3D sandbox, where we conduct extensive experiments. The results indicate that the proposed framework excels in multistage dogfight and significantly outperforms state-of-the-art reinforcement learning and imitation learning methods. Thanks to its ability to imitate experts and explore autonomously, our framework can quickly learn the critical knowledge in complex aerial combat tasks, achieving up to a 100% success rate and demonstrating excellent robustness. | [
"['Siyuan Li' 'Rongchang Zuo' 'Peng Liu' 'Yingnan Zhao']"
] |
null | null | 2406.11569 | null | null | http://arxiv.org/pdf/2406.11569v1 | 2024-06-17T14:06:13Z | 2024-06-17T14:06:13Z | Pre-Training and Personalized Fine-Tuning via Over-the-Air Federated
Meta-Learning: Convergence-Generalization Trade-Offs | For modern artificial intelligence (AI) applications such as large language models (LLMs), the training paradigm has recently shifted to pre-training followed by fine-tuning. Furthermore, owing to dwindling open repositories of data and thanks to efforts to democratize access to AI models, pre-training is expected to increasingly migrate from the current centralized deployments to federated learning (FL) implementations. Meta-learning provides a general framework in which pre-training and fine-tuning can be formalized. Meta-learning-based personalized FL (meta-pFL) moves beyond basic personalization by targeting generalization to new agents and tasks. This paper studies the generalization performance of meta-pFL for a wireless setting in which the agents participating in the pre-training phase, i.e., meta-learning, are connected via a shared wireless channel to the server. Adopting over-the-air computing, we study the trade-off between generalization to new agents and tasks, on the one hand, and convergence, on the other hand. The trade-off arises from the fact that channel impairments may enhance generalization, while degrading convergence. Extensive numerical results validate the theory. | [
"['Haifeng Wen' 'Hong Xing' 'Osvaldo Simeone']"
] |
null | null | 2406.11594 | null | null | http://arxiv.org/pdf/2406.11594v1 | 2024-06-17T14:42:59Z | 2024-06-17T14:42:59Z | On GNN explainability with activation rules | GNNs are powerful models based on node representation learning that perform particularly well in many machine learning problems related to graphs. The major obstacle to the deployment of GNNs is mostly a problem of societal acceptability and trustworthiness, properties which require making explicit the internal functioning of such models. Here, we propose to mine activation rules in the hidden layers to understand how the GNNs perceive the world. The problem is not to discover activation rules that are individually highly discriminating for an output of the model. Instead, the challenge is to provide a small set of rules that cover all input graphs. To this end, we introduce the subjective activation pattern domain. We define an effective and principled algorithm to enumerate activation rules in each hidden layer. The proposed approach for quantifying the interest of these rules is rooted in information theory and is able to account for background knowledge on the input graph data. The activation rules can then be redescribed thanks to pattern languages involving interpretable features. We show that the activation rules provide insights into the characteristics used by the GNN to classify the graphs. In particular, this allows us to identify the hidden features built by the GNN through its different layers. Also, these rules can subsequently be used for explaining GNN decisions. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 200% improvement in fidelity on explaining graph classification over the SOTA methods. | [
"['Luca Veyrin-Forrer' 'Ataollah Kamal' 'Stefan Duffner' 'Marc Plantevit'\n 'Céline Robardet']"
] |
null | null | 2406.11601 | null | null | http://arxiv.org/pdf/2406.11601v1 | 2024-06-17T14:52:21Z | 2024-06-17T14:52:21Z | Standardizing Structural Causal Models | Synthetic datasets generated by structural causal models (SCMs) are commonly used for benchmarking causal structure learning algorithms. However, the variances and pairwise correlations in SCM data tend to increase along the causal ordering. Several popular algorithms exploit these artifacts, possibly leading to conclusions that do not generalize to real-world settings. Existing metrics like $\operatorname{Var}$-sortability and $\operatorname{R^2}$-sortability quantify these patterns, but they do not provide tools to remedy them. To address this, we propose internally-standardized structural causal models (iSCMs), a modification of SCMs that introduces a standardization operation at each variable during the generative process. By construction, iSCMs are not $\operatorname{Var}$-sortable, and as we show experimentally, not $\operatorname{R^2}$-sortable either for commonly-used graph families. Moreover, contrary to the post-hoc standardization of data generated by standard SCMs, we prove that linear iSCMs are less identifiable from prior knowledge on the weights and do not collapse to deterministic relationships in large systems, which may make iSCMs a useful model in causal inference beyond the benchmarking problem studied here. | [
"['Weronika Ormaniec' 'Scott Sussex' 'Lars Lorch' 'Bernhard Schölkopf'\n 'Andreas Krause']"
] |
null | null | 2406.11612 | null | null | http://arxiv.org/pdf/2406.11612v1 | 2024-06-17T14:58:29Z | 2024-06-17T14:58:29Z | Long Code Arena: a Set of Benchmarks for Long-Context Code Models | Nowadays, the fields of code and natural language processing are evolving rapidly. In particular, models become better at processing long context windows - supported context sizes have increased by orders of magnitude over the last few years. However, there is a shortage of benchmarks for code processing that go beyond a single file of context, while the most popular ones are limited to a single method. With this work, we aim to close this gap by introducing Long Code Arena, a suite of six benchmarks for code processing tasks that require project-wide context. These tasks cover different aspects of code processing: library-based code generation, CI builds repair, project-level code completion, commit message generation, bug localization, and module summarization. For each task, we provide a manually verified dataset for testing, an evaluation suite, and open-source baseline solutions based on popular LLMs to showcase the usage of the dataset and to simplify adoption by other researchers. We publish the benchmark page on HuggingFace Spaces with the leaderboard, links to HuggingFace Hub for all the datasets, and link to the GitHub repository with baselines: https://huggingface.co/spaces/JetBrains-Research/long-code-arena. | [
"['Egor Bogomolov' 'Aleksandra Eliseeva' 'Timur Galimzyanov'\n 'Evgeniy Glukhov' 'Anton Shapkin' 'Maria Tigina' 'Yaroslav Golubev'\n 'Alexander Kovrigin' 'Arie van Deursen' 'Maliheh Izadi' 'Timofey Bryksin']"
] |
null | null | 2406.11619 | null | null | http://arxiv.org/pdf/2406.11619v1 | 2024-06-17T15:04:15Z | 2024-06-17T15:04:15Z | AV-CrossNet: an Audiovisual Complex Spectral Mapping Network for Speech
Separation By Leveraging Narrow- and Cross-Band Modeling | Adding visual cues to audio-based speech separation can improve separation performance. This paper introduces AV-CrossNet, an audiovisual (AV) system for speech enhancement, target speaker extraction, and multi-talker speaker separation. AV-CrossNet is extended from the CrossNet architecture, which is a recently proposed network that performs complex spectral mapping for speech separation by leveraging global attention and positional encoding. To effectively utilize visual cues, the proposed system incorporates pre-extracted visual embeddings and employs a visual encoder comprising temporal convolutional layers. Audio and visual features are fused in an early fusion layer before feeding to AV-CrossNet blocks. We evaluate AV-CrossNet on multiple datasets, including LRS, VoxCeleb, and COG-MHEAR challenge. Evaluation results demonstrate that AV-CrossNet advances the state-of-the-art performance in all audiovisual tasks, even on untrained and mismatched datasets. | [
"['Vahid Ahmadi Kalkhorani' 'Cheng Yu' 'Anurag Kumar' 'Ke Tan' 'Buye Xu'\n 'DeLiang Wang']"
] |
null | null | 2406.11624 | null | null | http://arxiv.org/pdf/2406.11624v1 | 2024-06-17T15:07:55Z | 2024-06-17T15:07:55Z | Words in Motion: Representation Engineering for Motion Forecasting | Motion forecasting transforms sequences of past movements and environment context into future motion. Recent methods rely on learned representations, resulting in hidden states that are difficult to interpret. In this work, we use natural language to quantize motion features in a human-interpretable way, and measure the degree to which they are embedded in hidden states. Our experiments reveal that hidden states of motion sequences are arranged with respect to our discrete sets of motion features. Following these insights, we fit control vectors to motion features, which allow for controlling motion forecasts at inference. Consequently, our method enables controlling transformer-based motion forecasting models with textual inputs, providing a unique interface to interact with and understand these models. Our implementation is available at https://github.com/kit-mrt/future-motion | [
"['Omer Sahin Tas' 'Royden Wagner']"
] |
null | null | 2406.11631 | null | null | http://arxiv.org/pdf/2406.11631v1 | 2024-06-17T15:13:36Z | 2024-06-17T15:13:36Z | The Liouville Generator for Producing Integrable Expressions | There has been a growing need to devise processes that can create comprehensive datasets in the world of Computer Algebra, both for accurate benchmarking and for new intersections with machine learning technology. We present here a method to generate integrands that are guaranteed to be integrable, dubbed the LIOUVILLE method. It is based on Liouville's theorem and the Parallel Risch Algorithm for symbolic integration. We show that this data generation method retains the best qualities of previous data generation methods, while overcoming some of the issues built into that prior work. The LIOUVILLE generator is able to generate sufficiently complex and realistic integrands, and could be used for benchmarking or machine learning training tasks related to symbolic integration. | [
"['Rashid Barket' 'Matthew England' 'Jürgen Gerhard']"
] |
null | null | 2406.11636 | null | null | http://arxiv.org/pdf/2406.11636v1 | 2024-06-17T15:16:18Z | 2024-06-17T15:16:18Z | Feasibility of Federated Learning from Client Databases with Different
Brain Diseases and MRI Modalities | Segmentation models for brain lesions in MRI are commonly developed for a specific disease and trained on data with a predefined set of MRI modalities. Each such model cannot segment the disease using data with a different set of MRI modalities, nor can it segment any other type of disease. Moreover, this training paradigm does not allow a model to benefit from learning from heterogeneous databases that may contain scans and segmentation labels for different types of brain pathologies and diverse sets of MRI modalities. Is it feasible to use Federated Learning (FL) for training a single model on client databases that contain scans and labels of different brain pathologies and diverse sets of MRI modalities? We demonstrate promising results by combining appropriate, simple, and practical modifications to the model and training strategy: Designing a model with input channels that cover the whole set of modalities available across clients, training with random modality drop, and exploring the effects of feature normalization methods. Evaluation on 7 brain MRI databases with 5 different diseases shows that such FL framework can train a single model that is shown to be very promising in segmenting all disease types seen during training. Importantly, it is able to segment these diseases in new databases that contain sets of modalities different from those in training clients. These results demonstrate, for the first time, feasibility and effectiveness of using FL to train a single segmentation model on decentralised data with diverse brain diseases and MRI modalities, a necessary step towards leveraging heterogeneous real-world databases. Code will be made available at: https://github.com/FelixWag/FL-MultiDisease-MRI | [
"['Felix Wagner' 'Wentian Xu' 'Pramit Saha' 'Ziyun Liang'\n 'Daniel Whitehouse' 'David Menon' 'Natalie Voets' 'J. Alison Noble'\n 'Konstantinos Kamnitsas']"
] |
null | null | 2406.11640 | null | null | http://arxiv.org/pdf/2406.11640v2 | 2024-06-18T04:27:49Z | 2024-06-17T15:24:49Z | Linear Bellman Completeness Suffices for Efficient Online Reinforcement
Learning with Few Actions | One of the most natural approaches to reinforcement learning (RL) with function approximation is value iteration, which inductively generates approximations to the optimal value function by solving a sequence of regression problems. To ensure the success of value iteration, it is typically assumed that Bellman completeness holds, which ensures that these regression problems are well-specified. We study the problem of learning an optimal policy under Bellman completeness in the online model of RL with linear function approximation. In the linear setting, while statistically efficient algorithms are known under Bellman completeness (e.g., Jiang et al. (2017); Zanette et al. (2020)), these algorithms all rely on the principle of global optimism which requires solving a nonconvex optimization problem. In particular, it has remained open as to whether computationally efficient algorithms exist. In this paper we give the first polynomial-time algorithm for RL under linear Bellman completeness when the number of actions is any constant. | [
"['Noah Golowich' 'Ankur Moitra']"
] |
null | null | 2406.11649 | null | null | http://arxiv.org/pdf/2406.11649v1 | 2024-06-17T15:31:53Z | 2024-06-17T15:31:53Z | Making Old Things New: A Unified Algorithm for Differentially Private
Clustering | As a staple of data analysis and unsupervised learning, the problem of private clustering has been widely studied under various privacy models. Centralized differential privacy is the first of them, and the problem has also been studied for the local and the shuffle variation. In each case, the goal is to design an algorithm that computes privately a clustering, with the smallest possible error. The study of each variation gave rise to new algorithms: the landscape of private clustering algorithms is therefore quite intricate. In this paper, we show that a 20-year-old algorithm can be slightly modified to work for any of these models. This provides a unified picture: while matching almost all previously known results, it allows us to improve some of them and extend it to a new privacy model, the continual observation setting, where the input is changing over time and the algorithm must output a new solution at each time step. | [
"['Max Dupré la Tour' 'Monika Henzinger' 'David Saulpic']"
] |
null | null | 2406.11650 | null | null | http://arxiv.org/pdf/2406.11650v2 | 2024-07-01T09:57:32Z | 2024-06-17T15:31:54Z | Multimodal Learning With Intraoperative CBCT & Variably Aligned
Preoperative CT Data To Improve Segmentation | Cone-beam computed tomography (CBCT) is an important tool facilitating computer aided interventions, despite often suffering from artifacts that pose challenges for accurate interpretation. While the degraded image quality can affect downstream segmentation, the availability of high quality, preoperative scans represents potential for improvements. Here we consider a setting where preoperative CT and intraoperative CBCT scans are available, however, the alignment (registration) between the scans is imperfect. We propose a multimodal learning method that fuses roughly aligned CBCT and CT scans and investigate the effect of CBCT quality and misalignment on the final segmentation performance. For that purpose, we make use of a synthetically generated data set containing real CT and synthetic CBCT volumes. As an application scenario, we focus on liver and liver tumor segmentation. We show that the fusion of preoperative CT and simulated, intraoperative CBCT mostly improves segmentation performance (compared to using intraoperative CBCT only) and that even clearly misaligned preoperative data has the potential to improve segmentation performance. | [
"['Maximilian E. Tschuchnig' 'Philipp Steininger' 'Michael Gadermayr']"
] |
null | null | 2406.11664 | null | null | http://arxiv.org/pdf/2406.11664v1 | 2024-06-17T15:48:46Z | 2024-06-17T15:48:46Z | Diffusion Generative Modelling for Divide-and-Conquer MCMC | Divide-and-conquer MCMC is a strategy for parallelising Markov Chain Monte Carlo sampling by running independent samplers on disjoint subsets of a dataset and merging their output. An ongoing challenge in the literature is to efficiently perform this merging without imposing distributional assumptions on the posteriors. We propose using diffusion generative modelling to fit density approximations to the subposterior distributions. This approach outperforms existing methods on challenging merging problems, while its computational cost scales more efficiently to high dimensional problems than existing density estimation approaches. | [
"['C. Trojan' 'P. Fearnhead' 'C. Nemeth']"
] |
null | null | 2406.11666 | null | null | http://arxiv.org/pdf/2406.11666v1 | 2024-06-17T15:50:00Z | 2024-06-17T15:50:00Z | ROTI-GCV: Generalized Cross-Validation for right-ROTationally Invariant
Data | Two key tasks in high-dimensional regularized regression are tuning the regularization strength for good predictions and estimating the out-of-sample risk. It is known that the standard approach -- $k$-fold cross-validation -- is inconsistent in modern high-dimensional settings. While leave-one-out and generalized cross-validation remain consistent in some high-dimensional cases, they become inconsistent when samples are dependent or contain heavy-tailed covariates. To model structured sample dependence and heavy tails, we use right-rotationally invariant covariate distributions - a crucial concept from compressed sensing. In the common modern proportional asymptotics regime where the number of features and samples grow comparably, we introduce a new framework, ROTI-GCV, for reliably performing cross-validation. Along the way, we propose new estimators for the signal-to-noise ratio and noise variance under these challenging conditions. We conduct extensive experiments that demonstrate the power of our approach and its superiority over existing methods. | [
"['Kevin Luo' 'Yufan Li' 'Pragya Sur']"
] |
null | null | 2406.11667 | null | null | http://arxiv.org/pdf/2406.11667v2 | 2024-06-18T04:18:17Z | 2024-06-17T15:50:08Z | Is Efficient PAC Learning Possible with an Oracle That Responds 'Yes' or
'No'? | The empirical risk minimization (ERM) principle has been highly impactful in machine learning, leading both to near-optimal theoretical guarantees for ERM-based learning algorithms as well as driving many of the recent empirical successes in deep learning. In this paper, we investigate the question of whether the ability to perform ERM, which computes a hypothesis minimizing empirical risk on a given dataset, is necessary for efficient learning: in particular, is there a weaker oracle than ERM which can nevertheless enable learnability? We answer this question affirmatively, showing that in the realizable setting of PAC learning for binary classification, a concept class can be learned using an oracle which only returns a single bit indicating whether a given dataset is realizable by some concept in the class. The sample complexity and oracle complexity of our algorithm depend polynomially on the VC dimension of the hypothesis class, thus showing that there is only a polynomial price to pay for use of our weaker oracle. Our results extend to the agnostic learning setting with a slight strengthening of the oracle, as well as to the partial concept, multiclass and real-valued learning settings. In the setting of partial concept classes, prior to our work no oracle-efficient algorithms were known, even with a standard ERM oracle. Thus, our results address a question of Alon et al. (2021) who asked whether there are algorithmic principles which enable efficient learnability in this setting. | [
"['Constantinos Daskalakis' 'Noah Golowich']"
] |
null | null | 2406.11675 | null | null | http://arxiv.org/pdf/2406.11675v2 | 2024-06-18T15:15:04Z | 2024-06-17T15:55:38Z | BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language
Models | Large Language Models (LLMs) often suffer from overconfidence during inference, particularly when adapted to downstream domain-specific tasks with limited data. Previous work addresses this issue by employing approximate Bayesian estimation after the LLMs are trained, enabling them to quantify uncertainty. However, such post-training approaches' performance is severely limited by the parameters learned during training. In this paper, we go beyond post-training Bayesianization and propose Bayesian Low-Rank Adaptation by Backpropagation (BLoB), an algorithm that continuously and jointly adjusts both the mean and covariance of LLM parameters throughout the whole fine-tuning process. Our empirical results verify the effectiveness of BLoB in terms of generalization and uncertainty estimation, when evaluated on both in-distribution and out-of-distribution data. | [
"['Yibin Wang' 'Haizhou Shi' 'Ligong Han' 'Dimitris Metaxas' 'Hao Wang']"
] |
null | null | 2406.11676 | null | null | http://arxiv.org/pdf/2406.11676v1 | 2024-06-17T15:57:23Z | 2024-06-17T15:57:23Z | Score-fPINN: Fractional Score-Based Physics-Informed Neural Networks for
High-Dimensional Fokker-Planck-Levy Equations | We introduce an innovative approach for solving high-dimensional Fokker-Planck-Lévy (FPL) equations in modeling non-Brownian processes across disciplines such as physics, finance, and ecology. We utilize a fractional score function and physics-informed neural networks (PINNs) to lift the curse of dimensionality (CoD) and alleviate numerical overflow from exponentially decaying solutions with dimensions. The introduction of a fractional score function allows us to transform the FPL equation into a second-order partial differential equation without a fractional Laplacian, which can thus be readily solved with standard PINNs. We propose two methods to obtain a fractional score function: fractional score matching (FSM) and score-fPINN for fitting the fractional score function. While FSM is more cost-effective, it relies on known conditional distributions. On the other hand, score-fPINN is independent of specific stochastic differential equations (SDEs) but requires evaluating the PINN model's derivatives, which may be more costly. We conduct our experiments on various SDEs and demonstrate the numerical stability and effectiveness of our method in dealing with high-dimensional problems, marking a significant advancement in addressing the CoD in FPL equations. | [
"['Zheyuan Hu' 'Zhongqiang Zhang' 'George Em Karniadakis' 'Kenji Kawaguchi']"
] |
null | null | 2406.11685 | null | null | http://arxiv.org/pdf/2406.11685v2 | 2024-06-18T02:49:25Z | 2024-06-17T16:02:36Z | Edge Classification on Graphs: New Directions in Topological Imbalance | Recent years have witnessed the remarkable success of applying Graph machine learning (GML) to node/graph classification and link prediction. However, edge classification task that enjoys numerous real-world applications such as social network analysis and cybersecurity, has not seen significant advancement. To address this gap, our study pioneers a comprehensive approach to edge classification. We identify a novel `Topological Imbalance Issue', which arises from the skewed distribution of edges across different classes, affecting the local subgraph of each edge and harming the performance of edge classifications. Inspired by the recent studies in node classification that the performance discrepancy exists with varying local structural patterns, we aim to investigate if the performance discrepancy in topological imbalanced edge classification can also be mitigated by characterizing the local class distribution variance. To overcome this challenge, we introduce Topological Entropy (TE), a novel topological-based metric that measures the topological imbalance for each edge. Our empirical studies confirm that TE effectively measures local class distribution variance, and indicate that prioritizing edges with high TE values can help address the issue of topological imbalance. Based on this, we develop two strategies - Topological Reweighting and TE Wedge-based Mixup - to focus training on (synthetic) edges based on their TEs. While topological reweighting directly manipulates training edge weights according to TE, our wedge-based mixup interpolates synthetic edges between high TE wedges. Ultimately, we integrate these strategies into a novel topological imbalance strategy for edge classification: TopoEdge. Through extensive experiments, we demonstrate the efficacy of our proposed strategies on newly curated datasets and thus establish a new benchmark for (imbalanced) edge classification. | [
"['Xueqi Cheng' 'Yu Wang' 'Yunchao Liu' 'Yuying Zhao' 'Charu C. Aggarwal'\n 'Tyler Derr']"
] |
null | null | 2406.11686 | null | null | http://arxiv.org/pdf/2406.11686v2 | 2024-06-18T04:23:39Z | 2024-06-17T16:04:06Z | The Role of Inherent Bellman Error in Offline Reinforcement Learning
with Linear Function Approximation | In this paper, we study the offline RL problem with linear function approximation. Our main structural assumption is that the MDP has low inherent Bellman error, which stipulates that linear value functions have linear Bellman backups with respect to the greedy policy. This assumption is natural in that it is essentially the minimal assumption required for value iteration to succeed. We give a computationally efficient algorithm which succeeds under a single-policy coverage condition on the dataset, namely which outputs a policy whose value is at least that of any policy which is well-covered by the dataset. Even in the setting when the inherent Bellman error is 0 (termed linear Bellman completeness), our algorithm yields the first known guarantee under single-policy coverage. In the setting of positive inherent Bellman error $\varepsilon_{\mathrm{BE}} > 0$, we show that the suboptimality error of our algorithm scales with $\sqrt{\varepsilon_{\mathrm{BE}}}$. Furthermore, we prove that the scaling of the suboptimality with $\sqrt{\varepsilon_{\mathrm{BE}}}$ cannot be improved for any algorithm. Our lower bound stands in contrast to many other settings in reinforcement learning with misspecification, where one can typically obtain performance that degrades linearly with the misspecification error. | [
"['Noah Golowich' 'Ankur Moitra']"
] |
null | null | 2406.11695 | null | null | http://arxiv.org/pdf/2406.11695v1 | 2024-06-17T16:12:03Z | 2024-06-17T16:12:03Z | Optimizing Instructions and Demonstrations for Multi-Stage Language
Model Programs | Language Model Programs, i.e. sophisticated pipelines of modular language model (LM) calls, are increasingly advancing NLP tasks, but they require crafting prompts that are jointly effective for all modules. We study prompt optimization for LM programs, i.e. how to update these prompts to maximize a downstream metric without access to module-level labels or gradients. To make this tractable, we factorize our problem into optimizing the free-form instructions and few-shot demonstrations of every module and introduce several strategies to craft task-grounded instructions and navigate credit assignment across modules. Our strategies include (i) program- and data-aware techniques for proposing effective instructions, (ii) a stochastic mini-batch evaluation function for learning a surrogate model of our objective, and (iii) a meta-optimization procedure in which we refine how LMs construct proposals over time. Using these insights we develop MIPRO, a novel optimizer that outperforms baselines on five of six diverse LM programs using a best-in-class open-source model (Llama-3-8B), by as high as 12.9% accuracy. We will release our new optimizers and benchmark in DSPy at https://github.com/stanfordnlp/dspy | [
"['Krista Opsahl-Ong' 'Michael J Ryan' 'Josh Purtell' 'David Broman'\n 'Christopher Potts' 'Matei Zaharia' 'Omar Khattab']"
] |
null | null | 2406.11703 | null | null | http://arxiv.org/pdf/2406.11703v1 | 2024-06-17T16:24:23Z | 2024-06-17T16:24:23Z | Multiple Descents in Unsupervised Learning: The Role of Noise, Domain
Shift and Anomalies | The phenomenon of double descent has recently gained attention in supervised learning. It challenges the conventional wisdom of the bias-variance trade-off by showcasing a surprising behavior. As the complexity of the model increases, the test error initially decreases until reaching a certain point where the model starts to overfit the training set, causing the test error to rise. However, deviating from classical theory, the error exhibits another decline when exceeding a certain degree of over-parameterization. We study the presence of double descent in unsupervised learning, an area that has received little attention and is not yet fully understood. We conduct extensive experiments using under-complete auto-encoders (AEs) for various applications, such as dealing with noisy data, domain shifts, and anomalies. We use synthetic and real data and identify model-wise, epoch-wise, and sample-wise double descent for all the aforementioned applications. Finally, we assess the usability of the AEs for detecting anomalies and mitigating the domain shift between datasets. Our findings indicate that over-parameterized models can improve performance not only in terms of reconstruction, but also in enhancing capabilities for the downstream task. | [
"['Kobi Rahimi' 'Tom Tirer' 'Ofir Lindenbaum']"
] |
null | null | 2406.11704 | null | null | http://arxiv.org/pdf/2406.11704v1 | 2024-06-17T16:25:04Z | 2024-06-17T16:25:04Z | Nemotron-4 340B Technical Report | We release the Nemotron-4 340B model family, including Nemotron-4-340B-Base, Nemotron-4-340B-Instruct, and Nemotron-4-340B-Reward. Our models are open access under the NVIDIA Open Model License Agreement, a permissive model license that allows distribution, modification, and use of the models and its outputs. These models perform competitively to open access models on a wide range of evaluation benchmarks, and were sized to fit on a single DGX H100 with 8 GPUs when deployed in FP8 precision. We believe that the community can benefit from these models in various research studies and commercial applications, especially for generating synthetic data to train smaller language models. Notably, over 98% of data used in our model alignment process is synthetically generated, showcasing the effectiveness of these models in generating synthetic data. To further support open research and facilitate model development, we are also open-sourcing the synthetic data generation pipeline used in our model alignment process. | [
"['Nvidia' ':' 'Bo Adler' 'Niket Agarwal' 'Ashwath Aithal' 'Dong H. Anh'\n 'Pallab Bhattacharya' 'Annika Brundyn' 'Jared Casper' 'Bryan Catanzaro'\n 'Sharon Clay' 'Jonathan Cohen' 'Sirshak Das' 'Ayush Dattagupta'\n 'Olivier Delalleau' 'Leon Derczynski' 'Yi Dong' 'Daniel Egert'\n 'Ellie Evans' 'Aleksander Ficek' 'Denys Fridman' 'Shaona Ghosh'\n 'Boris Ginsburg' 'Igor Gitman' 'Tomasz Grzegorzek' 'Robert Hero'\n 'Jining Huang' 'Vibhu Jawa' 'Joseph Jennings' 'Aastha Jhunjhunwala'\n 'John Kamalu' 'Sadaf Khan' 'Oleksii Kuchaiev' 'Patrick LeGresley'\n 'Hui Li' 'Jiwei Liu' 'Zihan Liu' 'Eileen Long'\n 'Ameya Sunil Mahabaleshwarkar' 'Somshubra Majumdar' 'James Maki'\n 'Miguel Martinez' 'Maer Rodrigues de Melo' 'Ivan Moshkov'\n 'Deepak Narayanan' 'Sean Narenthiran' 'Jesus Navarro' 'Phong Nguyen'\n 'Osvald Nitski' 'Vahid Noroozi' 'Guruprasad Nutheti'\n 'Christopher Parisien' 'Jupinder Parmar' 'Mostofa Patwary'\n 'Krzysztof Pawelec' 'Wei Ping' 'Shrimai Prabhumoye' 'Rajarshi Roy'\n 'Trisha Saar' 'Vasanth Rao Naik Sabavat' 'Sanjeev Satheesh'\n 'Jane Polak Scowcroft' 'Jason Sewall' 'Pavel Shamis' 'Gerald Shen'\n 'Mohammad Shoeybi' 'Dave Sizer' 'Misha Smelyanskiy' 'Felipe Soares'\n 'Makesh Narsimhan Sreedhar' 'Dan Su' 'Sandeep Subramanian'\n 'Shengyang Sun' 'Shubham Toshniwal' 'Hao Wang' 'Zhilin Wang'\n 'Jiaxuan You' 'Jiaqi Zeng' 'Jimmy Zhang' 'Jing Zhang' 'Vivienne Zhang'\n 'Yian Zhang' 'Chen Zhu']"
] |
null | null | 2406.11706 | null | null | http://arxiv.org/pdf/2406.11706v1 | 2024-06-17T16:25:55Z | 2024-06-17T16:25:55Z | Prompts as Auto-Optimized Training Hyperparameters: Training
Best-in-Class IR Models from Scratch with 10 Gold Labels | We develop a method for training small-scale (under 100M parameter) neural information retrieval models with as few as 10 gold relevance labels. The method depends on generating synthetic queries for documents using a language model (LM), and the key step is that we automatically optimize the LM prompt that is used to generate these queries based on training quality. In experiments with the BIRCO benchmark, we find that models trained with our method outperform RankZephyr and are competitive with RankLLama, both of which are 7B parameter models trained on over 100K labels. These findings point to the power of automatic prompt optimization for synthetic dataset generation. | [
"['Jasper Xian' 'Saron Samuel' 'Faraz Khoubsirat' 'Ronak Pradeep'\n 'Md Arafat Sultan' 'Radu Florian' 'Salim Roukos' 'Avirup Sil'\n 'Christopher Potts' 'Omar Khattab']"
] |
null | null | 2406.11707 | null | null | http://arxiv.org/pdf/2406.11707v1 | 2024-06-17T16:26:00Z | 2024-06-17T16:26:00Z | A First Physical-World Trajectory Prediction Attack via LiDAR-induced
Deceptions in Autonomous Driving | Trajectory prediction forecasts nearby agents' moves based on their historical trajectories. Accurate trajectory prediction is crucial for autonomous vehicles. Existing attacks compromise the prediction model of a victim AV by directly manipulating the historical trajectory of an attacker AV, which has limited real-world applicability. This paper, for the first time, explores an indirect attack approach that induces prediction errors via attacks against the perception module of a victim AV. Although it has been shown that physically realizable attacks against LiDAR-based perception are possible by placing a few objects at strategic locations, it is still an open challenge to find an object location from the vast search space in order to launch effective attacks against prediction under varying victim AV velocities. Through analysis, we observe that a prediction model is prone to an attack focusing on a single point in the scene. Consequently, we propose a novel two-stage attack framework to realize the single-point attack. The first stage of prediction-side attack efficiently identifies, guided by the distribution of detection results under object-based attacks against perception, the state perturbations for the prediction model that are effective and velocity-insensitive. In the second stage of location matching, we match the feasible object locations with the found state perturbations. Our evaluation using a public autonomous driving dataset shows that our attack causes a collision rate of up to 63% and various hazardous responses of the victim AV. The effectiveness of our attack is also demonstrated on a real testbed car. To the best of our knowledge, this study is the first security analysis spanning from LiDAR-based perception to prediction in autonomous driving, leading to a realistic attack on prediction. To counteract the proposed attack, potential defenses are discussed. | [
"['Yang Lou' 'Yi Zhu' 'Qun Song' 'Rui Tan' 'Chunming Qiao' 'Wei-Bin Lee'\n 'Jianping Wang']"
] |
null | null | 2406.11708 | null | null | http://arxiv.org/pdf/2406.11708v1 | 2024-06-17T16:26:18Z | 2024-06-17T16:26:18Z | Tackling the Curse of Dimensionality in Fractional and Tempered
Fractional PDEs with Physics-Informed Neural Networks | Fractional and tempered fractional partial differential equations (PDEs) are effective models of long-range interactions, anomalous diffusion, and non-local effects. Traditional numerical methods for these problems are mesh-based, thus struggling with the curse of dimensionality (CoD). Physics-informed neural networks (PINNs) offer a promising solution due to their universal approximation, generalization ability, and mesh-free training. In principle, Monte Carlo fractional PINN (MC-fPINN) estimates fractional derivatives using Monte Carlo methods and thus could lift CoD. However, this may cause significant variance and errors, hence affecting convergence; in addition, MC-fPINN is sensitive to hyperparameters. In general, numerical methods and specifically PINNs for tempered fractional PDEs are under-developed. Herein, we extend MC-fPINN to tempered fractional PDEs to address these issues, resulting in the Monte Carlo tempered fractional PINN (MC-tfPINN). To reduce possible high variance and errors from Monte Carlo sampling, we replace the one-dimensional (1D) Monte Carlo with 1D Gaussian quadrature, applicable to both MC-fPINN and MC-tfPINN. We validate our methods on various forward and inverse problems of fractional and tempered fractional PDEs, scaling up to 100,000 dimensions. Our improved MC-fPINN/MC-tfPINN using quadrature consistently outperforms the original versions in accuracy and convergence speed in very high dimensions. | [
"['Zheyuan Hu' 'Kenji Kawaguchi' 'Zhongqiang Zhang' 'George Em Karniadakis']"
] |
null | null | 2406.11714 | null | null | http://arxiv.org/pdf/2406.11714v1 | 2024-06-17T16:32:57Z | 2024-06-17T16:32:57Z | Scalable Expressiveness through Preprocessed Graph Perturbations | Graph Neural Networks (GNNs) have emerged as the predominant method for analyzing graph-structured data. However, canonical GNNs have limited expressive power and generalization capability, thus triggering the development of more expressive yet computationally intensive methods. One such approach is to create a series of perturbed versions of input graphs and then repeatedly conduct multiple message-passing operations on all variations during training. Despite their expressive power, this approach does not scale well on larger graphs. To address this scalability issue, we introduce Scalable Expressiveness through Preprocessed Graph Perturbation (SE2P). This model offers a flexible, configurable balance between scalability and generalizability with four distinct configuration classes. At one extreme, the configuration prioritizes scalability through minimal learnable feature extraction and extensive preprocessing; at the other extreme, it enhances generalizability with more learnable feature extractions, though this increases scalability costs. We conduct extensive experiments on real-world datasets to evaluate the generalizability and scalability of SE2P variants compared to various state-of-the-art benchmarks. Our results indicate that, depending on the chosen SE2P configuration, the model can enhance generalizability compared to benchmarks while achieving significant speed improvements of up to 8-fold. | [
"['Danial Saber' 'Amirali Salehi-Abari']"
] |
null | null | 2406.11715 | null | null | http://arxiv.org/pdf/2406.11715v1 | 2024-06-17T16:33:35Z | 2024-06-17T16:33:35Z | Measuring memorization in RLHF for code completion | Reinforcement learning with human feedback (RLHF) has become the dominant method to align large models to user preferences. Unlike fine-tuning, for which there are many studies regarding training data memorization, it is not clear how memorization is affected by or introduced in the RLHF alignment process. Understanding this relationship is important as real user data may be collected and used to align large models; if user data is memorized during RLHF and later regurgitated, this could raise privacy concerns. In this work, we analyze how training data memorization can surface and propagate through each phase of RLHF. We focus our study on code completion models, as code completion is one of the most popular use cases for large language models. We find that RLHF significantly decreases the chance that data used for reward modeling and reinforcement learning is memorized, in comparison to aligning via directly fine-tuning on this data, but that examples already memorized during the fine-tuning stage of RLHF, will, in the majority of cases, remain memorized after RLHF. | [
"['Aneesh Pappu' 'Billy Porter' 'Ilia Shumailov' 'Jamie Hayes']"
] |
null | null | 2406.11717 | null | null | http://arxiv.org/pdf/2406.11717v2 | 2024-07-15T11:53:41Z | 2024-06-17T16:36:12Z | Refusal in Language Models Is Mediated by a Single Direction | Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is widespread across chat models, its underlying mechanisms remain poorly understood. In this work, we show that refusal is mediated by a one-dimensional subspace, across 13 popular open-source chat models up to 72B parameters in size. Specifically, for each model, we find a single direction such that erasing this direction from the model's residual stream activations prevents it from refusing harmful instructions, while adding this direction elicits refusal on even harmless instructions. Leveraging this insight, we propose a novel white-box jailbreak method that surgically disables refusal with minimal effect on other capabilities. Finally, we mechanistically analyze how adversarial suffixes suppress propagation of the refusal-mediating direction. Our findings underscore the brittleness of current safety fine-tuning methods. More broadly, our work showcases how an understanding of model internals can be leveraged to develop practical methods for controlling model behavior. | [
"['Andy Arditi' 'Oscar Obeso' 'Aaquib Syed' 'Daniel Paleka'\n 'Nina Panickssery' 'Wes Gurnee' 'Neel Nanda']"
] |
null | null | 2406.11721 | null | null | http://arxiv.org/pdf/2406.11721v1 | 2024-06-17T16:40:21Z | 2024-06-17T16:40:21Z | Zero-Shot Generalization during Instruction Tuning: Insights from
Similarity and Granularity | Understanding alignment techniques begins with comprehending zero-shot generalization brought by instruction tuning, but little of the mechanism has been understood. Existing work has largely been confined to the task level, without considering that tasks are artificially defined and, to LLMs, merely consist of tokens and representations. This line of research has been limited to examining transfer between tasks from a task-pair perspective, with few studies focusing on understanding zero-shot generalization from the perspective of the data itself. To bridge this gap, we first demonstrate through multiple metrics that zero-shot generalization during instruction tuning happens very early. Next, we investigate the facilitation of zero-shot generalization from both data similarity and granularity perspectives, confirming that encountering highly similar and fine-grained training data earlier during instruction tuning, without the constraints of defined "tasks", enables better generalization. Finally, we propose a more grounded training data arrangement method, Test-centric Multi-turn Arrangement, and show its effectiveness in promoting continual learning and further loss reduction. For the first time, we show that zero-shot generalization during instruction tuning is a form of similarity-based generalization between training and test data at the instance level. We hope our analysis will advance the understanding of zero-shot generalization during instruction tuning and contribute to the development of more aligned LLMs. Our code is released at https://github.com/HBX-hbx/dynamics_of_zero-shot_generalization. | [
"['Bingxiang He' 'Ning Ding' 'Cheng Qian' 'Jia Deng' 'Ganqu Cui'\n 'Lifan Yuan' 'Huan-ang Gao' 'Huimin Chen' 'Zhiyuan Liu' 'Maosong Sun']"
] |
null | null | 2406.11730 | null | null | http://arxiv.org/pdf/2406.11730v2 | 2024-06-18T07:38:31Z | 2024-06-17T16:48:31Z | CHG Shapley: Efficient Data Valuation and Selection towards Trustworthy
Machine Learning | Understanding the decision-making process of machine learning models is crucial for ensuring trustworthy machine learning. Data Shapley, a landmark study on data valuation, advances this understanding by assessing the contribution of each datum to model accuracy. However, the resource-intensive and time-consuming nature of multiple model retraining poses challenges for applying Data Shapley to large datasets. To address this, we propose the CHG (Conduct of Hardness and Gradient) score, which approximates the utility of each data subset on model accuracy during a single model training. By deriving the closed-form expression of the Shapley value for each data point under the CHG score utility function, we reduce the computational complexity to the equivalent of a single model retraining, an exponential improvement over existing methods. Additionally, we employ CHG Shapley for real-time data selection, demonstrating its effectiveness in identifying high-value and noisy data. CHG Shapley facilitates trustworthy model training through efficient data valuation, introducing a novel data-centric perspective on trustworthy machine learning. | [
"['Huaiguang Cai']"
] |
null | null | 2406.11733 | null | null | http://arxiv.org/pdf/2406.11733v1 | 2024-06-17T16:50:22Z | 2024-06-17T16:50:22Z | A Clipped Trip: the Dynamics of SGD with Gradient Clipping in
High-Dimensions | The success of modern machine learning is due in part to the adaptive optimization methods that have been developed to deal with the difficulties of training large models over complex datasets. One such method is gradient clipping: a practical procedure with limited theoretical underpinnings. In this work, we study clipping in a least squares problem under streaming SGD. We develop a theoretical analysis of the learning dynamics in the limit of large intrinsic dimension-a model and dataset dependent notion of dimensionality. In this limit we find a deterministic equation that describes the evolution of the loss. We show that with Gaussian noise clipping cannot improve SGD performance. Yet, in other noisy settings, clipping can provide benefits with tuning of the clipping threshold. In these cases, clipping biases updates in a way beneficial to training which cannot be recovered by SGD under any schedule. We conclude with a discussion about the links between high-dimensional clipping and neural network training. | [
"['Noah Marshall' 'Ke Liang Xiao' 'Atish Agarwala' 'Elliot Paquette']"
] |
null | null | 2406.11740 | null | null | http://arxiv.org/pdf/2406.11740v1 | 2024-06-17T17:00:41Z | 2024-06-17T17:00:41Z | Imagination Policy: Using Generative Point Cloud Models for Learning
Manipulation Policies | Humans can imagine goal states during planning and perform actions to match those goals. In this work, we propose Imagination Policy, a novel multi-task key-frame policy network for solving high-precision pick and place tasks. Instead of learning actions directly, Imagination Policy generates point clouds to imagine desired states which are then translated to actions using rigid action estimation. This transforms action inference into a local generative task. We leverage pick and place symmetries underlying the tasks in the generation process and achieve extremely high sample efficiency and generalizability to unseen configurations. Finally, we demonstrate state-of-the-art performance across various tasks on the RLbench benchmark compared with several strong baselines. | [
"['Haojie Huang' 'Karl Schmeckpeper' 'Dian Wang' 'Ondrej Biza'\n 'Yaoyao Qian' 'Haotian Liu' 'Mingxi Jia' 'Robert Platt' 'Robin Walters']"
] |
null | null | 2406.11741 | null | null | http://arxiv.org/pdf/2406.11741v3 | 2024-06-28T05:28:27Z | 2024-06-17T17:00:52Z | Transcendence: Generative Models Can Outperform The Experts That Train
Them | Generative models are trained with the simple objective of imitating the conditional probability distribution induced by the data they are trained on. Therefore, when trained on data generated by humans, we may not expect the artificial model to outperform the humans on their original objectives. In this work, we study the phenomenon of transcendence: when a generative model achieves capabilities that surpass the abilities of the experts generating its data. We demonstrate transcendence by training an autoregressive transformer to play chess from game transcripts, and show that the trained model can sometimes achieve better performance than all players in the dataset. We theoretically prove that transcendence can be enabled by low-temperature sampling, and rigorously assess this claim experimentally. Finally, we discuss other sources of transcendence, laying the groundwork for future investigation of this phenomenon in a broader setting. | [
"['Edwin Zhang' 'Vincent Zhu' 'Naomi Saphra' 'Anat Kleiman'\n 'Benjamin L. Edelman' 'Milind Tambe' 'Sham M. Kakade' 'Eran Malach']"
] |
null | null | 2406.11753 | null | null | http://arxiv.org/pdf/2406.11753v1 | 2024-06-17T17:13:08Z | 2024-06-17T17:13:08Z | A Semantic-based Layer Freezing Approach to Efficient Fine-Tuning of
Language Models | Finetuning language models (LMs) is crucial for adapting the models to downstream data and tasks. However, full finetuning is usually costly. Existing work, such as parameter-efficient finetuning (PEFT), often focuses on \textit{how to finetune} but neglects the issue of \textit{where to finetune}. As a pioneering work on answering where to finetune (at the layer level), we conduct a semantic analysis of the LM inference process. We first propose a virtual transition of the latent representation and then trace its factual transition. Based on the deviation in transitions, we estimate the gain of finetuning each model layer, and further, narrow down the scope for finetuning. We perform extensive experiments across well-known LMs and datasets. The results show that our approach is effective and efficient, and outperforms the existing baselines. Our approach is orthogonal to existing efficient techniques, such as PEFT methods, offering practical value for LM finetuning. | [
"['Jian Gu' 'Aldeida Aleti' 'Chunyang Chen' 'Hongyu Zhang']"
] |
null | null | 2406.11761 | null | null | http://arxiv.org/pdf/2406.11761v1 | 2024-06-17T17:25:23Z | 2024-06-17T17:25:23Z | Joint Linked Component Analysis for Multiview Data | In this work, we propose the joint linked component analysis (joint_LCA) for multiview data. Unlike classic methods which extract the shared components in a sequential manner, the objective of joint_LCA is to identify the view-specific loading matrices and the rank of the common latent subspace simultaneously. We formulate a matrix decomposition model where a joint structure and an individual structure are present in each data view, which enables us to arrive at a clean svd representation for the cross covariance between any pair of data views. An objective function with a novel penalty term is then proposed to achieve simultaneous estimation and rank selection. In addition, a refitting procedure is employed as a remedy to reduce the shrinkage bias caused by the penalization. | [
"['Lin Xiao' 'Luo Xiao']"
] |
null | null | 2406.11774 | null | null | http://arxiv.org/pdf/2406.11774v1 | 2024-06-17T17:32:25Z | 2024-06-17T17:32:25Z | Optimal Transport-Assisted Risk-Sensitive Q-Learning | The primary goal of reinforcement learning is to develop decision-making policies that prioritize optimal performance without considering risk or safety. In contrast, safe reinforcement learning aims to mitigate or avoid unsafe states. This paper presents a risk-sensitive Q-learning algorithm that leverages optimal transport theory to enhance the agent safety. By integrating optimal transport into the Q-learning framework, our approach seeks to optimize the policy's expected return while minimizing the Wasserstein distance between the policy's stationary distribution and a predefined risk distribution, which encapsulates safety preferences from domain experts. We validate the proposed algorithm in a Gridworld environment. The results indicate that our method significantly reduces the frequency of visits to risky states and achieves faster convergence to a stable policy compared to the traditional Q-learning algorithm. | [
"['Zahra Shahrooei' 'Ali Baheri']"
] |
null | null | 2406.11779 | null | null | http://arxiv.org/pdf/2406.11779v8 | 2024-07-12T21:51:34Z | 2024-06-17T17:34:25Z | Compact Proofs of Model Performance via Mechanistic Interpretability | We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving lower bounds on the accuracy of 151 small transformers trained on a Max-of-$K$ task. We create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless noise as a key challenge for using mechanistic interpretability to generate compact proofs on model performance. | [
"['Jason Gross' 'Rajashree Agrawal' 'Thomas Kwa' 'Euan Ong' 'Chun Hei Yip'\n 'Alex Gibson' 'Soufiane Noubir' 'Lawrence Chan']"
] |
null | null | 2406.11780 | null | null | http://arxiv.org/pdf/2406.11780v1 | 2024-06-17T17:35:52Z | 2024-06-17T17:35:52Z | Split, Unlearn, Merge: Leveraging Data Attributes for More Effective
Unlearning in LLMs | Large language models (LLMs) have shown to pose social and ethical risks such as generating toxic language or facilitating malicious use of hazardous knowledge. Machine unlearning is a promising approach to improve LLM safety by directly removing harmful behaviors and knowledge. In this paper, we propose "SPlit, UNlearn, MerGE" (SPUNGE), a framework that can be used with any unlearning method to amplify its effectiveness. SPUNGE leverages data attributes during unlearning by splitting unlearning data into subsets based on specific attribute values, unlearning each subset separately, and merging the unlearned models. We empirically demonstrate that SPUNGE significantly improves the performance of two recent unlearning methods on state-of-the-art LLMs while maintaining their general capabilities on standard academic benchmarks. | [
"['Swanand Ravindra Kadhe' 'Farhan Ahmed' 'Dennis Wei' 'Nathalie Baracaldo'\n 'Inkit Padhi']"
] |
null | null | 2406.11785 | null | null | http://arxiv.org/pdf/2406.11785v1 | 2024-06-17T17:39:10Z | 2024-06-17T17:39:10Z | CELL your Model: Contrastive Explanation Methods for Large Language
Models | The advent of black-box deep neural network classification models has sparked the need to explain their decisions. However, in the case of generative AI such as large language models (LLMs), there is no class prediction to explain. Rather, one can ask why an LLM output a particular response to a given prompt. In this paper, we answer this question by proposing, to the best of our knowledge, the first contrastive explanation methods requiring simply black-box/query access. Our explanations suggest that an LLM outputs a reply to a given prompt because if the prompt was slightly modified, the LLM would have given a different response that is either less preferable or contradicts the original response. The key insight is that contrastive explanations simply require a distance function that has meaning to the user and not necessarily a real valued representation of a specific response (viz. class label). We offer two algorithms for finding contrastive explanations: i) A myopic algorithm, which although effective in creating contrasts, requires many model calls and ii) A budgeted algorithm, our main algorithmic contribution, which intelligently creates contrasts adhering to a query budget, necessary for longer contexts. We show the efficacy of these methods on diverse natural language tasks such as open-text generation, automated red teaming, and explaining conversational degradation. | [
"['Ronny Luss' 'Erik Miehling' 'Amit Dhurandhar']"
] |
null | null | 2406.11794 | null | null | http://arxiv.org/pdf/2406.11794v3 | 2024-06-20T17:43:05Z | 2024-06-17T17:42:57Z | DataComp-LM: In search of the next generation of training sets for
language models | We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models. As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations. Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters. As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set. The resulting dataset, DCLM-Baseline enables training a 7B parameter language model from scratch to 64% 5-shot accuracy on MMLU with 2.6T training tokens. Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-Baseline represents a 6.6 percentage point improvement on MMLU while being trained with 40% less compute. Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% & 66%), and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6x less compute than Llama 3 8B. Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation. | [
"['Jeffrey Li' 'Alex Fang' 'Georgios Smyrnis' 'Maor Ivgi' 'Matt Jordan'\n 'Samir Gadre' 'Hritik Bansal' 'Etash Guha' 'Sedrick Keh' 'Kushal Arora'\n 'Saurabh Garg' 'Rui Xin' 'Niklas Muennighoff' 'Reinhard Heckel'\n 'Jean Mercat' 'Mayee Chen' 'Suchin Gururangan' 'Mitchell Wortsman'\n 'Alon Albalak' 'Yonatan Bitton' 'Marianna Nezhurina' 'Amro Abbas'\n 'Cheng-Yu Hsieh' 'Dhruba Ghosh' 'Josh Gardner' 'Maciej Kilian'\n 'Hanlin Zhang' 'Rulin Shao' 'Sarah Pratt' 'Sunny Sanyal'\n 'Gabriel Ilharco' 'Giannis Daras' 'Kalyani Marathe' 'Aaron Gokaslan'\n 'Jieyu Zhang' 'Khyathi Chandu' 'Thao Nguyen' 'Igor Vasiljevic'\n 'Sham Kakade' 'Shuran Song' 'Sujay Sanghavi' 'Fartash Faghri'\n 'Sewoong Oh' 'Luke Zettlemoyer' 'Kyle Lo' 'Alaaeldin El-Nouby'\n 'Hadi Pouransari' 'Alexander Toshev' 'Stephanie Wang' 'Dirk Groeneveld'\n 'Luca Soldaini' 'Pang Wei Koh' 'Jenia Jitsev' 'Thomas Kollar'\n 'Alexandros G. Dimakis' 'Yair Carmon' 'Achal Dave' 'Ludwig Schmidt'\n 'Vaishaal Shankar']"
] |
null | null | 2406.11799 | null | null | http://arxiv.org/pdf/2406.11799v1 | 2024-06-17T17:47:44Z | 2024-06-17T17:47:44Z | Mix-Domain Contrastive Learning for Unpaired H&E-to-IHC Stain
Translation | H&E-to-IHC stain translation techniques offer a promising solution for precise cancer diagnosis, especially in low-resource regions where there is a shortage of health professionals and limited access to expensive equipment. Considering the pixel-level misalignment of H&E-IHC image pairs, current research explores the pathological consistency between patches from the same positions of the image pair. However, most of them overemphasize the correspondence between domains or patches, overlooking the side information provided by the non-corresponding objects. In this paper, we propose a Mix-Domain Contrastive Learning (MDCL) method to leverage the supervision information in unpaired H&E-to-IHC stain translation. Specifically, the proposed MDCL method aggregates the inter-domain and intra-domain pathology information by estimating the correlation between the anchor patch and all the patches from the matching images, encouraging the network to learn additional contrastive knowledge from mixed domains. With the mix-domain pathology information aggregation, MDCL enhances the pathological consistency between the corresponding patches and the component discrepancy of the patches from the different positions of the generated IHC image. Extensive experiments on two H&E-to-IHC stain translation datasets, namely MIST and BCI, demonstrate that the proposed method achieves state-of-the-art performance across multiple metrics. | [
"['Song Wang' 'Zhong Zhang' 'Huan Yan' 'Ming Xu' 'Guanghui Wang']"
] |
null | null | 2406.11803 | null | null | http://arxiv.org/pdf/2406.11803v1 | 2024-06-17T17:49:27Z | 2024-06-17T17:49:27Z | Efficient Discovery of Significant Patterns with Few-Shot Resampling | Significant pattern mining is a fundamental task in mining transactional data, requiring to identify patterns significantly associated with the value of a given feature, the target. In several applications, such as biomedicine, basket market analysis, and social networks, the goal is to discover patterns whose association with the target is defined with respect to an underlying population, or process, of which the dataset represents only a collection of observations, or samples. A natural way to capture the association of a pattern with the target is to consider its statistical significance, assessing its deviation from the (null) hypothesis of independence between the pattern and the target. While several algorithms have been proposed to find statistically significant patterns, it remains a computationally demanding task, and for complex patterns such as subgroups, no efficient solution exists. We present FSR, an efficient algorithm to identify statistically significant patterns with rigorous guarantees on the probability of false discoveries. FSR builds on a novel general framework for mining significant patterns that captures some of the most commonly considered patterns, including itemsets, sequential patterns, and subgroups. FSR uses a small number of resampled datasets, obtained by assigning i.i.d. labels to each transaction, to rigorously bound the supremum deviation of a quality statistic measuring the significance of patterns. FSR builds on novel tight bounds on the supremum deviation that require to mine a small number of resampled datasets, while providing a high effectiveness in discovering significant patterns. As a test case, we consider significant subgroup mining, and our evaluation on several real datasets shows that FSR is effective in discovering significant subgroups, while requiring a small number of resampled datasets. | [
"['Leonardo Pellegrina' 'Fabio Vandin']"
] |
null | null | 2406.11809 | null | null | http://arxiv.org/pdf/2406.11809v1 | 2024-06-17T17:52:01Z | 2024-06-17T17:52:01Z | Physics-Constrained Learning for PDE Systems with Uncertainty Quantified
Port-Hamiltonian Models | Modeling the dynamics of flexible objects has become an emerging topic in the community as these objects become more present in many applications, e.g., soft robotics. Due to the properties of flexible materials, the movements of soft objects are often highly nonlinear and, thus, complex to predict. Data-driven approaches seem promising for modeling those complex dynamics but often neglect basic physical principles, which consequently makes them untrustworthy and limits generalization. To address this problem, we propose a physics-constrained learning method that combines powerful learning tools and reliable physical models. Our method leverages the data collected from observations by sending them into a Gaussian process that is physically constrained by a distributed Port-Hamiltonian model. Based on the Bayesian nature of the Gaussian process, we not only learn the dynamics of the system, but also enable uncertainty quantification. Furthermore, the proposed approach preserves the compositional nature of Port-Hamiltonian systems. | [
"['Kaiyuan Tan' 'Peilun Li' 'Thomas Beckers']"
] |
null | null | 2406.11810 | null | null | http://arxiv.org/pdf/2406.11810v1 | 2024-06-17T17:52:38Z | 2024-06-17T17:52:38Z | Computationally Efficient RL under Linear Bellman Completeness for
Deterministic Dynamics | We study computationally and statistically efficient Reinforcement Learning algorithms for the linear Bellman Complete setting, a setting that uses linear function approximation to capture value functions and unifies existing models like linear Markov Decision Processes (MDP) and Linear Quadratic Regulators (LQR). While it is known from the prior works that this setting is statistically tractable, it remained open whether a computationally efficient algorithm exists. Our work provides a computationally efficient algorithm for the linear Bellman complete setting that works for MDPs with large action spaces, random initial states, and random rewards but relies on the underlying dynamics to be deterministic. Our approach is based on randomization: we inject random noise into least square regression problems to perform optimistic value iteration. Our key technical contribution is to carefully design the noise to only act in the null space of the training data to ensure optimism while circumventing a subtle error amplification issue. | [
"['Runzhe Wu' 'Ayush Sekhari' 'Akshay Krishnamurthy' 'Wen Sun']"
] |
null | null | 2406.11814 | null | null | http://arxiv.org/pdf/2406.11814v1 | 2024-06-17T17:54:42Z | 2024-06-17T17:54:42Z | Stochastic Neural Network Symmetrisation in Markov Categories | We consider the problem of symmetrising a neural network along a group homomorphism: given a homomorphism $\varphi : H \to G$, we would like a procedure that converts $H$-equivariant neural networks into $G$-equivariant ones. We formulate this in terms of Markov categories, which allows us to consider neural networks whose outputs may be stochastic, but with measure-theoretic details abstracted away. We obtain a flexible, compositional, and generic framework for symmetrisation that relies on minimal assumptions about the structure of the group and the underlying neural network architecture. Our approach recovers existing methods for deterministic symmetrisation as special cases, and extends directly to provide a novel methodology for stochastic symmetrisation also. Beyond this, we believe our findings also demonstrate the utility of Markov categories for addressing problems in machine learning in a conceptual yet mathematically rigorous way. | [
"['Rob Cornish']"
] |
null | null | 2406.11815 | null | null | http://arxiv.org/pdf/2406.11815v1 | 2024-06-17T17:55:29Z | 2024-06-17T17:55:29Z | LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning | In recent years, instruction-tuned Large Multimodal Models (LMMs) have been successful at several tasks, including image captioning and visual question answering; yet leveraging these models remains an open question for robotics. Prior LMMs for robotics applications have been extensively trained on language and action data, but their ability to generalize in different settings has often been less than desired. To address this, we introduce LLARVA, a model trained with a novel instruction tuning method that leverages structured prompts to unify a range of robotic learning tasks, scenarios, and environments. Additionally, we show that predicting intermediate 2-D representations, which we refer to as "visual traces", can help further align vision and action spaces for robot learning. We generate 8.5M image-visual trace pairs from the Open X-Embodiment dataset in order to pre-train our model, and we evaluate on 12 different tasks in the RLBench simulator as well as a physical Franka Emika Panda 7-DoF robot. Our experiments yield strong performance, demonstrating that LLARVA - using 2-D and language representations - performs well compared to several contemporary baselines, and can generalize across various robot environments and configurations. | [
"['Dantong Niu' 'Yuvan Sharma' 'Giscard Biamby' 'Jerome Quenum'\n 'Yutong Bai' 'Baifeng Shi' 'Trevor Darrell' 'Roei Herzig']"
] |
null | null | 2406.11817 | null | null | http://arxiv.org/pdf/2406.11817v1 | 2024-06-17T17:55:38Z | 2024-06-17T17:55:38Z | Iterative Length-Regularized Direct Preference Optimization: A Case
Study on Improving 7B Language Models to GPT-4 Level | Direct Preference Optimization (DPO), a standard method for aligning language models with human preferences, is traditionally applied to offline preferences. Recent studies show that DPO benefits from iterative training with online preferences labeled by a trained reward model. In this work, we identify a pitfall of vanilla iterative DPO - improved response quality can lead to increased verbosity. To address this, we introduce iterative length-regularized DPO (iLR-DPO) to penalize response length. Our empirical results show that iLR-DPO can enhance a 7B model to perform on par with GPT-4 without increasing verbosity. Specifically, our 7B model achieves a $50.5\%$ length-controlled win rate against $\texttt{GPT-4 Preview}$ on AlpacaEval 2.0, and excels across standard benchmarks including MT-Bench, Arena-Hard and OpenLLM Leaderboard. These results demonstrate the effectiveness of iterative DPO in aligning language models with human feedback. | [
"['Jie Liu' 'Zhanhui Zhou' 'Jiaheng Liu' 'Xingyuan Bu' 'Chao Yang'\n 'Han-Sen Zhong' 'Wanli Ouyang']"
] |
null | null | 2406.11825 | null | null | http://arxiv.org/pdf/2406.11825v1 | 2024-06-17T17:58:15Z | 2024-06-17T17:58:15Z | Spectral Introspection Identifies Group Training Dynamics in Deep Neural
Networks for Neuroimaging | Neural networks, which have had a profound effect on how researchers study complex phenomena, do so through a complex, nonlinear mathematical structure which can be difficult for human researchers to interpret. This obstacle can be especially salient when researchers want to better understand the emergence of particular model behaviors such as bias, overfitting, overparametrization, and more. In Neuroimaging, the understanding of how such phenomena emerge is fundamental to preventing and informing users of the potential risks involved in practice. In this work, we present a novel introspection framework for Deep Learning on Neuroimaging data, which exploits the natural structure of gradient computations via the singular value decomposition of gradient components during reverse-mode auto-differentiation. Unlike post-hoc introspection techniques, which require fully-trained models for evaluation, our method allows for the study of training dynamics on the fly, and, even more interestingly, allows for the decomposition of gradients based on which samples belong to particular groups of interest. We demonstrate how the gradient spectra for several common deep learning models differ between schizophrenia and control participants from the COBRE study, and illustrate how these trajectories may reveal specific training dynamics helpful for further analysis. | [
"['Bradley T. Baker' 'Vince D. Calhoun' 'Sergey M. Plis']"
] |
null | null | 2406.11827 | null | null | http://arxiv.org/pdf/2406.11827v1 | 2024-06-17T17:59:13Z | 2024-06-17T17:59:13Z | WPO: Enhancing RLHF with Weighted Preference Optimization | Reinforcement learning from human feedback (RLHF) is a promising solution to align large language models (LLMs) more closely with human values. Off-policy preference optimization, where the preference data is obtained from other models, is widely adopted due to its cost efficiency and scalability. However, off-policy preference optimization often suffers from a distributional gap between the policy used for data collection and the target policy, leading to suboptimal optimization. In this paper, we propose a novel strategy to mitigate this problem by simulating on-policy learning with off-policy preference data. Our Weighted Preference Optimization (WPO) method adapts off-policy data to resemble on-policy data more closely by reweighting preference pairs according to their probability under the current policy. This method not only addresses the distributional gap problem but also enhances the optimization process without incurring additional costs. We validate our method on instruction following benchmarks including Alpaca Eval 2 and MT-bench. WPO not only outperforms Direct Preference Optimization (DPO) by up to 5.6% on Alpaca Eval 2 but also establishes a remarkable length-controlled winning rate against GPT-4-turbo of 48.6% based on Llama-3-8B-Instruct, making it the strongest 8B model on the leaderboard. We will release the code and models at https://github.com/wzhouad/WPO. | [
"['Wenxuan Zhou' 'Ravi Agrawal' 'Shujian Zhang' 'Sathish Reddy Indurthi'\n 'Sanqiang Zhao' 'Kaiqiang Song' 'Silei Xu' 'Chenguang Zhu']"
] |
null | null | 2406.11828 | null | null | http://arxiv.org/pdf/2406.11828v1 | 2024-06-17T17:59:17Z | 2024-06-17T17:59:17Z | Learning sum of diverse features: computational hardness and efficient
gradient-based training for ridge combinations | We study the computational and sample complexity of learning a target function $f_*:\mathbb{R}^d\to\mathbb{R}$ with additive structure, that is, $f_*(x) = \frac{1}{\sqrt{M}}\sum_{m=1}^M f_m(\langle x, v_m\rangle)$, where $f_1,f_2,\ldots,f_M:\mathbb{R}\to\mathbb{R}$ are nonlinear link functions of single-index models (ridge functions) with diverse and near-orthogonal index features $\{v_m\}_{m=1}^M$, and the number of additive tasks $M$ grows with the dimensionality $M\asymp d^\gamma$ for $\gamma\ge 0$. This problem setting is motivated by the classical additive model literature, the recent representation learning theory of two-layer neural network, and large-scale pretraining where the model simultaneously acquires a large number of "skills" that are often localized in distinct parts of the trained network. We prove that a large subset of polynomial $f_*$ can be efficiently learned by gradient descent training of a two-layer neural network, with a polynomial statistical and computational complexity that depends on the number of tasks $M$ and the information exponent of $f_m$, despite the unknown link function and $M$ growing with the dimensionality. We complement this learnability guarantee with computational hardness result by establishing statistical query (SQ) lower bounds for both the correlational SQ and full SQ algorithms. | [
"['Kazusato Oko' 'Yujin Song' 'Taiji Suzuki' 'Denny Wu']"
] |
null | null | 2406.11833 | null | null | http://arxiv.org/pdf/2406.11833v1 | 2024-06-17T17:59:47Z | 2024-06-17T17:59:47Z | MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and
Instruction-Tuning Dataset for LVLMs | Generating natural and meaningful responses to communicate with multi-modal human inputs is a fundamental capability of Large Vision-Language Models (LVLMs). While current open-source LVLMs demonstrate promising performance in simplified scenarios such as single-turn single-image input, they fall short in real-world conversation scenarios such as following instructions in a long context history with multi-turn and multi-images. Existing LVLM benchmarks primarily focus on single-choice questions or short-form responses, which do not adequately assess the capabilities of LVLMs in real-world human-AI interaction applications. Therefore, we introduce MMDU, a comprehensive benchmark, and MMDU-45k, a large-scale instruction tuning dataset, designed to evaluate and improve LVLMs' abilities in multi-turn and multi-image conversations. We employ the clustering algorithm to find the relevant images and textual descriptions from the open-source Wikipedia and construct the question-answer pairs by human annotators with the assistance of the GPT-4o model. MMDU has a maximum of 18k image+text tokens, 20 images, and 27 turns, which is at least 5x longer than previous benchmarks and poses challenges to current LVLMs. Our in-depth analysis of 15 representative LVLMs using MMDU reveals that open-source LVLMs lag behind closed-source counterparts due to limited conversational instruction tuning data. We demonstrate that fine-tuning open-source LVLMs on MMDU-45k significantly addresses this gap, generating longer and more accurate conversations, and improving scores on MMDU and existing benchmarks (MMStar: +1.1%, MathVista: +1.5%, ChartQA: +1.2%). Our contributions pave the way for bridging the gap between current LVLM models and real-world application demands. This project is available at https://github.com/Liuziyu77/MMDU. | [
"['Ziyu Liu' 'Tao Chu' 'Yuhang Zang' 'Xilin Wei' 'Xiaoyi Dong' 'Pan Zhang'\n 'Zijian Liang' 'Yuanjun Xiong' 'Yu Qiao' 'Dahua Lin' 'Jiaqi Wang']"
] |
null | null | 2406.11839 | null | null | http://arxiv.org/pdf/2406.11839v1 | 2024-06-17T17:59:58Z | 2024-06-17T17:59:58Z | mDPO: Conditional Preference Optimization for Multimodal Large Language
Models | Direct preference optimization (DPO) has shown to be an effective method for large language model (LLM) alignment. Recent works have attempted to apply DPO to multimodal scenarios but have found it challenging to achieve consistent improvement. Through a comparative experiment, we identify the unconditional preference problem in multimodal preference optimization, where the model overlooks the image condition. To address this problem, we propose mDPO, a multimodal DPO objective that prevents the over-prioritization of language-only preferences by also optimizing image preference. Moreover, we introduce a reward anchor that forces the reward to be positive for chosen responses, thereby avoiding the decrease in their likelihood -- an intrinsic problem of relative preference optimization. Experiments on two multimodal LLMs of different sizes and three widely used benchmarks demonstrate that mDPO effectively addresses the unconditional preference problem in multimodal preference optimization and significantly improves model performance, particularly in reducing hallucination. | [
"['Fei Wang' 'Wenxuan Zhou' 'James Y. Huang' 'Nan Xu' 'Sheng Zhang'\n 'Hoifung Poon' 'Muhao Chen']"
] |
null | null | 2406.11847 | null | null | http://arxiv.org/pdf/2406.11847v1 | 2024-03-28T03:29:02Z | 2024-03-28T03:29:02Z | Integrating behavior analysis with machine learning to predict online
learning performance: A scientometric review and empirical study | The interest in predicting online learning performance using ML algorithms has been steadily increasing. We first conducted a scientometric analysis to provide a systematic review of research in this area. The findings show that most existing studies apply the ML methods without considering learning behavior patterns, which may compromise the prediction accuracy and precision of the ML methods. This study proposes an integration framework that blends learning behavior analysis with ML algorithms to enhance the prediction accuracy of students' online learning performance. Specifically, the framework identifies distinct learning patterns among students by employing clustering analysis and implements various ML algorithms to predict performance within each pattern. For demonstration, the integration framework is applied to a real dataset from edX and distinguishes two learning patterns, namely low-autonomy students and motivated students. The results show that the framework yields nearly perfect prediction performance for autonomous students and satisfactory performance for motivated students. Additionally, this study compares the prediction performance of the integration framework to that of directly applying ML methods without learning behavior analysis using comprehensive evaluation metrics. The results consistently demonstrate the superiority of the integration framework over the direct approach, particularly when integrated with the best-performing XGBoosting method. Moreover, the framework significantly improves prediction accuracy for the motivated students and for the worst-performing random forest method. This study also evaluates the importance of various learning behaviors within each pattern using LightGBM with SHAP values. The implications of the integration framework and the results for online education practice and future research are discussed. | [
"['Jin Yuan' 'Xuelan Qiu' 'Jinran Wu' 'Jiesi Guo' 'Weide Li' 'You-Gan Wang']"
] |
null | null | 2406.11862 | null | null | http://arxiv.org/pdf/2406.11862v1 | 2024-04-11T10:24:25Z | 2024-04-11T10:24:25Z | Matrix-Free Jacobian Chaining | The efficient computation of Jacobians represents a fundamental challenge in computational science and engineering. Large-scale modular numerical simulation programs can be regarded as sequences of evaluations of, in our case, differentiable subprograms with corresponding elemental Jacobians. The latter are typically not available. Tangent and adjoint versions of the individual subprograms are assumed to be given as results of algorithmic differentiation instead. The classical (Jacobian) Matrix Chain Product problem is reformulated in terms of matrix-free Jacobian-matrix (tangents) and matrix-Jacobian products (adjoints), subject to limited memory for storing information required by the latter. All numerical results can be reproduced using an open-source reference implementation. | [
"['Uwe Naumann']"
] |
null | null | 2406.11872 | null | null | http://arxiv.org/pdf/2406.11872v1 | 2024-05-31T05:13:02Z | 2024-05-31T05:13:02Z | The EarlyBird Gets the WORM: Heuristically Accelerating EarlyBird
Convergence | The Lottery Ticket hypothesis proposes that ideal sparse subnetworks called lottery tickets exist in the untrained dense network. The Early Bird hypothesis proposes an efficient algorithm to find these winning lottery tickets in convolutional neural networks using the novel concept of distance between subnetworks to detect convergence in the subnetworks of a model. However, this approach overlooks unchanging groups of unimportant neurons near the end of the search. We propose WORM, a method that exploits these static groups by truncating their gradients, forcing the model to rely on other neurons. Experiments show WORM achieves faster ticket identification training and uses fewer FLOPs, despite the additional computational overhead. Additionally, WORM-pruned models lose less accuracy during pruning and recover accuracy faster, improving the robustness of the model. Furthermore, WORM is also able to generalize the Early Bird hypothesis reasonably well to larger models such as transformers, displaying its flexibility to adapt to various architectures. | [
"['Adithya Vasudev']"
] |
null | null | 2406.11877 | null | null | http://arxiv.org/pdf/2406.11877v1 | 2024-06-08T04:23:21Z | 2024-06-08T04:23:21Z | Solar Power Prediction Using Satellite Data in Different Parts of Nepal | Due to the unavailability of solar irradiance data for many potential sites of Nepal, the paper proposes predicting solar irradiance based on alternative meteorological parameters. The study focuses on five distinct regions in Nepal and utilizes a dataset spanning almost ten years, obtained from CERES SYN1deg and MERRA-2. Machine learning models such as Random Forest, XGBoost, K-Nearest Neighbors, and deep learning models like LSTM and ANN-MLP are employed and evaluated for their performance. The results indicate high accuracy in predicting solar irradiance, with R-squared(R2) scores close to unity for both train and test datasets. The impact of parameter integration on model performance is analyzed, revealing the significance of various parameters in enhancing predictive accuracy. Each model demonstrates strong performance across all parameters, consistently achieving MAE values below 6, RMSE values under 10, MBE within |2|, and nearly unity R2 values. Upon removal of various solar parameters such as "Solar_Irradiance_Clear_Sky", "UVA", etc. from the datasets, the model's performance is significantly affected. This exclusion leads to considerable increases in MAE, reaching up to 82, RMSE up to 135, and MBE up to |7|. Among the models, KNN displays the weakest performance, with an R2 of 0.7582546. Conversely, ANN exhibits the strongest performance, boasting an R2 value of 0.9245877. Hence, the study concludes that Artificial Neural Network (ANN) performs exceptionally well, showcasing its versatility even under sparse data parameter conditions. | [
"['Raj Krishna Nepal' 'Bibek Khanal' 'Vibek Ghimire' 'Kismat Neupane'\n 'Atul Pokharel' 'Kshitij Niraula' 'Baburam Tiwari' 'Nawaraj Bhattarai'\n 'Khem N. Poudyal' 'Nawaraj Karki' 'Mohan B Dangi' 'John Biden']"
] |
null | null | 2406.11880 | null | null | http://arxiv.org/pdf/2406.11880v1 | 2024-06-11T23:58:37Z | 2024-06-11T23:58:37Z | Knowledge Return Oriented Prompting (KROP) | Many Large Language Models (LLMs) and LLM-powered apps deployed today use some form of prompt filter or alignment to protect their integrity. However, these measures aren't foolproof. This paper introduces KROP, a prompt injection technique capable of obfuscating prompt injection attacks, rendering them virtually undetectable to most of these security measures. | [
"['Jason Martin' 'Kenneth Yeung']"
] |
null | null | 2406.11882 | null | null | http://arxiv.org/pdf/2406.11882v1 | 2024-06-12T15:05:29Z | 2024-06-12T15:05:29Z | Applications of Explainable artificial intelligence in Earth system
science | In recent years, artificial intelligence (AI) rapidly accelerated its influence and is expected to promote the development of Earth system science (ESS) if properly harnessed. In application of AI to ESS, a significant hurdle lies in the interpretability conundrum, an inherent problem of black-box nature arising from the complexity of AI algorithms. To address this, explainable AI (XAI) offers a set of powerful tools that make the models more transparent. The purpose of this review is twofold: First, to provide ESS scholars, especially newcomers, with a foundational understanding of XAI, serving as a primer to inspire future research advances; second, to encourage ESS professionals to embrace the benefits of AI, free from preconceived biases due to its lack of interpretability. We begin with elucidating the concept of XAI, along with typical methods. We then delve into a review of XAI applications in the ESS literature, highlighting the important role that XAI has played in facilitating communication with AI model decisions, improving model diagnosis, and uncovering scientific insights. We identify four significant challenges that XAI faces within the ESS, and propose solutions. Furthermore, we provide a comprehensive illustration of multifaceted perspectives. Given the unique challenges in ESS, an interpretable hybrid approach that seamlessly integrates AI with domain-specific knowledge appears to be a promising way to enhance the utility of AI in ESS. A visionary outlook for ESS envisions a harmonious blend where process-based models govern the known, AI models explore the unknown, and XAI bridges the gap by providing explanations. | [
"['Feini Huang' 'Shijie Jiang' 'Lu Li' 'Yongkun Zhang' 'Ye Zhang'\n 'Ruqing Zhang' 'Qingliang Li' 'Danxi Li' 'Wei Shangguan' 'Yongjiu Dai']"
] |
null | null | 2406.11886 | null | null | http://arxiv.org/pdf/2406.11886v1 | 2024-06-13T09:42:28Z | 2024-06-13T09:42:28Z | Financial Assets Dependency Prediction Utilizing Spatiotemporal Patterns | Financial assets exhibit complex dependency structures, which are crucial for investors to create diversified portfolios to mitigate risk in volatile financial markets. To explore the financial asset dependencies dynamics, we propose a novel approach that models the dependencies of assets as an Asset Dependency Matrix (ADM) and treats the ADM sequences as image sequences. This allows us to leverage deep learning-based video prediction methods to capture the spatiotemporal dependencies among assets. However, unlike images where neighboring pixels exhibit explicit spatiotemporal dependencies due to the natural continuity of object movements, assets in ADM do not have a natural order. This poses challenges to organizing the relational assets to reveal better the spatiotemporal dependencies among neighboring assets for ADM forecasting. To tackle the challenges, we propose the Asset Dependency Neural Network (ADNN), which employs the Convolutional Long Short-Term Memory (ConvLSTM) network, a highly successful method for video prediction. ADNN can employ static and dynamic transformation functions to optimize the representations of the ADM. Through extensive experiments, we demonstrate that our proposed framework consistently outperforms the baselines in the ADM prediction and downstream application tasks. This research contributes to understanding and predicting asset dependencies, offering valuable insights for financial market participants. | [
"['Haoren Zhu' 'Pengfei Zhao' 'Wilfred Siu Hung NG' 'Dik Lun Lee']"
] |
null | null | 2406.11888 | null | null | http://arxiv.org/pdf/2406.11888v1 | 2024-06-13T19:22:04Z | 2024-06-13T19:22:04Z | Neural logic programs and neural nets | Neural-symbolic integration aims to combine the connectionist subsymbolic with the logical symbolic approach to artificial intelligence. In this paper, we first define the answer set semantics of (boolean) neural nets and then introduce from first principles a class of neural logic programs and show that nets and programs are equivalent. | [
"['Christian Antić']"
] |
null | null | 2406.11890 | null | null | http://arxiv.org/pdf/2406.11890v1 | 2024-06-14T03:34:02Z | 2024-06-14T03:34:02Z | Unraveling the Mechanics of Learning-Based Demonstration Selection for
In-Context Learning | Large Language Models (LLMs) have demonstrated impressive in-context learning (ICL) capabilities from few-shot demonstration exemplars. While recent learning-based demonstration selection methods have proven beneficial to ICL by choosing more useful exemplars, their underlying mechanisms are opaque, hindering efforts to address limitations such as high training costs and poor generalization across tasks. These methods generally assume the selection process captures similarities between the exemplar and the target instance, however, it remains unknown what kinds of similarities are captured and vital to performing ICL. To dive into this question, we analyze the working mechanisms of the learning-based demonstration selection methods and empirically identify two important factors related to similarity measurement: 1) The ability to integrate different levels of task-agnostic text similarities between the input of exemplars and test cases enhances generalization power across different tasks. 2) Incorporating task-specific labels when measuring the similarities significantly improves the performance on each specific task. We validate these two findings through extensive quantitative and qualitative analyses across ten datasets and various LLMs. Based on our findings, we introduce two effective yet simplified exemplar selection methods catering to task-agnostic and task-specific demands, eliminating the costly LLM inference overhead. | [
"['Hui Liu' 'Wenya Wang' 'Hao Sun' 'Chris Xing Tian' 'Chenqi Kong'\n 'Xin Dong' 'Haoliang Li']"
] |
null | null | 2406.11891 | null | null | http://arxiv.org/pdf/2406.11891v1 | 2024-06-14T07:57:17Z | 2024-06-14T07:57:17Z | Towards Adaptive Neighborhood for Advancing Temporal Interaction Graph
Modeling | Temporal Graph Networks (TGNs) have demonstrated their remarkable performance in modeling temporal interaction graphs. These works can generate temporal node representations by encoding the surrounding neighborhoods for the target node. However, an inherent limitation of existing TGNs is their reliance on fixed, hand-crafted rules for neighborhood encoding, overlooking the necessity for an adaptive and learnable neighborhood that can accommodate both personalization and temporal evolution across different timestamps. In this paper, we aim to enhance existing TGNs by introducing an adaptive neighborhood encoding mechanism. We present SEAN, a flexible plug-and-play model that can be seamlessly integrated with existing TGNs, effectively boosting their performance. To achieve this, we decompose the adaptive neighborhood encoding process into two phases: (i) representative neighbor selection, and (ii) temporal-aware neighborhood information aggregation. Specifically, we propose the Representative Neighbor Selector component, which automatically pinpoints the most important neighbors for the target node. It offers a tailored understanding of each node's unique surrounding context, facilitating personalization. Subsequently, we propose a Temporal-aware Aggregator, which synthesizes neighborhood aggregation by selectively determining the utilization of aggregation routes and decaying the outdated information, allowing our model to adaptively leverage both the contextually significant and current information during aggregation. We conduct extensive experiments by integrating SEAN into three representative TGNs, evaluating their performance on four public datasets and one financial benchmark dataset introduced in this paper. The results demonstrate that SEAN consistently leads to performance improvements across all models, achieving SOTA performance and exceptional robustness. | [
"['Siwei Zhang' 'Xi Chen' 'Yun Xiong' 'Xixi Wu' 'Yao Zhang' 'Yongrui Fu'\n 'Yinglong Zhao' 'Jiawei Zhang']"
] |
null | null | 2406.11896 | null | null | http://arxiv.org/pdf/2406.11896v1 | 2024-06-14T17:49:55Z | 2024-06-14T17:49:55Z | DigiRL: Training In-The-Wild Device-Control Agents with Autonomous
Reinforcement Learning | Training corpuses for vision language models (VLMs) typically lack sufficient amounts of decision-centric data. This renders off-the-shelf VLMs sub-optimal for decision-making tasks such as in-the-wild device control through graphical user interfaces (GUIs). While training with static demonstrations has shown some promise, we show that such methods fall short for controlling real GUIs due to their failure to deal with real-world stochasticity and non-stationarity not captured in static observational data. This paper introduces a novel autonomous RL approach, called DigiRL, for training in-the-wild device control agents through fine-tuning a pre-trained VLM in two stages: offline RL to initialize the model, followed by offline-to-online RL. To do this, we build a scalable and parallelizable Android learning environment equipped with a VLM-based evaluator and develop a simple yet effective RL approach for learning in this domain. Our approach runs advantage-weighted RL with advantage estimators enhanced to account for stochasticity along with an automatic curriculum for deriving maximal learning signal. We demonstrate the effectiveness of DigiRL using the Android-in-the-Wild (AitW) dataset, where our 1.3B VLM trained with RL achieves a 49.5% absolute improvement -- from 17.7 to 67.2% success rate -- over supervised fine-tuning with static human demonstration data. These results significantly surpass not only the prior best agents, including AppAgent with GPT-4V (8.3% success rate) and the 17B CogAgent trained with AitW data (38.5%), but also the prior best autonomous RL approach based on filtered behavior cloning (57.8%), thereby establishing a new state-of-the-art for digital agents for in-the-wild device control. | [
"['Hao Bai' 'Yifei Zhou' 'Mert Cemri' 'Jiayi Pan' 'Alane Suhr'\n 'Sergey Levine' 'Aviral Kumar']"
] |
null | null | 2406.11897 | null | null | http://arxiv.org/pdf/2406.11897v1 | 2024-06-14T19:44:23Z | 2024-06-14T19:44:23Z | A Benchmark for Maximum Cut: Towards Standardization of the Evaluation
of Learned Heuristics for Combinatorial Optimization | Recently, there has been much work on the design of general heuristics for graph-based, combinatorial optimization problems via the incorporation of Graph Neural Networks (GNNs) to learn distribution-specific solution structures. However, there is a lack of consistency in the evaluation of these heuristics, in terms of the baselines and instances chosen, which makes it difficult to assess the relative performance of the algorithms. In this paper, we propose an open-source benchmark suite MaxCut-Bench dedicated to the NP-hard Maximum Cut problem in both its weighted and unweighted variants, based on a careful selection of instances curated from diverse graph datasets. The suite offers a unified interface to various heuristics, both traditional and machine learning-based. Next, we use the benchmark in an attempt to systematically corroborate or reproduce the results of several popular learning-based approaches, including S2V-DQN [31], ECO-DQN [4], among others, in terms of three dimensions: objective value, generalization, and scalability. Our empirical results show that several of the learned heuristics fail to outperform a naive greedy algorithm, and that only one of them consistently outperforms Tabu Search, a simple, general heuristic based upon local search. Furthermore, we find that the performance of ECO-DQN remains the same or is improved if the GNN is replaced by a simple linear regression on a subset of the features that are related to Tabu Search. Code, data, and pretrained models are available at: \url{https://github.com/ankurnath/MaxCut-Bench}. | [
"['Ankur Nath' 'Alan Kuhnle']"
] |
null | null | 2406.11898 | null | null | http://arxiv.org/pdf/2406.11898v1 | 2024-06-14T21:01:46Z | 2024-06-14T21:01:46Z | Towards Better Benchmark Datasets for Inductive Knowledge Graph
Completion | Knowledge Graph Completion (KGC) attempts to predict missing facts in a Knowledge Graph (KG). Recently, there's been an increased focus on designing KGC methods that can excel in the {\it inductive setting}, where a portion or all of the entities and relations seen in inference are unobserved during training. Numerous benchmark datasets have been proposed for inductive KGC, all of which are subsets of existing KGs used for transductive KGC. However, we find that the current procedure for constructing inductive KGC datasets inadvertently creates a shortcut that can be exploited even while disregarding the relational information. Specifically, we observe that the Personalized PageRank (PPR) score can achieve strong or near SOTA performance on most inductive datasets. In this paper, we study the root cause of this problem. Using these insights, we propose an alternative strategy for constructing inductive KGC datasets that helps mitigate the PPR shortcut. We then benchmark multiple popular methods using the newly constructed datasets and analyze their performance. The new benchmark datasets help promote a better understanding of the capabilities and challenges of inductive KGC by removing any shortcuts that obfuscate performance. | [
"['Harry Shomer' 'Jay Revolinsky' 'Jiliang Tang']"
] |
null | null | 2406.11900 | null | null | http://arxiv.org/pdf/2406.11900v1 | 2024-06-15T08:18:09Z | 2024-06-15T08:18:09Z | Horizon-wise Learning Paradigm Promotes Gene Splicing Identification | Identifying gene splicing is a core and significant task confronted in modern collaboration between artificial intelligence and bioinformatics. Past decades have witnessed great efforts on this concern, such as the bio-plausible splicing pattern AT-CG and the famous SpliceAI. In this paper, we propose a novel framework for the task of gene splicing identification, named Horizon-wise Gene Splicing Identification (H-GSI). The proposed H-GSI follows the horizon-wise identification paradigm and comprises four components: the pre-processing procedure transforming string data into tensors, the sliding window technique handling long sequences, the SeqLab model, and the predictor. In contrast to existing studies that process gene information with a truncated fixed-length sequence, H-GSI employs a horizon-wise identification paradigm in which all positions in a sequence are predicted with only one forward computation, improving accuracy and efficiency. The experiments conducted on the real-world Human dataset show that our proposed H-GSI outperforms SpliceAI and achieves the best accuracy of 97.20%. The source code is available from this link. | [
"['Qi-Jie Li' 'Qian Sun' 'Shao-Qun Zhang']"
] |
null | null | 2406.11901 | null | null | http://arxiv.org/pdf/2406.11901v1 | 2024-06-15T09:19:09Z | 2024-06-15T09:19:09Z | Model Evaluation and Anomaly Detection in Temporal Complex Networks
using Deep Learning Methods | Modeling complex networks allows us to analyze the characteristics and discover the basic mechanisms governing phenomena such as disease outbreaks, information diffusion, transportation efficiency, social influence, and even human brain function. Consequently, various network generative models (called temporal network models) have been presented to model how the network topologies evolve dynamically over time. Temporal network models face the challenge of results evaluation because common evaluation methods are appropriate only for static networks. This paper proposes an automatic approach based on deep learning to handle this issue. In addition to an evaluation method, the proposed method can also be used for anomaly detection in evolving networks. The proposed method has been evaluated on five different datasets, and the evaluations show that it outperforms the alternative methods based on the error rate measure in different datasets. | [
"['Alireza Rashnu' 'Sadegh Aliakbary']"
] |
null | null | 2406.11905 | null | null | http://arxiv.org/pdf/2406.11905v1 | 2024-06-15T22:46:39Z | 2024-06-15T22:46:39Z | EvIL: Evolution Strategies for Generalisable Imitation Learning | Often times in imitation learning (IL), the environment we collect expert demonstrations in and the environment we want to deploy our learned policy in aren't exactly the same (e.g. demonstrations collected in simulation but deployment in the real world). Compared to policy-centric approaches to IL like behavioural cloning, reward-centric approaches like inverse reinforcement learning (IRL) often better replicate expert behaviour in new environments. This transfer is usually performed by optimising the recovered reward under the dynamics of the target environment. However, (a) we find that modern deep IL algorithms frequently recover rewards which induce policies far weaker than the expert, even in the same environment the demonstrations were collected in. Furthermore, (b) these rewards are often quite poorly shaped, necessitating extensive environment interaction to optimise effectively. We provide simple and scalable fixes to both of these concerns. For (a), we find that reward model ensembles combined with a slightly different training objective significantly improves re-training and transfer performance. For (b), we propose a novel evolution-strategies based method EvIL to optimise for a reward-shaping term that speeds up re-training in the target environment, closing a gap left open by the classical theory of IRL. On a suite of continuous control tasks, we are able to re-train policies in target (and source) environments more interaction-efficiently than prior work. | [
"['Silvia Sapora' 'Gokul Swamy' 'Chris Lu' 'Yee Whye Teh'\n 'Jakob Nicolaus Foerster']"
] |
null | null | 2406.11909 | null | null | http://arxiv.org/pdf/2406.11909v2 | 2024-07-05T11:06:12Z | 2024-06-16T14:19:49Z | Mixture-of-Subspaces in Low-Rank Adaptation | In this paper, we introduce a subspace-inspired Low-Rank Adaptation (LoRA) method, which is computationally efficient, easy to implement, and readily applicable to large language, multimodal, and diffusion models. Initially, we equivalently decompose the weights of LoRA into two subspaces, and find that simply mixing them can enhance performance. To study such a phenomenon, we revisit it through a fine-grained subspace lens, showing that such modification is equivalent to employing a fixed mixer to fuse the subspaces. To be more flexible, we jointly learn the mixer with the original LoRA weights, and term the method Mixture-of-Subspaces LoRA (MoSLoRA). MoSLoRA consistently outperforms LoRA on tasks in different modalities, including commonsense reasoning, visual instruction tuning, and subject-driven text-to-image generation, demonstrating its effectiveness and robustness. Codes are available at https://github.com/wutaiqiang/MoSLoRA. | [
"['Taiqiang Wu' 'Jiahao Wang' 'Zhe Zhao' 'Ngai Wong']"
] |
null | null | 2406.11911 | null | null | http://arxiv.org/pdf/2406.11911v1 | 2024-06-16T16:46:55Z | 2024-06-16T16:46:55Z | A Notion of Complexity for Theory of Mind via Discrete World Models | Theory of Mind (ToM) can be used to assess the capabilities of Large Language Models (LLMs) in complex scenarios where social reasoning is required. While the research community has proposed many ToM benchmarks, their hardness varies greatly, and their complexity is not well defined. This work proposes a framework to measure the complexity of ToM tasks. We quantify a problem's complexity as the number of states necessary to solve it correctly. Our complexity measure also accounts for spurious states of a ToM problem designed to make it apparently harder. We use our method to assess the complexity of five widely adopted ToM benchmarks. On top of this framework, we design a prompting technique that augments the information available to a model with a description of how the environment changes with the agents' interactions. We name this technique Discrete World Models (DWM) and show how it elicits superior performance on ToM tasks. | [
"['X. Angelo Huang' 'Emanuele La Malfa' 'Samuele Marro' 'Andrea Asperti'\n 'Anthony Cohn' 'Michael Wooldridge']"
] |
null | null | 2406.11914 | null | null | http://arxiv.org/pdf/2406.11914v1 | 2024-06-16T19:56:03Z | 2024-06-16T19:56:03Z | Initial Investigation of Kolmogorov-Arnold Networks (KANs) as Feature
Extractors for IMU Based Human Activity Recognition | In this work, we explore the use of a novel neural network architecture, the Kolmogorov-Arnold Networks (KANs) as feature extractors for sensor-based (specifically IMU) Human Activity Recognition (HAR). Where conventional networks perform a parameterized weighted sum of the inputs at each node and then feed the result into a statically defined nonlinearity, KANs perform non-linear computations represented by B-SPLINES on the edges leading to each node and then just sum up the inputs at the node. Instead of learning weights, the system learns the spline parameters. In the original work, such networks have been shown to be able to more efficiently and exactly learn sophisticated real valued functions e.g. in regression or PDE solution. We hypothesize that such an ability is also advantageous for computing low-level features for IMU-based HAR. To this end, we have implemented KAN as the feature extraction architecture for IMU-based human activity recognition tasks, including four architecture variations. We present an initial performance investigation of the KAN feature extractor on four public HAR datasets. It shows that the KAN-based feature extractor outperforms CNN-based extractors on all datasets while being more parameter efficient. | [
"['Mengxi Liu' 'Daniel Geißler' 'Dominique Nshimyimana' 'Sizhen Bian'\n 'Bo Zhou' 'Paul Lukowicz']"
] |
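A toy sketch of the KAN-style feature extractor described above: each edge applies its own learnable univariate function and each node simply sums its incoming edges. To keep the example short, the univariate functions use a fixed Gaussian radial basis instead of the B-splines of the original KAN formulation; shapes and names are illustrative.

```python
import torch
import torch.nn as nn

class TinyKANLayer(nn.Module):
    """Simplified KAN-style layer: one learnable univariate function per edge,
    summed at each output node (Gaussian basis here instead of B-splines)."""

    def __init__(self, in_dim: int, out_dim: int, n_basis: int = 8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2, 2, n_basis))
        # one coefficient vector per edge: (out_dim, in_dim, n_basis)
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> basis responses: (batch, in_dim, n_basis)
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2))
        # evaluate every edge function and sum them at each output node
        return torch.einsum("bik,oik->bo", basis, self.coef)

# a KAN-style feature extractor for a flattened IMU window (6 channels x 64 samples)
extractor = nn.Sequential(TinyKANLayer(6 * 64, 128), TinyKANLayer(128, 64))
features = extractor(torch.randn(32, 6 * 64))   # batch of 32 windows
```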
null | null | 2406.11915 | null | null | http://arxiv.org/pdf/2406.11915v1 | 2024-06-16T21:11:23Z | 2024-06-16T21:11:23Z | miniCodeProps: a Minimal Benchmark for Proving Code Properties | Neural networks have shown initial promise in automating mathematical theorem proving in proof assistants such as Lean. The same proof assistants can be used to verify the correctness of code by pairing code with specifications and proofs that the specifications hold. Automating the writing of code, specifications, and proofs could lower the cost of verification, or, ambitiously, enable a machine learning system to output provably correct code. However, it remains unclear whether current neural theorem provers can automatically verify even relatively simple programs. We present miniCodeProps, a benchmark of 177 program specifications in the Lean proof assistant, aimed at the subproblem of automatically generating a proof for a provided program and specification. miniCodeProps contains specifications about simple, self-contained programs (e.g., lists, natural numbers, binary trees) with varied proof difficulty. Despite its simplicity, miniCodeProps is challenging for current LLM-based provers, which succeed in proving about 25 percent of the specifications. We publicly release miniCodeProps as a benchmark for furthering automated theorem proving in the context of formally verified code. | [
"['Evan Lohn' 'Sean Welleck']"
] |
null | null | 2406.11917 | null | null | http://arxiv.org/abs/2406.11917v1 | 2024-06-17T02:43:24Z | 2024-06-17T02:43:24Z | Interpretable modulated differentiable STFT and physics-informed
balanced spectrum metric for freight train wheelset bearing cross-machine
transfer fault diagnosis under speed fluctuations | As key components, wheelset bearings and their service conditions have a direct impact on the safe operation of railway heavy-haul freight trains. However, speed fluctuation of the trains and scarce fault samples are the two main problems that restrict the accuracy of bearing fault diagnosis. Therefore, a cross-machine transfer diagnosis (pyDSN) network coupled with an interpretable modulated differentiable short-time Fourier transform (STFT) and a physics-informed balanced spectrum quality metric is proposed to learn domain-invariant and discriminative features under time-varying speeds. Firstly, because fixed windows are insufficient for extracting the frequency components of time-varying speed signals, a modulated differentiable STFT (MDSTFT), which is interpretable thanks to its STFT-informed theoretical support, is proposed to extract a robust time-frequency spectrum (TFS). During the training process, multiple windows with different lengths change dynamically. Also, in addition to the classification metric and the domain discrepancy metric, we creatively introduce a third kind of metric, referred to as the physics-informed metric, to enhance the transferable TFS. A physics-informed balanced spectrum quality (BSQ) regularization loss is devised to guide the optimization direction for the MDSTFT and the model. With it, the model not only acquires a high-quality TFS but also becomes a physics-restricted domain adaptation network that learns real-world physics knowledge and ultimately diminishes the domain discrepancy across different datasets. The experiment is conducted in the scenario of migrating from laboratory datasets to a freight train dataset, indicating that the hybrid-driven pyDSN outperforms existing methods and has practical value. | [
"['Chao He' 'Hongmei Shi' 'Ruixin Li' 'Jianbo Li' 'ZuJun Yu']"
] |
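To make the idea of a differentiable STFT with trainable window behavior concrete, here is a small PyTorch sketch in which the analysis window is a Gaussian with a learnable width, so gradients from a downstream diagnosis loss can reshape the effective window length. This only conveys the general mechanism; the paper's MDSTFT and BSQ loss are not reproduced.

```python
import torch
import torch.nn as nn

class LearnableWindowSTFT(nn.Module):
    """Illustrative differentiable STFT with a learnable Gaussian window width."""

    def __init__(self, n_fft: int = 512, hop: int = 128, init_sigma: float = 0.15):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        self.log_sigma = nn.Parameter(torch.log(torch.tensor(init_sigma)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time) vibration signals
        t = torch.linspace(-0.5, 0.5, self.n_fft, device=x.device)
        window = torch.exp(-0.5 * (t / self.log_sigma.exp()) ** 2)   # differentiable in sigma
        frames = x.unfold(-1, self.n_fft, self.hop)                  # (batch, n_frames, n_fft)
        spec = torch.fft.rfft(frames * window, dim=-1)
        return spec.abs()                                            # time-frequency spectrum (TFS)

tfs = LearnableWindowSTFT()(torch.randn(4, 16000))                   # (4, n_frames, 257)
```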
null | null | 2406.11919 | null | null | http://arxiv.org/pdf/2406.11919v1 | 2024-06-17T04:00:41Z | 2024-06-17T04:00:41Z | Graph Knowledge Distillation to Mixture of Experts | In terms of accuracy, Graph Neural Networks (GNNs) are the best architectural choice for the node classification task. Their drawback in real-world deployment is the latency that emerges from the neighbourhood processing operation. One solution to the latency issue is to perform knowledge distillation from a trained GNN to a Multi-Layer Perceptron (MLP), where the MLP processes only the features of the node being classified (and possibly some pre-computed structural information). However, the performance of such MLPs in both transductive and inductive settings remains inconsistent for existing knowledge distillation techniques. We propose to address the performance concerns by using a specially-designed student model instead of an MLP. Our model, named Routing-by-Memory (RbM), is a form of Mixture-of-Experts (MoE), with a design that enforces expert specialization. By encouraging each expert to specialize in a certain region of the hidden representation space, we demonstrate experimentally that it is possible to derive considerably more consistent performance across multiple datasets. | [
"['Pavel Rumiantsev' 'Mark Coates']"
] |
null | null | 2406.11920 | null | null | http://arxiv.org/pdf/2406.11920v2 | 2024-06-19T17:47:25Z | 2024-06-17T07:22:51Z | Job-SDF: A Multi-Granularity Dataset for Job Skill Demand Forecasting
and Benchmarking | In a rapidly evolving job market, skill demand forecasting is crucial as it enables policymakers and businesses to anticipate and adapt to changes, ensuring that workforce skills align with market needs, thereby enhancing productivity and competitiveness. Additionally, by identifying emerging skill requirements, it directs individuals towards relevant training and education opportunities, promoting continuous self-learning and development. However, the absence of comprehensive datasets presents a significant challenge, impeding research and the advancement of this field. To bridge this gap, we present Job-SDF, a dataset designed to train and benchmark job-skill demand forecasting models. Based on 10.35 million public job advertisements collected from major online recruitment platforms in China between 2021 and 2023, this dataset encompasses monthly recruitment demand for 2,324 types of skills across 521 companies. Our dataset uniquely enables evaluating skill demand forecasting models at various granularities, including occupation, company, and regional levels. We benchmark a range of models on this dataset, evaluating their performance in standard scenarios, in predictions focused on lower value ranges, and in the presence of structural breaks, providing new insights for further research. Our code and dataset are publicly accessible via the https://github.com/Job-SDF/benchmark. | [
"['Xi Chen' 'Chuan Qin' 'Chuyu Fang' 'Chao Wang' 'Chen Zhu' 'Fuzhen Zhuang'\n 'Hengshu Zhu' 'Hui Xiong']"
] |
null | null | 2406.11921 | null | null | http://arxiv.org/pdf/2406.11921v1 | 2024-06-17T07:36:57Z | 2024-06-17T07:36:57Z | Rethinking Spatio-Temporal Transformer for Traffic
Prediction: Multi-level Multi-view Augmented Learning Framework | Traffic prediction is a challenging spatio-temporal forecasting problem that involves highly complex spatio-temporal correlations. This paper proposes a Multi-level Multi-view Augmented Spatio-temporal Transformer (LVSTformer) for traffic prediction. The model aims to capture spatial dependencies from three different levels: local geographic, global semantic, and pivotal nodes, along with long- and short-term temporal dependencies. Specifically, we design three spatial augmented views to delve into the spatial information from the perspectives of local, global, and pivotal nodes. By combining three spatial augmented views with three parallel spatial self-attention mechanisms, the model can comprehensively capture spatial dependencies at different levels. We design a gated temporal self-attention mechanism to effectively capture long- and short-term temporal dependencies. Furthermore, a spatio-temporal context broadcasting module is introduced between two spatio-temporal layers to ensure a well-distributed allocation of attention scores, alleviating overfitting and information loss, and enhancing the generalization ability and robustness of the model. A comprehensive set of experiments is conducted on six well-known traffic benchmarks, and the experimental results demonstrate that LVSTformer achieves state-of-the-art performance compared to competing baselines, with the maximum improvement reaching up to 4.32%. | [
"['Jiaqi Lin' 'Qianqian Ren']"
] |
null | null | 2406.11924 | null | null | http://arxiv.org/abs/2406.11924v1 | 2024-06-17T08:08:03Z | 2024-06-17T08:08:03Z | Explainable assessment of financial experts' credibility by classifying
social media forecasts and checking the predictions with actual market data | Social media include diverse interaction metrics related to user popularity, the most evident example being the number of user followers. The latter has raised concerns about the credibility of the posts by the most popular creators. However, most existing approaches to assess credibility in social media strictly consider this problem a binary classification, often based on a priori information, without checking if actual real-world facts back the users' comments. In addition, they do not provide automatic explanations of their predictions to foster their trustworthiness. In this work, we propose a credibility assessment solution for financial creators in social media that combines Natural Language Processing and Machine Learning. The reputation of the contributors is assessed by automatically classifying their forecasts on asset values by type and verifying these predictions with actual market data to approximate their probability of success. The outcome of this verification is a continuous credibility score instead of a binary result, an entirely novel contribution by this work. Moreover, social media metrics (i.e., user context) are exploited by calculating their correlation with the credibility rankings, providing insights on the interest of the end-users in financial posts and their forecasts (i.e., drop or rise). Finally, the system provides natural language explanations of its decisions based on a model-agnostic analysis of relevant features. | [
"['Silvia García-Méndez' 'Francisco de Arriba-Pérez'\n 'Jaime González-Gonzáleza' 'Francisco J. González-Castaño']"
] |
null | null | 2406.11928 | null | null | http://arxiv.org/abs/2406.11928v1 | 2024-06-17T12:03:10Z | 2024-06-17T12:03:10Z | FlexCare: Leveraging Cross-Task Synergy for Flexible Multimodal
Healthcare Prediction | Multimodal electronic health record (EHR) data can offer a holistic assessment of a patient's health status, supporting various predictive healthcare tasks. Recently, several studies have embraced the multitask learning approach in the healthcare domain, exploiting the inherent correlations among clinical tasks to predict multiple outcomes simultaneously. However, existing methods necessitate samples to possess complete labels for all tasks, which places heavy demands on the data and restricts the flexibility of the model. Meanwhile, within a multitask framework with multimodal inputs, how to comprehensively consider the information disparity among modalities and among tasks still remains a challenging problem. To tackle these issues, a unified healthcare prediction model, named FlexCare, is proposed to flexibly accommodate incomplete multimodal inputs, promoting adaptation to multiple healthcare tasks. The proposed model breaks the conventional paradigm of parallel multitask prediction by decomposing it into a series of asynchronous single-task predictions. Specifically, a task-agnostic multimodal information extraction module is presented to capture decorrelated representations of diverse intra- and inter-modality patterns. Taking full account of the information disparities between different modalities and different tasks, we present a task-guided hierarchical multimodal fusion module that integrates the refined modality-level representations into an individual patient-level representation. Experimental results on multiple tasks from MIMIC-IV/MIMIC-CXR/MIMIC-NOTE datasets demonstrate the effectiveness of the proposed method. Additionally, further analysis underscores the feasibility and potential of employing such a multitask strategy in the healthcare domain. The source code is available at https://github.com/mhxu1998/FlexCare. | [
"['Muhao Xu' 'Zhenfeng Zhu' 'Youru Li' 'Shuai Zheng' 'Yawei Zhao'\n 'Kunlun He' 'Yao Zhao']"
] |
null | null | 2406.11929 | null | null | http://arxiv.org/pdf/2406.11929v2 | 2024-06-21T07:45:55Z | 2024-06-17T13:00:51Z | Long-time asymptotics of noisy SVGD outside the population limit | Stein Variational Gradient Descent (SVGD) is a widely used sampling algorithm that has been successfully applied in several areas of Machine Learning. SVGD operates by iteratively moving a set of interacting particles (which represent the samples) to approximate the target distribution. Despite recent studies on the complexity of SVGD and its variants, their long-time asymptotic behavior (i.e., after numerous iterations) is still not understood in the finite-particle regime. We study the long-time asymptotic behavior of a noisy variant of SVGD. First, we establish that the limit set of noisy SVGD after a large number of iterations is well-defined. We then characterize this limit set, showing that it approaches the target distribution as the number of iterations increases. In particular, noisy SVGD provably avoids the variance collapse observed for SVGD. Our approach involves demonstrating that the trajectories of noisy SVGD closely resemble those described by a McKean-Vlasov process. | [
"['Victor Priser' 'Pascal Bianchi' 'Adil Salim']"
] |
null | null | 2406.11931 | null | null | http://arxiv.org/pdf/2406.11931v1 | 2024-06-17T13:51:35Z | 2024-06-17T13:51:35Z | DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code
Intelligence | We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K. In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. | [
"['DeepSeek-AI' 'Qihao Zhu' 'Daya Guo' 'Zhihong Shao' 'Dejian Yang'\n 'Peiyi Wang' 'Runxin Xu' 'Y. Wu' 'Yukun Li' 'Huazuo Gao' 'Shirong Ma'\n 'Wangding Zeng' 'Xiao Bi' 'Zihui Gu' 'Hanwei Xu' 'Damai Dai' 'Kai Dong'\n 'Liyue Zhang' 'Yishi Piao' 'Zhibin Gou' 'Zhenda Xie' 'Zhewen Hao'\n 'Bingxuan Wang' 'Junxiao Song' 'Deli Chen' 'Xin Xie' 'Kang Guan'\n 'Yuxiang You' 'Aixin Liu' 'Qiushi Du' 'Wenjun Gao' 'Xuan Lu' 'Qinyu Chen'\n 'Yaohui Wang' 'Chengqi Deng' 'Jiashi Li' 'Chenggang Zhao' 'Chong Ruan'\n 'Fuli Luo' 'Wenfeng Liang']"
] |
null | null | 2406.11934 | null | null | http://arxiv.org/pdf/2406.11934v1 | 2024-06-17T16:03:17Z | 2024-06-17T16:03:17Z | Bridging Design Gaps: A Parametric Data Completion Approach With Graph
Guided Diffusion Models | This study introduces a generative imputation model leveraging graph attention networks and tabular diffusion models for completing missing parametric data in engineering designs. This model functions as an AI design co-pilot, providing multiple design options for incomplete designs, which we demonstrate using the bicycle design CAD dataset. Through comparative evaluations, we demonstrate that our model significantly outperforms existing classical methods, such as MissForest, hotDeck, PPCA, and tabular generative method TabCSDI in both the accuracy and diversity of imputation options. Generative modeling also enables a broader exploration of design possibilities, thereby enhancing design decision-making by allowing engineers to explore a variety of design completions. The graph model combines GNNs with the structural information contained in assembly graphs, enabling the model to understand and predict the complex interdependencies between different design parameters. The graph model helps accurately capture and impute complex parametric interdependencies from an assembly graph, which is key for design problems. By learning from an existing dataset of designs, the imputation capability allows the model to act as an intelligent assistant that autocompletes CAD designs based on user-defined partial parametric design, effectively bridging the gap between ideation and realization. The proposed work provides a pathway to not only facilitate informed design decisions but also promote creative exploration in design. | [
"['Rui Zhou' 'Chenyang Yuan' 'Frank Permenter' 'Yanxia Zhang'\n 'Nikos Arechiga' 'Matt Klenk' 'Faez Ahmed']"
] |
null | null | 2406.11939 | null | null | http://arxiv.org/pdf/2406.11939v1 | 2024-06-17T17:26:10Z | 2024-06-17T17:26:10Z | From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and
BenchBuilder Pipeline | The rapid evolution of language models has necessitated the development of more challenging benchmarks. Current static benchmarks often struggle to consistently distinguish between the capabilities of different models and fail to align with real-world user preferences. On the other hand, live crowd-sourced platforms like the Chatbot Arena collect a wide range of natural prompts and user feedback. However, these prompts vary in sophistication and the feedback cannot be applied offline to new models. In order to ensure that benchmarks keep up with the pace of LLM development, we address how one can evaluate benchmarks on their ability to confidently separate models and their alignment with human preference. Under these principles, we developed BenchBuilder, a living benchmark that filters high-quality prompts from live data sources to enable offline evaluation on fresh, challenging prompts. BenchBuilder identifies seven indicators of a high-quality prompt, such as the requirement for domain knowledge, and utilizes an LLM annotator to select a high-quality subset of prompts from various topic clusters. The LLM evaluation process employs an LLM judge to ensure a fully automated, high-quality, and constantly updating benchmark. We apply BenchBuilder on prompts from the Chatbot Arena to create Arena-Hard-Auto v0.1: 500 challenging user prompts from a wide range of tasks. Arena-Hard-Auto v0.1 offers 3x tighter confidence intervals than MT-Bench and achieves a state-of-the-art 89.1% agreement with human preference rankings, all at a cost of only $25 and without human labelers. The BenchBuilder pipeline enhances evaluation benchmarks and provides a valuable tool for developers, enabling them to extract high-quality benchmarks from extensive data with minimal effort. | [
"['Tianle Li' 'Wei-Lin Chiang' 'Evan Frick' 'Lisa Dunlap' 'Tianhao Wu'\n 'Banghua Zhu' 'Joseph E. Gonzalez' 'Ion Stoica']"
] |
null | null | 2406.11941 | null | null | http://arxiv.org/pdf/2406.11941v1 | 2024-06-17T17:35:47Z | 2024-06-17T17:35:47Z | Crossfusor: A Cross-Attention Transformer Enhanced Conditional Diffusion
Model for Car-Following Trajectory Prediction | Vehicle trajectory prediction is crucial for advancing autonomous driving and advanced driver assistance systems (ADAS), enhancing road safety and traffic efficiency. While traditional methods have laid foundational work, modern deep learning techniques, particularly transformer-based models and generative approaches, have significantly improved prediction accuracy by capturing complex and non-linear patterns in vehicle motion and traffic interactions. However, these models often overlook the detailed car-following behaviors and inter-vehicle interactions essential for real-world driving scenarios. This study introduces a Cross-Attention Transformer Enhanced Conditional Diffusion Model (Crossfusor) specifically designed for car-following trajectory prediction. Crossfusor integrates detailed inter-vehicular interactions and car-following dynamics into a robust diffusion framework, improving both the accuracy and realism of predicted trajectories. The model leverages a novel temporal feature encoding framework combining GRU, location-based attention mechanisms, and Fourier embedding to capture historical vehicle dynamics. It employs noise scaled by these encoded historical features in the forward diffusion process, and uses a cross-attention transformer to model intricate inter-vehicle dependencies in the reverse denoising process. Experimental results on the NGSIM dataset demonstrate that Crossfusor outperforms state-of-the-art models, particularly in long-term predictions, showcasing its potential for enhancing the predictive capabilities of autonomous driving systems. | [
"['Junwei You' 'Haotian Shi' 'Keshu Wu' 'Keke Long' 'Sicheng Fu'\n 'Sikai Chen' 'Bin Ran']"
] |
null | null | 2406.11944 | null | null | http://arxiv.org/pdf/2406.11944v1 | 2024-06-17T17:49:00Z | 2024-06-17T17:49:00Z | Transcoders Find Interpretable LLM Feature Circuits | A key goal in mechanistic interpretability is circuit analysis: finding sparse subgraphs of models corresponding to specific behaviors or capabilities. However, MLP sublayers make fine-grained circuit analysis on transformer-based language models difficult. In particular, interpretable features -- such as those found by sparse autoencoders (SAEs) -- are typically linear combinations of extremely many neurons, each with its own nonlinearity to account for. Circuit analysis in this setting thus either yields intractably large circuits or fails to disentangle local and global behavior. To address this we explore transcoders, which seek to faithfully approximate a densely activating MLP layer with a wider, sparsely-activating MLP layer. We successfully train transcoders on language models with 120M, 410M, and 1.4B parameters, and find them to perform at least on par with SAEs in terms of sparsity, faithfulness, and human-interpretability. We then introduce a novel method for using transcoders to perform weights-based circuit analysis through MLP sublayers. The resulting circuits neatly factorize into input-dependent and input-invariant terms. Finally, we apply transcoders to reverse-engineer unknown circuits in the model, and we obtain novel insights regarding the greater-than circuit in GPT2-small. Our results suggest that transcoders can prove effective in decomposing model computations involving MLPs into interpretable circuits. Code is available at https://github.com/jacobdunefsky/transcoder_circuits. | [
"['Jacob Dunefsky' 'Philippe Chlenski' 'Neel Nanda']"
] |
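A minimal sketch of the transcoder setup described above: a wider, sparsely activating MLP is trained to reproduce the input-output map of a frozen dense MLP sublayer, with an L1 penalty on its activations. Hyperparameters and names are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Transcoder(nn.Module):
    """Wide, sparsely-activating MLP that imitates a dense MLP sublayer."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        acts = F.relu(self.enc(x))     # sparse feature activations
        return self.dec(acts), acts

def transcoder_loss(tc, mlp_in, mlp_out, l1_coef=1e-3):
    """Faithfulness (match the original MLP's output) plus an L1 sparsity penalty."""
    pred, acts = tc(mlp_in)
    return F.mse_loss(pred, mlp_out) + l1_coef * acts.abs().mean()

# usage: collect (mlp_in, mlp_out) pairs from a frozen language model, then train
tc = Transcoder(d_model=768, d_hidden=768 * 16)
loss = transcoder_loss(tc, torch.randn(64, 768), torch.randn(64, 768))
loss.backward()
```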
null | null | 2406.11945 | null | null | http://arxiv.org/pdf/2406.11945v1 | 2024-06-17T17:49:19Z | 2024-06-17T17:49:19Z | GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs
with Large Language Models | This work studies self-supervised graph learning for text-attributed graphs (TAGs) where nodes are represented by textual attributes. Unlike traditional graph contrastive methods that perturb the numerical feature space and alter the graph's topological structure, we aim to improve view generation through language supervision. This is driven by the prevalence of textual attributes in real applications, which complement graph structures with rich semantic information. However, this presents challenges for two major reasons. First, text attributes often vary in length and quality, making it difficult to perturb raw text descriptions without altering their original semantic meanings. Second, although text attributes complement graph structures, they are not inherently well-aligned. To bridge the gap, we introduce GAugLLM, a novel framework for augmenting TAGs. It leverages advanced large language models like Mistral to enhance self-supervised graph learning. Specifically, we introduce a mixture-of-prompt-expert technique to generate augmented node features. This approach adaptively maps multiple prompt experts, each of which modifies raw text attributes using prompt engineering, into numerical feature space. Additionally, we devise a collaborative edge modifier to leverage structural and textual commonalities, enhancing edge augmentation by examining or building connections between nodes. Empirical results across five benchmark datasets spanning various domains underscore our framework's ability to enhance the performance of leading contrastive methods as a plug-in tool. Notably, we observe that the augmented features and graph structure can also enhance the performance of standard generative methods, as well as popular graph neural networks. The open-sourced implementation of our GAugLLM is available on GitHub. | [
"['Yi Fang' 'Dongzhe Fan' 'Daochen Zha' 'Qiaoyu Tan']"
] |
null | null | 2406.11978 | null | null | http://arxiv.org/pdf/2406.11978v1 | 2024-06-17T18:01:32Z | 2024-06-17T18:01:32Z | Dialogue Action Tokens: Steering Language Models in Goal-Directed
Dialogue with a Multi-Turn Planner | We present an approach called Dialogue Action Tokens (DAT) that adapts language model agents to plan goal-directed dialogues. The core idea is to treat each utterance as an action, thereby converting dialogues into games where existing approaches such as reinforcement learning can be applied. Specifically, we freeze a pretrained language model and train a small planner model that predicts a continuous action vector, used for controlled generation in each round. This design avoids the problem of language degradation under reward optimization. When evaluated on the Sotopia platform for social simulations, the DAT-steered LLaMA model surpasses GPT-4's performance. We also apply DAT to steer an attacker language model in a novel multi-turn red-teaming setting, revealing a potential new attack surface. | [
"['Kenneth Li' 'Yiming Wang' 'Fernanda Viégas' 'Martin Wattenberg']"
] |
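A hedged sketch of the Dialogue Action Tokens idea: a small planner maps the dialogue state to a continuous action vector, which is then rendered as a soft prefix for a frozen language model. The prefix mechanism and all dimensions below are assumptions made for illustration, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class DialoguePlanner(nn.Module):
    """Small planner: dialogue state -> continuous action -> soft prefix embeddings."""

    def __init__(self, state_dim: int = 768, action_dim: int = 64,
                 n_prefix: int = 2, d_model: int = 768):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(state_dim, 256), nn.Tanh(),
                                    nn.Linear(256, action_dim))
        self.to_prefix = nn.Linear(action_dim, n_prefix * d_model)
        self.n_prefix, self.d_model = n_prefix, d_model

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        action = self.policy(state)                          # continuous action per round
        prefix = self.to_prefix(action)                      # render action as a soft prefix
        return prefix.view(-1, self.n_prefix, self.d_model)  # prepend to the LM's input embeddings

# each round: encode the dialogue history into `state`, compute the prefix,
# concatenate it before the token embeddings of the frozen LM, then generate.
prefix_embeds = DialoguePlanner()(torch.randn(1, 768))
```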
null | null | 2406.11988 | null | null | http://arxiv.org/pdf/2406.11988v1 | 2024-06-17T18:04:23Z | 2024-06-17T18:04:23Z | Decomposed evaluations of geographic disparities in text-to-image models | Recent work has identified substantial disparities in generated images of different geographic regions, including stereotypical depictions of everyday objects like houses and cars. However, existing measures for these disparities have been limited to either human evaluations, which are time-consuming and costly, or automatic metrics evaluating full images, which are unable to attribute these disparities to specific parts of the generated images. In this work, we introduce a new set of metrics, Decomposed Indicators of Disparities in Image Generation (Decomposed-DIG), that allows us to separately measure geographic disparities in the depiction of objects and backgrounds in generated images. Using Decomposed-DIG, we audit a widely used latent diffusion model and find that generated images depict objects with better realism than backgrounds and that backgrounds in generated images tend to contain larger regional disparities than objects. We use Decomposed-DIG to pinpoint specific examples of disparities, such as stereotypical background generation for Africa, difficulty generating modern vehicles for Africa, and unrealistic placement of some objects in outdoor settings. Informed by our metric, we use a new prompting structure that enables a 52% worst-region improvement and a 20% average improvement in generated background diversity. | [
"['Abhishek Sureddy' 'Dishant Padalia' 'Nandhinee Periyakaruppa'\n 'Oindrila Saha' 'Adina Williams' 'Adriana Romero-Soriano'\n 'Megan Richards' 'Polina Kirichenko' 'Melissa Hall']"
] |
null | null | 2406.11993 | null | null | http://arxiv.org/pdf/2406.11993v1 | 2024-06-17T18:07:16Z | 2024-06-17T18:07:16Z | Delay Embedding Theory of Neural Sequence Models | To generate coherent responses, language models infer unobserved meaning from their input text sequence. One potential explanation for this capability arises from theories of delay embeddings in dynamical systems, which prove that unobserved variables can be recovered from the history of only a handful of observed variables. To test whether language models are effectively constructing delay embeddings, we measure the capacities of sequence models to reconstruct unobserved dynamics. We trained 1-layer transformer decoders and state-space sequence models on next-step prediction from noisy, partially-observed time series data. We found that each sequence layer can learn a viable embedding of the underlying system. However, state-space models have a stronger inductive bias than transformers; in particular, they more effectively reconstruct unobserved information at initialization, leading to more parameter-efficient models and lower error on dynamics tasks. Our work thus forges a novel connection between dynamical systems and deep learning sequence models via delay embedding theory. | [
"['Mitchell Ostrow' 'Adam Eisen' 'Ila Fiete']"
] |
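The delay-embedding construction the abstract appeals to can be stated in a few lines of NumPy; this is the standard Takens-style embedding, not the paper's evaluation code.

```python
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Stack `dim` lagged copies of a scalar series, spaced `tau` steps apart,
    to reconstruct unobserved state from the history of one observed variable."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

# observe only one coordinate of a noisy oscillator, reconstruct a 3-D embedding
t = np.linspace(0, 40, 4000)
obs = np.sin(t) + 0.01 * np.random.randn(t.size)
emb = delay_embed(obs, dim=3, tau=25)   # shape (n, 3); points trace the attractor
```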
null | null | 2406.12002 | null | null | http://arxiv.org/pdf/2406.12002v1 | 2024-06-17T18:13:57Z | 2024-06-17T18:13:57Z | Modeling, Inference, and Prediction in Mobility-Based Compartmental
Models for Epidemiology | Classical compartmental models in epidemiology often struggle to accurately capture real-world dynamics due to their inability to address the inherent heterogeneity of populations. In this paper, we introduce a novel approach that incorporates heterogeneity through a mobility variable, transforming the traditional ODE system into a system of integro-differential equations that describe the dynamics of population densities across different compartments. Our results show that, for the same basic reproduction number, our mobility-based model predicts a smaller final pandemic size compared to classic compartmental models, whose population densities are represented as Dirac delta functions in our density-based framework. This addresses the overestimation issue common in many classical models. Additionally, we demonstrate that the time series of the infected population is sufficient to uniquely identify the mobility distribution. We reconstruct this distribution using a machine-learning-based framework, providing both theoretical and algorithmic support to effectively constrain the mobility-based model with real-world data. | [
"['Ning Jiang' 'Weiqi Chu' 'Yao Li']"
] |
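To illustrate the kind of mobility-structured model discussed above, here is a toy discretization in which each mobility level carries its own S/I/R densities and the force of infection mixes them through a mobility-weighted average; the classical compartmental model corresponds to a Dirac-delta mobility distribution. This is an illustrative sketch under simplifying assumptions, not the paper's integro-differential system or its inference framework.

```python
import numpy as np

def mobility_sir(m_grid, density, beta=0.3, gamma=0.1, i0=1e-4, days=300, dt=0.1):
    """Toy mobility-structured SIR: each mobility level m has its own S/I/R density,
    and the force of infection scales with m times a mobility-weighted mean of I."""
    w = density / density.sum()
    S = np.ones_like(m_grid, dtype=float) * (1 - i0)
    I = np.ones_like(m_grid, dtype=float) * i0
    R = np.zeros_like(m_grid, dtype=float)
    for _ in range(int(days / dt)):
        force = beta * m_grid * np.sum(w * m_grid * I)   # mobility-weighted mixing
        dS = -force * S
        dI = force * S - gamma * I
        S, I, R = S + dt * dS, I + dt * dI, R + dt * (gamma * I)
    return np.sum(w * R)                                 # final pandemic size

m = np.linspace(0.2, 2.0, 50)
heterogeneous = mobility_sir(m, density=np.exp(-(m - 1.0) ** 2 / 0.5))
homogeneous = mobility_sir(np.array([1.0]), density=np.array([1.0]))  # Dirac-delta mobility
# note: a faithful comparison of final sizes would rescale beta so both runs
# share the same basic reproduction number
print(heterogeneous, homogeneous)
```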
null | null | 2406.12008 | null | null | http://arxiv.org/pdf/2406.12008v3 | 2024-07-11T13:32:59Z | 2024-06-17T18:21:03Z | QC-Forest: a Classical-Quantum Algorithm to Provably Speedup Retraining
of Random Forest | Random Forest (RF) is a popular tree-ensemble method for supervised learning, prized for its ease of use and flexibility. Online RF models must account for new training data to maintain model accuracy. This is particularly important in applications where data is periodically and sequentially generated over time in data streams, such as auto-driving systems and credit card payments. In this setting, performing periodic model retraining with the old and new data accumulated is beneficial as it fully captures possible drifts in the data distribution over time. However, this is impractical with state-of-the-art classical algorithms for RF as they scale linearly with the accumulated number of samples. We propose QC-Forest, a classical-quantum algorithm designed to time-efficiently retrain RF models in the streaming setting for multi-class classification and regression, achieving a runtime poly-logarithmic in the total number of accumulated samples. QC-Forest leverages Des-q, a quantum algorithm for single-tree construction and retraining proposed by Kumar et al., expanding it to multi-class classification (the original proposal was limited to binary classes) and introducing an exact classical method to replace an underlying quantum subroutine incurring a finite error, while maintaining the same poly-logarithmic dependence. Finally, we showcase that QC-Forest achieves competitive accuracy in comparison to state-of-the-art RF methods on widely used benchmark datasets with up to 80,000 samples, while significantly speeding up model retraining. | [
"['Romina Yalovetzky' 'Niraj Kumar' 'Changhao Li' 'Marco Pistoia']"
] |
null | null | 2406.12011 | null | null | http://arxiv.org/pdf/2406.12011v1 | 2024-06-17T18:29:49Z | 2024-06-17T18:29:49Z | The Benefits and Risks of Transductive Approaches for AI Fairness | Recently, transductive learning methods, which leverage holdout sets during training, have gained popularity for their potential to improve speed, accuracy, and fairness in machine learning models. Despite this, the composition of the holdout set itself, particularly the balance of sensitive sub-groups, has been largely overlooked. Our experiments on CIFAR and CelebA datasets show that compositional changes in the holdout set can substantially influence fairness metrics. Imbalanced holdout sets exacerbate existing disparities, while balanced holdouts can mitigate issues introduced by imbalanced training data. These findings underline the necessity of constructing holdout sets that are both diverse and representative. | [
"['Muhammed Razzak' 'Andreas Kirsch' 'Yarin Gal']"
] |
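A small sketch of the "balanced holdout" construction the abstract argues for: sample an equal number of examples from every (class, sensitive-group) cell. Names and the synthetic labels are illustrative assumptions.

```python
import numpy as np

def balanced_holdout(labels, groups, per_cell, seed=0):
    """Return indices of a holdout set with an equal number of samples for every
    (class, sensitive-group) cell; `per_cell` must not exceed the smallest cell size."""
    rng = np.random.default_rng(seed)
    idx = []
    for y in np.unique(labels):
        for g in np.unique(groups):
            cell = np.flatnonzero((labels == y) & (groups == g))
            idx.append(rng.choice(cell, size=per_cell, replace=False))
    return np.concatenate(idx)

# e.g. a CelebA-style setup: binary target labels and a binary sensitive attribute
labels = np.random.randint(0, 2, size=10000)
groups = np.random.randint(0, 2, size=10000)
holdout_idx = balanced_holdout(labels, groups, per_cell=500)
```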
null | null | 2406.12016 | null | null | http://arxiv.org/pdf/2406.12016v1 | 2024-06-17T18:33:44Z | 2024-06-17T18:33:44Z | Prefixing Attention Sinks can Mitigate Activation Outliers for Large
Language Model Quantization | Despite recent advances in LLM quantization, activation quantization remains challenging due to activation outliers. Conventional remedies, e.g., mixing precisions for different channels, introduce extra overhead and reduce the speedup. In this work, we develop a simple yet effective strategy to facilitate per-tensor activation quantization by preventing the generation of problematic tokens. Precisely, we propose a method to find a set of key-value cache entries, coined CushionCache, which mitigates outliers in subsequent tokens when inserted as a prefix. CushionCache works in two steps: First, we greedily search for a prompt token sequence that minimizes the maximum activation values in subsequent tokens. Then, we further tune the token cache to regularize the activations of subsequent tokens to be more quantization-friendly. The proposed method successfully addresses activation outliers of LLMs, providing a substantial performance boost for per-tensor activation quantization methods. We thoroughly evaluate our method over a wide range of models and benchmarks and find that it significantly surpasses the established baseline of per-tensor W8A8 quantization and can be seamlessly integrated with the recent activation quantization method. | [
"['Seungwoo Son' 'Wonpyo Park' 'Woohyun Han' 'Kyuyeun Kim' 'Jaeho Lee']"
] |
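The first (greedy search) step of the CushionCache procedure, as described in the abstract, can be sketched as follows. The helper `score_fn` is a hypothetical user-supplied function that runs the target model on calibration text with a given prefix and reports the maximum activation magnitude; it is not part of any released API, and the candidate-pruning and cache-tuning steps are omitted.

```python
import torch

@torch.no_grad()
def greedy_prefix_search(score_fn, vocab_ids, prefix_len):
    """Grow a prefix token-by-token, each time keeping the candidate token that
    minimizes the maximum activation magnitude observed on subsequent tokens.

    score_fn(prefix_ids) -> float is an assumed helper: it runs the model on
    calibration text prepended with `prefix_ids` and returns max |activation|.
    """
    prefix = []
    for _ in range(prefix_len):
        best_tok, best_score = None, float("inf")
        for tok in vocab_ids:                 # in practice, a pruned candidate set
            score = score_fn(prefix + [tok])
            if score < best_score:
                best_tok, best_score = tok, score
        prefix.append(best_tok)
    return prefix   # its key-value cache is then further tuned (step 2) and kept as a prefix

# usage (sketch): prefix_ids = greedy_prefix_search(score_fn, candidate_ids, prefix_len=4)
```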
null | null | 2406.12017 | null | null | http://arxiv.org/pdf/2406.12017v1 | 2024-06-17T18:34:51Z | 2024-06-17T18:34:51Z | Sparsity-Constraint Optimization via Splicing Iteration | Sparsity-constraint optimization has wide applicability in signal processing, statistics, and machine learning. Existing fast algorithms must burdensomely tune parameters, such as the step size or the implementation of precise stop criteria, which may be challenging to determine in practice. To address this issue, we develop an algorithm named Sparsity-Constraint Optimization via sPlicing itEration (SCOPE) to optimize nonlinear differential objective functions with strong convexity and smoothness in low dimensional subspaces. Algorithmically, the SCOPE algorithm converges effectively without tuning parameters. Theoretically, SCOPE has a linear convergence rate and converges to a solution that recovers the true support set when it correctly specifies the sparsity. We also develop parallel theoretical results without restricted-isometry-property-type conditions. We apply SCOPE's versatility and power to solve sparse quadratic optimization, learn sparse classifiers, and recover sparse Markov networks for binary variables. The numerical results on these specific tasks reveal that SCOPE perfectly identifies the true support set with a 10--1000 speedup over the standard exact solver, confirming SCOPE's algorithmic and theoretical merits. Our open-source Python package skscope based on C++ implementation is publicly available on GitHub, reaching a ten-fold speedup on the competing convex relaxation methods implemented by the cvxpy library. | [
"['Zezhi Wang' 'Jin Zhu' 'Junxian Zhu' 'Borui Tang' 'Hongmei Lin'\n 'Xueqin Wang']"
] |
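To give a feel for what a splicing iteration does, here is a deliberately simplified version for sparse least squares: fit on the current support, then try swapping the weakest active coordinate for the most promising inactive one, accepting the swap only if the loss decreases. This toy differs from SCOPE and from the skscope package in many details.

```python
import numpy as np

def splicing_sparse_ls(X, y, s, max_iter=50):
    """Toy splicing iteration for min ||y - X w||^2 subject to ||w||_0 <= s."""
    n, p = X.shape
    support = list(np.argsort(-np.abs(X.T @ y))[:s])        # screening initialization

    def fit(sup):
        w = np.zeros(p)
        w[sup] = np.linalg.lstsq(X[:, sup], y, rcond=None)[0]
        return w, np.sum((y - X @ w) ** 2)

    w, loss = fit(support)
    for _ in range(max_iter):
        grad = X.T @ (X @ w - y)
        inactive = [j for j in range(p) if j not in support]
        drop = min(support, key=lambda j: abs(w[j]))         # smallest contribution
        add = max(inactive, key=lambda j: abs(grad[j]))      # steepest descent direction
        cand = [j for j in support if j != drop] + [add]
        w_new, loss_new = fit(cand)
        if loss_new < loss - 1e-12:
            support, w, loss = cand, w_new, loss_new         # accept the splice
        else:
            break                                            # no improving swap: converged
    return w

# recover a 5-sparse signal from noisy linear measurements
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50); w_true[:5] = 3.0
w_hat = splicing_sparse_ls(X, X @ w_true + 0.1 * rng.standard_normal(200), s=5)
```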
null | null | 2406.12022 | null | null | http://arxiv.org/pdf/2406.12022v1 | 2024-06-17T18:42:03Z | 2024-06-17T18:42:03Z | Constructing Ancestral Recombination Graphs through Reinforcement
Learning | Over the years, many approaches have been proposed to build ancestral recombination graphs (ARGs), graphs used to represent the genetic relationship between individuals. Among these methods, many rely on the assumption that the most likely graph is among the shortest ones. In this paper, we propose a new approach to build short ARGs: Reinforcement Learning (RL). We exploit the similarities between finding the shortest path between a set of genetic sequences and their most recent common ancestor and finding the shortest path between the entrance and exit of a maze, a classic RL problem. In the maze problem, the learner, called the agent, must learn the directions to take in order to escape as quickly as possible, whereas in our problem, the agent must learn the actions to take between coalescence, mutation, and recombination in order to reach the most recent common ancestor as quickly as possible. Our results show that RL can be used to build ARGs as short as those built with a heuristic algorithm optimized to build short ARGs, and sometimes even shorter. Moreover, our method allows to build a distribution of short ARGs for a given sample, and can also generalize learning to new samples not used during the learning process. | [
"['Mélanie Raymond' 'Marie-Hélène Descary' 'Cédric Beaulac'\n 'Fabrice Larribe']"
] |
null | null | 2406.12023 | null | null | http://arxiv.org/pdf/2406.12023v1 | 2024-06-17T18:45:41Z | 2024-06-17T18:45:41Z | LiLiuM: eBay's Large Language Models for e-commerce | We introduce the LiLiuM series of large language models (LLMs): 1B, 7B, and 13B parameter models developed 100% in-house to fit eBay's specific needs in the e-commerce domain. This gives eBay full control over all aspects of the models including license, data, vocabulary, and architecture. We expect these models to be used as a foundation for fine-tuning and instruction-tuning, eliminating dependencies on external models. The LiLiuM LLMs have been trained on 3 trillion tokens of multilingual text from the general and e-commerce domains. They perform similarly to the popular LLaMA-2 models on English natural language understanding (NLU) benchmarks. At the same time, we outperform LLaMA-2 on non-English NLU tasks, machine translation, and e-commerce-specific downstream tasks. As part of our data mixture, we utilize the newly released RedPajama-V2 dataset for training and share our insights regarding data filtering and deduplication. We also discuss in detail how to serialize structured data for use in autoregressive language modeling. We provide insights on the effects of including code and parallel machine translation data in pre-training. Furthermore, we develop our own tokenizer and model vocabulary, customized towards e-commerce. This way, we can achieve up to 34% speed-up in text generation on eBay-specific downstream tasks compared to LLaMA-2. Finally, in relation to LLM pretraining, we show that checkpoint averaging can further improve over the best individual model checkpoint. | [
"['Christian Herold' 'Michael Kozielski' 'Leonid Ekimov' 'Pavel Petrushkov'\n 'Pierre-Yves Vandenbussche' 'Shahram Khadivi']"
] |
null | null | 2406.12031 | null | null | http://arxiv.org/pdf/2406.12031v1 | 2024-06-17T18:58:20Z | 2024-06-17T18:58:20Z | Large Scale Transfer Learning for Tabular Data via Language Modeling | Tabular data -- structured, heterogeneous, spreadsheet-style data with rows and columns -- is widely used in practice across many domains. However, while recent foundation models have reduced the need for developing task-specific datasets and predictors in domains such as language modeling and computer vision, this transfer learning paradigm has not had similar impact in the tabular domain. In this work, we seek to narrow this gap and present TabuLa-8B, a language model for tabular prediction. We define a process for extracting a large, high-quality training dataset from the TabLib corpus, proposing methods for tabular data filtering and quality control. Using the resulting dataset, which comprises over 1.6B rows from 3.1M unique tables, we fine-tune a Llama 3-8B large language model (LLM) for tabular data prediction (classification and binned regression) using a novel packing and attention scheme for tabular prediction. Through evaluation across a test suite of 329 datasets, we find that TabuLa-8B has zero-shot accuracy on unseen tables that is over 15 percentage points (pp) higher than random guessing, a feat that is not possible with existing state-of-the-art tabular prediction models (e.g. XGBoost, TabPFN). In the few-shot setting (1-32 shots), without any fine-tuning on the target datasets, TabuLa-8B is 5-15 pp more accurate than XGBoost and TabPFN models that are explicitly trained on equal, or even up to 16x more data. We release our model, code, and data along with the publication of this paper. | [
"['Josh Gardner' 'Juan C. Perdomo' 'Ludwig Schmidt']"
] |
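Row-to-text serialization, the basic step behind LLM-based tabular prediction of this kind, can be sketched as below. The template is an assumption for illustration; the paper defines its own serialization, packing, and attention scheme.

```python
def serialize_row(row: dict, target_col: str) -> str:
    """Turn one table row into a natural-language prompt for an LLM predictor.
    The template here is illustrative, not the paper's exact format."""
    features = " ".join(f"The {k} is {v}." for k, v in row.items() if k != target_col)
    return f"{features} What is the {target_col}?"

row = {"age": 52, "workclass": "Private", "hours_per_week": 40, "income": ">50K"}
prompt = serialize_row(row, target_col="income")
# -> "The age is 52. The workclass is Private. The hours_per_week is 40. What is the income?"
```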
null | null | 2406.12034 | null | null | http://arxiv.org/pdf/2406.12034v1 | 2024-06-17T19:06:54Z | 2024-06-17T19:06:54Z | Self-MoE: Towards Compositional Large Language Models with
Self-Specialized Experts | We present Self-MoE, an approach that transforms a monolithic LLM into a compositional, modular system of self-specialized experts, named MiXSE (MiXture of Self-specialized Experts). Our approach leverages self-specialization, which constructs expert modules using self-generated synthetic data, each equipped with a shared base LLM and incorporating self-optimized routing. This allows for dynamic and capability-specific handling of various target tasks, enhancing overall capabilities, without extensive human-labeled data and added parameters. Our empirical results reveal that specializing LLMs may exhibit potential trade-offs in performances on non-specialized tasks. On the other hand, our Self-MoE demonstrates substantial improvements over the base LLM across diverse benchmarks such as knowledge, reasoning, math, and coding. It also consistently outperforms other methods, including instance merging and weight merging, while offering better flexibility and interpretability by design with semantic experts and routing. Our findings highlight the critical role of modularity and the potential of self-improvement in achieving efficient, scalable, and adaptable systems. | [
"['Junmo Kang' 'Leonid Karlinsky' 'Hongyin Luo' 'Zhen Wang' 'Jacob Hansen'\n 'James Glass' 'David Cox' 'Rameswar Panda' 'Rogerio Feris' 'Alan Ritter']"
] |