categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.20790 | null | null | http://arxiv.org/pdf/2405.20790v3 | 2024-06-07T00:55:32Z | 2024-05-31T13:45:52Z | Intersectional Unfairness Discovery | AI systems have been shown to produce unfair results for certain subgroups of population, highlighting the need to understand bias on certain sensitive attributes. Current research often falls short, primarily focusing on the subgroups characterized by a single sensitive attribute, while neglecting the nature of intersectional fairness of multiple sensitive attributes. This paper focuses on its one fundamental aspect by discovering diverse high-bias subgroups under intersectional sensitive attributes. Specifically, we propose a Bias-Guided Generative Network (BGGN). By treating each bias value as a reward, BGGN efficiently generates high-bias intersectional sensitive attributes. Experiments on real-world text and image datasets demonstrate a diverse and efficient discovery of BGGN. To further evaluate the generated unseen but possible unfair intersectional sensitive attributes, we formulate them as prompts and use modern generative AI to produce new texts and images. The results of frequently generating biased data provides new insights of discovering potential unfairness in popular modern generative AI systems. Warning: This paper contains generative examples that are offensive in nature. | [
"['Gezheng Xu' 'Qi Chen' 'Charles Ling' 'Boyu Wang' 'Changjian Shui']"
] |
null | null | 2405.20791 | null | null | http://arxiv.org/pdf/2405.20791v1 | 2024-05-31T13:48:54Z | 2024-05-31T13:48:54Z | GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis | Decoupling the illumination in 3D scenes is crucial for novel view synthesis and relighting. In this paper, we propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points. Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components, enabling the synthesis of realistic lighting effects. To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework. The fundamental idea is to view the rendering tasks under various lighting positions as a multi-task learning problem, which our meta-learning approach effectively addresses by generalizing the learned Gaussian geometries not only across different viewpoints but also across diverse light positions. Experimental results demonstrate the effectiveness of our approach in terms of training efficiency and rendering quality compared to existing methods for free-viewpoint relighting. | [
"['Yumeng He' 'Yunbo Wang' 'Xiaokang Yang']"
] |
null | null | 2405.20794 | null | null | http://arxiv.org/abs/2405.20794v1 | 2024-05-31T13:54:25Z | 2024-05-31T13:54:25Z | Model Interpretation and Explainability: Towards Creating Transparency
in Prediction Models | Explainable AI (XAI) has a counterpart in analytical modeling which we refer to as model explainability. We tackle the issue of model explainability in the context of prediction models. We analyze a dataset of loans from a credit card company and apply three stages: execute and compare four different prediction methods, apply the best known explainability techniques in the current literature to the model training sets to identify feature importance (FI) (static case), and finally to cross-check whether the FI set holds up under what if prediction scenarios for continuous and categorical variables (dynamic case). We found inconsistency in FI identification between the static and dynamic cases. We summarize the state of the art in model explainability and suggest further research to advance the field. | [
"['Donald Kridel' 'Jacob Dineen' 'Daniel Dolk' 'David Castillo']"
] |
null | null | 2405.20797 | null | null | http://arxiv.org/pdf/2405.20797v2 | 2024-06-17T17:51:50Z | 2024-05-31T13:59:18Z | Ovis: Structural Embedding Alignment for Multimodal Large Language Model | Current Multimodal Large Language Models (MLLMs) typically integrate a pre-trained LLM with another pre-trained vision transformer through a connector, such as an MLP, endowing the LLM with visual capabilities. However, the misalignment between two embedding strategies in MLLMs -- the structural textual embeddings based on an embedding look-up table and the continuous embeddings generated directly by the vision encoder -- makes challenges for a more seamless fusion of visual and textual information. We propose Ovis, a novel MLLM architecture designed to structurally align visual and textual embeddings. Ovis integrates an additional learnable visual embedding table into the visual encoder's process. To capture rich visual semantics, each image patch indexes the visual embedding table multiple times, resulting in a final visual embedding that is a probabilistic combination of the indexed embeddings. This structural approach mirrors the method used for generating textual embeddings. Empirical evaluations on various multimodal benchmarks show that Ovis outperforms open-source MLLMs of similar parameter scales and even surpasses the proprietary model Qwen-VL-Plus overall. These results highlight the potential of Ovis' structured visual representation for advancing MLLM architectural design and promoting more effective multimodal learning. Code, datasets, and models are available at https://github.com/AIDC-AI/Ovis. | [
"['Shiyin Lu' 'Yang Li' 'Qing-Guo Chen' 'Zhao Xu' 'Weihua Luo'\n 'Kaifu Zhang' 'Han-Jia Ye']"
] |
null | null | 2405.20799 | null | null | http://arxiv.org/pdf/2405.20799v1 | 2024-05-31T14:00:44Z | 2024-05-31T14:00:44Z | Rough Transformers: Lightweight Continuous-Time Sequence Modelling with
Path Signatures | Time-series data in real-world settings typically exhibit long-range dependencies and are observed at non-uniform intervals. In these settings, traditional sequence-based recurrent models struggle. To overcome this, researchers often replace recurrent architectures with Neural ODE-based models to account for irregularly sampled data and use Transformer-based architectures to account for long-range dependencies. Despite the success of these two approaches, both incur very high computational costs for input sequences of even moderate length. To address this challenge, we introduce the Rough Transformer, a variation of the Transformer model that operates on continuous-time representations of input sequences and incurs significantly lower computational costs. In particular, we propose \textit{multi-view signature attention}, which uses path signatures to augment vanilla attention and to capture both local and global (multi-scale) dependencies in the input data, while remaining robust to changes in the sequence length and sampling frequency and yielding improved spatial processing. We find that, on a variety of time-series-related tasks, Rough Transformers consistently outperform their vanilla attention counterparts while obtaining the representational benefits of Neural ODE-based models, all at a fraction of the computational time and memory resources. | [
"['Fernando Moreno-Pino' 'Álvaro Arroyo' 'Harrison Waldon' 'Xiaowen Dong'\n 'Álvaro Cartea']"
] |
null | null | 2405.20800 | null | null | http://arxiv.org/pdf/2405.20800v1 | 2024-05-31T14:01:12Z | 2024-05-31T14:01:12Z | Shape Constraints in Symbolic Regression using Penalized Least Squares | We study the addition of shape constraints and their consideration during the parameter estimation step of symbolic regression (SR). Shape constraints serve as a means to introduce prior knowledge about the shape of the otherwise unknown model function into SR. Unlike previous works that have explored shape constraints in SR, we propose minimizing shape constraint violations during parameter estimation using gradient-based numerical optimization. We test three algorithm variants to evaluate their performance in identifying three symbolic expressions from a synthetically generated data set. This paper examines two benchmark scenarios: one with varying noise levels and another with reduced amounts of training data. The results indicate that incorporating shape constraints into the expression search is particularly beneficial when data is scarce. Compared to using shape constraints only in the selection process, our approach of minimizing violations during parameter estimation shows a statistically significant benefit in some of our test cases, without being significantly worse in any instance. | [
"['Viktor Martinek' 'Julia Reuter' 'Ophelia Frotscher' 'Sanaz Mostaghim'\n 'Markus Richter' 'Roland Herzog']"
] |
null | null | 2405.20808 | null | null | http://arxiv.org/pdf/2405.20808v1 | 2024-05-31T14:07:33Z | 2024-05-31T14:07:33Z | Optimally Improving Cooperative Learning in a Social Setting | We consider a cooperative learning scenario where a collection of networked agents with individually owned classifiers dynamically update their predictions, for the same classification task, through communication or observations of each other's predictions. Clearly if highly influential vertices use erroneous classifiers, there will be a negative effect on the accuracy of all the agents in the network. We ask the following question: how can we optimally fix the prediction of a few classifiers so as to maximize the overall accuracy in the entire network. To this end we consider an aggregate and an egalitarian objective function. We show a polynomial time algorithm for optimizing the aggregate objective function, and show that optimizing the egalitarian objective function is NP-hard. Furthermore, we develop approximation algorithms for the egalitarian improvement. The performance of all of our algorithms is guaranteed by mathematical analysis and backed by experiments on synthetic and real data. | [
"['Shahrzad Haddadan' 'Cheng Xin' 'Jie Gao']"
] |
null | null | 2405.20821 | null | null | http://arxiv.org/pdf/2405.20821v1 | 2024-05-31T14:15:44Z | 2024-05-31T14:15:44Z | Pursuing Overall Welfare in Federated Learning through Sequential
Decision Making | In traditional federated learning, a single global model cannot perform equally well for all clients. Therefore, the need to achieve the client-level fairness in federated system has been emphasized, which can be realized by modifying the static aggregation scheme for updating the global model to an adaptive one, in response to the local signals of the participating clients. Our work reveals that existing fairness-aware aggregation strategies can be unified into an online convex optimization framework, in other words, a central server's sequential decision making process. To enhance the decision making capability, we propose simple and intuitive improvements for suboptimal designs within existing methods, presenting AAggFF. Considering practical requirements, we further subdivide our method tailored for the cross-device and the cross-silo settings, respectively. Theoretical analyses guarantee sublinear regret upper bounds for both settings: $\mathcal{O}(\sqrt{T \log{K}})$ for the cross-device setting, and $\mathcal{O}(K \log{T})$ for the cross-silo setting, with $K$ clients and $T$ federation rounds. Extensive experiments demonstrate that the federated system equipped with AAggFF achieves better degree of client-level fairness than existing methods in both practical settings. Code is available at https://github.com/vaseline555/AAggFF | [
"['Seok-Ju Hahn' 'Gi-Soo Kim' 'Junghye Lee']"
] |
null | null | 2405.20824 | null | null | http://arxiv.org/pdf/2405.20824v1 | 2024-05-31T14:16:52Z | 2024-05-31T14:16:52Z | Online Convex Optimisation: The Optimal Switching Regret for all
Segmentations Simultaneously | We consider the classic problem of online convex optimisation. Whereas the notion of static regret is relevant for stationary problems, the notion of switching regret is more appropriate for non-stationary problems. A switching regret is defined relative to any segmentation of the trial sequence, and is equal to the sum of the static regrets of each segment. In this paper we show that, perhaps surprisingly, we can achieve the asymptotically optimal switching regret on every possible segmentation simultaneously. Our algorithm for doing so is very efficient: having a space and per-trial time complexity that is logarithmic in the time-horizon. Our algorithm also obtains novel bounds on its dynamic regret: being adaptive to variations in the rate of change of the comparator sequence. | [
"['Stephen Pasteris' 'Chris Hicks' 'Vasilios Mavroudis' 'Mark Herbster']"
] |
null | null | 2405.20825 | null | null | http://arxiv.org/pdf/2405.20825v1 | 2024-05-31T14:18:37Z | 2024-05-31T14:18:37Z | Analysis of clinical, dosimetric and radiomic features for predicting
local failure after stereotactic radiotherapy of brain metastases in
malignant melanoma | Background: The aim of this study was to investigate the role of clinical, dosimetric and pretherapeutic magnetic resonance imaging (MRI) features for lesion-specific outcome prediction of stereotactic radiotherapy (SRT) in patients with brain metastases from malignant melanoma (MBM). Methods: In this multicenter, retrospective analysis, we reviewed 517 MBM from 130 patients treated with SRT (single fraction or hypofractionated). For each gross tumor volume (GTV) 1576 radiomic features (RF) were calculated (788 each for the GTV and for a 3 mm margin around the GTV). Clinical parameters, radiation dose and RF from pretherapeutic contrast-enhanced T1-weighted MRI from different institutions were evaluated with a feature processing and elimination pipeline in a nested cross-validation scheme. Results: Seventy-two (72) of 517 lesions (13.9%) showed a local failure (LF) after SRT. The processing pipeline showed clinical, dosimetric and radiomic features providing information for LF prediction. The most prominent ones were the correlation of the gray level co-occurrence matrix of the margin (hazard ratio (HR): 0.37, confidence interval (CI): 0.23-0.58) and systemic therapy before SRT (HR: 0.55, CI: 0.42-0.70). The majority of RF associated with LF was calculated in the margin around the GTV. Conclusions: Pretherapeutic MRI based RF connected with lesion-specific outcome after SRT could be identified, despite multicentric data and minor differences in imaging protocols. Image data analysis of the surrounding metastatic environment may provide therapy-relevant information with the potential to further individualize radiotherapy strategies. | [
"['Nanna E. Hartong' 'Ilias Sachpazidis' 'Oliver Blanck' 'Lucas Etzel'\n 'Jan C. Peeken' 'Stephanie E. Combs' 'Horst Urbach' 'Maxim Zaitsev'\n 'Dimos Baltas' 'Ilinca Popp' 'Anca-Ligia Grosu' 'Tobias Fechter']"
] |
null | null | 2405.20829 | null | null | http://arxiv.org/pdf/2405.20829v1 | 2024-05-31T14:21:00Z | 2024-05-31T14:21:00Z | Rethinking Open-World Semi-Supervised Learning: Distribution Mismatch
and Inductive Inference | Open-world semi-supervised learning (OWSSL) extends conventional semi-supervised learning to open-world scenarios by taking account of novel categories in unlabeled datasets. Despite the recent advancements in OWSSL, the success often relies on the assumptions that 1) labeled and unlabeled datasets share the same balanced class prior distribution, which does not generally hold in real-world applications, and 2) unlabeled training datasets are utilized for evaluation, where such transductive inference might not adequately address challenges in the wild. In this paper, we aim to generalize OWSSL by addressing them. Our work suggests that practical OWSSL may require different training settings, evaluation methods, and learning strategies compared to those prevalent in the existing literature. | [
"['Seongheon Park' 'Hyuk Kwon' 'Kwanghoon Sohn' 'Kibok Lee']"
] |
null | null | 2405.20830 | null | null | http://arxiv.org/pdf/2405.20830v1 | 2024-05-31T14:21:04Z | 2024-05-31T14:21:04Z | Self-Augmented Preference Optimization: Off-Policy Paradigms for
Language Model Alignment | Traditional language model alignment methods, such as Direct Preference Optimization (DPO), are limited by their dependence on static, pre-collected paired preference data, which hampers their adaptability and practical applicability. To overcome this limitation, we introduce Self-Augmented Preference Optimization (SAPO), an effective and scalable training paradigm that does not require existing paired data. Building on the self-play concept, which autonomously generates negative responses, we further incorporate an off-policy learning pipeline to enhance data exploration and exploitation. Specifically, we employ an Exponential Moving Average (EMA) model in conjunction with a replay buffer to enable dynamic updates of response segments, effectively integrating real-time feedback with insights from historical data. Our comprehensive evaluations of the LLaMA3-8B and Mistral-7B models across benchmarks, including the Open LLM Leaderboard, IFEval, AlpacaEval 2.0, and MT-Bench, demonstrate that SAPO matches or surpasses established offline contrastive baselines, such as DPO and Odds Ratio Preference Optimization, and outperforms offline self-play methods like SPIN. Our code is available at https://github.com/yinyueqin/SAPO | [
"['Yueqin Yin' 'Zhendong Wang' 'Yujia Xie' 'Weizhu Chen' 'Mingyuan Zhou']"
] |
null | null | 2405.20835 | null | null | http://arxiv.org/pdf/2405.20835v3 | 2024-06-05T09:53:18Z | 2024-05-31T14:24:33Z | Outliers and Calibration Sets have Diminishing Effect on Quantization of
Modern LLMs | Post-Training Quantization (PTQ) enhances the efficiency of Large Language Models (LLMs) by enabling faster operation and compatibility with more accessible hardware through reduced memory usage, at the cost of small performance drops. We explore the role of calibration sets in PTQ, specifically their effect on hidden activations in various notable open-source LLMs. Calibration sets are crucial for evaluating activation magnitudes and identifying outliers, which can distort the quantization range and negatively impact performance. Our analysis reveals a marked contrast in quantization effectiveness across models. The older OPT model, upon which much of the quantization literature is based, shows significant performance deterioration and high susceptibility to outliers with varying calibration sets. In contrast, newer models like Llama-2 7B, Llama-3 8B, Command-R 35B, and Mistral 7B demonstrate strong robustness, with Mistral 7B showing near-immunity to outliers and stable activations. These findings suggest a shift in PTQ strategies might be needed. As advancements in pre-training methods reduce the relevance of outliers, there is an emerging need to reassess the fundamentals of current quantization literature. The emphasis should pivot towards optimizing inference speed, rather than primarily focusing on outlier preservation, to align with the evolving characteristics of state-of-the-art LLMs. | [
"['Davide Paglieri' 'Saurabh Dash' 'Tim Rocktäschel' 'Jack Parker-Holder']"
] |
null | null | 2405.20836 | null | null | http://arxiv.org/pdf/2405.20836v1 | 2024-05-31T14:24:39Z | 2024-05-31T14:24:39Z | Solving partial differential equations with sampled neural networks | Approximation of solutions to partial differential equations (PDE) is an important problem in computational science and engineering. Using neural networks as an ansatz for the solution has proven a challenge in terms of training time and approximation accuracy. In this contribution, we discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges. In most examples, the random sampling schemes outperform iterative, gradient-based optimization of physics-informed neural networks regarding training time and accuracy by several orders of magnitude. For time-dependent PDE, we construct neural basis functions only in the spatial domain and then solve the associated ordinary differential equation with classical methods from scientific computing over a long time horizon. This alleviates one of the greatest challenges for neural PDE solvers because it does not require us to parameterize the solution in time. For second-order elliptic PDE in Barron spaces, we prove the existence of sampled networks with $L^2$ convergence to the solution. We demonstrate our approach on several time-dependent and static PDEs. We also illustrate how sampled networks can effectively solve inverse problems in this setting. Benefits compared to common numerical schemes include spectral convergence and mesh-free construction of basis functions. | [
"['Chinmay Datar' 'Taniya Kapoor' 'Abhishek Chandra' 'Qing Sun'\n 'Iryna Burak' 'Erik Lien Bolager' 'Anna Veselovska' 'Massimo Fornasier'\n 'Felix Dietrich']"
] |
null | null | 2405.20838 | null | null | http://arxiv.org/pdf/2405.20838v1 | 2024-05-31T14:25:45Z | 2024-05-31T14:25:45Z | einspace: Searching for Neural Architectures from Fundamental Operations | Neural architecture search (NAS) finds high performing networks for a given task. Yet the results of NAS are fairly prosaic; they did not e.g. create a shift from convolutional structures to transformers. This is not least because the search spaces in NAS often aren't diverse enough to include such transformations a priori. Instead, for NAS to provide greater potential for fundamental design shifts, we need a novel expressive search space design which is built from more fundamental operations. To this end, we introduce einspace, a search space based on a parameterised probabilistic context-free grammar. Our space is versatile, supporting architectures of various sizes and complexities, while also containing diverse network operations which allow it to model convolutions, attention components and more. It contains many existing competitive architectures, and provides flexibility for discovering new ones. Using this search space, we perform experiments to find novel architectures as well as improvements on existing ones on the diverse Unseen NAS datasets. We show that competitive architectures can be obtained by searching from scratch, and we consistently find large improvements when initialising the search with strong baselines. We believe that this work is an important advancement towards a transformative NAS paradigm where search space expressivity and strategic search initialisation play key roles. | [
"['Linus Ericsson' 'Miguel Espinosa' 'Chenhongyi Yang' 'Antreas Antoniou'\n 'Amos Storkey' 'Shay B. Cohen' 'Steven McDonagh' 'Elliot J. Crowley']"
] |
null | null | 2405.20848 | null | null | http://arxiv.org/pdf/2405.20848v1 | 2024-05-31T14:32:31Z | 2024-05-31T14:32:31Z | SLIM: a Scalable Light-weight Root Cause Analysis for Imbalanced Data in
Microservice | The newly deployed service -- one kind of change service, could lead to a new type of minority fault. Existing state-of-the-art methods for fault localization rarely consider the imbalanced fault classification in change service. This paper proposes a novel method that utilizes decision rule sets to deal with highly imbalanced data by optimizing the F1 score subject to cardinality constraints. The proposed method greedily generates the rule with maximal marginal gain and uses an efficient minorize-maximization (MM) approach to select rules iteratively, maximizing a non-monotone submodular lower bound. Compared with existing fault localization algorithms, our algorithm can adapt to the imbalanced fault scenario of change service, and provide interpretable fault causes which are easy to understand and verify. Our method can also be deployed in the online training setting, with only about 15% training overhead compared to the current SOTA methods. Empirical studies showcase that our algorithm outperforms existing fault localization algorithms in both accuracy and model interpretability. | [
"['Rui Ren' 'Jingbang Yang' 'Linxiao Yang' 'Xinyue Gu' 'Liang Sun']"
] |
null | null | 2405.20860 | null | null | http://arxiv.org/pdf/2405.20860v1 | 2024-05-31T14:44:05Z | 2024-05-31T14:44:05Z | Enhancing Efficiency of Safe Reinforcement Learning via Sample
Manipulation | Safe reinforcement learning (RL) is crucial for deploying RL agents in real-world applications, as it aims to maximize long-term rewards while satisfying safety constraints. However, safe RL often suffers from sample inefficiency, requiring extensive interactions with the environment to learn a safe policy. We propose Efficient Safe Policy Optimization (ESPO), a novel approach that enhances the efficiency of safe RL through sample manipulation. ESPO employs an optimization framework with three modes: maximizing rewards, minimizing costs, and balancing the trade-off between the two. By dynamically adjusting the sampling process based on the observed conflict between reward and safety gradients, ESPO theoretically guarantees convergence, optimization stability, and improved sample complexity bounds. Experiments on the Safety-MuJoCo and Omnisafe benchmarks demonstrate that ESPO significantly outperforms existing primal-based and primal-dual-based baselines in terms of reward maximization and constraint satisfaction. Moreover, ESPO achieves substantial gains in sample efficiency, requiring 25--29% fewer samples than baselines, and reduces training time by 21--38%. | [
"['Shangding Gu' 'Laixi Shi' 'Yuhao Ding' 'Alois Knoll' 'Costas Spanos'\n 'Adam Wierman' 'Ming Jin']"
] |
null | null | 2405.20877 | null | null | http://arxiv.org/pdf/2405.20877v1 | 2024-05-31T14:52:58Z | 2024-05-31T14:52:58Z | Waveform Design for Over-the-Air Computing | In response to the increasing number of devices anticipated in next-generation networks, a shift toward over-the-air (OTA) computing has been proposed. Leveraging the superposition of multiple access channels, OTA computing enables efficient resource management by supporting simultaneous uncoded transmission in the time and the frequency domain. Thus, to advance the integration of OTA computing, our study presents a theoretical analysis addressing practical issues encountered in current digital communication transceivers, such as time sampling error and intersymbol interference (ISI). To this end, we examine the theoretical mean squared error (MSE) for OTA transmission under time sampling error and ISI, while also exploring methods for minimizing the MSE in the OTA transmission. Utilizing alternating optimization, we also derive optimal power policies for both the devices and the base station. Additionally, we propose a novel deep neural network (DNN)-based approach to design waveforms enhancing OTA transmission performance under time sampling error and ISI. To ensure fair comparison with existing waveforms like the raised cosine (RC) and the better-than-raised-cosine (BTRC), we incorporate a custom loss function integrating energy and bandwidth constraints, along with practical design considerations such as waveform symmetry. Simulation results validate our theoretical analysis and demonstrate performance gains of the designed pulse over RC and BTRC waveforms. To facilitate testing of our results without necessitating the DNN structure recreation, we provide curve fitting parameters for select DNN-based waveforms as well. | [
"['Nikos G. Evgenidis' 'Nikos A. Mitsiou' 'Sotiris A. Tegos'\n 'Panagiotis D. Diamantoulakis' 'Panagiotis Sarigiannidis'\n 'Ioannis T. Rekanos' 'George K. Karagiannidis']"
] |
null | null | 2405.20879 | null | null | http://arxiv.org/pdf/2405.20879v1 | 2024-05-31T14:54:51Z | 2024-05-31T14:54:51Z | Flow matching achieves minimax optimal convergence | Flow matching (FM) has gained significant attention as a simulation-free generative model. Unlike diffusion models, which are based on stochastic differential equations, FM employs a simpler approach by solving an ordinary differential equation with an initial condition from a normal distribution, thus streamlining the sample generation process. This paper discusses the convergence properties of FM in terms of the $p$-Wasserstein distance, a measure of distributional discrepancy. We establish that FM can achieve the minimax optimal convergence rate for $1 \leq p \leq 2$, presenting the first theoretical evidence that FM can reach convergence rates comparable to those of diffusion models. Our analysis extends existing frameworks by examining a broader class of mean and variance functions for the vector fields and identifies specific conditions necessary to attain these optimal rates. | [
"['Kenji Fukumizu' 'Taiji Suzuki' 'Noboru Isobe' 'Kazusato Oko'\n 'Masanori Koyama']"
] |
null | null | 2405.20882 | null | null | http://arxiv.org/pdf/2405.20882v1 | 2024-05-31T14:55:38Z | 2024-05-31T14:55:38Z | Sheaf HyperNetworks for Personalized Federated Learning | Graph hypernetworks (GHNs), constructed by combining graph neural networks (GNNs) with hypernetworks (HNs), leverage relational data across various domains such as neural architecture search, molecular property prediction and federated learning. Despite GNNs and HNs being individually successful, we show that GHNs present problems compromising their performance, such as over-smoothing and heterophily. Moreover, we cannot apply GHNs directly to personalized federated learning (PFL) scenarios, where a priori client relation graph may be absent, private, or inaccessible. To mitigate these limitations in the context of PFL, we propose a novel class of HNs, sheaf hypernetworks (SHNs), which combine cellular sheaf theory with HNs to improve parameter sharing for PFL. We thoroughly evaluate SHNs across diverse PFL tasks, including multi-class classification, traffic and weather forecasting. Additionally, we provide a methodology for constructing client relation graphs in scenarios where such graphs are unavailable. We show that SHNs consistently outperform existing PFL solutions in complex non-IID scenarios. While the baselines' performance fluctuates depending on the task, SHNs show improvements of up to 2.7% in accuracy and 5.3% in lower mean squared error over the best-performing baseline. | [
"['Bao Nguyen' 'Lorenzo Sani' 'Xinchi Qiu' 'Pietro Liò' 'Nicholas D. Lane']"
] |
null | null | 2405.20887 | null | null | http://arxiv.org/pdf/2405.20887v1 | 2024-05-29T13:07:21Z | 2024-05-29T13:07:21Z | On the Condition Monitoring of Bolted Joints through Acoustic Emission
and Deep Transfer Learning: Generalization, Ordinal Loss and
Super-Convergence | This paper investigates the use of deep transfer learning based on convolutional neural networks (CNNs) to monitor the condition of bolted joints using acoustic emissions. Bolted structures are critical components in many mechanical systems, and the ability to monitor their condition status is crucial for effective structural health monitoring. We evaluated the performance of our methodology using the ORION-AE benchmark, a structure composed of two thin beams connected by three bolts, where highly noisy acoustic emission measurements were taken to detect changes in the applied tightening torque of the bolts. The data used from this structure is derived from the transformation of acoustic emission data streams into images using continuous wavelet transform, and leveraging pretrained CNNs for feature extraction and denoising. Our experiments compared single-sensor versus multiple-sensor fusion for estimating the tightening level (loosening) of bolts and evaluated the use of raw versus prefiltered data on the performance. We particularly focused on the generalization capabilities of CNN-based transfer learning across different measurement campaigns and we studied ordinal loss functions to penalize incorrect predictions less severely when close to the ground truth, thereby encouraging misclassification errors to be in adjacent classes. Network configurations as well as learning rate schedulers are also investigated, and super-convergence is obtained, i.e., high classification accuracy is achieved in a few number of iterations with different networks. Furthermore, results demonstrate the generalization capabilities of CNN-based transfer learning for monitoring bolted structures by acoustic emission with varying amounts of prior information required during training. | [
"['Emmanuel Ramasso' 'Rafael de O. Teloli' 'Romain Marcel']"
] |
null | null | 2405.20905 | null | null | http://arxiv.org/pdf/2405.20905v1 | 2024-05-31T15:16:48Z | 2024-05-31T15:16:48Z | VENI, VINDy, VICI: a variational reduced-order modeling framework with
uncertainty quantification | The simulation of many complex phenomena in engineering and science requires solving expensive, high-dimensional systems of partial differential equations (PDEs). To circumvent this, reduced-order models (ROMs) have been developed to speed up computations. However, when governing equations are unknown or partially known, typically ROMs lack interpretability and reliability of the predicted solutions. In this work we present a data-driven, non-intrusive framework for building ROMs where the latent variables and dynamics are identified in an interpretable manner and uncertainty is quantified. Starting from a limited amount of high-dimensional, noisy data the proposed framework constructs an efficient ROM by leveraging variational autoencoders for dimensionality reduction along with a newly introduced, variational version of sparse identification of nonlinear dynamics (SINDy), which we refer to as Variational Identification of Nonlinear Dynamics (VINDy). In detail, the method consists of Variational Encoding of Noisy Inputs (VENI) to identify the distribution of reduced coordinates. Simultaneously, we learn the distribution of the coefficients of a pre-determined set of candidate functions by VINDy. Once trained offline, the identified model can be queried for new parameter instances and new initial conditions to compute the corresponding full-time solutions. The probabilistic setup enables uncertainty quantification as the online testing consists of Variational Inference naturally providing Certainty Intervals (VICI). In this work we showcase the effectiveness of the newly proposed VINDy method in identifying interpretable and accurate dynamical system for the Rössler system with different noise intensities and sources. Then the performance of the overall method - named VENI, VINDy, VICI - is tested on PDE benchmarks including structural mechanics and fluid dynamics. | [
"['Paolo Conti' 'Jonas Kneifl' 'Andrea Manzoni' 'Attilio Frangi'\n 'Jörg Fehr' 'Steven L. Brunton' 'J. Nathan Kutz']"
] |
null | null | 2405.20915 | null | null | http://arxiv.org/pdf/2405.20915v1 | 2024-05-31T15:21:44Z | 2024-05-31T15:21:44Z | Fast yet Safe: Early-Exiting with Risk Control | Scaling machine learning models significantly improves their performance. However, such gains come at the cost of inference being slow and resource-intensive. Early-exit neural networks (EENNs) offer a promising solution: they accelerate inference by allowing intermediate layers to exit and produce a prediction early. Yet a fundamental issue with EENNs is how to determine when to exit without severely degrading performance. In other words, when is it 'safe' for an EENN to go 'fast'? To address this issue, we investigate how to adapt frameworks of risk control to EENNs. Risk control offers a distribution-free, post-hoc solution that tunes the EENN's exiting mechanism so that exits only occur when the output is of sufficient quality. We empirically validate our insights on a range of vision and language tasks, demonstrating that risk control can produce substantial computational savings, all the while preserving user-specified performance goals. | [
"['Metod Jazbec' 'Alexander Timans' 'Tin Hadži Veljković' 'Kaspar Sakmann'\n 'Dan Zhang' 'Christian A. Naesseth' 'Eric Nalisnick']"
] |
null | null | 2405.20917 | null | null | http://arxiv.org/pdf/2405.20917v1 | 2024-05-31T15:21:53Z | 2024-05-31T15:21:53Z | Learning to Estimate System Specifications in Linear Temporal Logic
using Transformers and Mamba | Temporal logic is a framework for representing and reasoning about propositions that evolve over time. It is commonly used for specifying requirements in various domains, including hardware and software systems, as well as robotics. Specification mining or formula generation involves extracting temporal logic formulae from system traces and has numerous applications, such as detecting bugs and improving interpretability. Although there has been a surge of deep learning-based methods for temporal logic satisfiability checking in recent years, the specification mining literature has been lagging behind in adopting deep learning methods despite their many advantages, such as scalability. In this paper, we introduce autoregressive models that can generate linear temporal logic formulae from traces, towards addressing the specification mining problem. We propose multiple architectures for this task: transformer encoder-decoder, decoder-only transformer, and Mamba, which is an emerging alternative to transformer models. Additionally, we devise a metric for quantifying the distinctiveness of the generated formulae and a straightforward algorithm to enforce the syntax constraints. Our experiments show that the proposed architectures yield promising results, generating correct and distinct formulae at a fraction of the compute cost needed for the combinatorial baseline. | [
"['İlker Işık' 'Ebru Aydin Gol' 'Ramazan Gokberk Cinbis']"
] |
null | null | 2405.20933 | null | null | http://arxiv.org/pdf/2405.20933v1 | 2024-05-31T15:32:43Z | 2024-05-31T15:32:43Z | Concentration Bounds for Optimized Certainty Equivalent Risk Estimation | We consider the problem of estimating the Optimized Certainty Equivalent (OCE) risk from independent and identically distributed (i.i.d.) samples. For the classic sample average approximation (SAA) of OCE, we derive mean-squared error as well as concentration bounds (assuming sub-Gaussianity). Further, we analyze an efficient stochastic approximation-based OCE estimator, and derive finite sample bounds for the same. To show the applicability of our bounds, we consider a risk-aware bandit problem, with OCE as the risk. For this problem, we derive bound on the probability of mis-identification. Finally, we conduct numerical experiments to validate the theoretical findings. | [
"['Ayon Ghosh' 'L. A. Prashanth' 'Krishna Jagannathan']"
] |
null | null | 2405.20935 | null | null | http://arxiv.org/pdf/2405.20935v1 | 2024-05-31T15:34:13Z | 2024-05-31T15:34:13Z | Effective Interplay between Sparsity and Quantization: From Theory to
Practice | The increasing size of deep neural networks necessitates effective model compression to improve computational efficiency and reduce their memory footprint. Sparsity and quantization are two prominent compression methods that have individually demonstrated significant reduction in computational and memory footprints while preserving model accuracy. While effective, the interplay between these two methods remains an open question. In this paper, we investigate the interaction between these two methods and assess whether their combination impacts final model accuracy. We mathematically prove that applying sparsity before quantization is the optimal sequence for these operations, minimizing error in computation. Our empirical studies across a wide range of models, including OPT and Llama model families (125M-8B) and ViT corroborate these theoretical findings. In addition, through rigorous analysis, we demonstrate that sparsity and quantization are not orthogonal; their interaction can significantly harm model accuracy, with quantization error playing a dominant role in this degradation. Our findings extend to the efficient deployment of large models in resource-limited compute platforms and reduce serving cost, offering insights into best practices for applying these compression methods to maximize efficacy without compromising accuracy. | [
"['Simla Burcu Harma' 'Ayan Chakraborty' 'Elizaveta Kostenok'\n 'Danila Mishin' 'Dongho Ha' 'Babak Falsafi' 'Martin Jaggi' 'Ming Liu'\n 'Yunho Oh' 'Suvinay Subramanian' 'Amir Yazdanbakhsh']"
] |
null | null | 2405.20954 | null | null | http://arxiv.org/pdf/2405.20954v1 | 2024-05-31T15:54:01Z | 2024-05-31T15:54:01Z | Aligning Multiclass Neural Network Classifier Criterion with Task
Performance via $F_β$-Score | Multiclass neural network classifiers are typically trained using cross-entropy loss. Following training, the performance of this same neural network is evaluated using an application-specific metric based on the multiclass confusion matrix, such as the Macro $F_\beta$-Score. It is questionable whether the use of cross-entropy will yield a classifier that aligns with the intended application-specific performance criteria, particularly in scenarios where there is a need to emphasize one aspect of classifier performance. For example, if greater precision is preferred over recall, the $\beta$ value in the $F_\beta$ evaluation metric can be adjusted accordingly, but the cross-entropy objective remains unaware of this preference during training. We propose a method that addresses this training-evaluation gap for multiclass neural network classifiers such that users can train these models informed by the desired final $F_\beta$-Score. Following prior work in binary classification, we utilize the concepts of the soft-set confusion matrices and a piecewise-linear approximation of the Heaviside step function. Our method extends the $2 \times 2$ binary soft-set confusion matrix to a multiclass $d \times d$ confusion matrix and proposes dynamic adaptation of the threshold value $\tau$, which parameterizes the piecewise-linear Heaviside approximation during run-time. We present a theoretical analysis that shows that our method can be used to optimize for a soft-set based approximation of Macro-$F_\beta$ that is a consistent estimator of Macro-$F_\beta$, and our extensive experiments show the practical effectiveness of our approach. | [
"['Nathan Tsoi' 'Deyuan Li' 'Taesoo Daniel Lee' 'Marynel Vázquez']"
] |
null | null | 2405.20970 | null | null | http://arxiv.org/pdf/2405.20970v1 | 2024-05-31T16:18:06Z | 2024-05-31T16:18:06Z | PUAL: A Classifier on Trifurcate Positive-Unlabeled Data | Positive-unlabeled (PU) learning aims to train a classifier using the data containing only labeled-positive instances and unlabeled instances. However, existing PU learning methods are generally hard to achieve satisfactory performance on trifurcate data, where the positive instances distribute on both sides of the negative instances. To address this issue, firstly we propose a PU classifier with asymmetric loss (PUAL), by introducing a structure of asymmetric loss on positive instances into the objective function of the global and local learning classifier. Then we develop a kernel-based algorithm to enable PUAL to obtain non-linear decision boundary. We show that, through experiments on both simulated and real-world datasets, PUAL can achieve satisfactory classification on trifurcate data. | [
"['Xiaoke Wang' 'Xiaochen Yang' 'Rui Zhu' 'Jing-Hao Xue']"
] |
null | null | 2405.20971 | null | null | http://arxiv.org/pdf/2405.20971v1 | 2024-05-31T16:18:46Z | 2024-05-31T16:18:46Z | Amortizing intractable inference in diffusion models for vision,
language, and control | Diffusion models have emerged as effective distribution estimators in vision, language, and reinforcement learning, but their use as priors in downstream tasks poses an intractable posterior inference problem. This paper studies amortized sampling of the posterior over data, $\mathbf{x}\sim p^{\rm post}(\mathbf{x})\propto p(\mathbf{x})r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or likelihood function $r(\mathbf{x})$. We state and prove the asymptotic correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior, a problem that existing methods solve only approximately or in restricted cases. Relative trajectory balance arises from the generative flow network perspective on diffusion models, which allows the use of deep reinforcement learning techniques to improve mode coverage. Experiments illustrate the broad potential of unbiased inference of arbitrary posteriors under diffusion priors: in vision (classifier guidance), language (infilling under a discrete diffusion LLM), and multimodal data (text-to-image generation). Beyond generative modeling, we apply relative trajectory balance to the problem of continuous control with a score-based behavior prior, achieving state-of-the-art results on benchmarks in offline reinforcement learning. | [
"['Siddarth Venkatraman' 'Moksh Jain' 'Luca Scimeca' 'Minsu Kim'\n 'Marcin Sendera' 'Mohsin Hasan' 'Luke Rowe' 'Sarthak Mittal'\n 'Pablo Lemos' 'Emmanuel Bengio' 'Alexandre Adam' 'Jarrid Rector-Brooks'\n 'Yoshua Bengio' 'Glen Berseth' 'Nikolay Malkin']"
] |
null | null | 2405.20973 | null | null | http://arxiv.org/pdf/2405.20973v1 | 2024-05-31T16:21:05Z | 2024-05-31T16:21:05Z | LCQ: Low-Rank Codebook based Quantization for Large Language Models | Large language models~(LLMs) have recently demonstrated promising performance in many tasks. However, the high storage and computational cost of LLMs has become a challenge for deploying LLMs. Weight quantization has been widely used for model compression, which can reduce both storage and computational cost. Most existing weight quantization methods for LLMs use a rank-one codebook for quantization, which results in substantial accuracy loss when the compression ratio is high. In this paper, we propose a novel weight quantization method, called low-rank codebook based quantization~(LCQ), for LLMs. LCQ adopts a low-rank codebook, the rank of which can be larger than one, for quantization. Experiments show that LCQ can achieve better accuracy than existing methods with a negligibly extra storage cost. | [
"['Wen-Pu Cai' 'Wu-Jun Li']"
] |
null | null | 2405.20974 | null | null | http://arxiv.org/pdf/2405.20974v2 | 2024-06-05T17:04:01Z | 2024-05-31T16:21:16Z | SaySelf: Teaching LLMs to Express Confidence with Self-Reflective
Rationales | Large language models (LLMs) often generate inaccurate or fabricated information and generally fail to indicate their confidence, which limits their broader applications. Previous work elicits confidence from LLMs by direct or self-consistency prompting, or constructing specific datasets for supervised finetuning. The prompting-based approaches have inferior performance, and the training-based approaches are limited to binary or inaccurate group-level confidence estimates. In this work, we present the advanced SaySelf, a training framework that teaches LLMs to express more accurate fine-grained confidence estimates. In addition, beyond the confidence scores, SaySelf initiates the process of directing LLMs to produce self-reflective rationales that clearly identify gaps in their parametric knowledge and explain their uncertainty. This is achieved by using an LLM to automatically summarize the uncertainties in specific knowledge via natural language. The summarization is based on the analysis of the inconsistency in multiple sampled reasoning chains, and the resulting data is utilized for supervised fine-tuning. Moreover, we utilize reinforcement learning with a meticulously crafted reward function to calibrate the confidence estimates, motivating LLMs to deliver accurate, high-confidence predictions and to penalize overconfidence in erroneous outputs. Experimental results in both in-distribution and out-of-distribution datasets demonstrate the effectiveness of SaySelf in reducing the confidence calibration error and maintaining the task performance. We show that the generated self-reflective rationales are reasonable and can further contribute to the calibration. The code is made public at https://github.com/xu1868/SaySelf. | [
"['Tianyang Xu' 'Shujin Wu' 'Shizhe Diao' 'Xiaoze Liu' 'Xingyao Wang'\n 'Yangyi Chen' 'Jing Gao']"
] |
null | null | 2405.20975 | null | null | http://arxiv.org/pdf/2405.20975v2 | 2024-06-05T05:10:27Z | 2024-05-31T16:21:55Z | ACE: A Model Poisoning Attack on Contribution Evaluation Methods in
Federated Learning | In Federated Learning (FL), a set of clients collaboratively train a machine learning model (called global model) without sharing their local training data. The local training data of clients is typically non-i.i.d. and heterogeneous, resulting in varying contributions from individual clients to the final performance of the global model. In response, many contribution evaluation methods were proposed, where the server could evaluate the contribution made by each client and incentivize the high-contributing clients to sustain their long-term participation in FL. Existing studies mainly focus on developing new metrics or algorithms to better measure the contribution of each client. However, the security of contribution evaluation methods of FL operating in adversarial environments is largely unexplored. In this paper, we propose the first model poisoning attack on contribution evaluation methods in FL, termed ACE. Specifically, we show that any malicious client utilizing ACE could manipulate the parameters of its local model such that it is evaluated to have a high contribution by the server, even when its local training data is indeed of low quality. We perform both theoretical analysis and empirical evaluations of ACE. Theoretically, we show our design of ACE can effectively boost the malicious client's perceived contribution when the server employs the widely-used cosine distance metric to measure contribution. Empirically, our results show ACE effectively and efficiently deceive five state-of-the-art contribution evaluation methods. In addition, ACE preserves the accuracy of the final global models on testing inputs. We also explore six countermeasures to defend ACE. Our results show they are inadequate to thwart ACE, highlighting the urgent need for new defenses to safeguard the contribution evaluation methods in FL. | [
"['Zhangchen Xu' 'Fengqing Jiang' 'Luyao Niu' 'Jinyuan Jia' 'Bo Li'\n 'Radha Poovendran']"
] |
null | null | 2405.20980 | null | null | http://arxiv.org/pdf/2405.20980v1 | 2024-05-31T16:26:08Z | 2024-05-31T16:26:08Z | Neural Gaussian Scale-Space Fields | Gaussian scale spaces are a cornerstone of signal representation and processing, with applications in filtering, multiscale analysis, anti-aliasing, and many more. However, obtaining such a scale space is costly and cumbersome, in particular for continuous representations such as neural fields. We present an efficient and lightweight method to learn the fully continuous, anisotropic Gaussian scale space of an arbitrary signal. Based on Fourier feature modulation and Lipschitz bounding, our approach is trained self-supervised, i.e., training does not require any manual filtering. Our neural Gaussian scale-space fields faithfully capture multiscale representations across a broad range of modalities, and support a diverse set of applications. These include images, geometry, light-stage data, texture anti-aliasing, and multiscale optimization. | [
"['Felix Mujkanovic' 'Ntumba Elie Nsampi' 'Christian Theobalt'\n 'Hans-Peter Seidel' 'Thomas Leimkühler']"
] |
null | null | 2405.20984 | null | null | http://arxiv.org/pdf/2405.20984v1 | 2024-05-31T16:31:07Z | 2024-05-31T16:31:07Z | Bayesian Design Principles for Offline-to-Online Reinforcement Learning | Offline reinforcement learning (RL) is crucial for real-world applications where exploration can be costly or unsafe. However, offline learned policies are often suboptimal, and further online fine-tuning is required. In this paper, we tackle the fundamental dilemma of offline-to-online fine-tuning: if the agent remains pessimistic, it may fail to learn a better policy, while if it becomes optimistic directly, performance may suffer from a sudden drop. We show that Bayesian design principles are crucial in solving such a dilemma. Instead of adopting optimistic or pessimistic policies, the agent should act in a way that matches its belief in optimal policies. Such a probability-matching agent can avoid a sudden performance drop while still being guaranteed to find the optimal policy. Based on our theoretical findings, we introduce a novel algorithm that outperforms existing methods on various benchmarks, demonstrating the efficacy of our approach. Overall, the proposed approach provides a new perspective on offline-to-online RL that has the potential to enable more effective learning from offline data. | [
"['Hao Hu' 'Yiqin Yang' 'Jianing Ye' 'Chengjie Wu' 'Ziqing Mai' 'Yujing Hu'\n 'Tangjie Lv' 'Changjie Fan' 'Qianchuan Zhao' 'Chongjie Zhang']"
] |
null | null | 2405.20986 | null | null | http://arxiv.org/pdf/2405.20986v1 | 2024-05-31T16:32:46Z | 2024-05-31T16:32:46Z | Uncertainty Quantification for Bird's Eye View Semantic Segmentation:
Methods and Benchmarks | The fusion of raw features from multiple sensors on an autonomous vehicle to create a Bird's Eye View (BEV) representation is crucial for planning and control systems. There is growing interest in using deep learning models for BEV semantic segmentation. Anticipating segmentation errors and improving the explainability of DNNs is essential for autonomous driving, yet it is under-studied. This paper introduces a benchmark for predictive uncertainty quantification in BEV segmentation. The benchmark assesses various approaches across three popular datasets using two representative backbones and focuses on the effectiveness of predicted uncertainty in identifying misclassified and out-of-distribution (OOD) pixels, as well as calibration. Empirical findings highlight the challenges in uncertainty quantification. Our results find that evidential deep learning based approaches show the most promise by efficiently quantifying aleatoric and epistemic uncertainty. We propose the Uncertainty-Focal-Cross-Entropy (UFCE) loss, designed for highly imbalanced data, which consistently improves the segmentation quality and calibration. Additionally, we introduce a vacuity-scaled regularization term that enhances the model's focus on high uncertainty pixels, improving epistemic uncertainty quantification. | [
"['Linlin Yu' 'Bowen Yang' 'Tianhao Wang' 'Kangshuo Li' 'Feng Chen']"
] |
null | null | 2405.20987 | null | null | http://arxiv.org/pdf/2405.20987v1 | 2024-05-31T16:33:20Z | 2024-05-31T16:33:20Z | Early Stopping Criteria for Training Generative Adversarial Networks in
Biomedical Imaging | Generative Adversarial Networks (GANs) have high computational costs to train their complex architectures. Throughout the training process, GANs' output is analyzed qualitatively based on the loss and synthetic images' diversity and quality. Based on this qualitative analysis, training is manually halted once the desired synthetic images are generated. By utilizing an early stopping criterion, the computational cost and dependence on manual oversight can be reduced yet impacted by training problems such as mode collapse, non-convergence, and instability. This is particularly prevalent in biomedical imagery, where training problems degrade the diversity and quality of synthetic images, and the high computational cost associated with training makes complex architectures increasingly inaccessible. This work proposes a novel early stopping criteria to quantitatively detect training problems, halt training, and reduce the computational costs associated with synthesizing biomedical images. Firstly, the range of generator and discriminator loss values is investigated to assess whether mode collapse, non-convergence, and instability occur sequentially, concurrently, or interchangeably throughout the training of GANs. Secondly, utilizing these occurrences in conjunction with the Mean Structural Similarity Index (MS-SSIM) and Fréchet Inception Distance (FID) scores of synthetic images forms the basis of the proposed early stopping criteria. This work helps identify the occurrence of training problems in GANs using low-resource computational cost and reduces training time to generate diversified and high-quality synthetic images. | [
"['Muhammad Muneeb Saad' 'Mubashir Husain Rehmani' \"Ruairi O'Reilly\"]"
] |
null | null | 2405.20988 | null | null | http://arxiv.org/pdf/2405.20988v2 | 2024-06-06T09:52:16Z | 2024-05-31T16:34:11Z | Communication-Efficient Distributed Deep Learning via Federated Dynamic
Averaging | The ever-growing volume and decentralized nature of data, coupled with the need to harness this data and generate knowledge from it, have led to the extensive use of distributed deep learning (DDL) techniques for training. These techniques rely on local training that is performed at the distributed nodes based on locally collected data, followed by a periodic synchronization process that combines these models to create a global model. However, frequent synchronization of DL models, encompassing millions to many billions of parameters, creates a communication bottleneck, severely hindering scalability. Worse yet, DDL algorithms typically waste valuable bandwidth, and make themselves less practical in bandwidth-constrained federated settings, by relying on overly simplistic, periodic, and rigid synchronization schedules. These drawbacks also have a direct impact on the time required for the training process, necessitating excessive time for data communication. To address these shortcomings, we propose Federated Dynamic Averaging (FDA), a communication-efficient DDL strategy that dynamically triggers synchronization based on the value of the model variance. In essence, the costly synchronization step is triggered only if the local models, which are initialized from a common global model after each synchronization, have significantly diverged. This decision is facilitated by the communication of a small local state from each distributed node/worker. Through extensive experiments across a wide range of learning tasks, we demonstrate that FDA reduces communication cost by orders of magnitude, compared to both traditional and cutting-edge communication-efficient algorithms. Additionally, we show that FDA maintains robust performance across diverse data heterogeneity settings. | [
"['Michail Theologitis' 'Georgios Frangias' 'Georgios Anestis'\n 'Vasilis Samoladas' 'Antonios Deligiannakis']"
] |
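FDA's key idea is to trigger the costly synchronization only when local models have drifted far from the shared global model. A minimal sketch of such a variance-style trigger follows; the threshold, the squared-distance drift measure, and the dict-of-arrays model representation are illustrative assumptions rather than the paper's exact protocol (which communicates only a small local state per worker).

```python
import numpy as np

def flatten(model):
    """Concatenate all parameter arrays of a model (dict of numpy arrays)."""
    return np.concatenate([p.ravel() for p in model.values()])

def should_synchronize(local_models, global_model, threshold):
    """Trigger synchronization only when local models have diverged enough
    from the shared global model (illustrative drift/variance measure)."""
    g = flatten(global_model)
    drift = np.mean([np.sum((flatten(m) - g) ** 2) for m in local_models])
    return drift > threshold

# Toy usage with two workers and a 3-parameter "model".
g = {"w": np.zeros(3)}
workers = [{"w": np.array([0.1, 0.0, 0.2])}, {"w": np.array([0.0, 0.3, 0.1])}]
print(should_synchronize(workers, g, threshold=0.05))   # True: drift is large enough
```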
null | null | 2405.20990 | null | null | http://arxiv.org/pdf/2405.20990v1 | 2024-05-31T16:35:29Z | 2024-05-31T16:35:29Z | Locking Machine Learning Models into Hardware | Modern Machine Learning models are expensive IP and business competitiveness often depends on keeping this IP confidential. This in turn restricts how these models are deployed -- for example it is unclear how to deploy a model on-device without inevitably leaking the underlying model. At the same time, confidential computing technologies such as Multi-Party Computation or Homomorphic encryption remain impractical for wide adoption. In this paper we take a different approach and investigate the feasibility of ML-specific mechanisms that deter unauthorized model use by restricting the model to only be usable on specific hardware, making adoption on unauthorized hardware inconvenient. That way, even if IP is compromised, it cannot be trivially used without specialised hardware or major model adjustment. In a sense, we seek to enable cheap locking of machine learning models into specific hardware. We demonstrate that locking mechanisms are feasible by either targeting the efficiency of model representations, such as making models incompatible with quantisation, or by tying the model's operation to specific characteristics of hardware, such as the number of cycles for arithmetic operations. We demonstrate that locking comes with negligible work and latency overheads, while significantly restricting usability of the resultant model on unauthorized hardware. | [
"['Eleanor Clifford' 'Adhithya Saravanan' 'Harry Langford' 'Cheng Zhang'\n 'Yiren Zhao' 'Robert Mullins' 'Ilia Shumailov' 'Jamie Hayes']"
] |
null | null | 2405.20991 | null | null | http://arxiv.org/pdf/2405.20991v1 | 2024-05-31T16:35:41Z | 2024-05-31T16:35:41Z | Hard Cases Detection in Motion Prediction by Vision-Language Foundation
Models | Addressing hard cases in autonomous driving, such as anomalous road users, extreme weather conditions, and complex traffic interactions, presents significant challenges. To ensure safety, it is crucial to detect and manage these scenarios effectively for autonomous driving systems. However, the rarity and high-risk nature of these cases demand extensive, diverse datasets for training robust models. Vision-Language Foundation Models (VLMs) have shown remarkable zero-shot capabilities, as they are trained on extensive datasets. This work explores the potential of VLMs in detecting hard cases in autonomous driving. We demonstrate the capability of VLMs such as GPT-4v in detecting hard cases in traffic participant motion prediction on both agent and scenario levels. We introduce a feasible pipeline where VLMs, fed with sequential image frames with designed prompts, effectively identify challenging agents or scenarios, which are verified by existing prediction models. Moreover, by taking advantage of this detection of hard cases by VLMs, we further improve the training efficiency of the existing motion prediction pipeline by performing data selection for the training samples suggested by GPT. We show the effectiveness and feasibility of our pipeline incorporating VLMs with state-of-the-art methods on the NuScenes dataset. The code is accessible at https://github.com/KTH-RPL/Detect_VLM. | [
"['Yi Yang' 'Qingwen Zhang' 'Kei Ikemura' 'Nazre Batool' 'John Folkesson']"
] |
null | null | 2405.20993 | null | null | http://arxiv.org/pdf/2405.20993v2 | 2024-07-08T16:26:03Z | 2024-05-31T16:38:35Z | Information limits and Thouless-Anderson-Palmer equations for spiked
matrix models with structured noise | We consider a prototypical problem of Bayesian inference for a structured spiked model: a low-rank signal is corrupted by additive noise. While both information-theoretic and algorithmic limits are well understood when the noise is a Gaussian Wigner matrix, the more realistic case of structured noise still proves to be challenging. To capture the structure while maintaining mathematical tractability, a line of work has focused on rotationally invariant noise. However, existing studies either provide sub-optimal algorithms or are limited to special cases of noise ensembles. In this paper, using tools from statistical physics (replica method) and random matrix theory (generalized spherical integrals) we establish the first characterization of the information-theoretic limits for a noise matrix drawn from a general trace ensemble. Remarkably, our analysis unveils the asymptotic equivalence between the rotationally invariant model and a surrogate Gaussian one. Finally, we show how to saturate the predicted statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer (TAP) equations. | [
"['Jean Barbier' 'Francesco Camilli' 'Marco Mondelli' 'Yizhou Xu']"
] |
null | null | 2405.21003 | null | null | http://arxiv.org/abs/2405.21003v1 | 2024-05-31T16:44:40Z | 2024-05-31T16:44:40Z | Explaining Predictions by Characteristic Rules | Characteristic rules have been advocated for their ability to improve interpretability over discriminative rules within the area of rule learning. However, the former type of rule has not yet been used by techniques for explaining predictions. A novel explanation technique, called CEGA (Characteristic Explanatory General Association rules), is proposed, which employs association rule mining to aggregate multiple explanations generated by any standard local explanation technique into a set of characteristic rules. An empirical investigation is presented, in which CEGA is compared to two state-of-the-art methods, Anchors and GLocalX, for producing local and aggregated explanations in the form of discriminative rules. The results suggest that the proposed approach provides a better trade-off between fidelity and complexity compared to the two state-of-the-art approaches; CEGA and Anchors significantly outperform GLocalX with respect to fidelity, while CEGA and GLocalX significantly outperform Anchors with respect to the number of generated rules. The effect of changing the format of the explanations of CEGA to discriminative rules and using LIME and SHAP as local explanation techniques instead of Anchors are also investigated. The results show that the characteristic explanatory rules still compete favorably with rules in the standard discriminative format. The results also indicate that using CEGA in combination with either SHAP or Anchors consistently leads to a higher fidelity compared to using LIME as the local explanation technique. | [
"['Amr Alkhatib' 'Henrik Boström' 'Michalis Vazirgiannis']"
] |
null | null | 2405.21012 | null | null | http://arxiv.org/pdf/2405.21012v1 | 2024-05-31T16:52:51Z | 2024-05-31T16:52:51Z | G-Transformer for Conditional Average Potential Outcome Estimation over
Time | Estimating potential outcomes for treatments over time based on observational data is important for personalized decision-making in medicine. Yet, existing neural methods for this task suffer from either (a) bias or (b) large variance. In order to address both limitations, we introduce the G-transformer (GT). Our GT is a novel, neural end-to-end model designed for unbiased, low-variance estimation of conditional average potential outcomes (CAPOs) over time. Specifically, our GT is the first neural model to perform regression-based iterative G-computation for CAPOs in the time-varying setting. We evaluate the effectiveness of our GT across various experiments. In sum, this work represents a significant step towards personalized decision-making from electronic health records. | [
"['Konstantin Hess' 'Dennis Frauen' 'Valentyn Melnychuk'\n 'Stefan Feuerriegel']"
] |
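Regression-based iterative G-computation, which the G-transformer performs with a neural model, can be illustrated on a two-step toy problem: regress the outcome backwards in time, plugging in the intervened treatment at each step. The sketch below targets the average potential outcome for brevity (the paper estimates conditional averages), and the linear regressions and synthetic data are stand-ins, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
# Two-step toy data: covariates X1, X2, binary treatments A1, A2, outcome Y.
X1 = rng.normal(size=n)
A1 = (X1 + rng.normal(size=n) > 0).astype(float)
X2 = 0.5 * X1 + A1 + rng.normal(size=n)
A2 = (X2 + rng.normal(size=n) > 0).astype(float)
Y = X2 + 2 * A2 + A1 + rng.normal(size=n)

def g_computation(a1, a2):
    """Iterative G-computation estimate of E[Y(a1, a2)] via backward regressions."""
    # Step 2: regress Y on the full history, then evaluate at the intervened A2 = a2.
    m2 = LinearRegression().fit(np.column_stack([X1, A1, X2, A2]), Y)
    q2 = m2.predict(np.column_stack([X1, A1, X2, np.full(n, a2)]))
    # Step 1: regress the pseudo-outcome on earlier history, intervene A1 = a1.
    m1 = LinearRegression().fit(np.column_stack([X1, A1]), q2)
    q1 = m1.predict(np.column_stack([X1, np.full(n, a1)]))
    return q1.mean()

print(g_computation(1, 1) - g_computation(0, 0))   # roughly the joint effect (about 4)
```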
null | null | 2405.21018 | null | null | http://arxiv.org/pdf/2405.21018v2 | 2024-06-05T16:35:49Z | 2024-05-31T17:07:15Z | Improved Techniques for Optimization-Based Jailbreaking on Large
Language Models | Large language models (LLMs) are being rapidly developed, and a key component of their widespread deployment is their safety-related alignment. Many red-teaming efforts aim to jailbreak LLMs, and among these efforts, the Greedy Coordinate Gradient (GCG) attack's success has led to a growing interest in the study of optimization-based jailbreaking techniques. Although GCG is a significant milestone, its attacking efficiency remains unsatisfactory. In this paper, we present several improved (empirical) techniques for optimization-based jailbreaks like GCG. We first observe that the single target template of "Sure" largely limits the attacking performance of GCG; given this, we propose to apply diverse target templates containing harmful self-suggestion and/or guidance to mislead LLMs. Besides, from the optimization perspective, we propose an automatic multi-coordinate updating strategy in GCG (i.e., adaptively deciding how many tokens to replace in each step) to accelerate convergence, as well as tricks like easy-to-hard initialisation. Then, we combine these improved techniques to develop an efficient jailbreak method, dubbed I-GCG. In our experiments, we evaluate I-GCG on a series of benchmarks (such as the NeurIPS 2023 Red Teaming Track). The results demonstrate that our improved techniques can help GCG outperform state-of-the-art jailbreaking attacks and achieve a nearly 100% attack success rate. The code is released at https://github.com/jiaxiaojunQAQ/I-GCG. | [
"['Xiaojun Jia' 'Tianyu Pang' 'Chao Du' 'Yihao Huang' 'Jindong Gu'\n 'Yang Liu' 'Xiaochun Cao' 'Min Lin']"
] |
null | null | 2405.21021 | null | null | http://arxiv.org/pdf/2405.21021v1 | 2024-05-31T17:09:07Z | 2024-05-31T17:09:07Z | Beyond Conventional Parametric Modeling: Data-Driven Framework for
Estimation and Prediction of Time Activity Curves in Dynamic PET Imaging | Dynamic Positron Emission Tomography (dPET) imaging and Time-Activity Curve (TAC) analyses are essential for understanding and quantifying the biodistribution of radiopharmaceuticals over time and space. Traditional compartmental modeling, while foundational, commonly struggles to fully capture the complexities of biological systems, including non-linear dynamics and variability. This study introduces an innovative data-driven neural network-based framework, inspired by Reaction Diffusion systems, designed to address these limitations. Our approach, which adaptively fits TACs from dPET, enables the direct calibration of diffusion coefficients and reaction terms from observed data, offering significant improvements in predictive accuracy and robustness over traditional methods, especially in complex biological scenarios. By more accurately modeling the spatio-temporal dynamics of radiopharmaceuticals, our method advances modeling of pharmacokinetic and pharmacodynamic processes, enabling new possibilities in quantitative nuclear medicine. | [
"['Niloufar Zakariaei' 'Arman Rahmim' 'Eldad Haber']"
] |
null | null | 2405.21027 | null | null | http://arxiv.org/pdf/2405.21027v4 | 2024-06-21T04:28:53Z | 2024-05-31T17:16:29Z | Fusion-PSRO: Nash Policy Fusion for Policy Space Response Oracles | A popular approach for solving zero-sum games is to maintain populations of policies to approximate the Nash Equilibrium (NE). Previous studies have shown that the Policy Space Response Oracle (PSRO) algorithm is an effective multi-agent reinforcement learning framework for solving such games. However, repeatedly training new policies from scratch to approximate the Best Response (BR) to opponents' mixed policies at each iteration is both inefficient and costly. While some PSRO variants initialize a new policy by inheriting from past BR policies, this approach limits the exploration of new policies, especially against challenging opponents. To address this issue, we propose Fusion-PSRO, which employs policy fusion to initialize policies for better approximation to BR. By selecting high-quality base policies from the meta-NE, policy fusion fuses the base policies into a new policy through model averaging. This approach allows the initialized policies to incorporate multiple expert policies, making it easier to handle difficult opponents compared to inheriting from past BR policies or initializing from scratch. Moreover, our method only modifies the policy initialization phase, allowing its application to nearly all PSRO variants without additional training overhead. Our experiments on non-transitive matrix games, Leduc Poker, and the more complex Liar's Dice demonstrate that Fusion-PSRO enhances the performance of nearly all PSRO variants, achieving lower exploitability. | [
"['Jiesong Lian']"
] |
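Policy fusion as described amounts to weighted parameter averaging of selected base policies to initialize the next best-response learner. A hedged sketch in PyTorch follows; `TinyPolicy`, the weights, and the choice of base policies are placeholders, while the real method selects high-quality policies from the meta-Nash mixture.

```python
import torch
import torch.nn as nn

def fuse_policies(base_policies, weights):
    """Initialize a new policy by weighted parameter averaging of base policies.
    Illustrative sketch; all networks must share the same architecture."""
    fused = type(base_policies[0])()          # fresh instance of the same class
    fused_state = {
        k: sum(w * p.state_dict()[k].float() for p, w in zip(base_policies, weights))
        for k in base_policies[0].state_dict()
    }
    fused.load_state_dict(fused_state)
    return fused

class TinyPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
    def forward(self, obs):
        return self.net(obs)

pols = [TinyPolicy(), TinyPolicy(), TinyPolicy()]
new_policy = fuse_policies(pols, weights=[0.5, 0.3, 0.2])
print(new_policy(torch.randn(1, 4)).shape)    # torch.Size([1, 3])
```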
null | null | 2405.21036 | null | null | http://arxiv.org/pdf/2405.21036v1 | 2024-05-31T17:29:39Z | 2024-05-31T17:29:39Z | A-PETE: Adaptive Prototype Explanations of Tree Ensembles | The need for interpreting machine learning models is addressed through prototype explanations within the context of tree ensembles. An algorithm named Adaptive Prototype Explanations of Tree Ensembles (A-PETE) is proposed to automatise the selection of prototypes for these classifiers. Its unique characteristics are the use of a specialised distance measure and a modified k-medoid approach. Experiments demonstrated its competitive predictive accuracy with respect to earlier explanation algorithms. It also provides a sufficient number of prototypes for the purpose of interpreting the random forest classifier. | [
"['Jacek Karolczak' 'Jerzy Stefanowski']"
] |
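A-PETE selects prototypes with a specialised distance measure and a modified k-medoid procedure. The sketch below shows a generic greedy k-medoids selection over a precomputed distance matrix; using Euclidean distances in place of a forest-based proximity is purely an assumption for illustration.

```python
import numpy as np

def greedy_k_medoids(dist, k):
    """Pick k prototype indices that (greedily) minimize the total distance of
    every sample to its nearest prototype, given a precomputed distance matrix."""
    n = dist.shape[0]
    # First prototype: sample with the smallest total distance to all others.
    chosen = [int(np.argmin(dist.sum(axis=0)))]
    nearest = dist[:, chosen[0]].copy()
    for _ in range(k - 1):
        gains = [np.sum(nearest - np.minimum(nearest, dist[:, c]))
                 if c not in chosen else -1.0 for c in range(n)]
        best = int(np.argmax(gains))
        chosen.append(best)
        nearest = np.minimum(nearest, dist[:, best])
    return chosen

# Toy usage: Euclidean distances stand in for a forest-based proximity measure.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(greedy_k_medoids(D, k=3))
```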
null | null | 2405.21042 | null | null | http://arxiv.org/pdf/2405.21042v1 | 2024-05-31T17:33:07Z | 2024-05-31T17:33:07Z | Comparing information content of representation spaces for
disentanglement with VAE ensembles | Disentanglement is the endeavour to use machine learning to divide information about a dataset into meaningful fragments. In practice these fragments are representation (sub)spaces, often the set of channels in the latent space of a variational autoencoder (VAE). Assessments of disentanglement predominantly employ metrics that are coarse-grained at the model level, but this approach can obscure much about the process of information fragmentation. Here we propose to study the learned channels in aggregate, as the fragments of information learned by an ensemble of repeat training runs. Additionally, we depart from prior work where measures of similarity between individual subspaces neglected the nature of data embeddings as probability distributions. Instead, we view representation subspaces as communication channels that perform a soft clustering of the data; consequently, we generalize two classic information-theoretic measures of similarity between clustering assignments to compare representation spaces. We develop a lightweight method of estimation based on fingerprinting representation subspaces by their ability to distinguish dataset samples, allowing us to identify, analyze, and leverage meaningful structure in ensembles of VAEs trained on synthetic and natural datasets. Using this fully unsupervised pipeline we identify "hotspots" in the space of information fragments: groups of nearly identical representation subspaces that appear repeatedly in an ensemble of VAEs, particularly as regularization is increased. Finally, we leverage the proposed methodology to achieve ensemble learning with VAEs, boosting the information content of a set of weak learners -- a capability not possible with previous methods of assessing channel similarity. | [
"['Kieran A. Murphy' 'Sam Dillavou' 'Dani S. Bassett']"
] |
null | null | 2405.21043 | null | null | http://arxiv.org/pdf/2405.21043v1 | 2024-05-31T17:36:16Z | 2024-05-31T17:36:16Z | Target Networks and Over-parameterization Stabilize Off-policy
Bootstrapping with Function Approximation | We prove that the combination of a target network and over-parameterized linear function approximation establishes a weaker convergence condition for bootstrapped value estimation in certain cases, even with off-policy data. Our condition is naturally satisfied for expected updates over the entire state-action space or learning with a batch of complete trajectories from episodic Markov decision processes. Notably, using only a target network or an over-parameterized model does not provide such a convergence guarantee. Additionally, we extend our results to learning with truncated trajectories, showing that convergence is achievable for all tasks with minor modifications, akin to value truncation for the final states in trajectories. Our primary result focuses on temporal difference estimation for prediction, providing high-probability value estimation error bounds and empirical analysis on Baird's counterexample and a Four-room task. Furthermore, we explore the control setting, demonstrating that similar convergence conditions apply to Q-learning. | [
"['Fengdi Che' 'Chenjun Xiao' 'Jincheng Mei' 'Bo Dai' 'Ramki Gummadi'\n 'Oscar A Ramirez' 'Christopher K Harris' 'A. Rupam Mahmood'\n 'Dale Schuurmans']"
] |
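To make the mechanism concrete, here is a toy sketch of linear TD(0) with an over-parameterized feature map and a periodically synced target network, in the spirit of the setting studied above. The environment, learning rate, and sync period are arbitrary illustrative choices, not the paper's construction or its counterexamples.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_features = 5, 20          # over-parameterized: more features than states
phi = rng.normal(size=(n_states, n_features))
true_r = rng.normal(size=n_states)
gamma, alpha, target_period = 0.9, 0.05, 50

w = np.zeros(n_features)              # online weights
w_target = w.copy()                   # target-network weights, updated periodically

s = 0
for t in range(20_000):
    s_next = rng.integers(n_states)   # random (off-policy-like) transitions
    r = true_r[s]
    # Bootstrapped target uses the *frozen* target weights.
    td_target = r + gamma * phi[s_next] @ w_target
    w += alpha * (td_target - phi[s] @ w) * phi[s]
    if t % target_period == 0:
        w_target = w.copy()           # periodic target-network sync
    s = s_next

print("estimated values:", phi @ w)
```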
null | null | 2405.21045 | null | null | http://arxiv.org/pdf/2405.21045v1 | 2024-05-31T17:38:49Z | 2024-05-31T17:38:49Z | An Attention-Based Multi-Context Convolutional Encoder-Decoder Neural
Network for Work Zone Traffic Impact Prediction | Work zone is one of the major causes of non-recurrent traffic congestion and road incidents. Despite the significance of its impact, studies on predicting the traffic impact of work zones remain scarce. In this paper, we propose a data integration pipeline that enhances the utilization of work zone and traffic data from diversified platforms, and introduce a novel deep learning model to predict the traffic speed and incident likelihood during planned work zone events. The proposed model transforms traffic patterns into 2D space-time images for both model input and output and employs an attention-based multi-context convolutional encoder-decoder architecture to capture the spatial-temporal dependencies between work zone events and traffic variations. Trained and validated on four years of archived work zone traffic data from Maryland, USA, the model demonstrates superior performance over baseline models in predicting traffic speed, incident likelihood, and inferred traffic attributes such as queue length and congestion timings (i.e., start time and duration). Specifically, the proposed model outperforms the baseline models by reducing the prediction error of traffic speed by 5% to 34%, queue length by 11% to 29%, congestion timing by 6% to 17%, and increasing the accuracy of incident predictions by 5% to 7%. Consequently, this model offers substantial promise for enhancing the planning and traffic management of work zones. | [
"['Qinhua Jiang' 'Xishun Liao' 'Yaofa Gong' 'Jiaqi Ma']"
] |
null | null | 2405.21046 | null | null | http://arxiv.org/pdf/2405.21046v1 | 2024-05-31T17:39:06Z | 2024-05-31T17:39:06Z | Exploratory Preference Optimization: Harnessing Implicit
Q*-Approximation for Sample-Efficient RLHF | Reinforcement learning from human feedback (RLHF) has emerged as a central tool for language model alignment. We consider online exploration in RLHF, which exploits interactive access to human or AI feedback by deliberately encouraging the model to produce diverse, maximally informative responses. By allowing RLHF to confidently stray from the pre-trained model, online exploration offers the possibility of novel, potentially super-human capabilities, but its full potential as a paradigm for language model training has yet to be realized, owing to computational and statistical bottlenecks in directly adapting existing reinforcement learning techniques. We propose a new algorithm for online exploration in RLHF, Exploratory Preference Optimization (XPO), which is simple and practical -- a one-line change to (online) Direct Preference Optimization (DPO; Rafailov et al., 2023) -- yet enjoys the strongest known provable guarantees and promising empirical performance. XPO augments the DPO objective with a novel and principled exploration bonus, empowering the algorithm to explore outside the support of the initial model and human feedback data. In theory, we show that XPO is provably sample-efficient and converges to a near-optimal language model policy under natural exploration conditions, irrespective of whether the initial model has good coverage. Our analysis, which builds on the observation that DPO implicitly performs a form of $Q^{\star}$-approximation (or, Bellman error minimization), combines previously disparate techniques from language modeling and theoretical reinforcement learning in a serendipitous fashion through the perspective of KL-regularized Markov decision processes. Empirically, we find that XPO is more sample-efficient than non-exploratory DPO variants in a preliminary evaluation. | [
"['Tengyang Xie' 'Dylan J. Foster' 'Akshay Krishnamurthy' 'Corby Rosset'\n 'Ahmed Awadallah' 'Alexander Rakhlin']"
] |
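XPO is described as a one-line change to online DPO: the usual DPO objective plus an exploration bonus. The sketch below shows a standard DPO loss and adds a simple optimism-style bonus on freshly sampled responses; the exact form and sign of XPO's bonus are not reproduced here, so treat the `alpha * logp_sampled` term as an assumption for illustration only.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss from per-response summed log-probabilities."""
    margins = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -F.logsigmoid(margins).mean()

def xpo_style_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected,
                   logp_sampled, beta=0.1, alpha=0.01):
    """DPO objective plus an exploration-style bonus on sampled responses
    (the exact bonus in XPO differs; this is only an illustrative form)."""
    return dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta) \
           - alpha * logp_sampled.mean()

# Toy usage with fake log-probabilities for a batch of 4 preference pairs.
lc, lr = torch.randn(4), torch.randn(4)
rc, rr = torch.randn(4), torch.randn(4)
ls = torch.randn(4)
print(float(xpo_style_loss(lc, lr, rc, rr, ls)))
```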
null | null | 2405.21047 | null | null | http://arxiv.org/pdf/2405.21047v1 | 2024-05-31T17:39:15Z | 2024-05-31T17:39:15Z | Grammar-Aligned Decoding | Large Language Models (LLMs) struggle with reliably generating highly structured outputs, such as program code, mathematical formulas, or well-formed markup. Constrained decoding approaches mitigate this problem by greedily restricting what tokens an LLM can output at each step to guarantee that the output matches a given constraint. Specifically, in grammar-constrained decoding (GCD), the LLM's output must follow a given grammar. In this paper we demonstrate that GCD techniques (and in general constrained decoding techniques) can distort the LLM's distribution, leading to outputs that are grammatical but appear with likelihoods that are not proportional to the ones given by the LLM, and so ultimately are low-quality. We call the problem of aligning sampling with a grammar constraint, grammar-aligned decoding (GAD), and propose adaptive sampling with approximate expected futures (ASAp), a decoding algorithm that guarantees the output to be grammatical while provably producing outputs that match the conditional probability of the LLM's distribution conditioned on the given grammar constraint. Our algorithm uses prior sample outputs to soundly overapproximate the future grammaticality of different output prefixes. Our evaluation on code generation and structured NLP tasks shows how ASAp often produces outputs with higher likelihood (according to the LLM's distribution) than existing GCD techniques, while still enforcing the desired grammatical constraints. | [
"['Kanghee Park' 'Jiayu Wang' 'Taylor Berg-Kirkpatrick' 'Nadia Polikarpova'\n \"Loris D'Antoni\"]"
] |
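For context, the greedy grammar-constrained masking that the paper argues distorts the LM distribution looks roughly like the step below; ASAp itself (which reweights using approximate expected future grammaticality) is not shown. The token ids and the allowed set are hypothetical.

```python
import numpy as np

def constrained_greedy_step(logits, allowed_token_ids):
    """One grammar-constrained decoding step: mask every token the grammar
    forbids, then renormalize. The paper shows this greedy masking can distort
    the LM distribution, which grammar-aligned decoding corrects."""
    mask = np.full_like(logits, -np.inf)
    mask[allowed_token_ids] = 0.0
    masked = logits + mask
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

logits = np.array([2.0, 1.0, 0.5, -1.0])        # LM scores over a 4-token vocabulary
tok, probs = constrained_greedy_step(logits, allowed_token_ids=[1, 3])
print(tok, probs.round(3))                       # picks token 1
```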
null | null | 2405.21050 | null | null | http://arxiv.org/pdf/2405.21050v1 | 2024-05-31T17:43:35Z | 2024-05-31T17:43:35Z | Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models | Adapting large-scale pre-trained generative models in a parameter-efficient manner is gaining traction. Traditional methods like low rank adaptation achieve parameter efficiency by imposing constraints but may not be optimal for tasks requiring high representation capacity. We propose a novel spectrum-aware adaptation framework for generative models. Our method adjusts both singular values and their basis vectors of pretrained weights. Using the Kronecker product and efficient Stiefel optimizers, we achieve parameter-efficient adaptation of orthogonal matrices. We introduce Spectral Orthogonal Decomposition Adaptation (SODA), which balances computational efficiency and representation capacity. Extensive evaluations on text-to-image diffusion models demonstrate SODA's effectiveness, offering a spectrum-aware alternative to existing fine-tuning methods. | [
"['Xinxi Zhang' 'Song Wen' 'Ligong Han' 'Felix Juefei-Xu'\n 'Akash Srivastava' 'Junzhou Huang' 'Hao Wang' 'Molei Tao'\n 'Dimitris N. Metaxas']"
] |
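A simplified slice of spectrum-aware adaptation: decompose a pretrained weight with an SVD and fine-tune only the singular values while freezing the orthogonal factors. SODA additionally adapts the orthogonal bases with Kronecker-structured Stiefel updates, which this sketch omits; the class name and parameterization below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpectralAdapter(nn.Module):
    """Wrap a pretrained weight W = U diag(s) V^T and fine-tune only the
    singular values; U and V are kept frozen in this simplified sketch."""
    def __init__(self, pretrained_weight):
        super().__init__()
        U, s, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("Vh", Vh)
        self.log_s = nn.Parameter(s.clamp_min(1e-8).log())   # trainable spectrum

    def weight(self):
        return self.U @ torch.diag(self.log_s.exp()) @ self.Vh

    def forward(self, x):
        return x @ self.weight().T

W0 = torch.randn(64, 32)
adapter = SpectralAdapter(W0)
out = adapter(torch.randn(8, 32))
print(out.shape, sum(p.numel() for p in adapter.parameters()))  # (8, 64), 32 params
```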
null | null | 2405.21060 | null | null | http://arxiv.org/pdf/2405.21060v1 | 2024-05-31T17:50:01Z | 2024-05-31T17:50:01Z | Transformers are SSMs: Generalized Models and Efficient Algorithms
Through Structured State Space Duality | While Transformers have been the main architecture behind deep learning's success in language modeling, state-space models (SSMs) such as Mamba have recently been shown to match or outperform Transformers at small to medium scale. We show that these families of models are actually quite closely related, and develop a rich framework of theoretical connections between SSMs and variants of attention, connected through various decompositions of a well-studied class of structured semiseparable matrices. Our state space duality (SSD) framework allows us to design a new architecture (Mamba-2) whose core layer is a refinement of Mamba's selective SSM that is 2-8X faster, while continuing to be competitive with Transformers on language modeling. | [
"['Tri Dao' 'Albert Gu']"
] |
null | null | 2405.21061 | null | null | http://arxiv.org/pdf/2405.21061v2 | 2024-06-03T14:20:27Z | 2024-05-31T17:50:27Z | Graph External Attention Enhanced Transformer | The Transformer architecture has recently gained considerable attention in the field of graph representation learning, as it naturally overcomes several limitations of Graph Neural Networks (GNNs) with customized attention mechanisms or positional and structural encodings. Despite making some progress, existing works tend to overlook external information of graphs, specifically the correlation between graphs. Intuitively, graphs with similar structures should have similar representations. Therefore, we propose Graph External Attention (GEA) -- a novel attention mechanism that leverages multiple external node/edge key-value units to capture inter-graph correlations implicitly. On this basis, we design an effective architecture called Graph External Attention Enhanced Transformer (GEAET), which integrates local structure and global interaction information for more comprehensive graph representations. Extensive experiments on benchmark datasets demonstrate that GEAET achieves state-of-the-art empirical performance. The source code is available for reproducibility at: https://github.com/icm1018/GEAET. | [
"['Jianqing Liang' 'Min Chen' 'Jiye Liang']"
] |
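GEA's node- and edge-level formulation is not reproduced here; the sketch below only shows the external-attention primitive it builds on, in which learnable key/value memory units are shared across all inputs so that correlations between different samples (here, different graphs) can be captured implicitly. Dimensions and the number of memory units are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    """External attention with shared, learnable key/value memory units.
    Because the memories are shared across all inputs, they can implicitly
    pick up correlations between different samples."""
    def __init__(self, d_model, n_units=64):
        super().__init__()
        self.mk = nn.Linear(d_model, n_units, bias=False)     # external keys
        self.mv = nn.Linear(n_units, d_model, bias=False)     # external values
    def forward(self, x):                                      # x: (num_nodes, d_model)
        attn = F.softmax(self.mk(x), dim=-1)
        attn = attn / (attn.sum(dim=0, keepdim=True) + 1e-9)  # double normalization
        return self.mv(attn)

layer = ExternalAttention(d_model=32)
node_feats = torch.randn(10, 32)
print(layer(node_feats).shape)                                 # torch.Size([10, 32])
```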
null | null | 2405.21063 | null | null | http://arxiv.org/pdf/2405.21063v1 | 2024-05-31T17:51:07Z | 2024-05-31T17:51:07Z | Neural Network Verification with Branch-and-Bound for General
Nonlinearities | Branch-and-bound (BaB) is among the most effective methods for neural network (NN) verification. However, existing works on BaB have mostly focused on NNs with piecewise linear activations, especially ReLU networks. In this paper, we develop a general framework, named GenBaB, to conduct BaB for general nonlinearities in general computational graphs based on linear bound propagation. To decide which neuron to branch, we design a new branching heuristic which leverages linear bounds as shortcuts to efficiently estimate the potential improvement after branching. To decide nontrivial branching points for general nonlinear functions, we propose to optimize branching points offline, which can be efficiently leveraged during verification with a lookup table. We demonstrate the effectiveness of our GenBaB on verifying a wide range of NNs, including networks with activation functions such as Sigmoid, Tanh, Sine and GeLU, as well as networks involving multi-dimensional nonlinear operations such as multiplications in LSTMs and Vision Transformers. Our framework also allows the verification of general nonlinear computation graphs and enables verification applications beyond simple neural networks, particularly for AC Optimal Power Flow (ACOPF). GenBaB is part of the latest $\alpha,\!\beta$-CROWN, the winner of the 4th International Verification of Neural Networks Competition (VNN-COMP 2023). | [
"['Zhouxing Shi' 'Qirui Jin' 'Zico Kolter' 'Suman Jana' 'Cho-Jui Hsieh'\n 'Huan Zhang']"
] |
null | null | 2405.21064 | null | null | http://arxiv.org/pdf/2405.21064v1 | 2024-05-31T17:53:00Z | 2024-05-31T17:53:00Z | Recurrent neural networks: vanishing and exploding gradients are not the
end of the story | Recurrent neural networks (RNNs) notoriously struggle to learn long-term memories, primarily due to vanishing and exploding gradients. The recent success of state-space models (SSMs), a subclass of RNNs, to overcome such difficulties challenges our theoretical understanding. In this paper, we delve into the optimization challenges of RNNs and discover that, as the memory of a network increases, changes in its parameters result in increasingly large output variations, making gradient-based learning highly sensitive, even without exploding gradients. Our analysis further reveals the importance of the element-wise recurrence design pattern combined with careful parametrizations in mitigating this effect. This feature is present in SSMs, as well as in other architectures, such as LSTMs. Overall, our insights provide a new explanation for some of the difficulties in gradient-based learning of RNNs and why some architectures perform better than others. | [
"['Nicolas Zucchet' 'Antonio Orvieto']"
] |
null | null | 2405.21070 | null | null | http://arxiv.org/pdf/2405.21070v2 | 2024-06-14T16:42:47Z | 2024-05-31T17:57:24Z | Generalization Beyond Data Imbalance: A Controlled Study on CLIP for
Transferable Insights | Severe data imbalance naturally exists among web-scale vision-language datasets. Despite this, we find CLIP pre-trained thereupon exhibits notable robustness to the data imbalance compared to supervised learning, and demonstrates significant effectiveness in learning generalizable representations. With an aim to investigate the reasons behind this finding, we conduct controlled experiments to study various underlying factors, and reveal that CLIP's pretext task forms a dynamic classification problem wherein only a subset of classes is present in training. This isolates the bias from dominant classes and implicitly balances the learning signal. Furthermore, the robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts, which are inaccessible to supervised learning. Our study not only uncovers the mechanisms behind CLIP's generalizability beyond data imbalance but also provides transferable insights for the research community. The findings are validated in both supervised and self-supervised learning, enabling models trained on imbalanced data to achieve CLIP-level performance on diverse recognition tasks. Code and data are available at: https://github.com/CVMI-Lab/clip-beyond-tail. | [
"['Xin Wen' 'Bingchen Zhao' 'Yilun Chen' 'Jiangmiao Pang' 'Xiaojuan Qi']"
] |
null | null | 2406.00001 | null | null | http://arxiv.org/pdf/2406.00001v1 | 2024-04-22T06:35:08Z | 2024-04-22T06:35:08Z | PhyPlan: Generalizable and Rapid Physical Task Planning with Physics
Informed Skill Networks for Robot Manipulators | Given the task of positioning a ball-like object to a goal region beyond direct reach, humans can often throw, slide, or rebound objects against the wall to attain the goal. However, enabling robots to reason similarly is non-trivial. Existing methods for physical reasoning are data-hungry and struggle with complexity and uncertainty inherent in the real world. This paper presents PhyPlan, a novel physics-informed planning framework that combines physics-informed neural networks (PINNs) with modified Monte Carlo Tree Search (MCTS) to enable embodied agents to perform dynamic physical tasks. PhyPlan leverages PINNs to simulate and predict outcomes of actions in a fast and accurate manner and uses MCTS for planning. It dynamically determines whether to consult a PINN-based simulator (coarse but fast) or engage directly with the actual environment (fine but slow) to determine optimal policy. Given an unseen task, PhyPlan can infer the sequence of actions and learn the latent parameters, resulting in a generalizable approach that can rapidly learn to perform novel physical tasks. Evaluation with robots in simulated 3D environments demonstrates the ability of our approach to solve 3D-physical reasoning tasks involving the composition of dynamic skills. Quantitatively, PhyPlan excels in several aspects: (i) it achieves lower regret when learning novel tasks compared to the state-of-the-art, (ii) it expedites skill learning and enhances the speed of physical reasoning, (iii) it demonstrates higher data efficiency compared to a physics un-informed approach. | [
"['Mudit Chopra' 'Abhinav Barnawal' 'Harshil Vagadia' 'Tamajit Banerjee'\n 'Shreshth Tuli' 'Souvik Chakraborty' 'Rohan Paul']"
] |
null | null | 2406.00004 | null | null | http://arxiv.org/pdf/2406.00004v2 | 2024-06-04T03:10:54Z | 2024-05-12T04:15:05Z | Navigating the Future of Federated Recommendation Systems with
Foundation Models | In recent years, the integration of federated learning (FL) and recommendation systems (RS), known as Federated Recommendation Systems (FRS), has attracted attention for preserving user privacy by keeping private data on client devices. However, FRS faces inherent limitations such as data heterogeneity and scarcity, due to the privacy requirements of FL and the typical data sparsity issues of RSs. Models like ChatGPT are empowered by transfer learning and self-supervised learning, so they can be easily applied to downstream tasks after fine-tuning or prompting. These models, the so-called Foundation Models (FMs), focus on understanding human intent and performing their designed roles in specific tasks, and are widely recognized for producing high-quality content in the image and language domains. Thus, the achievements of FMs inspire the design of FRS and suggest a promising research direction: integrating foundation models to address the above limitations. In this study, we conduct a comprehensive review of FRSs with FMs. Specifically, we: 1) summarise the common approaches of current FRSs and FMs; 2) review the challenges posed by FRSs and FMs; 3) discuss potential future research directions; and 4) introduce some common benchmarks and evaluation metrics in the FRS field. We hope that this position paper provides the necessary background and guidance to explore this interesting and emerging topic. | [
"['Zhiwei Li' 'Guodong Long']"
] |
null | null | 2406.00013 | null | null | http://arxiv.org/pdf/2406.00013v1 | 2024-05-20T21:27:18Z | 2024-05-20T21:27:18Z | Thesis: Document Summarization with applications to Keyword extraction
and Image Retrieval | Automatic summarization is the process of reducing a text document in order to generate a summary that retains the most important points of the original document. In this work, we study two problems: i) summarizing a text document as a set of keywords/captions for image recommendation, and ii) generating an opinion summary that is a good mix of relevancy and sentiment with respect to the text document. Initially, we present our work on recommending images for enhancing a substantial amount of existing plain text news articles. We use probabilistic models and word similarity heuristics to generate captions and extract key-phrases which are re-ranked using a rank aggregation framework with a relevance feedback mechanism. We show that such rank aggregation and relevance feedback, which are typically used in document tagging and text information retrieval, also help in improving image retrieval. These queries are fed to the Yahoo Search Engine to obtain relevant images. Our proposed method is observed to perform better than all existing baselines. Additionally, we propose a set of submodular functions for opinion summarization. Opinion summarization has built in it the tasks of summarization and sentiment detection. However, it is not easy to detect sentiment and simultaneously extract a summary. The two tasks conflict in the sense that the demand for compression may drop sentiment-bearing sentences, and the demand for sentiment detection may bring in redundant sentences. However, using submodularity we show how to strike a balance between the two requirements. Our functions generate summaries such that there is good correlation between document sentiment and summary sentiment, along with a good ROUGE score. We also compare the performances of the proposed submodular functions. | [
"['Jayaprakash Sundararaj']"
] |
null | null | 2406.00024 | null | null | http://arxiv.org/pdf/2406.00024v1 | 2024-05-24T06:11:17Z | 2024-05-24T06:11:17Z | Embedding-Aligned Language Models | We propose a novel approach for training large language models (LLMs) to adhere to objectives defined within a latent embedding space. Our method leverages reinforcement learning (RL), treating a pre-trained LLM as an environment. Our embedding-aligned guided language (EAGLE) agent is trained to iteratively steer the LLM's generation towards optimal regions of the latent embedding space, w.r.t. some predefined criterion. We demonstrate the effectiveness of the EAGLE agent using the MovieLens 25M dataset to surface content gaps that satisfy latent user demand. We also demonstrate the benefit of using an optimal design of a state-dependent action set to improve EAGLE's efficiency. Our work paves the way for controlled and grounded text generation using LLMs, ensuring consistency with domain-specific knowledge and data representations. | [
"['Guy Tennenholtz' 'Yinlam Chow' 'Chih-Wei Hsu' 'Lior Shani' 'Ethan Liang'\n 'Craig Boutilier']"
] |
null | null | 2406.00027 | null | null | http://arxiv.org/pdf/2406.00027v1 | 2024-05-24T13:39:47Z | 2024-05-24T13:39:47Z | Adapting PromptORE for Modern History: Information Extraction from
Hispanic Monarchy Documents of the XVIth Century | Semantic relations among entities are a widely accepted method for relation extraction. PromptORE (Prompt-based Open Relation Extraction) was designed to improve relation extraction with Large Language Models on generalistic documents. However, it is less effective when applied to historical documents, in languages other than English. In this study, we introduce an adaptation of PromptORE to extract relations from specialized documents, namely digital transcripts of trials from the Spanish Inquisition. Our approach involves fine-tuning transformer models with their pretraining objective on the data on which they will perform inference. We refer to this process as "biasing". Our Biased PromptORE addresses complex entity placements and genderism that occur in Spanish texts. We solve these issues by prompt engineering. We evaluate our method using Encoder-like models, corroborating our findings with experts' assessments. Additionally, we evaluate the performance using a binomial classification benchmark. Our results show a substantial improvement in accuracy, up to a 50% improvement, with our Biased PromptORE models in comparison to the baseline models using standard PromptORE. | [
"['Hèctor Loopez Hidalgo' 'Michel Boeglin' 'David Kahn' 'Josiane Mothe'\n 'Diego Ortiz' 'David Panzoli']"
] |
null | null | 2406.00028 | null | null | http://arxiv.org/pdf/2406.00028v1 | 2024-05-24T14:56:36Z | 2024-05-24T14:56:36Z | Persian Homograph Disambiguation: Leveraging ParsBERT for Enhanced
Sentence Understanding with a Novel Word Disambiguation Dataset | Homograph disambiguation, the task of distinguishing words with identical spellings but different meanings, poses a substantial challenge in natural language processing. In this study, we introduce a novel dataset tailored for Persian homograph disambiguation. Our work encompasses a thorough exploration of various embeddings, evaluated through the cosine similarity method and their efficacy in downstream tasks like classification. Our investigation entails training a diverse array of lightweight machine learning and deep learning models for homograph disambiguation. We scrutinize the models' performance in terms of Accuracy, Recall, and F1 Score, thereby gaining insights into their respective strengths and limitations. The outcomes of our research underscore three key contributions. First, we present a newly curated Persian dataset, providing a solid foundation for future research in homograph disambiguation. Second, our comparative analysis of embeddings highlights their utility in different contexts, enriching the understanding of their capabilities. Third, by training and evaluating a spectrum of models, we extend valuable guidance for practitioners in selecting suitable strategies for homograph disambiguation tasks. In summary, our study unveils a new dataset, scrutinizes embeddings through diverse perspectives, and benchmarks various models for homograph disambiguation. These findings empower researchers and practitioners to navigate the intricate landscape of homograph-related challenges effectively. | [
"['Seyed Moein Ayyoubzadeh']"
] |
null | null | 2406.00030 | null | null | http://arxiv.org/pdf/2406.00030v1 | 2024-05-24T18:22:15Z | 2024-05-24T18:22:15Z | Large Language Model Pruning | In the last couple of years, ever-larger models have delivered superior performance, as both hardware and software have come to support the training of extremely large models. Application fields include text mining, among others. In particular, the success of LLMs on text understanding and text generation draws attention from researchers who have worked on NLP and related areas for years or even decades. At the same time, LLMs may suffer from problems such as model overfitting, hallucination, and device limitations, to name a few. In this work, we suggest a model pruning technique specifically focused on LLMs. The proposed methodology emphasizes the explainability of deep learning models. With this theoretical foundation, we obtain a trustworthy deep model, so that huge models with a massive number of parameters are no longer strictly necessary. A mutual information-based estimation is adopted to find neurons with redundancy to eliminate. Moreover, an estimator with well-tuned parameters provides precise estimates to guide the pruning procedure. At the same time, we also explore the difference between pruning on large-scale models vs. pruning on small-scale models. The choice of pruning criteria is sensitive in small models but not in large-scale models, which is a novel finding of this work. Overall, we demonstrate the superiority of the proposed model to the state-of-the-art models. | [
"['Hanjuan Huang' 'Hao-Jia Song' 'Hsing-Kuo Pao']"
] |
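The abstract above describes a mutual information-based estimate for finding redundant neurons. A minimal sketch of that idea follows, using a simple binned MI estimate on calibration activations; the binning, the pairwise-maximum redundancy score, and the prune-the-top-scores rule are assumptions, not the paper's tuned estimator.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def redundancy_scores(activations, n_bins=16):
    """activations: (n_samples, n_neurons) hidden activations on a calibration set.
    For each neuron, return its highest mutual information with any other neuron
    (a simple binned estimate; a high score suggests the neuron is redundant)."""
    n = activations.shape[1]
    binned = np.stack([np.digitize(activations[:, j],
                                   np.histogram_bin_edges(activations[:, j], bins=n_bins))
                       for j in range(n)], axis=1)
    scores = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                scores[i] = max(scores[i], mutual_info_score(binned[:, i], binned[:, j]))
    return scores

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 8))
acts[:, 7] = acts[:, 0] + 0.01 * rng.normal(size=500)   # neuron 7 duplicates neuron 0
scores = redundancy_scores(acts)
print("prune candidates:", np.argsort(scores)[-2:])      # likely neurons 0 and 7
```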
null | null | 2406.00031 | null | null | http://arxiv.org/pdf/2406.00031v1 | 2024-05-24T20:03:32Z | 2024-05-24T20:03:32Z | AMGPT: a Large Language Model for Contextual Querying in Additive
Manufacturing | Generalized large language models (LLMs) such as GPT-4 may not provide specific answers to queries formulated by materials science researchers. These models may produce a high-level outline but lack the capacity to return detailed instructions on manufacturing and material properties of novel alloys. Enhancing a smaller model with specialized domain knowledge may provide an advantage over large language models which cannot be retrained quickly enough to keep up with the rapid pace of research in metal additive manufacturing (AM). We introduce "AMGPT," a specialized LLM text generator designed for metal AM queries. The goal of AMGPT is to assist researchers and users in navigating the extensive corpus of literature in AM. Instead of training from scratch, we employ a pre-trained Llama2-7B model from Hugging Face in a Retrieval-Augmented Generation (RAG) setup, utilizing it to dynamically incorporate information from $\sim$50 AM papers and textbooks in PDF format. Mathpix is used to convert these PDF documents into TeX format, facilitating their integration into the RAG pipeline managed by LlamaIndex. Expert evaluations of this project highlight that specific embeddings from the RAG setup accelerate response times and maintain coherence in the generated text. | [
"['Achuth Chandrasekhar' 'Jonathan Chan' 'Francis Ogoke'\n 'Olabode Ajenifujah' 'Amir Barati Farimani']"
] |
null | null | 2406.00036 | null | null | http://arxiv.org/pdf/2406.00036v1 | 2024-05-27T10:53:15Z | 2024-05-27T10:53:15Z | EMERGE: Integrating RAG for Improved Multimodal EHR Predictive Modeling | The integration of multimodal Electronic Health Records (EHR) data has notably advanced clinical predictive capabilities. However, current models that utilize clinical notes and multivariate time-series EHR data often lack the necessary medical context for precise clinical tasks. Previous methods using knowledge graphs (KGs) primarily focus on structured knowledge extraction. To address this, we propose EMERGE, a Retrieval-Augmented Generation (RAG) driven framework aimed at enhancing multimodal EHR predictive modeling. Our approach extracts entities from both time-series data and clinical notes by prompting Large Language Models (LLMs) and aligns them with professional PrimeKG to ensure consistency. Beyond triplet relationships, we include entities' definitions and descriptions to provide richer semantics. The extracted knowledge is then used to generate task-relevant summaries of patients' health statuses. These summaries are fused with other modalities utilizing an adaptive multimodal fusion network with cross-attention. Extensive experiments on the MIMIC-III and MIMIC-IV datasets for in-hospital mortality and 30-day readmission tasks demonstrate the superior performance of the EMERGE framework compared to baseline models. Comprehensive ablation studies and analyses underscore the efficacy of each designed module and the framework's robustness to data sparsity. EMERGE significantly enhances the use of multimodal EHR data in healthcare, bridging the gap with nuanced medical contexts crucial for informed clinical predictions. | [
"['Yinghao Zhu' 'Changyu Ren' 'Zixiang Wang' 'Xiaochen Zheng' 'Shiyun Xie'\n 'Junlan Feng' 'Xi Zhu' 'Zhoujun Li' 'Liantao Ma' 'Chengwei Pan']"
] |
null | null | 2406.00044 | null | null | http://arxiv.org/pdf/2406.00044v1 | 2024-05-28T00:02:38Z | 2024-05-28T00:02:38Z | Stochastic Adversarial Networks for Multi-Domain Text Classification | Adversarial training has been instrumental in advancing multi-domain text classification (MDTC). Traditionally, MDTC methods employ a shared-private paradigm, with a shared feature extractor for domain-invariant knowledge and individual private feature extractors for domain-specific knowledge. Despite achieving state-of-the-art results, these methods grapple with the escalating model parameters due to the continuous addition of new domains. To address this challenge, we introduce the Stochastic Adversarial Network (SAN), which innovatively models the parameters of the domain-specific feature extractor as a multivariate Gaussian distribution, as opposed to a traditional weight vector. This design allows for the generation of numerous domain-specific feature extractors without a substantial increase in model parameters, maintaining the model's size on par with that of a single domain-specific extractor. Furthermore, our approach integrates domain label smoothing and robust pseudo-label regularization to fortify the stability of adversarial training and to refine feature discriminability, respectively. The performance of our SAN, evaluated on two leading MDTC benchmarks, demonstrates its competitive edge against the current state-of-the-art methodologies. The code is available at https://github.com/wangxu0820/SAN. | [
"['Xu Wang' 'Yuan Wu']"
] |
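SAN's core trick is to model the domain-specific extractor's parameters as a Gaussian distribution and sample a fresh extractor per domain instead of storing one weight vector each. A hedged sketch of that reparameterization follows; the layer sizes, activation, and initialization are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StochasticExtractor(nn.Module):
    """Feature extractor whose weight matrix follows a learned Gaussian;
    sampling W = mu + sigma * eps yields a new domain-specific extractor
    without adding a separate parameter set per domain (illustrative)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))
    def forward(self, x):
        eps = torch.randn_like(self.mu)
        W = self.mu + self.log_sigma.exp() * eps     # reparameterization trick
        return torch.relu(x @ W.T)

extractor = StochasticExtractor(d_in=300, d_out=64)
feats = extractor(torch.randn(8, 300))
print(feats.shape)    # torch.Size([8, 64])
```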
null | null | 2406.00045 | null | null | http://arxiv.org/pdf/2406.00045v1 | 2024-05-28T05:10:40Z | 2024-05-28T05:10:40Z | Personalized Steering of Large Language Models: Versatile Steering
Vectors Through Bi-directional Preference Optimization | Researchers have been studying approaches to steer the behavior of Large Language Models (LLMs) and build personalized LLMs tailored for various applications. While fine-tuning seems to be a direct solution, it requires substantial computational resources and may significantly affect the utility of the original LLM. Recent endeavors have introduced more lightweight strategies, focusing on extracting "steering vectors" to guide the model's output toward desired behaviors by adjusting activations within specific layers of the LLM's transformer architecture. However, such steering vectors are directly extracted from the activations of human preference data and thus often lead to suboptimal results and occasional failures, especially in alignment-related scenarios. This work proposes an innovative approach that could produce more effective steering vectors through bi-directional preference optimization. Our method is designed to allow steering vectors to directly influence the generation probability of contrastive human preference data pairs, thereby offering a more precise representation of the target behavior. By carefully adjusting the direction and magnitude of the steering vector, we enabled personalized control over the desired behavior across a spectrum of intensities. Extensive experimentation across various open-ended generation tasks, particularly focusing on steering AI personas, has validated the efficacy of our approach. Moreover, we comprehensively investigate critical alignment-concerning scenarios, such as managing truthfulness, mitigating hallucination, and addressing jailbreaking attacks. Remarkably, our method can still demonstrate outstanding steering effectiveness across these scenarios. Furthermore, we showcase the transferability of our steering vectors across different models/LoRAs and highlight the synergistic benefits of applying multiple vectors simultaneously. | [
"['Yuanpu Cao' 'Tianrong Zhang' 'Bochuan Cao' 'Ziyi Yin' 'Lu Lin'\n 'Fenglong Ma' 'Jinghui Chen']"
] |
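Applying a steering vector at inference time amounts to shifting a chosen layer's activations along a fixed direction with a controllable scale. The sketch below shows only that application step on a toy module via a forward hook; obtaining the vector through bi-directional preference optimization, as the paper proposes, is not shown, and the layer choice and scale are illustrative.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, d=32, vocab=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.block = nn.Linear(d, d)        # stand-in for a transformer layer
        self.head = nn.Linear(d, vocab)
    def forward(self, ids):
        return self.head(torch.tanh(self.block(self.embed(ids))))

def add_steering_hook(layer, vector, scale=1.0):
    """Shift a layer's output along a fixed steering vector; the sign and
    magnitude of `scale` control the direction and intensity of steering."""
    def hook(_module, _inputs, output):
        return output + scale * vector
    return layer.register_forward_hook(hook)

model = TinyLM()
steer = torch.randn(32)                      # in practice: an optimized vector
handle = add_steering_hook(model.block, steer, scale=2.0)
logits = model(torch.randint(0, 100, (1, 5)))
handle.remove()                              # steering can be toggled per request
print(logits.shape)
```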
null | null | 2406.00046 | null | null | http://arxiv.org/pdf/2406.00046v2 | 2024-06-11T13:18:14Z | 2024-05-28T13:09:22Z | Hate Speech Detection with Generalizable Target-aware Fairness | To counter the side effect brought by the proliferation of social media platforms, hate speech detection (HSD) plays a vital role in halting the dissemination of toxic online posts at an early stage. However, given the ubiquitous topical communities on social media, a trained HSD classifier easily becomes biased towards specific targeted groups (e.g., female and black people), where a high rate of false positive/negative results can significantly impair public trust in the fairness of content moderation mechanisms, and eventually harm the diversity of online society. Although existing fairness-aware HSD methods can smooth out some discrepancies across targeted groups, they are mostly specific to a narrow selection of targets that are assumed to be known and fixed. This inevitably prevents those methods from generalizing to real-world use cases where new targeted groups constantly emerge over time. To tackle this defect, we propose Generalizable target-aware Fairness (GetFair), a new method for fairly classifying each post that contains diverse and even unseen targets during inference. To remove the HSD classifier's spurious dependence on target-related features, GetFair trains a series of filter functions in an adversarial pipeline, so as to deceive the discriminator that recovers the targeted group from filtered post embeddings. To maintain scalability and generalizability, we innovatively parameterize all filter functions via a hypernetwork that is regularized by the semantic affinity among targets. Taking a target's pretrained word embedding as input, the hypernetwork generates the weights used by each target-specific filter on-the-fly without storing dedicated filter parameters. Finally, comparative experiments on two HSD datasets have shown advantageous performance of GetFair on out-of-sample targets. | [
"['Tong Chen' 'Danny Wang' 'Xurong Liang' 'Marten Risius'\n 'Gianluca Demartini' 'Hongzhi Yin']"
] |
null | null | 2406.00047 | null | null | http://arxiv.org/pdf/2406.00047v1 | 2024-05-28T15:42:15Z | 2024-05-28T15:42:15Z | A Theoretical Framework for an Efficient Normalizing Flow-Based Solution
to the Schrodinger Equation | A central problem in quantum mechanics involves solving the Electronic Schrodinger Equation for a molecule or material. The Variational Monte Carlo approach to this problem approximates a particular variational objective via sampling, and then optimizes this approximated objective over a chosen parameterized family of wavefunctions, known as the ansatz. Recently neural networks have been used as the ansatz, with accompanying success. However, sampling from such wavefunctions has required the use of a Markov Chain Monte Carlo approach, which is inherently inefficient. In this work, we propose a solution to this problem via an ansatz which is cheap to sample from, yet satisfies the requisite quantum mechanical properties. We prove that a normalizing flow using the following two essential ingredients satisfies our requirements: (a) a base distribution which is constructed from Determinantal Point Processes; (b) flow layers which are equivariant to a particular subgroup of the permutation group. We then show how to construct both continuous and discrete normalizing flows which satisfy the requisite equivariance. We further demonstrate the manner in which the non-smooth nature ("cusps") of the wavefunction may be captured, and how the framework may be generalized to provide induction across multiple molecules. The resulting theoretical framework entails an efficient approach to solving the Electronic Schrodinger Equation. | [
"['Daniel Freedman' 'Eyal Rozenberg' 'Alex Bronstein']"
] |
null | null | 2406.00048 | null | null | http://arxiv.org/pdf/2406.00048v1 | 2024-05-28T17:01:22Z | 2024-05-28T17:01:22Z | Towards a theory of how the structure of language is acquired by deep
neural networks | How much data is required to learn the structure of a language via next-token prediction? We study this question for synthetic datasets generated via a Probabilistic Context-Free Grammar (PCFG) -- a hierarchical generative model that captures the tree-like structure of natural languages. We determine token-token correlations analytically in our model and show that they can be used to build a representation of the grammar's hidden variables, the longer the range the deeper the variable. In addition, a finite training set limits the resolution of correlations to an effective range, whose size grows with that of the training set. As a result, a Language Model trained with increasingly many examples can build a deeper representation of the grammar's structure, thus reaching good performance despite the high dimensionality of the problem. We conjecture that the relationship between training set size and effective range of correlations holds beyond our synthetic datasets. In particular, our conjecture predicts how the scaling law for the test loss behaviour with training set size depends on the length of the context window, which we confirm empirically for a collection of lines from Shakespeare's plays. | [
"['Francesco Cagnetta' 'Matthieu Wyart']"
] |
null | null | 2406.00049 | null | null | http://arxiv.org/pdf/2406.00049v1 | 2024-05-28T17:36:06Z | 2024-05-28T17:36:06Z | QUEST: Quality-Aware Metropolis-Hastings Sampling for Machine
Translation | An important challenge in machine translation (MT) is to generate high-quality and diverse translations. Prior work has shown that the estimated likelihood from the MT model correlates poorly with translation quality. In contrast, quality evaluation metrics (such as COMET or BLEURT) exhibit high correlations with human judgments, which has motivated their use as rerankers (such as quality-aware and minimum Bayes risk decoding). However, relying on a single translation with high estimated quality increases the chances of "gaming the metric". In this paper, we address the problem of sampling a set of high-quality and diverse translations. We provide a simple and effective way to avoid over-reliance on noisy quality estimates by using them as the energy function of a Gibbs distribution. Instead of looking for a mode in the distribution, we generate multiple samples from high-density areas through the Metropolis-Hastings algorithm, a simple Markov chain Monte Carlo approach. The results show that our proposed method leads to high-quality and diverse outputs across multiple language pairs (English$\leftrightarrow${German, Russian}) with two strong decoder-only LLMs (Alma-7b, Tower-7b). | [
"['Gonçalo R. A. Faria' 'Sweta Agrawal' 'António Farinhas' 'Ricardo Rei'\n 'José G. C. de Souza' 'André F. T. Martins']"
] |
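The sampling loop described in the QUEST entry above is easy to sketch. Below is a minimal, illustrative Metropolis-Hastings sampler that treats a quality estimate as the energy of a Gibbs distribution; the `propose` and `quality` callables are hypothetical stand-ins (e.g., sampling from an MT model and scoring with COMET), and a symmetric proposal is assumed for simplicity rather than the paper's exact setup.

```python
import math
import random

def quest_style_mh(source, propose, quality, temperature=1.0, steps=100):
    """Sample translations y from a Gibbs distribution p(y) ∝ exp(quality(source, y) / T).

    `propose(source)` draws a candidate translation (e.g., sampled from an MT model)
    and `quality(source, y)` is a noisy quality estimate (e.g., a COMET score);
    both are hypothetical stand-ins. A symmetric proposal is assumed for simplicity.
    """
    current = propose(source)
    current_score = quality(source, current)
    samples = []
    for _ in range(steps):
        candidate = propose(source)
        candidate_score = quality(source, candidate)
        # Accept with probability min(1, exp((score_cand - score_cur) / T)),
        # so higher estimated quality means higher acceptance probability.
        log_accept = (candidate_score - current_score) / temperature
        if log_accept >= 0 or random.random() < math.exp(log_accept):
            current, current_score = candidate, candidate_score
        samples.append(current)
    return samples
```

Keeping all visited states (rather than only the best one) is what yields a diverse set of high-quality samples instead of a single mode.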
null | null | 2406.00053 | null | null | http://arxiv.org/pdf/2406.00053v2 | 2024-07-01T18:23:43Z | 2024-05-28T21:38:20Z | Dual Process Learning: Controlling Use of In-Context vs. In-Weights
Strategies with Weight Forgetting | Language models have the ability to perform in-context learning (ICL), allowing them to flexibly adapt their behavior based on context. This contrasts with in-weights learning, where information is statically encoded in model parameters from iterated observations of the data. Despite this apparent ability to learn in-context, language models are known to struggle when faced with unseen or rarely seen tokens. Hence, we study $\textbf{structural in-context learning}$, which we define as the ability of a model to execute in-context learning on arbitrary tokens -- so called because the model must generalize on the basis of e.g. sentence structure or task structure, rather than semantic content encoded in token embeddings. An ideal model would be able to do both: flexibly deploy in-weights operations (in order to robustly accommodate ambiguous or unknown contexts using encoded semantic information) and structural in-context operations (in order to accommodate novel tokens). We study structural in-context algorithms in a simple part-of-speech setting using both practical and toy models. We find that active forgetting, a technique that was recently introduced to help models generalize to new languages, forces models to adopt structural in-context learning solutions. Finally, we introduce $\textbf{temporary forgetting}$, a straightforward extension of active forgetting that enables one to control how much a model relies on in-weights vs. in-context solutions. Importantly, temporary forgetting allows us to induce a $\textit{dual process strategy}$ where in-context and in-weights solutions coexist within a single model. | [
"['Suraj Anand' 'Michael A. Lepori' 'Jack Merullo' 'Ellie Pavlick']"
] |
null | null | 2406.00054 | null | null | http://arxiv.org/pdf/2406.00054v1 | 2024-05-29T08:34:01Z | 2024-05-29T08:34:01Z | $ε$-Optimally Solving Zero-Sum POSGs | A recent method for solving zero-sum partially observable stochastic games (zs-POSGs) embeds the original game into a new one called the occupancy Markov game. This reformulation allows applying Bellman's principle of optimality to solve zs-POSGs. However, improving a current solution requires solving a linear program with exponentially many potential constraints, which significantly restricts the scalability of this approach. This paper exploits the optimal value function's novel uniform continuity properties to overcome this limitation. We first construct a new operator that is computationally more efficient than the state-of-the-art update rules without compromising optimality. In particular, improving a current solution now involves a linear program with an exponential drop in constraints. We then also show that point-based value iteration algorithms utilizing our findings improve the scalability of existing methods while maintaining guarantees in various domains. | [
"['Erwan Escudie' 'Matthia Sabatelli' 'Jilles Dibangoye']"
] |
null | null | 2406.00057 | null | null | http://arxiv.org/pdf/2406.00057v2 | 2024-06-04T18:01:03Z | 2024-05-29T18:19:46Z | Toward Conversational Agents with Context and Time Sensitive Long-term
Memory | There has recently been growing interest in conversational agents with long-term memory which has led to the rapid development of language models that use retrieval-augmented generation (RAG). Until recently, most work on RAG has focused on information retrieval from large databases of texts, like Wikipedia, rather than information from long-form conversations. In this paper, we argue that effective retrieval from long-form conversational data faces two unique problems compared to static database retrieval: 1) time/event-based queries, which requires the model to retrieve information about previous conversations based on time or the order of a conversational event (e.g., the third conversation on Tuesday), and 2) ambiguous queries that require surrounding conversational context to understand. To better develop RAG-based agents that can deal with these challenges, we generate a new dataset of ambiguous and time-based questions that build upon a recent dataset of long-form, simulated conversations, and demonstrate that standard RAG based approaches handle such questions poorly. We then develop a novel retrieval model which combines chained-of-table search methods, standard vector-database retrieval, and a prompting method to disambiguate queries, and demonstrate that this approach substantially improves over current methods at solving these tasks. We believe that this new dataset and more advanced RAG agent can act as a key benchmark and stepping stone towards effective memory augmented conversational agents that can be used in a wide variety of AI applications. | [
"['Nick Alonso' 'Tomás Figliolia' 'Anthony Ndirango' 'Beren Millidge']"
] |
null | null | 2406.00059 | null | null | http://arxiv.org/pdf/2406.00059v2 | 2024-06-04T19:00:36Z | 2024-05-29T21:24:15Z | Conveyor: Efficient Tool-aware LLM Serving with Tool Partial Execution | The complexity of large language model (LLM) serving workloads has substantially increased due to the integration with external tool invocations, such as ChatGPT plugins. In this paper, we identify a new opportunity for efficient LLM serving for requests that trigger tools: tool partial execution alongside LLM decoding. To this end, we design Conveyor, an efficient LLM serving system optimized for handling requests involving external tools. We introduce a novel interface for tool developers to expose partial execution opportunities to the LLM serving system and a request scheduler that facilitates partial tool execution. Our results demonstrate that tool partial execution can improve request completion latency by up to 38.8%. | [
"['Yechen Xu' 'Xinhao Kong' 'Tingjun Chen' 'Danyang Zhuo']"
] |
null | null | 2406.00060 | null | null | http://arxiv.org/pdf/2406.00060v1 | 2024-05-29T22:28:46Z | 2024-05-29T22:28:46Z | Cascade-Aware Training of Language Models | Reducing serving cost and latency is a fundamental concern for the deployment of language models (LMs) in business applications. To address this, cascades of LMs offer an effective solution that conditionally employs smaller models for simpler queries. Cascaded systems are typically built with independently trained models, neglecting the advantages of considering inference-time interactions of the cascaded LMs during training. In this paper, we present cascade-aware training (CAT), an approach to optimizing the overall quality-cost performance tradeoff of a cascade of LMs. We achieve inference-time benefits by training the small LM with awareness of its place in a cascade and downstream capabilities. We demonstrate the value of the proposed method with over 60 LM tasks of the SuperGLUE, WMT22, and FLAN2021 datasets. | [
"['Congchao Wang' 'Sean Augenstein' 'Keith Rush' 'Wittawat Jitkrittum'\n 'Harikrishna Narasimhan' 'Ankit Singh Rawat' 'Aditya Krishna Menon'\n 'Alec Go']"
] |
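The cascade mechanism the entry above optimizes for can be summarized in a few lines. This is a sketch of inference-time deferral only (CAT itself changes how the small model is trained, which is not shown); the model callables, the confidence function, and the threshold are placeholders.

```python
def cascade_generate(query, small_lm, large_lm, confidence, threshold=0.8):
    """Two-model cascade: answer with the small LM when it looks confident,
    otherwise defer to the large LM. `small_lm` / `large_lm` map a query to
    (answer, per_token_probs); `confidence` reduces the per-token probabilities
    to a scalar (e.g., their mean or minimum). All three are placeholder callables.
    """
    answer, token_probs = small_lm(query)
    if confidence(token_probs) >= threshold:
        return answer, "small"   # cheap path
    answer, _ = large_lm(query)
    return answer, "large"       # expensive fallback
```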
null | null | 2406.00061 | null | null | http://arxiv.org/pdf/2406.00061v1 | 2024-05-29T22:59:11Z | 2024-05-29T22:59:11Z | STAT: Shrinking Transformers After Training | We present STAT: a simple algorithm to prune transformer models without any fine-tuning. STAT eliminates both attention heads and neurons from the network, while preserving accuracy by calculating a correction to the weights of the next layer. Each layer block in the network is compressed using a series of principled matrix factorizations that preserve the network structure. Our entire algorithm takes minutes to compress BERT, and less than three hours to compress models with 7B parameters using a single GPU. Using only several hundred data examples, STAT preserves the output of the network and improves upon existing gradient-free pruning methods. It is even competitive with methods that include significant fine-tuning. We demonstrate our method on both encoder and decoder architectures, including BERT, DistilBERT, and Llama-2 using benchmarks such as GLUE, Squad, WikiText2. | [
"['Megan Flynn' 'Alexander Wang' 'Dean Edward Alvarez' 'Christopher De Sa'\n 'Anil Damle']"
] |
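The "correction to the weights of the next layer" idea in the STAT entry above can be illustrated with a least-squares update on calibration activations: after removing neurons, refit the downstream weights so the layer's outputs on the calibration data are preserved. This is a generic pruning-with-correction sketch in NumPy, not the paper's exact structured factorization; the random data is purely illustrative.

```python
import numpy as np

def prune_with_next_layer_correction(H, W, keep_idx):
    """Remove neurons (columns of H) not in `keep_idx` and solve a least-squares
    correction for the next layer's weights so that the output on calibration
    activations H is approximately preserved:
        minimize || H[:, keep] @ W_new - H @ W ||_F.
    """
    target = H @ W                       # original outputs on calibration data
    H_kept = H[:, keep_idx]              # activations of surviving neurons
    W_new, *_ = np.linalg.lstsq(H_kept, target, rcond=None)
    return W_new

# Tiny usage example with random calibration data (purely illustrative).
rng = np.random.default_rng(0)
H = rng.normal(size=(256, 16))
W = rng.normal(size=(16, 8))
keep = [i for i in range(16) if i % 4 != 0]   # drop every fourth neuron
W_corr = prune_with_next_layer_correction(H, W, keep)
err = np.linalg.norm(H[:, keep] @ W_corr - H @ W) / np.linalg.norm(H @ W)
print(f"relative output error after pruning: {err:.3f}")
```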
null | null | 2406.00062 | null | null | http://arxiv.org/pdf/2406.00062v1 | 2024-05-29T23:07:58Z | 2024-05-29T23:07:58Z | Unlocking the Potential of Large Language Models for Clinical Text
Anonymization: A Comparative Study | Automated clinical text anonymization has the potential to unlock the widespread sharing of textual health data for secondary usage while assuring patient privacy and safety. Despite the proposal of many complex and theoretically successful anonymization solutions in literature, these techniques remain flawed. As such, clinical institutions are still reluctant to apply them for open access to their data. Recent advances in developing Large Language Models (LLMs) pose a promising opportunity to further the field, given their capability to perform various tasks. This paper proposes six new evaluation metrics tailored to the challenges of generative anonymization with LLMs. Moreover, we present a comparative study of LLM-based methods, testing them against two baseline techniques. Our results establish LLM-based models as a reliable alternative to common approaches, paving the way toward trustworthy anonymization of clinical text. | [
"['David Pissarra' 'Isabel Curioso' 'João Alveira' 'Duarte Pereira'\n 'Bruno Ribeiro' 'Tomás Souper' 'Vasco Gomes' 'André V. Carreiro'\n 'Vitor Rolla']"
] |
null | null | 2406.00069 | null | null | http://arxiv.org/pdf/2406.00069v1 | 2024-05-30T18:21:05Z | 2024-05-30T18:21:05Z | Confidence-Aware Sub-Structure Beam Search (CABS): Mitigating
Hallucination in Structured Data Generation with Large Language Models | Large Language Models (LLMs) have facilitated structured data generation, with applications in domains like tabular data, document databases, product catalogs, etc. However, concerns persist about generation veracity due to incorrect references or hallucinations, necessitating the incorporation of some form of model confidence for mitigation. Existing confidence estimation methods on LLM generations primarily focus on the confidence at the individual token level or the entire output sequence level, limiting their applicability to structured data generation, which consists of an intricate mix of both independent and correlated entries at the sub-structure level. In this paper, we first investigate confidence estimation methods for generated sub-structure-level data. We introduce the concept of a Confidence Network that is applied to the hidden state of the LLM transformer, as a more targeted estimate than the traditional token conditional probability. We further propose Confidence-Aware sub-structure Beam Search (CABS), a novel decoding method operating at the sub-structure level in structured data generation. CABS enhances the faithfulness of structured data generation by considering confidence scores from the Confidence Network for each sub-structure-level data and iteratively refining the prompts. Results show that CABS outperforms traditional token-level beam search for structured data generation by 16.7% in Recall at 90% precision, on average, on the problem of product attribute generation. | [
"['Chengwei Wei' 'Kee Kiat Koo' 'Amir Tavanaei' 'Karim Bouyarmane']"
] |
null | null | 2406.00071 | null | null | http://arxiv.org/abs/2406.00071v1 | 2024-05-30T18:57:52Z | 2024-05-30T18:57:52Z | Optimizing Photometric Light Curve Analysis: Evaluating Scipy's Minimize
Function for Eclipse Mapping of Cataclysmic Variables | With a particular focus on Scipy's minimize function, the eclipse mapping method is thoroughly researched and implemented utilizing Python and essential libraries. Many optimization techniques are used, including Sequential Least Squares Programming (SLSQP), Nelder-Mead, and Conjugate Gradient (CG). For the purpose of examining photometric light curves, these methods are used to solve the maximum entropy equation under a chi-squared constraint. Therefore, these techniques are first evaluated on two-dimensional Gaussian data without a chi-squared restriction, and then they are used to map the accretion disc and uncover the Gaussian structure of the Cataclysmic Variable KIC 201325107. Critical analysis is performed on the code structure to find possible faults and design problems. Additionally, the analysis examines several factors impacting computing time and image quality, including the variance in Gaussian weighting, disc image resolution, number of data points in the light curve, and degree of constraint. | [
"['Anoop Kumar' 'Madan Mohan Tito Ayyalasomayajula' 'Dheerendra Panwar'\n 'Yeshwanth Vasa']"
] |
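Since the entry above centers on SciPy's `minimize`, here is a small self-contained example of the kind of constrained maximum-entropy fit it describes: maximize image entropy subject to a chi-squared constraint using SLSQP. The linear "response matrix" standing in for the eclipse geometry and all numbers are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative maximum-entropy reconstruction under a chi-squared constraint.
rng = np.random.default_rng(1)
n_pix, n_obs = 25, 40
A = rng.uniform(size=(n_obs, n_pix))           # stand-in response: image -> light curve
true_img = rng.uniform(0.1, 1.0, n_pix)
flux = A @ true_img + rng.normal(0, 0.01, n_obs)
sigma = 0.01
chi2_aim = n_obs                                # target chi-squared value

def neg_entropy(img):
    p = img / img.sum()
    return np.sum(p * np.log(p))                # minimizing this maximizes entropy

def chi2(img):
    return np.sum(((A @ img - flux) / sigma) ** 2)

res = minimize(
    neg_entropy,
    x0=np.full(n_pix, true_img.mean()),         # flat starting image
    method="SLSQP",
    bounds=[(1e-6, None)] * n_pix,              # keep pixel values positive
    constraints=[{"type": "ineq", "fun": lambda img: chi2_aim - chi2(img)}],
)
print(res.success, chi2(res.x))
```

Swapping `method` for "Nelder-Mead" or "CG" (without the constraint, as in the first evaluation step described above) is a one-line change.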
null | null | 2406.00073 | null | null | http://arxiv.org/pdf/2406.00073v1 | 2024-05-31T00:30:29Z | 2024-05-31T00:30:29Z | A Novel Review of Stability Techniques for Improved Privacy-Preserving
Machine Learning | Machine learning models have recently enjoyed a significant increase in size and popularity. However, this growth has created concerns about dataset privacy. To counteract data leakage, various privacy frameworks guarantee that the output of machine learning models does not compromise their training data. However, this privatization comes at a cost by adding random noise to the training process, which reduces model performance. By making models more resistant to small changes in input and thus more stable, the necessary amount of noise can be decreased while still protecting privacy. This paper investigates various techniques to enhance stability, thereby minimizing the negative effects of privatization in machine learning. | [
"['Coleman DuPlessie' 'Aidan Gao']"
] |
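The trade-off described above (noise added for privacy versus model performance) is easiest to see in a DP-SGD-style aggregation step, where the noise scale is tied to the per-example gradient clipping bound: a more stable, less sensitive update permits a smaller bound and therefore less noise. A minimal illustrative sketch, with privacy accounting omitted.

```python
import numpy as np

def dp_noisy_mean_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Differentially-private gradient aggregation in the DP-SGD style:
    clip each per-example gradient to `clip_norm`, average, then add Gaussian
    noise scaled to the clipping bound. Illustrative only."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped), size=mean_grad.shape)
    return mean_grad + noise
```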
null | null | 2406.00075 | null | null | http://arxiv.org/pdf/2406.00075v2 | 2024-06-12T03:40:35Z | 2024-05-31T03:01:16Z | Arbitrary-Length Generalization for Addition in a Tiny Transformer | This paper introduces a novel training methodology that enables a Transformer model to generalize the addition of two-digit numbers to numbers with unseen lengths of digits. The proposed approach employs an autoregressive generation technique, processing from right to left, which mimics a common manual method for adding large numbers. To the best of my knowledge, this methodology has not been previously explored in the literature. All results are reproducible, and the corresponding R code is available at github.com/AGPatriota/ALGA-R/. | [
"['Alexandre Galvao Patriota']"
] |
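The right-to-left generation trick in the entry above amounts to a data-formatting choice: write every number with its digits reversed so that a standard left-to-right autoregressive model emits the least-significant digit first, as in manual column addition. The exact prompt format used in the paper may differ; this is a hypothetical illustration.

```python
def format_addition_example(a: int, b: int) -> str:
    """Format an addition example with all digits reversed, so a left-to-right
    autoregressive model effectively generates the sum right-to-left
    (least-significant digit first), mirroring manual column addition."""
    rev = lambda n: str(n)[::-1]
    return f"{rev(a)}+{rev(b)}={rev(a + b)}"

print(format_addition_example(47, 85))   # '74+58=231', i.e. 47+85=132 with digits reversed
```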
null | null | 2406.00079 | null | null | http://arxiv.org/pdf/2406.00079v1 | 2024-05-31T10:41:03Z | 2024-05-31T10:41:03Z | Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence
Modeling | Recent works have shown the remarkable superiority of transformer models in reinforcement learning (RL), where the decision-making problem is formulated as sequential generation. Transformer-based agents could emerge with self-improvement in online environments by providing task contexts, such as multiple trajectories, called in-context RL. However, due to the quadratic computation complexity of attention in transformers, current in-context RL methods suffer from huge computational costs as the task horizon increases. In contrast, the Mamba model is renowned for its efficient ability to process long-term dependencies, which provides an opportunity for in-context RL to solve tasks that require long-term memory. To this end, we first implement Decision Mamba (DM) by replacing the backbone of Decision Transformer (DT). Then, we propose a Decision Mamba-Hybrid (DM-H) with the merits of transformers and Mamba in high-quality prediction and long-term memory. Specifically, DM-H first generates high-value sub-goals from long-term memory through the Mamba model. Then, we use sub-goals to prompt the transformer, establishing high-quality predictions. Experimental results demonstrate that DM-H achieves state-of-the-art in long and short-term tasks, such as D4RL, Grid World, and Tmaze benchmarks. Regarding efficiency, the online testing of DM-H in the long-term task is 28$\times$ faster than the transformer-based baselines. | [
"['Sili Huang' 'Jifeng Hu' 'Zhejian Yang' 'Liwei Yang' 'Tao Luo'\n 'Hechang Chen' 'Lichao Sun' 'Bo Yang']"
] |
null | null | 2406.00080 | null | null | http://arxiv.org/pdf/2406.00080v1 | 2024-05-31T12:04:54Z | 2024-05-31T12:04:54Z | An Efficient Multi Quantile Regression Network with Ad Hoc Prevention of
Quantile Crossing | This article presents the Sorting Composite Quantile Regression Neural Network (SCQRNN), an advanced quantile regression model designed to prevent quantile crossing and enhance computational efficiency. Integrating ad hoc sorting in training, the SCQRNN ensures non-intersecting quantiles, boosting model reliability and interpretability. We demonstrate that the SCQRNN not only prevents quantile crossing and reduces computational complexity but also achieves faster convergence than traditional models. This advancement meets the requirements of high-performance computing for sustainable, accurate computation. In organic computing, the SCQRNN enhances self-aware systems with predictive uncertainties, enriching applications across finance, meteorology, climate science, and engineering. | [
"['Jens Decke' 'Arne Jenß' 'Bernhard Sick' 'Christian Gruhl']"
] |
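The "ad hoc sorting" idea from the SCQRNN entry above can be sketched as a sort applied to the network's quantile outputs inside the composite pinball loss, which makes quantile crossing impossible by construction. A simplified PyTorch reading of the idea, not the authors' implementation.

```python
import torch

def sorted_pinball_loss(raw_outputs, target, quantiles):
    """Composite quantile (pinball) loss with an in-training sort of the network
    outputs so predicted quantiles cannot cross. `raw_outputs` has shape
    (batch, n_quantiles) and `target` has shape (batch,)."""
    preds, _ = torch.sort(raw_outputs, dim=1)                     # enforce monotone quantiles
    q = torch.tensor(quantiles, dtype=preds.dtype).unsqueeze(0)   # shape (1, n_quantiles)
    diff = target.unsqueeze(1) - preds                            # shape (batch, n_quantiles)
    loss = torch.maximum(q * diff, (q - 1.0) * diff)              # pinball loss per quantile
    return loss.mean()

# Usage sketch: outputs of a network head with three quantile units.
raw = torch.randn(32, 3, requires_grad=True)
y = torch.randn(32)
loss = sorted_pinball_loss(raw, y, [0.1, 0.5, 0.9])
loss.backward()
```

Gradients flow through the sort (it is a permutation), which is what lets the sorting be applied during training rather than only as a post-hoc fix.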
null | null | 2406.00081 | null | null | http://arxiv.org/pdf/2406.00081v1 | 2024-05-31T12:21:26Z | 2024-05-31T12:21:26Z | From Structured to Unstructured: A Comparative Analysis of Computer
Vision and Graph Models in solving Mesh-based PDEs | This article investigates the application of computer vision and graph-based models in solving mesh-based partial differential equations within high-performance computing environments. Focusing on structured, graded structured, and unstructured meshes, the study compares the performance and computational efficiency of three computer vision-based models against three graph-based models across three data-sets. The research aims to identify the most suitable models for different mesh topographies, particularly highlighting the exploration of graded meshes, a less studied area. Results demonstrate that computer vision-based models, notably U-Net, outperform the graph models in prediction performance and efficiency in two (structured and graded) out of three mesh topographies. The study also reveals the unexpected effectiveness of computer vision-based models in handling unstructured meshes, suggesting a potential shift in methodological approaches for data-driven partial differential equation learning. The article underscores deep learning as a viable and potentially sustainable way to enhance traditional high-performance computing methods, advocating for informed model selection based on the topography of the mesh. | [
"['Jens Decke' 'Olaf Wünsch' 'Bernhard Sick' 'Christian Gruhl']"
] |
null | null | 2406.00083 | null | null | http://arxiv.org/pdf/2406.00083v2 | 2024-06-06T13:38:42Z | 2024-06-03T02:25:33Z | BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of
Large Language Models | Large Language Models (LLMs) are constrained by outdated information and a tendency to generate incorrect data, commonly referred to as "hallucinations." Retrieval-Augmented Generation (RAG) addresses these limitations by combining the strengths of retrieval-based methods and generative models. This approach involves retrieving relevant information from a large, up-to-date dataset and using it to enhance the generation process, leading to more accurate and contextually appropriate responses. Despite its benefits, RAG introduces a new attack surface for LLMs, particularly because RAG databases are often sourced from public data, such as the web. In this paper, we propose TrojRAG to identify the vulnerabilities and attacks on retrieval parts (RAG database) and their indirect attacks on generative parts (LLMs). Specifically, we identify that poisoning several customized content passages could achieve a retrieval backdoor, where the retrieval works well for clean queries but always returns customized poisoned adversarial queries. Triggers and poisoned passages can be highly customized to implement various attacks. For example, a trigger could be a semantic group like "The Republican Party, Donald Trump, etc." Adversarial passages can be tailored to different contents, not only linked to the triggers but also used to indirectly attack generative LLMs without modifying them. These attacks can include denial-of-service attacks on RAG and semantic steering attacks on LLM generations conditioned by the triggers. Our experiments demonstrate that poisoning just 10 adversarial passages can induce a 98.2% success rate in retrieving the adversarial passages. Then, these passages can increase the reject ratio of RAG-based GPT-4 from 0.01% to 74.6% or increase the rate of negative responses from 0.22% to 72% for targeted queries. | [
"['Jiaqi Xue' 'Mengxin Zheng' 'Yebowen Hu' 'Fei Liu' 'Xun Chen' 'Qian Lou']"
] |
null | null | 2406.00085 | null | null | http://arxiv.org/pdf/2406.00085v2 | 2024-06-07T03:03:00Z | 2024-05-31T13:55:33Z | Augmentation-based Unsupervised Cross-Domain Functional MRI Adaptation
for Major Depressive Disorder Identification | Major depressive disorder (MDD) is a common mental disorder that typically affects a person's mood, cognition, behavior, and physical health. Resting-state functional magnetic resonance imaging (rs-fMRI) data are widely used for computer-aided diagnosis of MDD. While multi-site fMRI data can provide more data for training reliable diagnostic models, significant cross-site data heterogeneity would result in poor model generalizability. Many domain adaptation methods are designed to reduce the distributional differences between sites to some extent, but usually ignore overfitting problem of the model on the source domain. Intuitively, target data augmentation can alleviate the overfitting problem by forcing the model to learn more generalized features and reduce the dependence on source domain data. In this work, we propose a new augmentation-based unsupervised cross-domain fMRI adaptation (AUFA) framework for automatic diagnosis of MDD. The AUFA consists of 1) a graph representation learning module for extracting rs-fMRI features with spatial attention, 2) a domain adaptation module for feature alignment between source and target data, 3) an augmentation-based self-optimization module for alleviating model overfitting on the source domain, and 4) a classification module. Experimental results on 1,089 subjects suggest that AUFA outperforms several state-of-the-art methods in MDD identification. Our approach not only reduces data heterogeneity between different sites, but also localizes disease-related functional connectivity abnormalities and provides interpretability for the model. | [
"['Yunling Ma' 'Chaojun Zhang' 'Xiaochuan Wang' 'Qianqian Wang' 'Liang Cao'\n 'Limei Zhang' 'Mingxia Liu']"
] |
null | null | 2406.00092 | null | null | http://arxiv.org/pdf/2406.00092v1 | 2024-05-31T17:56:07Z | 2024-05-31T17:56:07Z | How Random is Random? Evaluating the Randomness and Humanness of LLMs'
Coin Flips | One uniquely human trait is our inability to be random. We see and produce patterns where there should not be any and we do so in a predictable way. LLMs are supplied with human data and prone to human biases. In this work, we explore how LLMs approach randomness and where and how they fail through the lens of the well-studied phenomenon of generating binary random sequences. We find that GPT 4 and Llama 3 exhibit and exacerbate nearly every human bias we test in this context, but GPT 3.5 exhibits more random behavior. This dichotomy of randomness or humanness is proposed as a fundamental question about LLMs, and either behavior may be useful in different circumstances. | [
"['Katherine Van Koevering' 'Jon Kleinberg']"
] |
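The kind of bias probed in the entry above (human-like over-alternation in "random" binary sequences) can be measured with very simple statistics once an LLM's coin flips are parsed into 0/1 values. The sequence below is a made-up example, not model output.

```python
import math

def alternation_rate(bits):
    """Fraction of adjacent pairs that differ. Truly random bits give ~0.5;
    humans (and, per the paper, several LLMs) tend to over-alternate (>0.5)."""
    switches = sum(b1 != b2 for b1, b2 in zip(bits, bits[1:]))
    return switches / (len(bits) - 1)

def runs_test_z(bits):
    """Wald-Wolfowitz runs test z-score: a large |z| indicates non-randomness."""
    n1, n2 = bits.count(1), bits.count(0)
    n = n1 + n2
    runs = 1 + sum(b1 != b2 for b1, b2 in zip(bits, bits[1:]))
    expected = 2 * n1 * n2 / n + 1
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n)) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(variance)

# Hypothetical sequence parsed from an LLM's coin-flip transcript.
seq = [1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1]
print(alternation_rate(seq), runs_test_z(seq))
```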
null | null | 2406.00093 | null | null | http://arxiv.org/pdf/2406.00093v1 | 2024-05-31T17:59:56Z | 2024-05-31T17:59:56Z | Bootstrap3D: Improving 3D Content Creation with Synthetic Data | Recent years have witnessed remarkable progress in multi-view diffusion models for 3D content creation. However, there remains a significant gap in image quality and prompt-following ability compared to 2D diffusion models. A critical bottleneck is the scarcity of high-quality 3D assets with detailed captions. To address this challenge, we propose Bootstrap3D, a novel framework that automatically generates an arbitrary quantity of multi-view images to assist in training multi-view diffusion models. Specifically, we introduce a data generation pipeline that employs (1) 2D and video diffusion models to generate multi-view images based on constructed text prompts, and (2) our fine-tuned 3D-aware MV-LLaVA for filtering high-quality data and rewriting inaccurate captions. Leveraging this pipeline, we have generated 1 million high-quality synthetic multi-view images with dense descriptive captions to address the shortage of high-quality 3D data. Furthermore, we present a Training Timestep Reschedule (TTR) strategy that leverages the denoising process to learn multi-view consistency while maintaining the original 2D diffusion prior. Extensive experiments demonstrate that Bootstrap3D can generate high-quality multi-view images with superior aesthetic quality, image-text alignment, and maintained view consistency. | [
"['Zeyi Sun' 'Tong Wu' 'Pan Zhang' 'Yuhang Zang' 'Xiaoyi Dong'\n 'Yuanjun Xiong' 'Dahua Lin' 'Jiaqi Wang']"
] |
null | null | 2406.00104 | null | null | http://arxiv.org/pdf/2406.00104v1 | 2024-05-31T18:00:12Z | 2024-05-31T18:00:12Z | Scalable Bayesian Learning with posteriors | Although theoretically compelling, Bayesian learning with modern machine learning models is computationally challenging since it requires approximating a high dimensional posterior distribution. In this work, we (i) introduce posteriors, an easily extensible PyTorch library hosting general-purpose implementations making Bayesian learning accessible and scalable to large data and parameter regimes; (ii) present a tempered framing of stochastic gradient Markov chain Monte Carlo, as implemented in posteriors, that transitions seamlessly into optimization and unveils a minor modification to deep ensembles to ensure they are asymptotically unbiased for the Bayesian posterior, and (iii) demonstrate and compare the utility of Bayesian approximations through experiments including an investigation into the cold posterior effect and applications with large language models. | [
"['Samuel Duffield' 'Kaelan Donatella' 'Johnathan Chiu' 'Phoebe Klett'\n 'Daniel Simpson']"
] |
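As a rough intuition for the "tempered" stochastic-gradient MCMC framing mentioned above: an SGLD-style update adds Gaussian noise whose scale is controlled by a temperature, and as the temperature goes to zero the update degenerates into plain (stochastic) gradient optimization. This generic sketch does not use or represent the posteriors library's actual API.

```python
import math
import torch

def sgld_step(params, grads_log_post, lr=1e-5, temperature=1.0):
    """One stochastic gradient Langevin dynamics update per parameter tensor:
        theta <- theta + (lr / 2) * grad_log_posterior + Normal(0, lr * temperature).
    With temperature -> 0 the noise term vanishes and the step reduces to
    gradient ascent on the log posterior, i.e. ordinary optimization.
    """
    with torch.no_grad():
        for p, g in zip(params, grads_log_post):
            noise = torch.randn_like(p) * math.sqrt(lr * temperature)
            p.add_(0.5 * lr * g + noise)
```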
null | null | 2406.00116 | null | null | http://arxiv.org/pdf/2406.00116v1 | 2024-05-31T18:08:35Z | 2024-05-31T18:08:35Z | A Sim2Real Approach for Identifying Task-Relevant Properties in
Interpretable Machine Learning | Existing user studies suggest that different tasks may require explanations with different properties. However, user studies are expensive. In this paper, we introduce a generalizable, cost-effective method for identifying task-relevant explanation properties in silico, which can guide the design of more expensive user studies. We use our approach to identify relevant proxies for three example tasks and validate our simulation with real user studies. | [
"['Eura Nofshin' 'Esther Brown' 'Brian Lim' 'Weiwei Pan'\n 'Finale Doshi-Velez']"
] |
null | null | 2406.00118 | null | null | http://arxiv.org/pdf/2406.00118v1 | 2024-05-31T18:20:17Z | 2024-05-31T18:20:17Z | ADEP: A Novel Approach Based on Discriminator-Enhanced Encoder-Decoder
Architecture for Accurate Prediction of Adverse Effects in Polypharmacy | Motivation: Unanticipated drug-drug interactions (DDIs) pose significant risks in polypharmacy, emphasizing the need for predictive methods. Recent advancements in computational techniques aim to address this challenge. Methods: We introduce ADEP, a novel approach integrating a discriminator and an encoder-decoder model to address data sparsity and enhance feature extraction. ADEP employs a three-part model, including multiple classification methods, to predict adverse effects in polypharmacy. Results: Evaluation on benchmark datasets shows ADEP outperforms well-known methods such as GGI-DDI, SSF-DDI, LSFC, DPSP, GNN-DDI, MSTE, MDF-SA-DDI, NNPS, DDIMDL, Random Forest, K-Nearest-Neighbor, Logistic Regression, and Decision Tree. Key metrics include Accuracy, AUROC, AUPRC, F-score, Recall, Precision, False Negatives, and False Positives. ADEP achieves more accurate predictions of adverse effects in polypharmacy. A case study with real-world data illustrates ADEP's practical application in identifying potential DDIs and preventing adverse effects. Conclusions: ADEP significantly advances the prediction of polypharmacy adverse effects, offering improved accuracy and reliability. Its innovative architecture enhances feature extraction from sparse medical data, improving medication safety and patient outcomes. Availability: Source code and datasets are available at https://github.com/m0hssn/ADEP. | [
"['Katayoun Kobraei' 'Mehrdad Baradaran' 'Seyed Mohsen Sadeghi'\n 'Raziyeh Masumshah' 'Changiz Eslahchi']"
] |
null | null | 2406.00120 | null | null | http://arxiv.org/pdf/2406.00120v2 | 2024-06-17T16:39:08Z | 2024-05-31T18:22:09Z | Reward Machines for Deep RL in Noisy and Uncertain Environments | Reward Machines provide an automata-inspired structure for specifying instructions, safety constraints, and other temporally extended reward-worthy behaviour. By exposing complex reward function structure, they enable counterfactual learning updates that have resulted in impressive sample efficiency gains. While Reward Machines have been employed in both tabular and deep RL settings, they have typically relied on a ground-truth interpretation of the domain-specific vocabulary that form the building blocks of the reward function. Such ground-truth interpretations can be elusive in many real-world settings, due in part to partial observability or noisy sensing. In this paper, we explore the use of Reward Machines for Deep RL in noisy and uncertain environments. We characterize this problem as a POMDP and propose a suite of RL algorithms that leverage task structure under uncertain interpretation of domain-specific vocabulary. Theoretical analysis exposes pitfalls in naive approaches to this problem, while experimental results show that our algorithms successfully leverage task structure to improve performance under noisy interpretations of the vocabulary. Our results provide a general framework for exploiting Reward Machines in partially observable environments. | [
"['Andrew C. Li' 'Zizhao Chen' 'Toryn Q. Klassen' 'Pashootan Vaezipoor'\n 'Rodrigo Toro Icarte' 'Sheila A. McIlraith']"
] |
null | null | 2406.00125 | null | null | http://arxiv.org/pdf/2406.00125v1 | 2024-05-31T18:32:46Z | 2024-05-31T18:32:46Z | TotalVibeSegmentator: Full Torso Segmentation for the NAKO and UK
Biobank in Volumetric Interpolated Breath-hold Examination Body Images | Objectives: To present a publicly available torso segmentation network for large epidemiology datasets on volumetric interpolated breath-hold examination (VIBE) images. Materials & Methods: We extracted preliminary segmentations from TotalSegmentator, spine, and body composition networks for VIBE images, then improved them iteratively and retrained an nnUNet network. Using subsets of NAKO (85 subjects) and UK Biobank (16 subjects), we evaluated with Dice-score on a holdout set (12 subjects) and an existing organ segmentation approach (1000 subjects), generating 71 semantic segmentation types for VIBE images. We provide an additional network that segments 22 individual vertebra types. Results: We achieved an average Dice score of 0.89 +- 0.07 over all 71 segmentation labels. We scored > 0.90 Dice-score on the abdominal organs except for the pancreas with a Dice of 0.70. Conclusion: Our work offers a detailed and refined publicly available full torso segmentation on VIBE images. | [
"['Robert Graf' 'Paul-Sören Platzek' 'Evamaria Olga Riedel'\n 'Constanze Ramschütz' 'Sophie Starck' 'Hendrik Kristian Möller'\n 'Matan Atad' 'Henry Völzke' 'Robin Bülow' 'Carsten Oliver Schmidt'\n 'Julia Rüdebusch' 'Matthias Jung' 'Marco Reisert' 'Jakob Weiss'\n 'Maximilian Löffler' 'Fabian Bamberg' 'Bene Wiestler'\n 'Johannes C. Paetzold' 'Daniel Rueckert' 'Jan Stefan Kirschke']"
] |
null | null | 2406.00127 | null | null | http://arxiv.org/pdf/2406.00127v1 | 2024-05-31T18:37:06Z | 2024-05-31T18:37:06Z | Training on the Edge of Stability Is Caused by Layerwise Jacobian
Alignment | During neural network training, the sharpness of the Hessian matrix of the training loss rises until training is on the edge of stability. As a result, even nonstochastic gradient descent does not accurately model the underlying dynamical system defined by the gradient flow of the training loss. We use an exponential Euler solver to train the network without entering the edge of stability, so that we accurately approximate the true gradient descent dynamics. We demonstrate experimentally that the increase in the sharpness of the Hessian matrix is caused by the layerwise Jacobian matrices of the network becoming aligned, so that a small change in the network preactivations near the inputs of the network can cause a large change in the outputs of the network. We further demonstrate that the degree of alignment scales with the size of the dataset by a power law with a coefficient of determination between 0.74 and 0.98. | [
"['Mark Lowell' 'Catharine Kastner']"
] |
null | null | 2406.00131 | null | null | http://arxiv.org/pdf/2406.00131v1 | 2024-05-31T18:46:06Z | 2024-05-31T18:46:06Z | How In-Context Learning Emerges from Training on Unstructured Data: On
the Role of Co-Occurrence, Positional Information, and Noise Structures | Large language models (LLMs) like transformers have impressive in-context learning (ICL) capabilities; they can generate predictions for new queries based on input-output sequences in prompts without parameter updates. While many theories have attempted to explain ICL, they often focus on structured training data similar to ICL tasks, such as regression. In practice, however, these models are trained in an unsupervised manner on unstructured text data, which bears little resemblance to ICL tasks. To this end, we investigate how ICL emerges from unsupervised training on unstructured data. The key observation is that ICL can arise simply by modeling co-occurrence information using classical language models like continuous bag of words (CBOW), which we theoretically prove and empirically validate. Furthermore, we establish the necessity of positional information and noise structure to generalize ICL to unseen data. Finally, we present instances where ICL fails and provide theoretical explanations; they suggest that the ICL ability of LLMs to identify certain tasks can be sensitive to the structure of the training data. | [
"['Kevin Christian Wibisono' 'Yixin Wang']"
] |
null | null | 2406.00132 | null | null | http://arxiv.org/pdf/2406.00132v1 | 2024-05-31T18:47:30Z | 2024-05-31T18:47:30Z | QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed
Tensor Adaptation | We propose Quantum-informed Tensor Adaptation (QuanTA), a novel, easy-to-implement, fine-tuning method with no inference overhead for large-scale pre-trained language models. By leveraging quantum-inspired methods derived from quantum circuit structures, QuanTA enables efficient high-rank fine-tuning, surpassing the limitations of Low-Rank Adaptation (LoRA)--low-rank approximation may fail for complicated downstream tasks. Our approach is theoretically supported by the universality theorem and the rank representation theorem to achieve efficient high-rank adaptations. Experiments demonstrate that QuanTA significantly enhances commonsense reasoning, arithmetic reasoning, and scalability compared to traditional methods. Furthermore, QuanTA shows superior performance with fewer trainable parameters compared to other approaches and can be designed to integrate with existing fine-tuning algorithms for further improvement, providing a scalable and efficient solution for fine-tuning large language models and advancing state-of-the-art in natural language processing. | [
"['Zhuo Chen' 'Rumen Dangovski' 'Charlotte Loh' 'Owen Dugan' 'Di Luo'\n 'Marin Soljačić']"
] |
null | null | 2406.00133 | null | null | http://arxiv.org/pdf/2406.00133v1 | 2024-05-31T18:53:53Z | 2024-05-31T18:53:53Z | Streamflow Prediction with Uncertainty Quantification for Water
Management: A Constrained Reasoning and Learning Approach | Predicting the spatiotemporal variation in streamflow along with uncertainty quantification enables decision-making for sustainable management of scarce water resources. Process-based hydrological models (aka physics-based models) are based on physical laws, but using simplifying assumptions which can lead to poor accuracy. Data-driven approaches offer a powerful alternative, but they require large amount of training data and tend to produce predictions that are inconsistent with physical laws. This paper studies a constrained reasoning and learning (CRL) approach where physical laws represented as logical constraints are integrated as a layer in the deep neural network. To address small data setting, we develop a theoretically-grounded training approach to improve the generalization accuracy of deep models. For uncertainty quantification, we combine the synergistic strengths of Gaussian processes (GPs) and deep temporal models (i.e., deep models for time-series forecasting) by passing the learned latent representation as input to a standard distance-based kernel. Experiments on multiple real-world datasets demonstrate the effectiveness of both CRL and GP with deep kernel approaches over strong baseline methods. | [
"['Mohammed Amine Gharsallaoui' 'Bhupinderjeet Singh' 'Supriya Savalkar'\n 'Aryan Deshwal' 'Yan Yan' 'Ananth Kalyanaraman' 'Kirti Rajagopalan'\n 'Janardhan Rao Doppa']"
] |
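The uncertainty-quantification component described above ("passing the learned latent representation as input to a standard distance-based kernel") boils down to running GP regression on features produced by a deep encoder. Below is a minimal NumPy sketch of that GP head with an RBF kernel; the latent features are assumed to come from some already-trained temporal model, which is not shown, and the hyperparameters are illustrative.

```python
import numpy as np

def deep_kernel_gp_predict(z_train, y_train, z_test, lengthscale=1.0, noise=0.1):
    """Exact GP regression applied to latent features z (outputs of a deep
    temporal encoder). Returns the posterior predictive mean and variance."""
    def rbf(a, b):
        sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * sq_dists / lengthscale ** 2)

    K = rbf(z_train, z_train) + noise ** 2 * np.eye(len(z_train))
    K_star = rbf(z_test, z_train)
    alpha = np.linalg.solve(K, y_train)
    mean = K_star @ alpha
    # Predictive variance: k(x*, x*) - k_* K^{-1} k_*^T (diagonal); the RBF kernel has k(x, x) = 1.
    v = np.linalg.solve(K, K_star.T)
    var = 1.0 - np.einsum('ij,ji->i', K_star, v) + noise ** 2
    return mean, var
```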
null | null | 2406.00134 | null | null | http://arxiv.org/abs/2406.00134v1 | 2024-05-31T18:54:00Z | 2024-05-31T18:54:00Z | Anomaly Detection in Dynamic Graphs: A Comprehensive Survey | This survey paper presents a comprehensive and conceptual overview of anomaly detection using dynamic graphs. We focus on existing graph-based anomaly detection (AD) techniques and their applications to dynamic networks. The contributions of this survey paper include the following: i) a comparative study of existing surveys on anomaly detection; ii) a Dynamic Graph-based Anomaly Detection (DGAD) review framework in which approaches for detecting anomalies in dynamic graphs are grouped based on traditional machine-learning models, matrix transformations, probabilistic approaches, and deep-learning approaches; iii) a discussion of graphically representing both discrete and dynamic networks; and iv) a discussion of the advantages of graph-based techniques for capturing the relational structure and complex interactions in dynamic graph data. Finally, this work identifies the potential challenges and future directions for detecting anomalies in dynamic networks. This DGAD survey approach aims to provide a valuable resource for researchers and practitioners by summarizing the strengths and limitations of each approach, highlighting current research trends, and identifying open challenges. In doing so, it can guide future research efforts and promote advancements in anomaly detection in dynamic graphs. Keywords: Graphs, Anomaly Detection, dynamic networks, Graph Neural Networks (GNN), Node anomaly, Graph mining. | [
"['Ocheme Anthony Ekle' 'William Eberle']"
] |