categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---
null | null | 2407.01749 | null | null | http://arxiv.org/pdf/2407.01749v1 | 2024-07-01T19:27:28Z | 2024-07-01T19:27:28Z | Invariant Correlation of Representation with Label | The Invariant Risk Minimization (IRM) approach aims to address the challenge of domain generalization by training a feature representation that remains invariant across multiple environments. However, in noisy environments, IRM-related techniques such as IRMv1 and VREx may be unable to achieve the optimal IRM solution, primarily due to erroneous optimization directions. To address this issue, we introduce ICorr (an abbreviation for \textbf{I}nvariant \textbf{Corr}elation), a novel approach designed to surmount the above challenge in noisy settings. Additionally, we dig into a case study to analyze why previous methods may lose ground while ICorr can succeed. Through a theoretical lens, particularly from a causality perspective, we illustrate that the invariant correlation of representation with label is a necessary condition for the optimal invariant predictor in noisy environments, whereas the optimization motivations for other methods may not be. Furthermore, we empirically demonstrate the effectiveness of ICorr by comparing it with other domain generalization methods on various noisy datasets. | [
"['Gaojie Jin' 'Ronghui Mu' 'Xinping Yi' 'Xiaowei Huang' 'Lijun Zhang']"
] |
null | null | 2407.01769 | null | null | http://arxiv.org/pdf/2407.01769v1 | 2024-07-01T19:59:29Z | 2024-07-01T19:59:29Z | Improving Trip Mode Choice Modeling Using Ensemble Synthesizer (ENSY) | Accurate classification of mode choice datasets is crucial for transportation planning and decision-making processes. However, conventional classification models often struggle to adequately capture the nuanced patterns of minority classes within these datasets, leading to sub-optimal accuracy. In response to this challenge, we present Ensemble Synthesizer (ENSY), a novel data model that leverages probability distributions for data augmentation, tailored specifically to enhance classification accuracy in mode choice datasets. In our study, ENSY demonstrates remarkable efficacy by nearly quadrupling the F1 score of minority classes and improving overall classification accuracy by nearly 3%. To assess its performance comprehensively, we compare ENSY against various augmentation techniques including Random Oversampling, SMOTE-NC, and CTGAN. Through experimentation, ENSY consistently outperforms these methods across various scenarios, underscoring its robustness and effectiveness. | [
"['Amirhossein Parsi' 'Melina Jafari' 'Sina Sabzekar' 'Zahra Amini']"
] |
null | null | 2407.01776 | null | null | http://arxiv.org/pdf/2407.01776v1 | 2024-07-01T20:10:24Z | 2024-07-01T20:10:24Z | Federated Binary Matrix Factorization using Proximal Optimization | Identifying informative components in binary data is an essential task in many research areas, including life sciences, social sciences, and recommendation systems. Boolean matrix factorization (BMF) is a family of methods that performs this task by efficiently factorizing the data. In real-world settings, the data is often distributed across stakeholders and required to stay private, prohibiting the straightforward application of BMF. To adapt BMF to this context, we approach the problem from a federated-learning perspective, while building on a state-of-the-art continuous binary matrix factorization relaxation to BMF that enables efficient gradient-based optimization. We propose to only share the relaxed component matrices, which are aggregated centrally using a proximal operator that regularizes for binary outcomes. We show the convergence of our federated proximal gradient descent algorithm and provide differential privacy guarantees. Our extensive empirical evaluation demonstrates that our algorithm outperforms, in terms of quality and efficacy, federation schemes of state-of-the-art BMF methods on a diverse set of real-world and synthetic data. | [
"['Sebastian Dalleiger' 'Jilles Vreeken' 'Michael Kamp']"
] |
null | null | 2407.01781 | null | null | http://arxiv.org/abs/2407.01781v1 | 2024-07-01T20:20:33Z | 2024-07-01T20:20:33Z | fVDB: A Deep-Learning Framework for Sparse, Large-Scale, and
High-Performance Spatial Intelligence | We present fVDB, a novel GPU-optimized framework for deep learning on large-scale 3D data. fVDB provides a complete set of differentiable primitives to build deep learning architectures for common tasks in 3D learning such as convolution, pooling, attention, ray-tracing, meshing, etc. fVDB simultaneously provides a much larger feature set (primitives and operators) than established frameworks with no loss in efficiency: our operators match or exceed the performance of other frameworks with narrower scope. Furthermore, fVDB can process datasets with much larger footprint and spatial resolution than prior works, while providing a competitive memory footprint on small inputs. To achieve this combination of versatility and performance, fVDB relies on a single novel VDB index grid acceleration structure paired with several key innovations including GPU accelerated sparse grid construction, convolution using tensorcores, fast ray tracing kernels using a Hierarchical Digital Differential Analyzer algorithm (HDDA), and jagged tensors. Our framework is fully integrated with PyTorch enabling interoperability with existing pipelines, and we demonstrate its effectiveness on a number of representative tasks such as large-scale point-cloud segmentation, high resolution 3D generative modeling, unbounded scale Neural Radiance Fields, and large-scale point cloud reconstruction. | [
"['Francis Williams' 'Jiahui Huang' 'Jonathan Swartz' 'Gergely Klár'\n 'Vijay Thakkar' 'Matthew Cong' 'Xuanchi Ren' 'Ruilong Li'\n 'Clement Fuji-Tsang' 'Sanja Fidler' 'Eftychios Sifakis' 'Ken Museth']"
] |
null | null | 2407.01784 | null | null | http://arxiv.org/pdf/2407.01784v1 | 2024-07-01T20:25:20Z | 2024-07-01T20:25:20Z | Analyzing Persuasive Strategies in Meme Texts: A Fusion of Language
Models with Paraphrase Enrichment | This paper describes our approach to hierarchical multi-label detection of persuasion techniques in meme texts. Our model, developed as a part of the recent SemEval task, is based on fine-tuning individual language models (BERT, XLM-RoBERTa, and mBERT) and leveraging a mean-based ensemble model in addition to dataset augmentation through paraphrase generation from ChatGPT. The scope of the study encompasses enhancing model performance through innovative training techniques and data augmentation strategies. The problem addressed is the effective identification and classification of multiple persuasive techniques in meme texts, a task complicated by the diversity and complexity of such content. The objective of the paper is to improve detection accuracy by refining model training methods and examining the impact of balanced versus unbalanced training datasets. Novelty in the results and discussion lies in the finding that training with paraphrases enhances model performance, yet a balanced training set proves more advantageous than a larger unbalanced one. Additionally, the analysis reveals the potential pitfalls of indiscriminate incorporation of paraphrases from diverse distributions, which can introduce substantial noise. Results with the SemEval 2024 data confirm these insights, demonstrating improved model efficacy with the proposed methods. | [
"['Kota Shamanth Ramanath Nayak' 'Leila Kosseim']"
] |
null | null | 2407.01790 | null | null | http://arxiv.org/pdf/2407.01790v1 | 2024-07-01T20:30:23Z | 2024-07-01T20:30:23Z | Label-free Neural Semantic Image Synthesis | Recent work has shown great progress in integrating spatial conditioning to control large, pre-trained text-to-image diffusion models. Despite these advances, existing methods describe the spatial image content using hand-crafted conditioning inputs, which are either semantically ambiguous (e.g., edges) or require expensive manual annotations (e.g., semantic segmentation). To address these limitations, we propose a new label-free way of conditioning diffusion models to enable fine-grained spatial control. We introduce the concept of neural semantic image synthesis, which uses neural layouts extracted from pre-trained foundation models as conditioning. Neural layouts are advantageous as they provide rich descriptions of the desired image, containing both semantics and detailed geometry of the scene. We experimentally show that images synthesized via neural semantic image synthesis achieve similar or superior pixel-level alignment of semantic classes compared to those created using expensive semantic label maps. At the same time, they capture better semantics, instance separation, and object orientation than other label-free conditioning options, such as edges or depth. Moreover, we show that images generated by neural layout conditioning can effectively augment real data for training various perception tasks. | [
"['Jiayi Wang' 'Kevin Alexander Laube' 'Yumeng Li' 'Jan Hendrik Metzen'\n 'Shin-I Cheng' 'Julio Borges' 'Anna Khoreva']"
] |
null | null | 2407.01794 | null | null | http://arxiv.org/pdf/2407.01794v1 | 2024-07-01T20:44:48Z | 2024-07-01T20:44:48Z | Conditionally valid Probabilistic Conformal Prediction | We develop a new method for creating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution $P_{Y \mid X}$. Most existing methods, such as conformalized quantile regression and probabilistic conformal prediction, only offer marginal coverage guarantees. Our approach extends these methods to achieve conditional coverage, which is essential for many practical applications. While exact conditional guarantees are impossible without assumptions on the data distribution, we provide non-asymptotic bounds that explicitly depend on the quality of the available estimate of the conditional distribution. Our confidence sets are highly adaptive to the local structure of the data, making them particularly useful in high heteroskedasticity situations. We demonstrate the effectiveness of our approach through extensive simulations, showing that it outperforms existing methods in terms of conditional coverage and improves the reliability of statistical inference in a wide range of applications. | [
"['Vincent Plassier' 'Alexander Fishkov' 'Maxim Panov' 'Eric Moulines']"
] |
null | null | 2407.01795 | null | null | http://arxiv.org/pdf/2407.01795v1 | 2024-07-01T20:44:52Z | 2024-07-01T20:44:52Z | Honor Among Bandits: No-Regret Learning for Online Fair Division | We consider the problem of online fair division of indivisible goods to players when there are a finite number of types of goods and player values are drawn from distributions with unknown means. Our goal is to maximize social welfare subject to allocating the goods fairly in expectation. When a player's value for an item is unknown at the time of allocation, we show that this problem reduces to a variant of (stochastic) multi-armed bandits, where there exists an arm for each player's value for each type of good. At each time step, we choose a distribution over arms which determines how the next item is allocated. We consider two sets of fairness constraints for this problem: envy-freeness in expectation and proportionality in expectation. Our main result is the design of an explore-then-commit algorithm that achieves $\tilde{O}(T^{2/3})$ regret while maintaining either fairness constraint. This result relies on unique properties fundamental to fair-division constraints that allow faster rates of learning, despite the restricted action space. | [
"['Ariel D. Procaccia' 'Benjamin Schiffer' 'Shirley Zhang']"
] |
null | null | 2407.01800 | null | null | http://arxiv.org/pdf/2407.01800v1 | 2024-07-01T20:58:01Z | 2024-07-01T20:58:01Z | Normalization and effective learning rates in reinforcement learning | Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature, with several works highlighting diverse benefits such as improving loss landscape conditioning and combatting overestimation bias. However, normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate. This becomes problematic in continual learning settings, where the resulting effective learning rate schedule may decay to near zero too quickly relative to the timescale of the learning problem. We propose to make the learning rate schedule explicit with a simple re-parameterization which we call Normalize-and-Project (NaP), which couples the insertion of normalization layers with weight projection, ensuring that the effective learning rate remains constant throughout training. This technique reveals itself as a powerful analytical tool to better understand learning rate schedules in deep reinforcement learning, and as a means of improving robustness to nonstationarity in synthetic plasticity loss benchmarks along with both the single-task and sequential variants of the Arcade Learning Environment. We also show that our approach can be easily applied to popular architectures such as ResNets and transformers while recovering and in some cases even slightly improving the performance of the base model in common stationary benchmarks. | [
"['Clare Lyle' 'Zeyu Zheng' 'Khimya Khetarpal' 'James Martens'\n 'Hado van Hasselt' 'Razvan Pascanu' 'Will Dabney']"
] |
null | null | 2407.01804 | null | null | http://arxiv.org/pdf/2407.01804v1 | 2024-07-01T21:06:34Z | 2024-07-01T21:06:34Z | DCoM: Active Learning for All Learners | Deep Active Learning (AL) techniques can be effective in reducing annotation costs for training deep models. However, their effectiveness in low- and high-budget scenarios seems to require different strategies, and achieving optimal results across varying budget scenarios remains a challenge. In this study, we introduce Dynamic Coverage & Margin mix (DCoM), a novel active learning approach designed to bridge this gap. Unlike existing strategies, DCoM dynamically adjusts its strategy, considering the competence of the current model. Through theoretical analysis and empirical evaluations on diverse datasets, including challenging computer vision tasks, we demonstrate DCoM's ability to overcome the cold start problem and consistently improve results across different budgetary constraints. Thus DCoM achieves state-of-the-art performance in both low- and high-budget regimes. | [
"['Inbal Mishal' 'Daphna Weinshall']"
] |
null | null | 2407.01812 | null | null | http://arxiv.org/pdf/2407.01812v1 | 2024-07-01T21:23:26Z | 2024-07-01T21:23:26Z | Equivariant Diffusion Policy | Recent work has shown diffusion models are an effective approach to learning the multimodal distributions arising from demonstration data in behavior cloning. However, a drawback of this approach is the need to learn a denoising function, which is significantly more complex than learning an explicit policy. In this work, we propose Equivariant Diffusion Policy, a novel diffusion policy learning method that leverages domain symmetries to obtain better sample efficiency and generalization in the denoising function. We theoretically analyze the $\mathrm{SO}(2)$ symmetry of full 6-DoF control and characterize when a diffusion model is $\mathrm{SO}(2)$-equivariant. We furthermore evaluate the method empirically on a set of 12 simulation tasks in MimicGen, and show that it obtains a success rate that is, on average, 21.9% higher than the baseline Diffusion Policy. We also evaluate the method on a real-world system to show that effective policies can be learned with relatively few training samples, whereas the baseline Diffusion Policy cannot. | [
"['Dian Wang' 'Stephen Hart' 'David Surovik' 'Tarik Kelestemur'\n 'Haojie Huang' 'Haibo Zhao' 'Mark Yeatman' 'Jiuguang Wang'\n 'Robin Walters' 'Robert Platt']"
] |
null | null | 2407.01823 | null | null | http://arxiv.org/pdf/2407.01823v2 | 2024-07-03T11:09:00Z | 2024-07-01T21:45:27Z | Meta-Learning Based Optimization for Large Scale Wireless Systems | Optimization algorithms for wireless systems play a fundamental role in improving their performance and efficiency. However, it is known that the complexity of conventional optimization algorithms in the literature often exponentially increases with the number of transmit antennas and communication users in the wireless system. Therefore, in the large scale regime, the astronomically large complexity of these optimization algorithms prohibits their use and prevents assessing large scale wireless systems performance under optimized conditions. To overcome this limitation, this work proposes instead the use of an unsupervised meta-learning based approach to directly perform non-convex optimization at significantly reduced complexity. To demonstrate the effectiveness of the proposed meta-learning based solution, the sum-rate (SR) maximization problem for the following three emerging 6G technologies is contemplated: hierarchical rate-splitting multiple access (H-RSMA), integrated sensing and communication (ISAC), and beyond-diagonal reconfigurable intelligent surfaces (BD-RIS). Through numerical results, it is demonstrated that the proposed meta-learning based optimization framework is able to successfully optimize the performance and also reveal unknown aspects of the operation in the large scale regime for the considered three 6G technologies. | [
"['Rafael Cerna Loli' 'Bruno Clerckx']"
] |
null | null | 2407.01825 | null | null | http://arxiv.org/pdf/2407.01825v1 | 2024-07-01T21:56:54Z | 2024-07-01T21:56:54Z | Empirical Tests of Optimization Assumptions in Deep Learning | There is a significant gap between our theoretical understanding of optimization algorithms used in deep learning and their practical performance. Theoretical development usually focuses on proving convergence guarantees under a variety of different assumptions, which are themselves often chosen based on a rough combination of intuitive match to practice and analytical convenience. The theory/practice gap may then arise because of the failure to prove a theorem under such assumptions, or because the assumptions do not reflect reality. In this paper, we carefully measure the degree to which these assumptions are capable of explaining modern optimization algorithms by developing new empirical metrics that closely track the key quantities that must be controlled in theoretical analysis. All of our tested assumptions (including typical modern assumptions based on bounds on the Hessian) fail to reliably capture optimization performance. This highlights a need for new empirical verification of analytical assumptions used in theoretical analysis. | [
"['Hoang Tran' 'Qinzi Zhang' 'Ashok Cutkosky']"
] |
null | null | 2407.01837 | null | null | http://arxiv.org/pdf/2407.01837v1 | 2024-07-01T22:24:31Z | 2024-07-01T22:24:31Z | To Switch or Not to Switch? Balanced Policy Switching in Offline
Reinforcement Learning | Reinforcement learning (RL) -- finding the optimal behaviour (also referred to as policy) maximizing the collected long-term cumulative reward -- is among the most influential approaches in machine learning with a large number of successful applications. In several decision problems, however, one faces the possibility of policy switching -- changing from the current policy to a new one -- which incurs a non-negligible cost (examples include the shifting of the currently applied educational technology, modernization of a computing cluster, and the introduction of a new webpage design), and in the decision one is limited to using historical data without the possibility of further online interaction. Despite the inevitable importance of this offline learning scenario, to the best of our knowledge, very little effort has been made to tackle the key problem of balancing between the gain and the cost of switching in a flexible and principled way. Leveraging ideas from the area of optimal transport, we initiate the systematic study of policy switching in offline RL. We establish fundamental properties and design a Net Actor-Critic algorithm for the proposed novel switching formulation. Numerical experiments demonstrate the efficiency of our approach on multiple benchmarks of the Gymnasium. | [
"['Tao Ma' 'Xuzhi Yang' 'Zoltan Szabo']"
] |
null | null | 2407.01848 | null | null | http://arxiv.org/pdf/2407.01848v2 | 2024-07-08T13:18:17Z | 2024-07-01T23:16:34Z | UniFIDES: Universal Fractional Integro-Differential Equation Solvers | The development of data-driven approaches for solving differential equations has been followed by a plethora of applications in science and engineering across a multitude of disciplines and remains a central focus of active scientific inquiry. However, a large body of natural phenomena incorporates memory effects that are best described via fractional integro-differential equations (FIDEs), in which the integral or differential operators accept non-integer orders. Addressing the challenges posed by nonlinear FIDEs is a recognized difficulty, necessitating the application of generic methods with immediate practical relevance. This work introduces the Universal Fractional Integro-Differential Equation Solvers (UniFIDES), a comprehensive machine learning platform designed to expeditiously solve a variety of FIDEs in both forward and inverse directions, without the need for ad hoc manipulation of the equations. The effectiveness of UniFIDES is demonstrated through a collection of integer-order and fractional problems in science and engineering. Our results highlight UniFIDES' ability to accurately solve a wide spectrum of integro-differential equations and offer the prospect of using machine learning platforms universally for discovering and describing dynamical and complex systems. | [
"['Milad Saadat' 'Deepak Mangal' 'Safa Jamali']"
] |
null | null | 2407.01851 | null | null | http://arxiv.org/pdf/2407.01851v2 | 2024-07-03T07:01:30Z | 2024-07-01T23:32:25Z | Meerkat: Audio-Visual Large Language Model for Grounding in Space and
Time | Leveraging Large Language Models' remarkable proficiency in text-based tasks, recent works on Multi-modal LLMs (MLLMs) extend them to other modalities like vision and audio. However, the progress in these directions has been mostly focused on tasks that only require a coarse-grained understanding of the audio-visual semantics. We present Meerkat, an audio-visual LLM equipped with a fine-grained understanding of image and audio both spatially and temporally. With a new modality alignment module based on optimal transport and a cross-attention module that enforces audio-visual consistency, Meerkat can tackle challenging tasks such as audio referred image grounding, image guided audio temporal localization, and audio-visual fact-checking. Moreover, we carefully curate a large dataset AVFIT that comprises 3M instruction tuning samples collected from open-source datasets, and introduce MeerkatBench that unifies five challenging audio-visual tasks. We achieve state-of-the-art performance on all these downstream tasks with a relative improvement of up to 37.12%. | [
"['Sanjoy Chowdhury' 'Sayan Nag' 'Subhrajyoti Dasgupta' 'Jun Chen'\n 'Mohamed Elhoseiny' 'Ruohan Gao' 'Dinesh Manocha']"
] |
null | null | 2407.01853 | null | null | http://arxiv.org/pdf/2407.01853v1 | 2024-07-01T23:47:09Z | 2024-07-01T23:47:09Z | Improving Multilingual Instruction Finetuning via Linguistically Natural
and Diverse Datasets | Advancements in Large Language Models (LLMs) have significantly enhanced instruction-following capabilities. However, most Instruction Fine-Tuning (IFT) datasets are predominantly in English, limiting model performance in other languages. Traditional methods for creating multilingual IFT datasets, such as translating existing English IFT datasets or converting existing NLP datasets into IFT datasets by templating, struggle to capture linguistic nuances and ensure prompt (instruction) diversity. To address this issue, we propose a novel method for collecting multilingual IFT datasets that preserves linguistic naturalness and ensures prompt diversity. This approach leverages English-focused LLMs, monolingual corpora, and a scoring function to create high-quality, diversified IFT datasets in multiple languages. Experiments demonstrate that LLMs finetuned using these IFT datasets show notable improvements in both generative and discriminative tasks, indicating enhanced language comprehension by LLMs in non-English contexts. Specifically, on the multilingual summarization task, LLMs using our IFT dataset achieved 17.57% and 15.23% improvements over LLMs fine-tuned with translation-based and template-based datasets, respectively. | [
"['Sathish Reddy Indurthi' 'Wenxuan Zhou' 'Shamil Chollampatt'\n 'Ravi Agrawal' 'Kaiqiang Song' 'Lingxiao Zhao' 'Chenguang Zhu']"
] |
null | null | 2407.01856 | null | null | http://arxiv.org/pdf/2407.01856v1 | 2024-07-01T23:56:56Z | 2024-07-01T23:56:56Z | Adaptive RKHS Fourier Features for Compositional Gaussian Process Models | Deep Gaussian Processes (DGPs) leverage a compositional structure to model non-stationary processes. DGPs typically rely on local inducing point approximations across intermediate GP layers. Recent advances in DGP inference have shown that incorporating global Fourier features from Reproducing Kernel Hilbert Space (RKHS) can enhance the DGPs' capability to capture complex non-stationary patterns. This paper extends the use of these features to compositional GPs involving linear transformations. In particular, we introduce Ordinary Differential Equation (ODE) -based RKHS Fourier features that allow for adaptive amplitude and phase modulation through convolution operations. This convolutional formulation relates our work to recently proposed deep latent force models, a multi-layer structure designed for modelling nonlinear dynamical systems. By embedding these adjustable RKHS Fourier features within a doubly stochastic variational inference framework, our model exhibits improved predictive performance across various regression tasks. | [
"['Xinxing Shi' 'Thomas Baldwin-McDonald' 'Mauricio A. Álvarez']"
] |
null | null | 2407.01864 | null | null | http://arxiv.org/pdf/2407.01864v2 | 2024-07-05T17:17:48Z | 2024-07-02T00:43:41Z | Research on target detection method of distracted driving behavior based
on improved YOLOv8 | With the development of deep learning technology, the detection and classification of distracted driving behaviour requires higher accuracy. Existing deep learning-based methods are computationally intensive and parameter redundant, limiting the efficiency and accuracy in practical applications. To solve this problem, this study proposes an improved YOLOv8 detection method based on the original YOLOv8 model by integrating the BoTNet module, GAM attention mechanism and EIoU loss function. By optimising the feature extraction and multi-scale feature fusion strategies, the training and inference processes are simplified, and the detection accuracy and efficiency are significantly improved. Experimental results show that the improved model performs well in both detection speed and accuracy, with an accuracy rate of 99.4%, and the model is smaller and easy to deploy, which is able to identify and classify distracted driving behaviours in real time, provide timely warnings, and enhance driving safety. | [
"['Shiquan Shen' 'Zhizhong Wu' 'Pan Zhang']"
] |
null | null | 2407.01869 | null | null | http://arxiv.org/pdf/2407.01869v1 | 2024-07-02T01:05:35Z | 2024-07-02T01:05:35Z | Let it shine: Autofluorescence of Papanicolaou-stain improves AI-based
cytological oral cancer detection | Oral cancer is a global health challenge. It is treatable if detected early, but it is often fatal in late stages. There is a shift from the invasive and time-consuming tissue sampling and histological examination, toward non-invasive brush biopsies and cytological examination. Reliable computer-assisted methods are essential for cost-effective and accurate cytological analysis, but the lack of detailed cell-level annotations impairs model effectiveness. This study aims to improve AI-based oral cancer detection using multimodal imaging and deep fusion. We combine brightfield and fluorescence whole slide microscopy imaging to analyze Papanicolaou-stained liquid-based cytology slides of brush biopsies collected from both healthy and cancer patients. Due to limited cytological annotations, we utilize a weakly supervised deep learning approach using only patient-level labels. We evaluate various multimodal fusion strategies, including early, late, and three recent intermediate fusion methods. Our results show: (i) fluorescence imaging of Papanicolaou-stained samples provides substantial diagnostic information; (ii) multimodal fusion enhances classification and cancer detection accuracy over single-modality methods. Intermediate fusion is the leading method among the studied approaches. Specifically, the Co-Attention Fusion Network (CAFNet) model excels with an F1 score of 83.34% and accuracy of 91.79%, surpassing human performance on the task. Additional tests highlight the need for precise image registration to optimize multimodal analysis benefits. This study advances cytopathology by combining deep learning and multimodal imaging to enhance early, non-invasive detection of oral cancer, improving diagnostic accuracy and streamlining clinical workflows. The developed pipeline is also applicable in other cytological settings. Our codes and dataset are available online for further research. | [
"['Wenyi Lian' 'Joakim Lindblad' 'Christina Runow Stark'\n 'Jan-Michaél Hirsch' 'Nataša Sladoje']"
] |
null | null | 2407.01873 | null | null | http://arxiv.org/pdf/2407.01873v1 | 2024-07-02T01:17:01Z | 2024-07-02T01:17:01Z | Automated Text Scoring in the Age of Generative AI for the GPU-poor | Current research on generative language models (GLMs) for automated text scoring (ATS) has focused almost exclusively on querying proprietary models via Application Programming Interfaces (APIs). Yet such practices raise issues around transparency and security, and these methods offer little in the way of efficiency or customizability. With the recent proliferation of smaller, open-source models, there is the option to explore GLMs with computers equipped with modest, consumer-grade hardware, that is, for the "GPU poor." In this study, we analyze the performance and efficiency of open-source, small-scale GLMs for ATS. Results show that GLMs can be fine-tuned to achieve adequate, though not state-of-the-art, performance. In addition to ATS, we take small steps towards analyzing models' capacity for generating feedback by prompting GLMs to explain their scores. Model-generated feedback shows promise, but requires more rigorous evaluation focused on targeted use cases. | [
"['Christopher Michael Ormerod' 'Alexander Kwako']"
] |
null | null | 2407.01886 | null | null | http://arxiv.org/pdf/2407.01886v1 | 2024-07-02T02:16:43Z | 2024-07-02T02:16:43Z | Core Knowledge Learning Framework for Graph Adaptation and Scalability
Learning | Graph classification is a pivotal challenge in machine learning, especially within the realm of graph-based data, given its importance in numerous real-world applications such as social network analysis, recommendation systems, and bioinformatics. Despite its significance, graph classification faces several hurdles, including adapting to diverse prediction tasks, training across multiple target domains, and handling small-sample prediction scenarios. Current methods often tackle these challenges individually, leading to fragmented solutions that lack a holistic approach to the overarching problem. In this paper, we propose an algorithm aimed at addressing the aforementioned challenges. By incorporating insights from various types of tasks, our method aims to enhance adaptability, scalability, and generalizability in graph classification. Motivated by the recognition that the underlying subgraph plays a crucial role in GNN prediction, while the remainder is task-irrelevant, we introduce the Core Knowledge Learning framework for graph adaptation and scalability learning. The framework comprises several key modules, including the core subgraph knowledge submodule, graph domain adaptation module, and few-shot learning module for downstream tasks. Each module is tailored to tackle specific challenges in graph classification, such as domain shift, label inconsistencies, and data scarcity. By learning the core subgraph of the entire graph, we focus on the most pertinent features for task relevance. Consequently, our method offers benefits such as improved model performance, increased domain adaptability, and enhanced robustness to domain variations. Experimental results demonstrate significant performance enhancements achieved by our method compared to state-of-the-art approaches. | [
"['Bowen Zhang' 'Zhichao Huang' 'Genan Dai' 'Guangning Xu' 'Xiaomao Fan'\n 'Hu Huang']"
] |
null | null | 2407.01887 | null | null | http://arxiv.org/pdf/2407.01887v1 | 2024-07-02T02:18:14Z | 2024-07-02T02:18:14Z | Beyond Numeric Awards: In-Context Dueling Bandits with LLM Agents | In-context decision-making is an important capability of artificial general intelligence, which Large Language Models (LLMs) have effectively demonstrated in various scenarios. However, LLMs often face challenges when dealing with numerical contexts, and limited attention has been paid to evaluating their performance through preference feedback generated by the environment. This paper investigates the performance of LLMs as decision-makers in the context of Dueling Bandits (DB). We first evaluate the performance of LLMs by comparing GPT-3.5-Turbo, GPT-4, and GPT-4-Turbo against established DB algorithms. Our results reveal that LLMs, particularly GPT-4-Turbo, quickly identify the Condorcet winner, thus outperforming existing state-of-the-art algorithms in terms of weak regret. Nevertheless, LLMs struggle to converge even when explicitly prompted to do so, and are sensitive to prompt variations. To overcome these issues, we introduce an LLM-augmented algorithm, IF-Enhanced LLM, which takes advantage of both the in-context decision-making capabilities of LLMs and the theoretical guarantees inherited from classic DB algorithms. The design of such an algorithm sheds light on how to enhance trustworthiness for LLMs used in decision-making tasks where performance robustness matters. We show that IF-Enhanced LLM has theoretical guarantees on both weak and strong regret. Our experimental results validate that IF-Enhanced LLM is robust even with noisy and adversarial prompts. | [
"['Fanzeng Xia' 'Hao Liu' 'Yisong Yue' 'Tongxin Li']"
] |
null | null | 2407.01903 | null | null | http://arxiv.org/pdf/2407.01903v1 | 2024-07-02T03:08:20Z | 2024-07-02T03:08:20Z | Text-Aware Diffusion for Policy Learning | Training an agent to achieve particular goals or perform desired behaviors is often accomplished through reinforcement learning, especially in the absence of expert demonstrations. However, supporting novel goals or behaviors through reinforcement learning requires the ad-hoc design of appropriate reward functions, which quickly becomes intractable. To address this challenge, we propose Text-Aware Diffusion for Policy Learning (TADPoLe), which uses a pretrained, frozen text-conditioned diffusion model to compute dense zero-shot reward signals for text-aligned policy learning. We hypothesize that large-scale pretrained generative models encode rich priors that can supervise a policy to behave not only in a text-aligned manner, but also in alignment with a notion of naturalness summarized from internet-scale training data. In our experiments, we demonstrate that TADPoLe is able to learn policies for novel goal-achievement and continuous locomotion behaviors specified by natural language, in both Humanoid and Dog environments. The behaviors are learned zero-shot without ground-truth rewards or expert demonstrations, and are qualitatively more natural according to human evaluation. We further show that TADPoLe performs competitively when applied to robotic manipulation tasks in the Meta-World environment. | [
"['Calvin Luo' 'Mandy He' 'Zilai Zeng' 'Chen Sun']"
] |
null | null | 2407.01906 | null | null | http://arxiv.org/pdf/2407.01906v2 | 2024-07-05T03:23:59Z | 2024-07-02T03:11:13Z | Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for
Sparse Architectural Large Language Models | Parameter-efficient fine-tuning (PEFT) is crucial for customizing Large Language Models (LLMs) with constrained resources. Although there have been various PEFT methods for dense-architecture LLMs, PEFT for sparse-architecture LLMs is still underexplored. In this work, we study the PEFT method for LLMs with the Mixture-of-Experts (MoE) architecture, and our contributions are mainly threefold: (1) We investigate the dispersion degree of the activated experts in customized tasks, and find that the routing distribution for a specific task tends to be highly concentrated, while the distribution of activated experts varies significantly across different tasks. (2) We propose Expert-Specialized Fine-Tuning, or ESFT, which tunes the experts most relevant to downstream tasks while freezing the other experts and modules; experimental results demonstrate that our method not only improves the tuning efficiency, but also matches or even surpasses the performance of full-parameter fine-tuning. (3) We further analyze the impact of the MoE architecture on expert-specialized fine-tuning. We find that MoE models with finer-grained experts are more advantageous in selecting the combination of experts that are most relevant to downstream tasks, thereby enhancing both the training efficiency and effectiveness. Our code is available at https://github.com/deepseek-ai/ESFT. | [
"['Zihan Wang' 'Deli Chen' 'Damai Dai' 'Runxin Xu' 'Zhuoshu Li' 'Y. Wu']"
] |
null | null | 2407.01907 | null | null | http://arxiv.org/pdf/2407.01907v1 | 2024-07-02T03:13:27Z | 2024-07-02T03:13:27Z | The Solution for the ICCV 2023 Perception Test Challenge 2023 -- Task 6
-- Grounded videoQA | In this paper, we introduce a grounded video question-answering solution. Our research reveals that the fixed official baseline method for video question answering involves two main steps: visual grounding and object tracking. However, a significant challenge emerges during the initial step, where selected frames may lack clearly identifiable target objects. Furthermore, single images cannot address questions like "Track the container from which the person pours the first time." To tackle this issue, we propose an alternative two-stage approach: (1) First, we leverage the VALOR model to answer questions based on video information. (2) Then, we concatenate the answered questions with their respective answers. Finally, we employ TubeDETR to generate bounding boxes for the targets. | [
"['Hailiang Zhang' 'Dian Chao' 'Zhihao Guan' 'Yang Yang']"
] |
null | null | 2407.01910 | null | null | http://arxiv.org/pdf/2407.01910v2 | 2024-07-03T15:15:20Z | 2024-07-02T03:21:24Z | MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog
Generation | Large Language Models (LLMs) have recently shown promise in streamlining hardware design processes by encapsulating vast amounts of domain-specific data. In addition, they allow users to interact with the design processes through natural language instructions, thus making hardware design more accessible to developers. However, effectively leveraging LLMs in hardware design necessitates providing domain-specific data during inference (e.g., through in-context learning), fine-tuning, or pre-training. Unfortunately, existing publicly available hardware datasets are often limited in size, complexity, or detail, which hinders the effectiveness of LLMs in hardware design tasks. To address this issue, we first propose a set of criteria for creating high-quality hardware datasets that can effectively enhance LLM-assisted hardware design. Based on these criteria, we propose a Multi-Grained-Verilog (MG-Verilog) dataset, which encompasses descriptions at various levels of detail and corresponding code samples. To benefit the broader hardware design community, we have developed an open-source infrastructure that facilitates easy access, integration, and extension of the dataset to meet specific project needs. Furthermore, to fully exploit the potential of the MG-Verilog dataset, which varies in complexity and detail, we introduce a balanced fine-tuning scheme. This scheme serves as a unique use case to leverage the diverse levels of detail provided by the dataset. Extensive experiments demonstrate that the proposed dataset and fine-tuning scheme consistently improve the performance of LLMs in hardware design tasks. | [
"['Yongan Zhang' 'Zhongzhi Yu' 'Yonggan Fu' 'Cheng Wan'\n 'Yingyan Celine Lin']"
] |
null | null | 2407.01920 | null | null | http://arxiv.org/pdf/2407.01920v1 | 2024-07-02T03:34:16Z | 2024-07-02T03:34:16Z | To Forget or Not? Towards Practical Knowledge Unlearning for Large
Language Models | Large Language Models (LLMs) trained on extensive corpora inevitably retain sensitive data, such as personal privacy information and copyrighted material. Recent advancements in knowledge unlearning involve updating LLM parameters to erase specific knowledge. However, current unlearning paradigms are mired in vague forgetting boundaries, often erasing knowledge indiscriminately. In this work, we introduce KnowUnDo, a benchmark containing copyrighted content and user privacy domains to evaluate if the unlearning process inadvertently erases essential knowledge. Our findings indicate that existing unlearning methods often suffer from excessive unlearning. To address this, we propose a simple yet effective method, MemFlex, which utilizes gradient information to precisely target and unlearn sensitive parameters. Experimental results show that MemFlex is superior to existing methods in both precise knowledge unlearning and general knowledge retaining of LLMs. Code and dataset will be released at https://github.com/zjunlp/KnowUnDo. | [
"['Bozhong Tian' 'Xiaozhuan Liang' 'Siyuan Cheng' 'Qingbin Liu'\n 'Mengru Wang' 'Dianbo Sui' 'Xi Chen' 'Huajun Chen' 'Ningyu Zhang']"
] |
null | null | 2407.01948 | null | null | http://arxiv.org/pdf/2407.01948v1 | 2024-07-02T04:39:19Z | 2024-07-02T04:39:19Z | Extracting and Encoding: Leveraging Large Language Models and Medical
Knowledge to Enhance Radiological Text Representation | Advancing representation learning in specialized fields like medicine remains challenging due to the scarcity of expert annotations for text and images. To tackle this issue, we present a novel two-stage framework designed to extract high-quality factual statements from free-text radiology reports in order to improve the representations of text encoders and, consequently, their performance on various downstream tasks. In the first stage, we propose a Fact Extractor that leverages large language models (LLMs) to identify factual statements from well-curated domain-specific datasets. In the second stage, we introduce a Fact Encoder (CXRFE) based on a BERT model fine-tuned with objective functions designed to improve its representations using the extracted factual data. Our framework also includes a new embedding-based metric (CXRFEScore) for evaluating chest X-ray text generation systems, leveraging both stages of our approach. Extensive evaluations show that our fact extractor and encoder outperform current state-of-the-art methods in tasks such as sentence ranking, natural language inference, and label extraction from radiology reports. Additionally, our metric proves to be more robust and effective than existing metrics commonly used in the radiology report generation literature. The code of this project is available at https://github.com/PabloMessina/CXR-Fact-Encoder. | [
"['Pablo Messina' 'René Vidal' 'Denis Parra' 'Álvaro Soto'\n 'Vladimir Araujo']"
] |
null | null | 2407.01953 | null | null | http://arxiv.org/pdf/2407.01953v1 | 2024-07-02T05:04:13Z | 2024-07-02T05:04:13Z | CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models
using Data Fusion in Financial Applications | The integration of Large Language Models (LLMs) into financial analysis has garnered significant attention in the NLP community. This paper presents our solution to the IJCAI-2024 FinLLM challenge, investigating the capabilities of LLMs within three critical areas of financial tasks: financial classification, financial text summarization, and single stock trading. We adopted Llama3-8B and Mistral-7B as base models, fine-tuning them through Parameter Efficient Fine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) approaches. To enhance model performance, we combine datasets from task 1 and task 2 for data fusion. Our approach aims to tackle these diverse tasks in a comprehensive and integrated manner, showcasing LLMs' capacity to address diverse and complex financial tasks with improved accuracy and decision-making capabilities. | [
"['Yupeng Cao' 'Zhiyuan Yao' 'Zhi Chen' 'Zhiyang Deng']"
] |
null | null | 2407.01960 | null | null | http://arxiv.org/pdf/2407.01960v1 | 2024-07-02T05:31:59Z | 2024-07-02T05:31:59Z | Zero-shot Video Restoration and Enhancement Using Pre-Trained Image
Diffusion Model | Diffusion-based zero-shot image restoration and enhancement models have achieved great success in various image restoration and enhancement tasks without training. However, directly applying them to video restoration and enhancement results in severe temporal flickering artifacts. In this paper, we propose the first framework for zero-shot video restoration and enhancement based on a pre-trained image diffusion model. By replacing the self-attention layer with the proposed cross-previous-frame attention layer, the pre-trained image diffusion model can take advantage of the temporal correlation between neighboring frames. We further propose temporal consistency guidance, spatial-temporal noise sharing, and an early stopping sampling strategy for better temporally consistent sampling. Our method is a plug-and-play module that can be inserted into any diffusion-based zero-shot image restoration or enhancement methods to further improve their performance. Experimental results demonstrate the superiority of our proposed method in producing temporally consistent videos with better fidelity. | [
"['Cong Cao' 'Huanjing Yue' 'Xin Liu' 'Jingyu Yang']"
] |
null | null | 2407.01972 | null | null | http://arxiv.org/abs/2407.01972v1 | 2024-07-02T06:08:55Z | 2024-07-02T06:08:55Z | MeMemo: On-device Retrieval Augmentation for Private and Personalized
Text Generation | Retrieval-augmented text generation (RAG) addresses the common limitations of large language models (LLMs), such as hallucination, by retrieving information from an updatable external knowledge base. However, existing approaches often require dedicated backend servers for data storage and retrieval, thereby limiting their applicability in use cases that require strict data privacy, such as personal finance, education, and medicine. To address the pressing need for client-side dense retrieval, we introduce MeMemo, the first open-source JavaScript toolkit that adapts the state-of-the-art approximate nearest neighbor search technique HNSW to browser environments. Developed with modern and native Web technologies, such as IndexedDB and Web Workers, our toolkit leverages client-side hardware capabilities to enable researchers and developers to efficiently search through millions of high-dimensional vectors in the browser. MeMemo enables exciting new design and research opportunities, such as private and personalized content creation and interactive prototyping, as demonstrated in our example application RAG Playground. Reflecting on our work, we discuss the opportunities and challenges for on-device dense retrieval. MeMemo is available at https://github.com/poloclub/mememo. | [
"['Zijie J. Wang' 'Duen Horng Chau']"
] |
null | null | 2407.01979 | null | null | http://arxiv.org/abs/2407.01979v1 | 2024-07-02T06:31:13Z | 2024-07-02T06:31:13Z | Unveiling Global Interactive Patterns across Graphs: Towards
Interpretable Graph Neural Networks | Graph Neural Networks (GNNs) have emerged as a prominent framework for graph mining, leading to significant advances across various domains. Stemming from the node-wise representations of GNNs, existing explanation studies have embraced the subgraph-specific viewpoint that attributes the decision results to the salient features and local structures of nodes. However, graph-level tasks necessitate long-range dependencies and global interactions for advanced GNNs, deviating significantly from subgraph-specific explanations. To bridge this gap, this paper proposes a novel intrinsically interpretable scheme for graph classification, termed Global Interactive Pattern (GIP) learning, which introduces learnable global interactive patterns to explicitly interpret decisions. GIP first tackles the complexity of interpretation by clustering numerous nodes using a constrained graph clustering module. Then, it matches the coarsened global interactive instance with a batch of self-interpretable graph prototypes, thereby facilitating a transparent graph-level reasoning process. Extensive experiments conducted on both synthetic and real-world benchmarks demonstrate that the proposed GIP yields significantly superior interpretability and competitive performance to the state-of-the-art counterparts. Our code will be made publicly available. | [
"['Yuwen Wang' 'Shunyu Liu' 'Tongya Zheng' 'Kaixuan Chen' 'Mingli Song']"
] |
null | null | 2407.01985 | null | null | http://arxiv.org/pdf/2407.01985v1 | 2024-07-02T06:54:46Z | 2024-07-02T06:54:46Z | The Epistemic Uncertainty Hole: an issue of Bayesian Neural Networks | Bayesian Deep Learning (BDL) gives access not only to aleatoric uncertainty, as standard neural networks already do, but also to epistemic uncertainty, a measure of confidence a model has in its own predictions. In this article, we show through experiments that the evolution of epistemic uncertainty metrics with respect to model size and training-set size goes against theoretical expectations. More precisely, we observe that the epistemic uncertainty literally collapses in the presence of large models and sometimes also of little training data, while we expect the exact opposite behaviour. This phenomenon, which we call the "epistemic uncertainty hole", is all the more problematic as it undermines the entire applicative potential of BDL, which is based precisely on the use of epistemic uncertainty. As an example, we evaluate the practical consequences of this uncertainty hole on one of the main applications of BDL, namely the detection of out-of-distribution samples. | [
"['Mohammed Fellaji' 'Frédéric Pennerath']"
] |
null | null | 2407.01991 | null | null | http://arxiv.org/pdf/2407.01991v1 | 2024-07-02T07:06:49Z | 2024-07-02T07:06:49Z | Generation of Geodesics with Actor-Critic Reinforcement Learning to
Predict Midpoints | To find the shortest paths for all pairs on continuous manifolds with infinitesimally defined metrics, we propose to generate them by recursively predicting midpoints, together with an actor-critic method to learn midpoint prediction. We prove the soundness of our approach and show experimentally that the proposed method outperforms existing methods on both local and global path planning tasks. | [
"['Kazumi Kasaura']"
] |
null | null | 2407.02010 | null | null | http://arxiv.org/pdf/2407.02010v1 | 2024-07-02T07:29:02Z | 2024-07-02T07:29:02Z | Feynman-Kac Operator Expectation Estimator | The Feynman-Kac Operator Expectation Estimator (FKEE) is an innovative method for estimating the target mathematical expectation $\mathbb{E}_{X\sim P}[f(X)]$ without relying on a large number of samples, in contrast to the commonly used Markov Chain Monte Carlo (MCMC) expectation estimator. FKEE comprises diffusion bridge models and approximation of the Feynman-Kac operator. The key idea is to use the solution to the Feynman-Kac equation at the initial time, $u(x_0,0)=\mathbb{E}[f(X_T)|X_0=x_0]$. We use Physics-Informed Neural Networks (PINNs) to approximate the Feynman-Kac operator, which enables the incorporation of diffusion bridge models into the expectation estimator and significantly improves the efficiency of using data while substantially reducing the variance. The diffusion bridge model is a more general MCMC method. In order to incorporate extensive MCMC algorithms, we propose a new diffusion bridge model based on the minimum Wasserstein distance. This diffusion bridge model is universal and reduces the training time of the PINN. FKEE also reduces the adverse impact of the curse of dimensionality and weakens the assumptions on the distribution of $X$ and the performance function $f$ in the general MCMC expectation estimator. The theoretical properties of this universal diffusion bridge model are also shown. Finally, we demonstrate the advantages and potential applications of this method through various concrete experiments, including the challenging task of approximating the partition function in random graph models such as the Ising model. | [
"['Jingyuan Li' 'Wei Liu']"
] |
null | null | 2407.02013 | null | null | http://arxiv.org/pdf/2407.02013v1 | 2024-07-02T07:33:40Z | 2024-07-02T07:33:40Z | DiGRAF: Diffeomorphic Graph-Adaptive Activation Function | In this paper, we propose a novel activation function tailored specifically for graph data in Graph Neural Networks (GNNs). Motivated by the need for graph-adaptive and flexible activation functions, we introduce DiGRAF, leveraging Continuous Piecewise-Affine Based (CPAB) transformations, which we augment with an additional GNN to learn a graph-adaptive diffeomorphic activation function in an end-to-end manner. In addition to its graph-adaptivity and flexibility, DiGRAF also possesses properties that are widely recognized as desirable for activation functions, such as differentiability, boundness within the domain and computational efficiency. We conduct an extensive set of experiments across diverse datasets and tasks, demonstrating a consistent and superior performance of DiGRAF compared to traditional and graph-specific activation functions, highlighting its effectiveness as an activation function for GNNs. | [
"['Krishna Sri Ipsit Mantri' 'Xinzhi Wang' 'Carola-Bibiane Schönlieb'\n 'Bruno Ribeiro' 'Beatrice Bevilacqua' 'Moshe Eliasof']"
] |
null | null | 2407.02025 | null | null | http://arxiv.org/pdf/2407.02025v1 | 2024-07-02T07:48:22Z | 2024-07-02T07:48:22Z | On the Expressive Power of Sparse Geometric MPNNs | Motivated by applications in chemistry and other sciences, we study the expressive power of message-passing neural networks for geometric graphs, whose node features correspond to 3-dimensional positions. Recent work has shown that such models can separate generic pairs of non-equivalent geometric graphs, though they may fail to separate some rare and complicated instances. However, these results assume a fully connected graph, where each node possesses complete knowledge of all other nodes. In contrast, often, in application, every node only possesses knowledge of a small number of nearest neighbors. This paper shows that generic pairs of non-equivalent geometric graphs can be separated by message-passing networks with rotation equivariant features as long as the underlying graph is connected. When only invariant intermediate features are allowed, generic separation is guaranteed for generically globally rigid graphs. We introduce a simple architecture, EGENNET, which achieves our theoretical guarantees and compares favorably with alternative architecture on synthetic and chemical benchmarks. | [
"['Yonatan Sverdlov' 'Nadav Dym']"
] |
null | null | 2407.02028 | null | null | http://arxiv.org/pdf/2407.02028v1 | 2024-07-02T07:52:30Z | 2024-07-02T07:52:30Z | Why does in-context learning fail sometimes? Evaluating in-context
learning on open and closed questions | We measure the performance of in-context learning as a function of task novelty and difficulty for open and closed questions. For that purpose, we created a novel benchmark consisting of hard scientific questions, each paired with contexts of varying relevance. We show that, counter-intuitively, a context that is more aligned with the topic does not always help more than a less relevant context. This effect is especially visible for open questions and questions of high difficulty or novelty. This result reveals a fundamental difference between the treatment of closed-form and open-form questions by large language models and shows a need for a more robust evaluation of in-context learning on a variety of question types. It also poses a new question of how to optimally select a context for large language models, especially in the context of Retrieval Augmented Generation (RAG) systems. Our results suggest that the answer to this question can be highly application-dependent and might be contingent on factors including the format of the question, the perceived difficulty level of the questions, and the novelty or popularity of the information we seek. | [
"['Xiang Li' 'Haoran Tang' 'Siyu Chen' 'Ziwei Wang' 'Ryan Chen'\n 'Marcin Abram']"
] |
null | null | 2407.02031 | null | null | http://arxiv.org/pdf/2407.02031v1 | 2024-07-02T07:59:08Z | 2024-07-02T07:59:08Z | SwiftDiffusion: Efficient Diffusion Model Serving with Add-on Modules | This paper documents our characterization study and practices for serving text-to-image requests with stable diffusion models in production. We first comprehensively analyze inference request traces for commercial text-to-image applications. It commences with our observation that add-on modules, i.e., ControlNets and LoRAs, that augment the base stable diffusion models, are ubiquitous in generating images for commercial applications. Despite their efficacy, these add-on modules incur high loading overhead, prolong the serving latency, and swallow up expensive GPU resources. Driven by our characterization study, we present SwiftDiffusion, a system that efficiently generates high-quality images using stable diffusion models and add-on modules. To achieve this, SwiftDiffusion reconstructs the existing text-to-image serving workflow by identifying the opportunities for parallel computation and distributing ControlNet computations across multiple GPUs. Further, SwiftDiffusion thoroughly analyzes the dynamics of image generation and develops techniques to eliminate the overhead associated with LoRA loading and patching while preserving the image quality. Last, SwiftDiffusion proposes specialized optimizations in the backbone architecture of the stable diffusion models, which are also compatible with the efficient serving of add-on modules. Compared to state-of-the-art text-to-image serving systems, SwiftDiffusion reduces serving latency by up to 5x and improves serving throughput by up to 2x without compromising image quality. | [
"['Suyi Li' 'Lingyun Yang' 'Xiaoxiao Jiang' 'Hanfeng Lu' 'Zhipeng Di'\n 'Weiyi Lu' 'Jiawei Chen' 'Kan Liu' 'Yinghao Yu' 'Tao Lan' 'Guodong Yang'\n 'Lin Qu' 'Liping Zhang' 'Wei Wang']"
] |
null | null | 2407.02057 | null | null | http://arxiv.org/pdf/2407.02057v1 | 2024-07-02T08:38:32Z | 2024-07-02T08:38:32Z | HC-GLAD: Dual Hyperbolic Contrastive Learning for Unsupervised
Graph-Level Anomaly Detection | Unsupervised graph-level anomaly detection (UGAD) has garnered increasing attention in recent years due to its significance. However, most existing methods rely only on traditional graph neural networks to explore pairwise relationships, and such pairwise edges are not enough to describe the multifaceted relationships involving anomalies. There is an urgent need to exploit node group information, which plays a crucial role in UGAD. In addition, most previous works ignore the global underlying properties (e.g., hierarchy and power-law structure) which are common in real-world graph datasets and are therefore indispensable factors in the UGAD task. In this paper, we propose a novel Dual Hyperbolic Contrastive Learning for Unsupervised Graph-Level Anomaly Detection (HC-GLAD in short). To exploit node group connections, we construct hypergraphs based on gold motifs and subsequently perform hypergraph convolution. Furthermore, to preserve the hierarchy of real-world graphs, we introduce hyperbolic geometry into this field and conduct both graph and hypergraph embedding learning in hyperbolic space with the hyperboloid model. To the best of our knowledge, this is the first work to simultaneously apply hypergraphs with node group connections and hyperbolic geometry to this field. Extensive experiments on several real-world datasets of different fields demonstrate the superiority of HC-GLAD on the UGAD task. The code is available at https://github.com/Yali-F/HC-GLAD. | [
"['Yali Fu' 'Jindong Li' 'Jiahong Liu' 'Qianli Xing' 'Qi Wang' 'Irwin King']"
] |
null | null | 2407.02060 | null | null | http://arxiv.org/pdf/2407.02060v1 | 2024-07-02T08:45:38Z | 2024-07-02T08:45:38Z | Terminating Differentiable Tree Experts | We advance the recently proposed neuro-symbolic Differentiable Tree Machine, which learns tree operations using a combination of transformers and Tensor Product Representations. We investigate the architecture and propose two key components. We first remove a series of different transformer layers that are used in every step by introducing a mixture of experts. This results in a Differentiable Tree Experts model with a constant number of parameters for any arbitrary number of steps in the computation, compared to the previous method in the Differentiable Tree Machine with a linear growth. Given this flexibility in the number of steps, we additionally propose a new termination algorithm to provide the model the power to choose how many steps to make automatically. The resulting Terminating Differentiable Tree Experts model sluggishly learns to predict the number of steps without an oracle. It can do so while maintaining the learning capabilities of the model, converging to the optimal amount of steps. | [
"['Jonathan Thomm' 'Michael Hersche' 'Giacomo Camposampiero'\n 'Aleksandar Terzić' 'Bernhard Schölkopf' 'Abbas Rahimi']"
] |
null | null | 2407.02062 | null | null | http://arxiv.org/pdf/2407.02062v1 | 2024-07-02T08:49:43Z | 2024-07-02T08:49:43Z | Are Data Augmentation Methods in Named Entity Recognition Applicable for
Uncertainty Estimation? | This work investigates the impact of data augmentation on confidence calibration and uncertainty estimation in Named Entity Recognition (NER) tasks. For the future advancement of NER in safety-critical fields like healthcare and finance, it is essential to achieve accurate predictions with calibrated confidence when applying Deep Neural Networks (DNNs), including Pre-trained Language Models (PLMs), as a real-world application. However, DNNs are prone to miscalibration, which limits their applicability. Moreover, existing methods for calibration and uncertainty estimation are computationally expensive. Our investigation in NER found that data augmentation improves calibration and uncertainty in cross-genre and cross-lingual settings, especially the in-domain setting. Furthermore, we showed that calibration for NER tends to be more effective when the perplexity of the sentences generated by data augmentation is lower, and that increasing the size of the augmentation further improves calibration and uncertainty. | [
"['Wataru Hashimoto' 'Hidetaka Kamigaito' 'Taro Watanabe']"
] |
null | null | 2407.02070 | null | null | http://arxiv.org/pdf/2407.02070v2 | 2024-07-04T12:43:52Z | 2024-07-02T08:59:24Z | Latent Diffusion Model for Generating Ensembles of Climate Simulations | Obtaining accurate estimates of uncertainty in climate scenarios often requires generating large ensembles of high-resolution climate simulations, a computationally expensive and memory intensive process. To address this challenge, we train a novel generative deep learning approach on extensive sets of climate simulations. The model consists of two components: a variational autoencoder for dimensionality reduction and a denoising diffusion probabilistic model that generates multiple ensemble members. We validate our model on the Max Planck Institute Grand Ensemble and show that it achieves good agreement with the original ensemble in terms of variability. By leveraging the latent space representation, our model can rapidly generate large ensembles on-the-fly with minimal memory requirements, which can significantly improve the efficiency of uncertainty quantification in climate simulations. | [
"['Johannes Meuer' 'Maximilian Witte' 'Tobias Sebastian Finn'\n 'Claudia Timmreck' 'Thomas Ludwig' 'Christopher Kadow']"
] |
null | null | 2407.02073 | null | null | http://arxiv.org/pdf/2407.02073v1 | 2024-07-02T09:05:43Z | 2024-07-02T09:05:43Z | Contribution Evaluation of Heterogeneous Participants in Federated
Learning via Prototypical Representations | Contribution evaluation in federated learning (FL) has become a pivotal research area due to its applicability across various domains, such as detecting low-quality datasets, enhancing model robustness, and designing incentive mechanisms. Existing contribution evaluation methods, which primarily rely on data volume, model similarity, and auxiliary test datasets, have shown success in diverse scenarios. However, their effectiveness often diminishes due to the heterogeneity of data distributions, presenting a significant challenge to their applicability. In response, this paper explores contribution evaluation in FL from an entirely new perspective of representation. In this work, we propose a new method for the contribution evaluation of heterogeneous participants in federated learning (FLCE), which introduces a novel indicator \emph{class contribution momentum} to conduct refined contribution evaluation. Our core idea is the construction and application of the class contribution momentum indicator from individual, relative, and holistic perspectives, thereby achieving an effective and efficient contribution evaluation of heterogeneous participants without relying on an auxiliary test dataset. Extensive experimental results demonstrate the superiority of our method in terms of fidelity, effectiveness, efficiency, and heterogeneity across various scenarios. | [
"['Qi Guo' 'Minghao Yao' 'Zhen Tian' 'Saiyu Qi' 'Yong Qi' 'Yun Lin'\n 'Jin Song Dong']"
] |
null | null | 2407.02089 | null | null | http://arxiv.org/pdf/2407.02089v1 | 2024-07-02T09:25:58Z | 2024-07-02T09:25:58Z | GPTCast: a weather language model for precipitation nowcasting | This work introduces GPTCast, a generative deep-learning method for ensemble nowcasting of radar-based precipitation, inspired by advancements in large language models (LLMs). We employ a GPT model as a forecaster to learn spatiotemporal precipitation dynamics using tokenized radar images. The tokenizer is based on a Quantized Variational Autoencoder featuring a novel reconstruction loss tailored for the skewed distribution of precipitation that promotes faithful reconstruction of high rainfall rates. The approach produces realistic ensemble forecasts and provides probabilistic outputs with accurate uncertainty estimation. The model is trained without resorting to randomness; all variability is learned solely from the data and exposed by the model at inference for ensemble generation. We train and test GPTCast using a 6-year radar dataset over the Emilia-Romagna region in Northern Italy, showing superior results compared to state-of-the-art ensemble extrapolation methods. | [
"['Gabriele Franch' 'Elena Tomasi' 'Rishabh Wanjari' 'Virginia Poli'\n 'Chiara Cardinali' 'Pier Paolo Alberoni' 'Marco Cristoforetti']"
] |
null | null | 2407.02091 | null | null | http://arxiv.org/pdf/2407.02091v1 | 2024-07-02T09:26:38Z | 2024-07-02T09:26:38Z | Efficient Bit Labeling in Factorization Machines with Annealing for
Traveling Salesman Problem | To efficiently find an optimum parameter combination in a large-scale problem, it is key to convert the parameters into available variables in actual machines. Specifically, quadratic unconstrained binary optimization problems are solved with the help of machine learning, e.g., factorization machines with annealing, which convert a raw parameter to binary variables. This work investigates the dependence of the convergence speed and the accuracy on the binary labeling method, which can influence the cost function shape and thus the probability of being captured at a local minimum solution. Taking the traveling salesman problem as an example, we propose and evaluate Gray labeling, which correlates the Hamming distance in binary labels with the traveling distance. Through numerical simulations of the traveling salesman problem with up to 15 cities at a limited number of iterations, Gray labeling shows a lower percentage of local minima and shorter traveling distances compared with natural labeling. | [
"['Shota Koshikawa' 'Aruto Hosaka' 'Tsuyoshi Yoshida']"
] |
null | null | 2407.02106 | null | null | http://arxiv.org/pdf/2407.02106v1 | 2024-07-02T09:47:56Z | 2024-07-02T09:47:56Z | Automated Knowledge Graph Learning in Industrial Processes | Industrial processes generate vast amounts of time series data, yet extracting meaningful relationships and insights remains challenging. This paper introduces a framework for automated knowledge graph learning from time series data, specifically tailored for industrial applications. Our framework addresses the complexities inherent in industrial datasets, transforming them into knowledge graphs that improve decision-making, process optimization, and knowledge discovery. Additionally, it employs Granger causality to identify key attributes that can inform the design of predictive models. To illustrate the practical utility of our approach, we also present a motivating use case demonstrating the benefits of our framework in a real-world industrial scenario. Further, we demonstrate how the automated conversion of time series data into knowledge graphs can identify causal influences or dependencies between important process parameters. | [
"['Lolitta Ammann' 'Jorge Martinez-Gil' 'Michael Mayr'\n 'Georgios C. Chasparis']"
] |
null | null | 2407.02112 | null | null | http://arxiv.org/pdf/2407.02112v1 | 2024-07-02T09:54:39Z | 2024-07-02T09:54:39Z | A Data-Centric Perspective on Evaluating Machine Learning Models for
Tabular Data | Tabular data is prevalent in real-world machine learning applications, and new models for supervised learning of tabular data are frequently proposed. Comparative studies assessing the performance of models typically consist of model-centric evaluation setups with overly standardized data preprocessing. This paper demonstrates that such model-centric evaluations are biased, as real-world modeling pipelines often require dataset-specific preprocessing and feature engineering. Therefore, we propose a data-centric evaluation framework. We select 10 relevant datasets from Kaggle competitions and implement expert-level preprocessing pipelines for each dataset. We conduct experiments with different preprocessing pipelines and hyperparameter optimization (HPO) regimes to quantify the impact of model selection, HPO, feature engineering, and test-time adaptation. Our main findings are: 1. After dataset-specific feature engineering, model rankings change considerably, performance differences decrease, and the importance of model selection reduces. 2. Recent models, despite their measurable progress, still significantly benefit from manual feature engineering. This holds true for both tree-based models and neural networks. 3. While tabular data is typically considered static, samples are often collected over time, and adapting to distribution shifts can be important even in supposedly static data. These insights suggest that research efforts should be directed toward a data-centric perspective, acknowledging that tabular data requires feature engineering and often exhibits temporal characteristics. | [
"['Andrej Tschalzev' 'Sascha Marton' 'Stefan Lüdtke' 'Christian Bartelt'\n 'Heiner Stuckenschmidt']"
] |
null | null | 2407.02119 | null | null | http://arxiv.org/pdf/2407.02119v2 | 2024-07-09T08:24:06Z | 2024-07-02T10:09:19Z | Cost-Effective Proxy Reward Model Construction with On-Policy and Active
Learning | Reinforcement learning with human feedback (RLHF), as a widely adopted approach in current large language model pipelines, is \textit{bottlenecked by the size of human preference data}. While traditional methods rely on offline preference dataset constructions, recent approaches have shifted towards online settings, where a learner uses a small amount of labeled seed data and a large pool of unlabeled prompts to iteratively construct new preference data through self-generated responses and high-quality reward/preference feedback. However, most current online algorithms still focus on preference labeling during policy model updating with given feedback oracles, which incurs significant expert query costs. \textit{We are the first to explore cost-effective proxy reward oracle construction strategies for further labeling preferences or rewards with extremely limited labeled data and expert query budgets}. Our approach introduces two key innovations: (1) on-policy query to avoid OOD and imbalance issues in seed data, and (2) active learning to select the most informative data for preference queries. Using these methods, we train an evaluation model with minimal expert-labeled data, which then effectively labels nine times more preference pairs for further RLHF training. For instance, our model using Direct Preference Optimization (DPO) gains over 1% average improvement on AlpacaEval2, MMLU-5shot and MMLU-0shot, with only 1.7K query cost. Our methodology is orthogonal to other direct expert query-based strategies and therefore might be integrated with them to further reduce query costs. | [
"['Yifang Chen' 'Shuohang Wang' 'Ziyi Yang' 'Hiteshi Sharma'\n 'Nikos Karampatziakis' 'Donghan Yu' 'Kevin Jamieson' 'Simon Shaolei Du'\n 'Yelong Shen']"
] |
null | null | 2407.02125 | null | null | http://arxiv.org/pdf/2407.02125v1 | 2024-07-02T10:16:04Z | 2024-07-02T10:16:04Z | Distributional Regression U-Nets for the Postprocessing of Precipitation
Ensemble Forecasts | Accurate precipitation forecasts have a high socio-economic value due to their role in decision-making in various fields such as transport networks and farming. We propose a global statistical postprocessing method for grid-based precipitation ensemble forecasts. This U-Net-based distributional regression method predicts marginal distributions in the form of parametric distributions inferred by scoring rule minimization. Distributional regression U-Nets are compared to state-of-the-art postprocessing methods for daily 21-h forecasts of 3-h accumulated precipitation over the South of France. Training data comes from the Météo-France weather model AROME-EPS and spans 3 years. A practical challenge appears when consistent data or reforecasts are not available. Distributional regression U-Nets compete favorably with the raw ensemble. In terms of continuous ranked probability score, they reach a performance comparable to quantile regression forests (QRF). However, they are unable to provide calibrated forecasts in areas associated with high climatological precipitation. In terms of predictive power for heavy precipitation events, they outperform both QRF and semi-parametric QRF with tail extensions. | [
"['Romain Pic' 'Clément Dombry' 'Philippe Naveau' 'Maxime Taillardat']"
] |
null | null | 2407.02138 | null | null | http://arxiv.org/pdf/2407.02138v1 | 2024-07-02T10:33:31Z | 2024-07-02T10:33:31Z | Efficient Nearest Neighbor based Uncertainty Estimation for Natural
Language Processing Tasks | Trustworthy prediction in Deep Neural Networks (DNNs), including Pre-trained Language Models (PLMs), is important for safety-critical applications in the real world. However, DNNs often suffer from poor uncertainty estimation, such as miscalibration. In particular, approaches that require multiple stochastic inferences can mitigate this problem, but the expensive cost of inference makes them impractical. In this study, we propose $k$-Nearest Neighbor Uncertainty Estimation ($k$NN-UE), an uncertainty estimation method that uses the distances from the neighbors and the label-existence ratio of neighbors. Experiments on sentiment analysis, natural language inference, and named entity recognition show that our proposed method outperforms the baselines or recent density-based methods in confidence calibration, selective prediction, and out-of-distribution detection. Moreover, our analyses indicate that introducing dimension reduction or approximate nearest neighbor search inspired by recent $k$NN-LM studies reduces the inference overhead without significantly degrading estimation performance when they are combined appropriately. | [
"['Wataru Hashimoto' 'Hidetaka Kamigaito' 'Taro Watanabe']"
] |
null | null | 2407.02143 | null | null | http://arxiv.org/pdf/2407.02143v1 | 2024-07-02T10:37:54Z | 2024-07-02T10:37:54Z | Counterfactual Data Augmentation with Denoising Diffusion for Graph
Anomaly Detection | A critical aspect of Graph Neural Networks (GNNs) is to enhance the node representations by aggregating node neighborhood information. However, when detecting anomalies, the representations of abnormal nodes are prone to be averaged by normal neighbors, making the learned anomaly representations less distinguishable. To tackle this issue, we propose CAGAD -- an unsupervised Counterfactual data Augmentation method for Graph Anomaly Detection -- which introduces a graph pointer neural network as the heterophilic node detector to identify potential anomalies whose neighborhoods are normal-node-dominant. For each identified potential anomaly, we design a graph-specific diffusion model to translate a part of its neighbors, which are probably normal, into anomalous ones. Finally, we involve these translated neighbors in GNN neighborhood aggregation to produce counterfactual representations of anomalies. Through aggregating the translated anomalous neighbors, counterfactual representations become more distinguishable and further improve detection performance. The experimental results on four datasets demonstrate that CAGAD significantly outperforms strong baselines, with an average improvement of 2.35% on F1, 2.53% on AUC-ROC, and 2.79% on AUC-PR. | [
"['Chunjing Xiao' 'Shikang Pang' 'Xovee Xu' 'Xuan Li' 'Goce Trajcevski'\n 'Fan Zhou']"
] |
null | null | 2407.02153 | null | null | http://arxiv.org/pdf/2407.02153v1 | 2024-07-02T10:51:36Z | 2024-07-02T10:51:36Z | Equidistribution-based training of Free Knot Splines and ReLU Neural
Networks | We consider the problem of one-dimensional function approximation using shallow neural networks (NN) with a rectified linear unit (ReLU) activation function and compare their training with traditional methods such as univariate Free Knot Splines (FKS). ReLU NNs and FKS span the same function space, and thus have the same theoretical expressivity. In the case of ReLU NNs, we show that their conditioning degrades rapidly as the width of the network increases. This often leads to significantly poorer approximation, in contrast to the FKS representation, which remains well-conditioned as the number of knots increases. We leverage the theory of optimal piecewise linear interpolants to improve the training procedure for a ReLU NN. Using the equidistribution principle, we propose a two-level procedure for training the FKS by first solving the nonlinear problem of finding the optimal knot locations of the interpolating FKS. Determining the optimal knots then acts as a good starting point for training the weights of the FKS. The training of the FKS gives insights into how we can train a ReLU NN effectively to give an equally accurate approximation. More precisely, combining the training of the ReLU NN with an equidistribution-based loss to find the breakpoints of the ReLU functions, together with preconditioning the ReLU NN approximation (to take an FKS form) to find the scalings of the ReLU functions, leads to a well-conditioned and reliable method of finding an accurate ReLU NN approximation to a target function. We test this method on a series of regular, singular, and rapidly varying target functions and obtain good results, realising the expressivity of the network in this case. | [
"['Simone Appella' 'Simon Arridge' 'Chris Budd' 'Teo Deveney'\n 'Lisa Maria Kreusser']"
] |
null | null | 2407.02156 | null | null | http://arxiv.org/pdf/2407.02156v1 | 2024-07-02T10:54:23Z | 2024-07-02T10:54:23Z | Towards Training Music Taggers on Synthetic Data | Most contemporary music tagging systems rely on large volumes of annotated data. As an alternative, we investigate the extent to which synthetically generated music excerpts can improve tagging systems when only small annotated collections are available. To this end, we release GTZAN-synth, a synthetic dataset that follows the taxonomy of the well-known GTZAN dataset while being ten times larger in data volume. We first observe that simply adding this synthetic dataset to the training split of GTZAN does not result in performance improvements. We then proceed to investigate domain adaptation, transfer learning and fine-tuning strategies for the task at hand and draw the conclusion that the last two options yield an increase in accuracy. Overall, the proposed approach can be considered as a first guide in a promising field for future research. | [
"['Nadine Kroher' 'Steven Manangu' 'Aggelos Pikrakis']"
] |
null | null | 2407.02188 | null | null | http://arxiv.org/pdf/2407.02188v1 | 2024-07-02T11:46:07Z | 2024-07-02T11:46:07Z | Structure-Aware Consensus Network on Graphs with Few Labeled Nodes | Graph node classification with few labeled nodes presents significant challenges due to limited supervision. Conventional methods often exploit the graph in a transductive learning manner. They fail to effectively utilize the abundant unlabeled data and the structural information inherent in graphs. To address these issues, we introduce a Structure-Aware Consensus Network (SACN) from three perspectives. Firstly, SACN leverages a novel structure-aware consensus learning strategy between two strongly augmented views. The proposed strategy can fully exploit the potentially useful information of the unlabeled nodes and the structural information of the entire graph. Secondly, SACN uniquely integrates the graph's structural information to achieve strong-to-strong consensus learning, improving the utilization of unlabeled data while maintaining multiview learning. Thirdly, unlike two-branch graph neural network-based methods, SACN is designed for multiview feature learning within a single-branch architecture. Furthermore, a class-aware pseudolabel selection strategy helps address class imbalance and achieve effective weak-to-strong supervision. Extensive experiments on three benchmark datasets demonstrate SACN's superior performance in node classification tasks, particularly at very low label rates, outperforming state-of-the-art methods while maintaining computational simplicity. The source code is available at https://github.com/kunzhan/SACN | [
"['Shuaike Xu' 'Xiaolin Zhang' 'Peng Zhang' 'Kun Zhan']"
] |
null | null | 2407.02191 | null | null | http://arxiv.org/pdf/2407.02191v1 | 2024-07-02T11:49:59Z | 2024-07-02T11:49:59Z | Attack-Aware Noise Calibration for Differential Privacy | Differential privacy (DP) is a widely used approach for mitigating privacy risks when training machine learning models on sensitive data. DP mechanisms add noise during training to limit the risk of information leakage. The scale of the added noise is critical, as it determines the trade-off between privacy and utility. The standard practice is to select the noise scale in terms of a privacy budget parameter $\epsilon$. This parameter is in turn interpreted in terms of operational attack risk, such as accuracy, or sensitivity and specificity of inference attacks against the privacy of the data. We demonstrate that this two-step procedure of first calibrating the noise scale to a privacy budget $\epsilon$, and then translating $\epsilon$ to attack risk leads to overly conservative risk assessments and unnecessarily low utility. We propose methods to directly calibrate the noise scale to a desired attack risk level, bypassing the intermediate step of choosing $\epsilon$. For a target attack risk, our approach significantly decreases noise scale, leading to increased utility at the same level of privacy. We empirically demonstrate that calibrating noise to attack sensitivity/specificity, rather than $\epsilon$, when training privacy-preserving ML models substantially improves model accuracy for the same risk level. Our work provides a principled and practical way to improve the utility of privacy-preserving ML without compromising on privacy. | [
"['Bogdan Kulynych' 'Juan Felipe Gomez' 'Georgios Kaissis'\n 'Flavio du Pin Calmon' 'Carmela Troncoso']"
] |
null | null | 2407.02211 | null | null | http://arxiv.org/pdf/2407.02211v1 | 2024-07-02T12:21:14Z | 2024-07-02T12:21:14Z | PromptIntern: Saving Inference Costs by Internalizing Recurrent Prompt
during Large Language Model Fine-tuning | Large language models (LLMs) have played a fundamental role in various natural language processing tasks with powerful prompt techniques. However, in real-world applications, there are often similar prompt components for repeated queries, which causes significant computational burdens during inference. Existing prompt compression and direct fine-tuning methods aim to tackle these challenges, yet they frequently struggle to strike an optimal balance between cost-efficiency and performance effectiveness, especially in complex tasks such as NL2Code. In this paper, we propose a novel method, PromptIntern, to internalize the prompt knowledge into model parameters via progressive fine-tuning. Our method enables LLMs to emulate the human learning process for a new task, where detailed templates and examples in a prompt are gradually internalized and phased out progressively as the model grows accustomed to the task. Extensive experiments demonstrate that our method reduces inference tokens by over 90%, speeds up inference by 4.2 times, and saves 88.3% in monetary cost. | [
"['Jiaru Zou' 'Mengyu Zhou' 'Tao Li' 'Shi Han' 'Dongmei Zhang']"
] |
null | null | 2407.02217 | null | null | http://arxiv.org/pdf/2407.02217v1 | 2024-07-02T12:32:57Z | 2024-07-02T12:32:57Z | Physics-Informed Model and Hybrid Planning for Efficient Dyna-Style
Reinforcement Learning | Applying reinforcement learning (RL) to real-world applications requires addressing a trade-off between asymptotic performance, sample efficiency, and inference time. In this work, we demonstrate how to address this triple challenge by leveraging partial physical knowledge about the system dynamics. Our approach involves learning a physics-informed model to boost sample efficiency and generating imaginary trajectories from this model to learn a model-free policy and Q-function. Furthermore, we propose a hybrid planning strategy, combining the learned policy and Q-function with the learned model to enhance time efficiency in planning. Through practical demonstrations, we illustrate that our method improves the compromise between sample efficiency, time efficiency, and performance over state-of-the-art methods. | [
"['Zakariae El Asri' 'Olivier Sigaud' 'Nicolas Thome']"
] |
null | null | 2407.02231 | null | null | http://arxiv.org/pdf/2407.02231v1 | 2024-07-02T12:56:17Z | 2024-07-02T12:56:17Z | Safety-Driven Deep Reinforcement Learning Framework for Cobots: A
Sim2Real Approach | This study presents a novel methodology incorporating safety constraints into a robotic simulation during the training of deep reinforcement learning (DRL). The framework integrates specific parts of the safety requirements, such as velocity constraints, as specified by ISO 10218, directly within the DRL model that becomes a part of the robot's learning algorithm. The study then evaluated the efficiency of these safety constraints by subjecting the DRL model to various scenarios, including grasping tasks with and without obstacle avoidance. The validation process involved comprehensive simulation-based testing of the DRL model's responses to potential hazards and its compliance with the safety requirements. Also, the performance of the system is assessed against the functional safety standard IEC 61508 to determine its safety integrity level. The study indicated a significant improvement in the safety performance of the robotic system. The proposed DRL model anticipates and mitigates hazards while maintaining operational efficiency. The approach was validated in a testbed with a collaborative robotic arm with safety sensors and assessed with metrics such as the average number of safety violations, obstacle avoidance, and the number of successful grasps. The proposed approach outperforms the conventional method by a 16.5% average success rate on the tested scenarios in the simulations and 2.5% in the testbed without safety violations. The project repository is available at https://github.com/ammar-n-abbas/sim2real-ur-gym-gazebo. | [
"['Ammar N. Abbas' 'Shakra Mehak' 'Georgios C. Chasparis'\n 'John D. Kelleher' 'Michael Guilfoyle' 'Maria Chiara Leva'\n 'Aswin K Ramasubramanian']"
] |
null | null | 2407.02233 | null | null | http://arxiv.org/pdf/2407.02233v1 | 2024-07-02T12:57:42Z | 2024-07-02T12:57:42Z | Synthetic Multimodal Question Generation | Multimodal Retrieval Augmented Generation (MMRAG) is a powerful approach to question-answering over multimodal documents. A key challenge with evaluating MMRAG is the paucity of high-quality datasets matching the question styles and modalities of interest. In light of this, we propose SMMQG, a synthetic data generation framework. SMMQG leverages interplay between a retriever, large language model (LLM) and large multimodal model (LMM) to generate question and answer pairs directly from multimodal documents, with the questions conforming to specified styles and modalities. We use SMMQG to generate an MMRAG dataset of 1024 questions over Wikipedia documents and evaluate state-of-the-art models using it, revealing insights into model performance that are attainable only through style- and modality-specific evaluation data. Next, we measure the quality of data produced by SMMQG via a human study. We find that the quality of our synthetic data is on par with the quality of the crowdsourced benchmark MMQA and that downstream evaluation results using both datasets strongly concur. | [
"['Ian Wu' 'Sravan Jayanthi' 'Vijay Viswanathan' 'Simon Rosenberg'\n 'Sina Pakazad' 'Tongshuang Wu' 'Graham Neubig']"
] |
null | null | 2407.02238 | null | null | http://arxiv.org/pdf/2407.02238v1 | 2024-07-02T13:00:19Z | 2024-07-02T13:00:19Z | MIREncoder: Multi-modal IR-based Pretrained Embeddings for Performance
Optimizations | One of the primary areas of interest in High Performance Computing is the improvement of performance of parallel workloads. Nowadays, compilable source code-based optimization tasks that employ deep learning often exploit LLVM Intermediate Representations (IRs) for extracting features from source code. Most such works target specific tasks, or are designed with a pre-defined set of heuristics. So far, pre-trained models are rare in this domain, but the possibilities have been widely discussed. In particular, approaches mimicking large language models (LLMs) have been proposed. But these have prohibitively large training costs. In this paper, we propose MIREncoder, a Multi-modal IR-based Auto-Encoder that can be pre-trained to generate a learned embedding space to be used for downstream tasks by machine learning-based approaches. A multi-modal approach enables us to better extract features from compilable programs. It allows us to better model code syntax, semantics and structure. For code-based performance optimizations, these features are very important while making optimization decisions. A pre-trained model/embedding implicitly enables the usage of transfer learning, and helps move away from task-specific trained models. Additionally, a pre-trained model used for downstream performance optimization should itself have reduced overhead, and be easily usable. These considerations have led us to propose a modeling approach that i) understands code semantics and structure, ii) enables use of transfer learning, and iii) is small and simple enough to be easily re-purposed or reused even with low resource availability. Our evaluations will show that our proposed approach can outperform the state of the art while reducing overhead. | [
"['Akash Dutta' 'Ali Jannesari']"
] |
null | null | 2407.02240 | null | null | http://arxiv.org/pdf/2407.02240v1 | 2024-07-02T13:02:12Z | 2024-07-02T13:02:12Z | MALT Powers Up Adversarial Attacks | Current adversarial attacks for multi-class classifiers choose the target class for a given input naively, based on the classifier's confidence levels for various target classes. We present a novel adversarial targeting method, \textit{MALT - Mesoscopic Almost Linearity Targeting}, based on medium-scale almost linearity assumptions. Our attack wins over the current state-of-the-art AutoAttack on the standard benchmark datasets CIFAR-100 and ImageNet and for a variety of robust models. In particular, our attack is \emph{five times faster} than AutoAttack, while successfully matching all of AutoAttack's successes and attacking additional samples that were previously out of reach. We then prove formally and demonstrate empirically that our targeting method, although inspired by linear predictors, also applies to standard non-linear models. | [
"['Odelia Melamed' 'Gilad Yehudai' 'Adi Shamir']"
] |
null | null | 2407.02253 | null | null | http://arxiv.org/pdf/2407.02253v1 | 2024-07-02T13:18:15Z | 2024-07-02T13:18:15Z | Parameter-Selective Continual Test-Time Adaptation | Continual Test-Time Adaptation (CTTA) aims to adapt a pretrained model to ever-changing environments during the test time under continuous domain shifts. Most existing CTTA approaches are based on the Mean Teacher (MT) structure, which contains a student and a teacher model, where the student is updated using the pseudo-labels from the teacher model, and the teacher is then updated by exponential moving average strategy. However, these methods update the MT model indiscriminately on all parameters of the model. That is, some critical parameters involving sharing knowledge across different domains may be erased, intensifying error accumulation and catastrophic forgetting. In this paper, we introduce Parameter-Selective Mean Teacher (PSMT) method, which is capable of effectively updating the critical parameters within the MT network under domain shifts. First, we introduce a selective distillation mechanism in the student model, which utilizes past knowledge to regularize novel knowledge, thereby mitigating the impact of error accumulation. Second, to avoid catastrophic forgetting, in the teacher model, we create a mask through Fisher information to selectively update parameters via exponential moving average, with preservation measures applied to crucial parameters. Extensive experimental results verify that PSMT outperforms state-of-the-art methods across multiple benchmark datasets. Our code is available at \url{https://github.com/JiaxuTian/PSMT}. | [
"['Jiaxu Tian' 'Fan Lyu']"
] |
null | null | 2407.02258 | null | null | http://arxiv.org/pdf/2407.02258v1 | 2024-07-02T13:26:16Z | 2024-07-02T13:26:16Z | SiamTST: A Novel Representation Learning Framework for Enhanced
Multivariate Time Series Forecasting applied to Telco Networks | We introduce SiamTST, a novel representation learning framework for multivariate time series. SiamTST integrates a Siamese network with attention, channel-independent patching, and normalization techniques to achieve superior performance. Evaluated on a real-world industrial telecommunication dataset, SiamTST demonstrates significant improvements in forecasting accuracy over existing methods. Notably, a simple linear network also shows competitive performance, achieving the second-best results, just behind SiamTST. The code is available at https://github.com/simenkristoff/SiamTST. | [
"['Simen Kristoffersen' 'Peter Skaar Nordby' 'Sara Malacarne'\n 'Massimiliano Ruocco' 'Pablo Ortiz']"
] |
null | null | 2407.02263 | null | null | http://arxiv.org/pdf/2407.02263v2 | 2024-07-14T12:40:35Z | 2024-07-02T13:40:29Z | FreeCG: Free the Design Space of Clebsch-Gordan Transform for Machine
Learning Force Field | The Clebsch-Gordan Transform (CG transform) effectively encodes many-body interactions. Many studies have proven its accuracy in depicting atomic environments, although this comes with high computational needs. The computational burden of this challenge is hard to reduce due to the need for permutation equivariance, which limits the design space of the CG transform layer. We show that implementing the CG transform layer on permutation-invariant inputs allows complete freedom in the design of this layer without affecting symmetry. Developing further on this premise, our idea is to create a CG transform layer that operates on permutation-invariant abstract edges generated from real edge information. We bring in a group CG transform with sparse paths, abstract edge shuffling, and an attention enhancer to form a powerful and efficient CG transform layer. Our method, known as FreeCG, achieves State-of-The-Art (SoTA) results in force prediction for MD17, rMD17, MD22, and property prediction in QM9 datasets with notable enhancement. It introduces a novel paradigm for carrying out efficient and expressive CG transform in future geometric neural network designs. | [
"['Shihao Shao' 'Haoran Geng' 'Qinghua Cui']"
] |
null | null | 2407.02265 | null | null | http://arxiv.org/pdf/2407.02265v1 | 2024-07-02T13:41:59Z | 2024-07-02T13:41:59Z | DrugCLIP: Contrastive Drug-Disease Interaction For Drug Repurposing | Bringing a novel drug from the original idea to market typically requires more than ten years and billions of dollars. To alleviate the heavy burden, a natural idea is to reuse approved drugs to treat new diseases. The process is also known as drug repurposing or drug repositioning. Machine learning methods have exhibited huge potential in automating drug repurposing. However, they still encounter some challenges, such as the lack of labels and multimodal feature representation. To address these issues, we design DrugCLIP, a cutting-edge contrastive learning method, to learn drug-disease interactions without negative labels. Additionally, we have curated a drug repurposing dataset based on real-world clinical trial records. Thorough empirical studies are conducted to validate the effectiveness of the proposed DrugCLIP method. | [
"['Yingzhou Lu' 'Yaojun Hu' 'Chenhao Li']"
] |
null | null | 2407.02269 | null | null | http://arxiv.org/pdf/2407.02269v1 | 2024-07-02T13:58:28Z | 2024-07-02T13:58:28Z | IFTT-PIN: A Self-Calibrating PIN-Entry Method | Personalising an interface to the needs and preferences of a user often incurs additional interaction steps. In this paper, we demonstrate a novel method that enables the personalising of an interface without the need for explicit calibration procedures, via a process we call self-calibration. A second-order effect of self-calibration is that an outside observer cannot easily infer what a user is trying to achieve because they cannot interpret the user's actions. To explore this security angle, we developed IFTT-PIN (If This Then PIN) as the first self-calibrating PIN-entry method. When using IFTT-PIN, users are free to choose any button for any meaning without ever explicitly communicating their choice to the machine. IFTT-PIN infers both the user's PIN and their preferred button mapping at the same time. This paper presents the concept, implementation, and interactive demonstrations of IFTT-PIN, as well as an evaluation against shoulder surfing attacks. Our study (N=24) shows that by adding self-calibration to an existing PIN entry method, IFTT-PIN statistically significantly decreased PIN attack decoding rate by ca. 8.5 times (p=1.1e-9), while only decreasing the PIN entry encoding rate by ca. 1.4 times (p=0.02), leading to a positive security-usability trade-off. IFTT-PIN's entry rate significantly improved 21 days after first exposure (p=3.6e-6) to the method, suggesting self-calibrating interfaces are memorable despite using an initially undefined user interface. Self-calibration methods might lead to novel opportunities for interaction that are more inclusive and versatile, a potentially interesting challenge for the community. A short introductory video is available at https://youtu.be/pP5sfniNRns. | [
"['Kathryn McConkey' 'Talha Enes Ayranci' 'Mohamed Khamis'\n 'Jonathan Grizou']"
] |
null | null | 2407.02271 | null | null | http://arxiv.org/pdf/2407.02271v1 | 2024-07-02T13:59:09Z | 2024-07-02T13:59:09Z | Improving Explainability of Softmax Classifiers Using a Prototype-Based
Joint Embedding Method | We propose a prototype-based approach for improving explainability of softmax classifiers that provides an understandable prediction confidence, generated through stochastic sampling of prototypes, and demonstrates potential for out-of-distribution (OOD) detection. By modifying the model architecture and training to make predictions using similarities to any set of class examples from the training dataset, we acquire the ability to sample for prototypical examples that contributed to the prediction, which provide an instance-based explanation for the model's decision. Furthermore, by learning relationships between images from the training dataset through relative distances within the model's latent space, we obtain a metric for uncertainty that is better able to detect out-of-distribution data than softmax confidence. | [
"['Hilarie Sit' 'Brendan Keith' 'Karianne Bergen']"
] |
null | null | 2407.02275 | null | null | http://arxiv.org/pdf/2407.02275v1 | 2024-07-02T14:05:10Z | 2024-07-02T14:05:10Z | Learning Paradigms and Modelling Methodologies for Digital Twins in
Process Industry | Central to the digital transformation of the process industry are Digital Twins (DTs), virtual replicas of physical manufacturing systems that combine sensor data with sophisticated data-based or physics-based models, or a combination thereof, to tackle a variety of industrial-relevant tasks like process monitoring, predictive control or decision support. The backbone of a DT, i.e. the concrete modelling methodologies and architectural frameworks supporting these models, is complex, diverse and evolves fast, necessitating a thorough understanding of the latest state-of-the-art methods and trends to stay on top of a highly competitive market. From a research perspective, despite the high research interest in reviewing various aspects of DTs, structured literature reports specifically focusing on unravelling the utilized learning paradigms (e.g. self-supervised learning) for DT-creation in the process industry are a novel contribution in this field. This study aims to address these gaps by (1) systematically analyzing the modelling methodologies (e.g. Convolutional Neural Network, Encoder-Decoder, Hidden Markov Model) and paradigms (e.g. data-driven, physics-based, hybrid) used for DT-creation; (2) assessing the utilized learning strategies (e.g. supervised, unsupervised, self-supervised); (3) analyzing the type of modelling task (e.g. regression, classification, clustering); and (4) identifying the challenges and research gaps, as well as discussing potential resolutions. | [
"['Michael Mayr' 'Georgios C. Chasparis' 'Josef Küng']"
] |
null | null | 2407.02279 | null | null | http://arxiv.org/pdf/2407.02279v1 | 2024-07-02T14:08:23Z | 2024-07-02T14:08:23Z | How to Boost Any Loss Function | Boosting is a highly successful ML-born optimization setting in which one is required to computationally efficiently learn arbitrarily good models based on the access to a weak learner oracle, providing classifiers performing at least slightly differently from random guessing. A key difference with gradient-based optimization is that boosting's original model does not require access to first order information about a loss, yet the decades long history of boosting has quickly evolved it into a first order optimization setting -- sometimes even wrongfully defining it as such. Owing to recent progress extending gradient-based optimization to use only a loss' zeroth ($0^{th}$) order information to learn, this begs the question: what loss functions can be efficiently optimized with boosting and what is the information really needed for boosting to meet the original boosting blueprint's requirements? We provide a constructive formal answer essentially showing that any loss function can be optimized with boosting and thus boosting can achieve a feat not yet known to be possible in the classical $0^{th}$ order setting, since loss functions are not required to be convex, differentiable or Lipschitz -- and in fact not required to be continuous either. Some tools we use are rooted in quantum calculus, the mathematical field -- not to be confounded with quantum computation -- that studies calculus without passing to the limit, and thus without using first order information. | [
"['Richard Nock' 'Yishay Mansour']"
] |
null | null | 2407.02309 | null | null | http://arxiv.org/pdf/2407.02309v1 | 2024-07-02T14:44:01Z | 2024-07-02T14:44:01Z | Semantically Guided Representation Learning For Action Anticipation | Action anticipation is the task of forecasting future activity from a partially observed sequence of events. However, this task is exposed to intrinsic future uncertainty and the difficulty of reasoning upon interconnected actions. Unlike previous works that focus on extrapolating better visual and temporal information, we concentrate on learning action representations that are aware of their semantic interconnectivity based on prototypical action patterns and contextual co-occurrences. To this end, we propose the novel Semantically Guided Representation Learning (S-GEAR) framework. S-GEAR learns visual action prototypes and leverages language models to structure their relationship, inducing semanticity. To gather insights on S-GEAR's effectiveness, we test it on four action anticipation benchmarks, obtaining improved results compared to previous works: +3.5, +2.7, and +3.5 absolute points on Top-1 Accuracy on Epic-Kitchen 55, EGTEA Gaze+ and 50 Salads, respectively, and +0.8 on Top-5 Recall on Epic-Kitchens 100. We further observe that S-GEAR effectively transfers the geometric associations between actions from language to visual prototypes. Finally, S-GEAR opens new research frontiers in anticipation tasks by demonstrating the intricate impact of action semantic interconnectivity. | [
"['Anxhelo Diko' 'Danilo Avola' 'Bardh Prenkaj' 'Federico Fontana'\n 'Luigi Cinque']"
] |
null | null | 2407.02318 | null | null | http://arxiv.org/pdf/2407.02318v1 | 2024-07-01T12:52:05Z | 2024-07-01T12:52:05Z | The Solution for Temporal Sound Localisation Task of ICCV 1st Perception
Test Challenge 2023 | In this paper, we propose a solution for improving the quality of temporal sound localization. We employ a multimodal fusion approach to combine visual and audio features. High-quality visual features are extracted using a state-of-the-art self-supervised pre-training network, resulting in efficient video feature representations. At the same time, audio features serve as complementary information to help the model better localize the start and end of sounds. The fused features are then used to train a multi-scale Transformer. On the final test dataset, we achieved a mean average precision (mAP) of 0.33, obtaining the second-best performance in this track. | [
"['Yurui Huang' 'Yang Yang' 'Shou Chen' 'Xiangyu Wu' 'Qingguo Chen'\n 'Jianfeng Lu']"
] |
null | null | 2407.02322 | null | null | http://arxiv.org/pdf/2407.02322v1 | 2024-07-02T14:52:21Z | 2024-07-02T14:52:21Z | Stochastic Differential Equations models for Least-Squares Stochastic
Gradient Descent | We study the dynamics of a continuous-time model of the Stochastic Gradient Descent (SGD) for the least-square problem. Indeed, pursuing the work of Li et al. (2019), we analyze Stochastic Differential Equations (SDEs) that model SGD either in the case of the training loss (finite samples) or the population one (online setting). A key qualitative feature of the dynamics is the existence of a perfect interpolator of the data, irrespective of the sample size. In both scenarios, we provide precise, non-asymptotic rates of convergence to the (possibly degenerate) stationary distribution. Additionally, we describe this asymptotic distribution, offering estimates of its mean, deviations from it, and a proof of the emergence of heavy-tails related to the step-size magnitude. Numerical simulations supporting our findings are also presented. | [
"['Adrien Schertzer' 'Loucas Pillaud-Vivien']"
] |
null | null | 2407.02327 | null | null | http://arxiv.org/pdf/2407.02327v1 | 2024-07-02T14:56:47Z | 2024-07-02T14:56:47Z | QSync: Quantization-Minimized Synchronous Distributed Training Across
Hybrid Devices | A number of production deep learning clusters have attempted to explore inference hardware for DNN training at off-peak serving hours, when many inference GPUs are idling. Conducting DNN training with a combination of heterogeneous training and inference GPUs, known as hybrid device training, presents considerable challenges due to disparities in compute capability and significant differences in memory capacity. We propose QSync, a training system that enables efficient synchronous data-parallel DNN training over hybrid devices by strategically exploiting quantized operators. According to each device's available resource capacity, QSync selects a quantization-minimized setting for operators in the distributed DNN training graph, minimizing model accuracy degradation but keeping the training efficiency brought by quantization. We carefully design a predictor with a bi-directional mixed-precision indicator to reflect the sensitivity of DNN layers on fixed-point and floating-point low-precision operators, a replayer with a neighborhood-aware cost mapper to accurately estimate the latency of distributed hybrid mixed-precision training, and then an allocator that efficiently synchronizes workers with minimized model accuracy degradation. QSync bridges the computational graph on PyTorch to an optimized backend for quantization kernel performance and flexible support for various GPU architectures. Extensive experiments show that QSync's predictor can accurately simulate distributed mixed-precision training with <5% error, achieving a consistent 0.27-1.03% accuracy improvement on the from-scratch training tasks compared to uniform precision. | [
"['Juntao Zhao' 'Borui Wan' 'Yanghua Peng' 'Haibin Lin' 'Yibo Zhu'\n 'Chuan Wu']"
] |
null | null | 2407.02335 | null | null | http://arxiv.org/pdf/2407.02335v1 | 2024-07-02T15:05:19Z | 2024-07-02T15:05:19Z | CALICO: Confident Active Learning with Integrated Calibration | The growing use of deep learning in safety-critical applications, such as medical imaging, has raised concerns about limited labeled data, where this demand is amplified as model complexity increases, posing hurdles for domain experts to annotate data. In response to this, active learning (AL) is used to efficiently train models with limited annotation costs. In the context of deep neural networks (DNNs), AL often uses confidence or probability outputs as a score for selecting the most informative samples. However, modern DNNs exhibit unreliable confidence outputs, making calibration essential. We propose an AL framework that self-calibrates the confidence used for sample selection during the training process, referred to as Confident Active Learning with Integrated CalibratiOn (CALICO). CALICO incorporates the joint training of a classifier and an energy-based model, instead of the standard softmax-based classifier. This approach allows for simultaneous estimation of the input data distribution and the class probabilities during training, improving calibration without needing an additional labeled dataset. Experimental results showcase improved classification performance compared to a softmax-based classifier with fewer labeled samples. Furthermore, the calibration stability of the model is observed to depend on the prior class distribution of the data. | [
"['Lorenzo S. Querol' 'Hajime Nagahara' 'Hideaki Hayashi']"
] |
null | null | 2407.02342 | null | null | http://arxiv.org/pdf/2407.02342v1 | 2024-07-01T15:37:38Z | 2024-07-01T15:37:38Z | Optimizing Age of Information in Vehicular Edge Computing with Federated
Graph Neural Network Multi-Agent Reinforcement Learning | With the rapid development of intelligent vehicles and Intelligent Transport Systems (ITS), sensors such as cameras and LiDAR installed on intelligent vehicles provide a higher capacity for executing computation-intensive and delay-sensitive tasks, thereby raising deployment costs. To address this issue, Vehicular Edge Computing (VEC) has been proposed to process data through Road Side Units (RSUs) to support real-time applications. This paper focuses on the Age of Information (AoI) as a key metric for data freshness and explores task offloading issues for vehicles under RSU communication resource constraints. We adopt a Multi-agent Deep Reinforcement Learning (MADRL) approach, allowing vehicles to autonomously make optimal data offloading decisions. However, MADRL poses risks of vehicle information leakage during communication learning and centralized training. To mitigate this, we employ a Federated Learning (FL) framework that shares model parameters instead of raw data to protect the privacy of vehicle users. Building on this, we propose an innovative distributed federated learning framework combining Graph Neural Networks (GNN), named Federated Graph Neural Network Multi-Agent Reinforcement Learning (FGNN-MADRL), to optimize AoI across the system. For the first time, road scenarios are constructed as graph data structures, and a GNN-based federated learning framework is proposed, effectively combining distributed and centralized federated aggregation. Furthermore, we propose a new MADRL algorithm that simplifies decision making and enhances offloading efficiency, further reducing the decision complexity. Simulation results demonstrate the superiority of our proposed approach over other methods. | [
"['Wenhua Wang' 'Qiong Wu' 'Pingyi Fan' 'Nan Cheng' 'Wen Chen'\n 'Jiangzhou Wang' 'Khaled B. Letaief']"
] |
null | null | 2407.02348 | null | null | http://arxiv.org/pdf/2407.02348v1 | 2024-07-02T15:14:12Z | 2024-07-02T15:14:12Z | Revisiting Cascaded Ensembles for Efficient Inference | A common approach to make machine learning inference more efficient is to use example-specific adaptive schemes, which route or select models for each example at inference time. In this work we study a simple scheme for adaptive inference. We build a cascade of ensembles (CoE), beginning with resource-efficient models and growing to larger, more expressive models, where ensemble agreement serves as a data-dependent routing criterion. This scheme is easy to incorporate into existing inference pipelines, requires no additional training, and can be used to place models across multiple resource tiers--for instance, serving efficient models at the edge and invoking larger models in the cloud only when necessary. In cases where parallel inference is feasible, we show that CoE can improve accuracy relative to the single best model while reducing the average cost of inference by up to 7x, and provide Pareto-dominant solutions in accuracy and efficiency relative to existing adaptive inference baselines. These savings translate to an over 3x reduction in total monetary cost when performing inference using a heterogeneous cluster of GPUs. Finally, for edge inference scenarios where portions of the cascade reside at the edge vs. in the cloud, CoE can provide a 14x reduction in communication cost and inference latency without sacrificing accuracy. | [
"['Steven Kolawole' 'Don Dennis' 'Ameet Talwalkar' 'Virginia Smith']"
] |
null | null | 2407.02356 | null | null | http://arxiv.org/pdf/2407.02356v1 | 2024-07-02T15:21:11Z | 2024-07-02T15:21:11Z | Enable the Right to be Forgotten with Federated Client Unlearning in
Medical Imaging | The right to be forgotten, as stated in most data regulations, poses an underexplored challenge in federated learning (FL), leading to the development of federated unlearning (FU). However, current FU approaches often face trade-offs between efficiency, model performance, forgetting efficacy, and privacy preservation. In this paper, we delve into the paradigm of Federated Client Unlearning (FCU) to guarantee a client the right to erase their contribution or influence, introducing the first FU framework in medical imaging. In the unlearning process of a client, the proposed model-contrastive unlearning marks a pioneering step towards feature-level unlearning, and frequency-guided memory preservation ensures smooth forgetting of local knowledge while maintaining the generalizability of the trained global model, thus avoiding performance compromises and guaranteeing rapid post-training. We evaluated our FCU framework on two public medical image datasets, including intracranial hemorrhage diagnosis and skin lesion diagnosis, demonstrating that our framework outperformed other state-of-the-art FU frameworks, with an expected speed-up of 10-15 times compared with retraining from scratch. The code and the organized datasets can be found at: https://github.com/dzp2095/FCU. | [
"['Zhipeng Deng' 'Luyang Luo' 'Hao Chen']"
] |
null | null | 2407.02362 | null | null | http://arxiv.org/pdf/2407.02362v2 | 2024-07-07T17:20:51Z | 2024-07-02T15:28:10Z | Fast, Scalable, Energy-Efficient Non-element-wise Matrix Multiplication
on FPGA | Modern Neural Network (NN) architectures heavily rely on vast numbers of multiply-accumulate arithmetic operations, constituting the predominant computational cost. Therefore, this paper proposes a high-throughput, scalable and energy-efficient non-element-wise matrix multiplication unit on FPGAs as a basic component of the NNs. We first streamline inter-layer and intra-layer redundancies of the MADDNESS algorithm, a LUT-based approximate matrix multiplication, to design a fast, efficient, and scalable approximate matrix multiplication module termed "Approximate Multiplication Unit (AMU)". The AMU further optimizes LUT-based matrix multiplications through dedicated memory management and access design, decoupling computational overhead from input resolution and significantly boosting FPGA-based NN accelerator efficiency. The experimental results show that using our AMU achieves up to 9x higher throughput and 112x higher energy efficiency over the state-of-the-art solutions for FPGA-based Quantised Neural Network (QNN) accelerators. | [
"['Xuqi Zhu' 'Huaizhi Zhang' 'JunKyu Lee' 'Jiacheng Zhu' 'Chandrajit Pal'\n 'Sangeet Saha' 'Klaus D. McDonald-Maier' 'Xiaojun Zhai']"
] |
null | null | 2407.02369 | null | null | http://arxiv.org/pdf/2407.02369v1 | 2024-07-02T15:39:00Z | 2024-07-02T15:39:00Z | Two-Step Q-Learning | Q-learning is a stochastic approximation version of the classic value iteration. The literature has established that Q-learning suffers from both maximization bias and slower convergence. Recently, multi-step algorithms have shown practical advantages over existing methods. This paper proposes a novel off-policy two-step Q-learning algorithm without importance sampling. Under suitable assumptions, it is shown that the iterates in the proposed two-step Q-learning are bounded and converge almost surely to the optimal Q-values. This study also addresses the convergence analysis of the smooth version of two-step Q-learning, i.e., obtained by replacing the max function with the log-sum-exp function. The proposed algorithms are robust and easy to implement. Finally, we test the proposed algorithms on benchmark problems such as the roulette problem, the maximization bias problem, and randomly generated Markov decision processes, and compare them with the existing methods available in the literature. Numerical experiments demonstrate the superior performance of both the two-step Q-learning and its smooth variants. | [
"['Antony Vijesh' 'Shreyas S R']"
] |
null | null | 2407.02382 | null | null | http://arxiv.org/pdf/2407.02382v1 | 2024-05-10T10:54:03Z | 2024-05-10T10:54:03Z | Light-SLAM: A Robust Deep-Learning Visual SLAM System Based on LightGlue
under Challenging Lighting Conditions | Simultaneous Localization and Mapping (SLAM) has become a critical technology for intelligent transportation systems and autonomous robots and is widely used in autonomous driving. However, traditional manual feature-based methods in challenging lighting environments make it difficult to ensure robustness and accuracy. Some deep learning-based methods show potential but still have significant drawbacks. To address this problem, we propose a novel hybrid system for visual SLAM based on the LightGlue deep learning network. It uses deep local feature descriptors to replace traditional hand-crafted features and a more efficient and accurate deep network to achieve fast and precise feature matching. Thus, we use the robustness of deep learning to improve the whole system. We have combined traditional geometry-based approaches to introduce a complete visual SLAM system for monocular, binocular, and RGB-D sensors. We thoroughly tested the proposed system on four public datasets: KITTI, EuRoC, TUM, and 4Season, as well as on actual campus scenes. The experimental results show that the proposed method exhibits better accuracy and robustness in adapting to low-light and strongly light-varying environments than traditional manual features and deep learning-based methods. It can also run on GPU in real time. | [
"['Zhiqi Zhao' 'Chang Wu' 'Xiaotong Kong' 'Zejie Lv' 'Xiaoqi Du' 'Qiyan Li']"
] |
null | null | 2407.02389 | null | null | http://arxiv.org/pdf/2407.02389v1 | 2024-07-02T16:02:25Z | 2024-07-02T16:02:25Z | SafaRi:Adaptive Sequence Transformer for Weakly Supervised Referring
Expression Segmentation | Referring Expression Segmentation (RES) aims to provide a segmentation mask of the target object in an image referred to by the text (i.e., referring expression). Existing methods require large-scale mask annotations. Moreover, such approaches do not generalize well to unseen/zero-shot scenarios. To address the aforementioned issues, we propose a weakly-supervised bootstrapping architecture for RES with several new algorithmic innovations. To the best of our knowledge, ours is the first approach that considers only a fraction of both mask and box annotations (shown in Figure 1 and Table 1) for training. To enable principled training of models in such low-annotation settings, improve image-text region-level alignment, and further enhance spatial localization of the target object in the image, we propose Cross-modal Fusion with Attention Consistency module. For automatic pseudo-labeling of unlabeled samples, we introduce a novel Mask Validity Filtering routine based on a spatially aware zero-shot proposal scoring approach. Extensive experiments show that with just 30% annotations, our model SafaRi achieves 59.31 and 48.26 mIoUs as compared to 58.93 and 48.19 mIoUs obtained by the fully-supervised SOTA method SeqTR respectively on RefCOCO+@testA and RefCOCO+testB datasets. SafaRi also outperforms SeqTR by 11.7% (on RefCOCO+testA) and 19.6% (on RefCOCO+testB) in a fully-supervised setting and demonstrates strong generalization capabilities in unseen/zero-shot tasks. | [
"['Sayan Nag' 'Koustava Goswami' 'Srikrishna Karanam']"
] |
null | null | 2407.02390 | null | null | http://arxiv.org/pdf/2407.02390v1 | 2024-07-02T16:04:16Z | 2024-07-02T16:04:16Z | Uncertainty-Aware Decarbonization for Datacenters | This paper represents the first effort to quantify uncertainty in carbon intensity forecasting for datacenter decarbonization. We identify and analyze two types of uncertainty -- temporal and spatial -- and discuss their system implications. To address the temporal dynamics in quantifying uncertainty for carbon intensity forecasting, we introduce a conformal prediction-based framework. Evaluation results show that our technique robustly achieves target coverages in uncertainty quantification across various significance levels. We conduct two case studies using production power traces, focusing on temporal and spatial load shifting respectively. The results show that incorporating uncertainty into scheduling decisions can prevent a 5% and 14% increase in carbon emissions, respectively. These percentages translate to an absolute reduction of 2.1 and 10.4 tons of carbon emissions in a 20 MW datacenter cluster. | [
"['Amy Li' 'Sihang Liu' 'Yi Ding']"
] |
null | null | 2407.02405 | null | null | http://arxiv.org/pdf/2407.02405v1 | 2024-07-02T16:24:57Z | 2024-07-02T16:24:57Z | Tiny-PULP-Dronets: Squeezing Neural Networks for Faster and Lighter
Inference on Multi-Tasking Autonomous Nano-Drones | Pocket-sized autonomous nano-drones can revolutionize many robotic use cases, such as visual inspection in narrow, constrained spaces, and ensure safer human-robot interaction due to their tiny form factor and weight -- i.e., tens of grams. This compelling vision is challenged by the high level of intelligence needed aboard, which clashes against the limited computational and storage resources available on PULP (parallel-ultra-low-power) MCU class navigation and mission controllers that can be hosted aboard. This work moves from PULP-Dronet, a State-of-the-Art convolutional neural network for autonomous navigation on nano-drones. We introduce Tiny-PULP-Dronet: a novel methodology that squeezes, by more than one order of magnitude, the model size (50x fewer parameters) and the number of operations (27x fewer multiply-and-accumulate operations) required to run inference with similar flight performance as PULP-Dronet. This massive reduction paves the way towards affordable multi-tasking on nano-drones, a fundamental requirement for achieving high-level intelligence. | [
"['Lorenzo Lamberti' 'Vlad Niculescu' 'Michał Barcis' 'Lorenzo Bellone'\n 'Enrico Natalizio' 'Luca Benini' 'Daniele Palossi']"
] |
null | null | 2407.02408 | null | null | http://arxiv.org/pdf/2407.02408v1 | 2024-07-02T16:31:37Z | 2024-07-02T16:31:37Z | CEB: Compositional Evaluation Benchmark for Fairness in Large Language
Models | As Large Language Models (LLMs) are increasingly deployed to handle various natural language processing (NLP) tasks, concerns regarding the potential negative societal impacts of LLM-generated content have also arisen. To evaluate the biases exhibited by LLMs, researchers have recently proposed a variety of datasets. However, existing bias evaluation efforts often focus on only a particular type of bias and employ inconsistent evaluation metrics, leading to difficulties in comparison across different datasets and LLMs. To address these limitations, we collect a variety of datasets designed for the bias evaluation of LLMs, and further propose CEB, a Compositional Evaluation Benchmark that covers different types of bias across different social groups and tasks. The curation of CEB is based on our newly proposed compositional taxonomy, which characterizes each dataset from three dimensions: bias types, social groups, and tasks. By combining the three dimensions, we develop a comprehensive evaluation strategy for the bias in LLMs. Our experiments demonstrate that the levels of bias vary across these dimensions, thereby providing guidance for the development of specific bias mitigation methods. | [
"['Song Wang' 'Peng Wang' 'Tong Zhou' 'Yushun Dong' 'Zhen Tan' 'Jundong Li']"
] |
null | null | 2407.02419 | null | null | http://arxiv.org/pdf/2407.02419v2 | 2024-07-11T05:42:23Z | 2024-07-02T16:44:14Z | Quantum Curriculum Learning | Quantum machine learning (QML) requires significant quantum resources to achieve quantum advantage. Research should prioritize both the efficient design of quantum architectures and the development of learning strategies to optimize resource usage. We propose a framework called quantum curriculum learning (Q-CurL) for quantum data, where the curriculum introduces simpler tasks or data to the learning model before progressing to more challenging ones. We define the curriculum criteria based on the data density ratio between tasks to determine the curriculum order. We also implement a dynamic learning schedule to emphasize the significance of quantum data in optimizing the loss function. Empirical evidence shows that Q-CurL significantly enhances the training convergence and the generalization for unitary learning tasks and improves the robustness of quantum phase recognition tasks. Our framework provides a general learning strategy, bringing QML closer to realizing practical advantages. | [
"['Quoc Hoan Tran' 'Yasuhiro Endo' 'Hirotaka Oshima']"
] |
null | null | 2407.02423 | null | null | http://arxiv.org/pdf/2407.02423v2 | 2024-07-07T17:03:05Z | 2024-07-02T16:50:26Z | On the Anatomy of Attention | We introduce a category-theoretic diagrammatic formalism in order to systematically relate and reason about machine learning models. Our diagrams present architectures intuitively but without loss of essential detail, where natural relationships between models are captured by graphical transformations, and important differences and similarities can be identified at a glance. In this paper, we focus on attention mechanisms: translating folklore into mathematical derivations, and constructing a taxonomy of attention variants in the literature. As a first example of an empirical investigation underpinned by our formalism, we identify recurring anatomical components of attention, which we exhaustively recombine to explore a space of variations on the attention mechanism. | [
"['Nikhil Khatri' 'Tuomas Laakkonen' 'Jonathon Liu'\n 'Vincent Wang-Maścianica']"
] |
null | null | 2407.02424 | null | null | http://arxiv.org/pdf/2407.02424v1 | 2024-07-02T16:50:27Z | 2024-07-02T16:50:27Z | A Pattern Language for Machine Learning Tasks | Idealised as universal approximators, learners such as neural networks can be viewed as "variable functions" that may become one of a range of concrete functions after training. In the same way that equations constrain the possible values of variables in algebra, we may view objective functions as constraints on the behaviour of learners. We extract the equivalences perfectly optimised objective functions impose, calling them "tasks". For these tasks, we develop a formal graphical language that allows us to: (1) separate the core tasks of a behaviour from its implementation details; (2) reason about and design behaviours model-agnostically; and (3) simply describe and unify approaches in machine learning across domains. As proof-of-concept, we design a novel task that enables converting classifiers into generative models we call "manipulators", which we implement by directly translating task specifications into code. The resulting models exhibit capabilities such as style transfer and interpretable latent-space editing, without the need for custom architectures, adversarial training or random sampling. We formally relate the behaviour of manipulators to GANs, and empirically demonstrate their competitive performance with VAEs. We report on experiments across vision and language domains aiming to characterise manipulators as approximate Bayesian inversions of discriminative classifiers. | [
"['Benjamin Rodatz' 'Ian Fan' 'Tuomas Laakkonen' 'Neil John Ortega'\n 'Thomas Hoffman' 'Vincent Wang-Mascianica']"
] |
null | null | 2407.02428 | null | null | http://arxiv.org/pdf/2407.02428v1 | 2024-07-02T17:00:23Z | 2024-07-02T17:00:23Z | Comparative Evaluation of Learning Models for Bionic Robots: Non-Linear
Transfer Function Identifications | The control and modeling of bionic robot dynamics have increasingly adopted model-free control strategies using machine learning methods. Given the non-linear elastic nature of bionic robotic systems, learning-based methods provide reliable alternatives by utilizing numerical data to establish a direct mapping from actuation inputs to robot trajectories without complex kinematics models. However, for developers, the method of identifying an appropriate learning model for their specific bionic robots and further constructing the transfer function has not been thoroughly discussed. Thus, this research trains four types of models, including ensemble learning models, regularization-based models, kernel-based models, and neural network models, suitable for multi-input multi-output (MIMO) data and non-linear transfer function identification, in order to evaluate their (1) accuracy, (2) computation complexity, and (3) performance of capturing biological movements. This research encompasses data collection methods for control inputs and action outputs, selection of machine learning models, comparative analysis of training results, and transfer function identifications. The main objective is to provide a comprehensive evaluation strategy and framework for the application of model-free control. | [
"['Po-Yu Hsieh' 'June-Hao Hou']"
] |
null | null | 2407.02430 | null | null | http://arxiv.org/pdf/2407.02430v1 | 2024-07-02T17:04:34Z | 2024-07-02T17:04:34Z | Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D
Objects | The recent availability and adaptability of text-to-image models has sparked a new era in many related domains that benefit from the learned text priors as well as high-quality and fast generation capabilities, one of which is texture generation for 3D objects. Although recent texture generation methods achieve impressive results by using text-to-image networks, the combination of global consistency, quality, and speed, which is crucial for advancing texture generation to real-world applications, remains elusive. To that end, we introduce Meta 3D TextureGen: a new feedforward method comprised of two sequential networks aimed at generating high-quality and globally consistent textures for arbitrary geometries of any complexity degree in less than 20 seconds. Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map, as demonstrated by extensive qualitative and quantitative evaluations. In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures. | [
"['Raphael Bensadoun' 'Yanir Kleiman' 'Idan Azuri' 'Omri Harosh'\n 'Andrea Vedaldi' 'Natalia Neverova' 'Oran Gafni']"
] |
null | null | 2407.02431 | null | null | http://arxiv.org/pdf/2407.02431v2 | 2024-07-09T02:11:47Z | 2024-07-02T17:08:38Z | On the Robustness of Graph Reduction Against GNN Backdoor | Graph Neural Networks (GNNs) are gaining popularity across various domains due to their effectiveness in learning graph-structured data. Nevertheless, they have been shown to be susceptible to backdoor poisoning attacks, which pose serious threats to real-world applications. Meanwhile, graph reduction techniques, including coarsening and sparsification, which have long been employed to improve the scalability of large graph computational tasks, have recently emerged as effective methods for accelerating GNN training on large-scale graphs. However, the current development and deployment of graph reduction techniques for large graphs overlook the potential risks of data poisoning attacks against GNNs. It is not yet clear how graph reduction interacts with existing backdoor attacks. This paper conducts a thorough examination of the robustness of graph reduction methods in scalable GNN training in the presence of state-of-the-art backdoor attacks. We performed a comprehensive robustness analysis across six coarsening methods and six sparsification methods for graph reduction, under three GNN backdoor attacks against three GNN architectures. Our findings indicate that the effectiveness of graph reduction methods in mitigating attack success rates varies significantly, with some methods even exacerbating the attacks. Through detailed analyses of triggers and poisoned nodes, we interpret our findings and enhance our understanding of how graph reduction influences robustness against backdoor attacks. These results highlight the critical need for incorporating robustness considerations in graph reduction for GNN training, ensuring that enhancements in computational efficiency do not compromise the security of GNN systems. | [
"['Yuxuan Zhu' 'Michael Mandulak' 'Kerui Wu' 'George Slota' 'Yuseok Jeon'\n 'Ka-Ho Chow' 'Lei Yu']"
] |
null | null | 2407.02432 | null | null | http://arxiv.org/pdf/2407.02432v1 | 2024-07-02T17:09:24Z | 2024-07-02T17:09:24Z | Evaluating the Robustness of Adverse Drug Event Classification Models
Using Templates | An adverse drug effect (ADE) is any harmful event resulting from medical drug treatment. Despite their importance, ADEs are often under-reported in official channels. Some research has therefore turned to detecting discussions of ADEs in social media. Impressive results have been achieved in various attempts to detect ADEs. In a high-stakes domain such as medicine, however, an in-depth evaluation of a model's abilities is crucial. We address the issue of thorough performance evaluation in English-language ADE detection with hand-crafted templates for four capabilities: Temporal order, negation, sentiment, and beneficial effect. We find that models with similar performance on held-out test sets have varying results on these capabilities. | [
"['Dorothea MacPhail' 'David Harbecke' 'Lisa Raithel' 'Sebastian Möller']"
] |
null | null | 2407.02437 | null | null | http://arxiv.org/pdf/2407.02437v1 | 2024-07-02T17:15:12Z | 2024-07-02T17:15:12Z | Parameter Matching Attack: Enhancing Practical Applicability of
Availability Attacks | The widespread use of personal data for training machine learning models raises significant privacy concerns, as individuals have limited control over how their public data is subsequently utilized. Availability attacks have emerged as a means for data owners to safeguard their data by designing imperceptible perturbations that degrade model performance when incorporated into training datasets. However, existing availability attacks exhibit limitations in practical applicability, particularly when only a portion of the data can be perturbed. To address this challenge, we propose a novel availability attack approach termed Parameter Matching Attack (PMA). PMA is the first availability attack that works when only a portion of data can be perturbed. PMA optimizes perturbations so that when the model is trained on a mixture of clean and perturbed data, the resulting model will approach a model designed to perform poorly. Experimental results across four datasets demonstrate that PMA outperforms existing methods, achieving significant model performance degradation when a part of the training data is perturbed. Our code is available in the supplementary. | [
"['Yu Zhe' 'Jun Sakuma']"
] |
null | null | 2407.02447 | null | null | http://arxiv.org/pdf/2407.02447v1 | 2024-07-02T17:24:04Z | 2024-07-02T17:24:04Z | PLeaS -- Merging Models with Permutations and Least Squares | The democratization of machine learning systems has made the process of fine-tuning accessible to a large number of practitioners, leading to a wide range of open-source models fine-tuned on specialized tasks and datasets. Recent work has proposed to merge such models to combine their functionalities. However, prior approaches are restricted to models that are fine-tuned from the same base model. Furthermore, the final merged model is typically restricted to be of the same size as the original models. In this work, we propose a new two-step algorithm to merge models, termed PLeaS, which relaxes these constraints. First, leveraging the Permutation symmetries inherent in the two models, PLeaS partially matches nodes in each layer by maximizing alignment. Next, PLeaS computes the weights of the merged model as a layer-wise Least Squares solution to minimize the approximation error between the features of the merged model and the permuted features of the original models. PLeaS can thus merge the original models into a single model of a desired size, even when the two original models are fine-tuned from different base models. We also present a variant of our method which can merge models without using data from the fine-tuning domains. We demonstrate our method to merge ResNet models trained with shared and different label spaces, and show that we can perform better than the state-of-the-art merging methods by 8 to 15 percentage points for the same target compute while merging models trained on DomainNet and on fine-grained classification tasks. | [
"['Anshul Nasery' 'Jonathan Hayase' 'Pang Wei Koh' 'Sewoong Oh']"
] |
null | null | 2407.02461 | null | null | http://arxiv.org/pdf/2407.02461v1 | 2024-07-02T17:40:06Z | 2024-07-02T17:40:06Z | Decentralized Intelligence Network (DIN) | Decentralized Intelligence Network (DIN) addresses the significant challenges of data sovereignty and AI utilization caused by the fragmentation and siloing of data across providers and institutions. This comprehensive framework overcomes access barriers to scalable data sources previously hindered by silos by leveraging: 1) personal data stores as a prerequisite for data sovereignty; 2) a scalable federated learning protocol implemented on a public blockchain for decentralized AI training, where data remains with participants and only model parameter updates are shared; and 3) a scalable, trustless rewards mechanism to incentivize participation and ensure fair reward distribution. This framework ensures that no entity can prevent or control access to training on data offered by participants or determine financial benefits, as these processes operate on a public blockchain with an immutable record and without a third party. It supports effective AI training, allowing participants to maintain control over their data, benefit financially, and contribute to a decentralized, scalable ecosystem that leverages collective AI to develop beneficial algorithms. | [
"['Abraham Nash']"
] |
null | null | 2407.02466 | null | null | http://arxiv.org/pdf/2407.02466v2 | 2024-07-03T13:24:02Z | 2024-07-02T17:47:03Z | PWM: Policy Learning with Large World Models | Reinforcement Learning (RL) has achieved impressive results on complex tasks but struggles in multi-task settings with different embodiments. World models offer scalability by learning a simulation of the environment, yet they often rely on inefficient gradient-free optimization methods. We introduce Policy learning with large World Models (PWM), a novel model-based RL algorithm that learns continuous control policies from large multi-task world models. By pre-training the world model on offline data and using it for first-order gradient policy learning, PWM effectively solves tasks with up to 152 action dimensions and outperforms methods using ground-truth dynamics. Additionally, PWM scales to an 80-task setting, achieving up to 27% higher rewards than existing baselines without the need for expensive online planning. Visualizations and code available at https://www.imgeorgiev.com/pwm | [
"['Ignat Georgiev' 'Varun Giridhar' 'Nicklas Hansen' 'Animesh Garg']"
] |
null | null | 2407.02476 | null | null | http://arxiv.org/pdf/2407.02476v1 | 2024-07-02T17:53:56Z | 2024-07-02T17:53:56Z | Scalable Multi-Output Gaussian Processes with Stochastic Variational
Inference | The Multi-Output Gaussian Process (MOGP) is a popular tool for modelling data from multiple sources. A typical choice to build a covariance function for a MOGP is the Linear Model of Coregionalization (LMC), which parametrically models the covariance between outputs. The Latent Variable MOGP (LV-MOGP) generalises this idea by modelling the covariance between outputs using a kernel applied to latent variables, one per output, leading to a flexible MOGP model that allows efficient generalization to new outputs with few data points. Computational complexity in LV-MOGP grows linearly with the number of outputs, which makes it unsuitable for problems with a large number of outputs. In this paper, we propose a stochastic variational inference approach for the LV-MOGP that allows mini-batches for both inputs and outputs, making computational complexity per training iteration independent of the number of outputs. | [
"['Xiaoyu Jiang' 'Sokratia Georgaka' 'Magnus Rattray' 'Mauricio A. Alvarez']"
] |
null | null | 2407.02485 | null | null | http://arxiv.org/pdf/2407.02485v1 | 2024-07-02T17:59:17Z | 2024-07-02T17:59:17Z | RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in
LLMs | Large language models (LLMs) typically utilize the top-k contexts from a retriever in retrieval-augmented generation (RAG). In this work, we propose a novel instruction fine-tuning framework RankRAG, which instruction-tunes a single LLM for the dual purpose of context ranking and answer generation in RAG. In particular, the instruction-tuned LLMs work surprisingly well by adding a small fraction of ranking data into the training blend, and outperform existing expert ranking models, including the same LLM exclusively fine-tuned on a large amount of ranking data. For generation, we compare our model with many strong baselines, including GPT-4-0613, GPT-4-turbo-2024-0409, and ChatQA-1.5, an open-sourced model with the state-of-the-art performance on RAG benchmarks. Specifically, our Llama3-RankRAG significantly outperforms Llama3-ChatQA-1.5 and GPT-4 models on nine knowledge-intensive benchmarks. In addition, it also performs comparably to GPT-4 on five RAG benchmarks in the biomedical domain without instruction fine-tuning on biomedical data, demonstrating its superb capability for generalization to new domains. | [
"['Yue Yu' 'Wei Ping' 'Zihan Liu' 'Boxin Wang' 'Jiaxuan You' 'Chao Zhang'\n 'Mohammad Shoeybi' 'Bryan Catanzaro']"
] |
null | null | 2407.02486 | null | null | http://arxiv.org/pdf/2407.02486v1 | 2024-07-02T17:59:29Z | 2024-07-02T17:59:29Z | Neurocache: Efficient Vector Retrieval for Long-range Language Modeling | This paper introduces Neurocache, an approach to extend the effective context size of large language models (LLMs) using an external vector cache to store its past states. Like recent vector retrieval approaches, Neurocache uses an efficient k-nearest-neighbor (kNN) algorithm to retrieve relevant past states and incorporate them into the attention process. Neurocache improves upon previous methods by (1) storing compressed states, which reduces cache size; (2) performing a single retrieval operation per token which increases inference speed; and (3) extending the retrieval window to neighboring states, which improves both language modeling and downstream task accuracy. Our experiments show the effectiveness of Neurocache both for models trained from scratch and for pre-trained models such as Llama2-7B and Mistral-7B when enhanced with the cache mechanism. We also compare Neurocache with text retrieval methods and show improvements in single-document question-answering and few-shot learning tasks. We made the source code available under: https://github.com/alisafaya/neurocache | [
"['Ali Safaya' 'Deniz Yuret']"
] |