categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
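The records below follow the schema above. As a minimal sketch, assuming the dump is exported as a JSON Lines file (the file name "arxiv_papers.jsonl" is a placeholder, not taken from the source), it can be loaded and inspected with pandas:

```python
# Minimal sketch: loading records with the schema above into pandas.
# The file name is a placeholder assumption; most metadata fields may be null.
import pandas as pd

df = pd.read_json("arxiv_papers.jsonl", lines=True)

expected = ["categories", "doi", "id", "year", "venue", "link",
            "updated", "published", "title", "abstract", "authors"]
print(df[expected].dtypes)
print(df[["id", "title"]].head())
```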
null
null
2403.11637
null
null
http://arxiv.org/pdf/2403.11637v1
2024-03-18T10:19:52Z
2024-03-18T10:19:52Z
The Value of Reward Lookahead in Reinforcement Learning
In reinforcement learning (RL), agents sequentially interact with changing environments while aiming to maximize the obtained rewards. Usually, rewards are observed only after acting, and so the goal is to maximize the expected cumulative reward. Yet, in many practical settings, reward information is observed in advance -- prices are observed before performing transactions; nearby traffic information is partially known; and goals are oftentimes given to agents prior to the interaction. In this work, we aim to quantifiably analyze the value of such future reward information through the lens of competitive analysis. In particular, we measure the ratio between the value of standard RL agents and that of agents with partial future-reward lookahead. We characterize the worst-case reward distribution and derive exact ratios for the worst-case reward expectations. Surprisingly, the resulting ratios relate to known quantities in offline RL and reward-free exploration. We further provide tight bounds for the ratio given the worst-case dynamics. Our results cover the full spectrum between observing the immediate rewards before acting to observing all the rewards before the interaction starts.
[ "['Nadav Merlis' 'Dorian Baudry' 'Vianney Perchet']" ]
null
null
2403.11642
null
null
http://arxiv.org/pdf/2403.11642v1
2024-03-18T10:34:40Z
2024-03-18T10:34:40Z
Guiding the generation of counterfactual explanations through temporal background knowledge for Predictive Process Monitoring
Counterfactual explanations suggest what should be different in the input instance to change the outcome of an AI system. When dealing with counterfactual explanations in the field of Predictive Process Monitoring, however, control flow relationships among events have to be carefully considered. A counterfactual, indeed, should not violate control flow relationships among activities (temporal background knowledge). Within the field of Explainability in Predictive Process Monitoring, there have been a series of works regarding counterfactual explanations for outcome-based predictions. However, none of them consider the inclusion of temporal background knowledge when generating these counterfactuals. In this work, we adapt state-of-the-art techniques for counterfactual generation in the domain of XAI that are based on genetic algorithms to consider a series of temporal constraints at runtime. We assume that this temporal background knowledge is given, and we adapt the fitness function, as well as the crossover and mutation operators, to maintain the satisfaction of the constraints. The proposed methods are evaluated with respect to state-of-the-art genetic algorithms for counterfactual generation and the results are presented. We showcase that the inclusion of temporal background knowledge allows the generation of counterfactuals more conformant to the temporal background knowledge, without losing out on traditional counterfactual quality metrics.
[ "['Andrei Buliga' 'Chiara Di Francescomarino' 'Chiara Ghidini'\n 'Ivan Donadello' 'Fabrizio Maria Maggi']" ]
null
null
2403.11643
null
null
http://arxiv.org/pdf/2403.11643v1
2024-03-18T10:35:15Z
2024-03-18T10:35:15Z
Diffusion-Based Environment-Aware Trajectory Prediction
The ability to predict the future trajectories of traffic participants is crucial for the safe and efficient operation of autonomous vehicles. In this paper, a diffusion-based generative model for multi-agent trajectory prediction is proposed. The model is capable of capturing the complex interactions between traffic participants and the environment, accurately learning the multimodal nature of the data. The effectiveness of the approach is assessed on large-scale datasets of real-world traffic scenarios, showing that our model outperforms several well-established methods in terms of prediction accuracy. By the incorporation of differential motion constraints on the model output, we illustrate that our model is capable of generating a diverse set of realistic future trajectories. Through the use of an interaction-aware guidance signal, we further demonstrate that the model can be adapted to predict the behavior of less cooperative agents, emphasizing its practical applicability under uncertain traffic conditions.
[ "['Theodor Westny' 'Björn Olofsson' 'Erik Frisk']" ]
null
null
2403.11671
null
null
http://arxiv.org/pdf/2403.11671v1
2024-03-18T11:19:37Z
2024-03-18T11:19:37Z
HDLdebugger: Streamlining HDL debugging with Large Language Models
In the domain of chip design, Hardware Description Languages (HDLs) play a pivotal role. However, due to the complex syntax of HDLs and the limited availability of online resources, debugging HDL codes remains a difficult and time-intensive task, even for seasoned engineers. Consequently, there is a pressing need to develop automated HDL code debugging models, which can alleviate the burden on hardware engineers. Despite the strong capabilities of Large Language Models (LLMs) in generating, completing, and debugging software code, their utilization in the specialized field of HDL debugging has been limited and, to date, has not yielded satisfactory results. In this paper, we propose an LLM-assisted HDL debugging framework, namely HDLdebugger, which consists of HDL debugging data generation via a reverse engineering approach, a search engine for retrieval-augmented generation, and a retrieval-augmented LLM fine-tuning approach. Through the integration of these components, HDLdebugger can automate and streamline HDL debugging for chip design. Our comprehensive experiments, conducted on an HDL code dataset sourced from Huawei, reveal that HDLdebugger outperforms 13 cutting-edge LLM baselines, displaying exceptional effectiveness in HDL code debugging.
[ "['Xufeng Yao' 'Haoyang Li' 'Tsz Ho Chan' 'Wenyi Xiao' 'Mingxuan Yuan'\n 'Yu Huang' 'Lei Chen' 'Bei Yu']" ]
null
null
2403.11678
null
null
http://arxiv.org/pdf/2403.11678v2
2024-05-17T08:29:05Z
2024-03-18T11:29:43Z
Exploring 3D-aware Latent Spaces for Efficiently Learning Numerous Scenes
We present a method enabling the scaling of NeRFs to learn a large number of semantically-similar scenes. We combine two techniques to improve the required training time and memory cost per scene. First, we learn a 3D-aware latent space in which we train Tri-Plane scene representations, hence reducing the resolution at which scenes are learned. Moreover, we present a way to share common information across scenes, hence allowing for a reduction of model complexity to learn a particular scene. Our method reduces effective per-scene memory costs by 44% and per-scene time costs by 86% when training 1000 scenes. Our project page can be found at https://3da-ae.github.io .
[ "['Antoine Schnepf' 'Karim Kassab' 'Jean-Yves Franceschi' 'Laurent Caraffa'\n 'Flavian Vasile' 'Jeremie Mary' 'Andrew Comport' 'Valérie Gouet-Brunet']" ]
null
null
2403.11686
null
null
http://arxiv.org/pdf/2403.11686v1
2024-03-18T11:37:42Z
2024-03-18T11:37:42Z
Crystalformer: Infinitely Connected Attention for Periodic Structure Encoding
Predicting physical properties of materials from their crystal structures is a fundamental problem in materials science. In peripheral areas such as the prediction of molecular properties, fully connected attention networks have been shown to be successful. However, unlike these finite atom arrangements, crystal structures are infinitely repeating, periodic arrangements of atoms, whose fully connected attention results in infinitely connected attention. In this work, we show that this infinitely connected attention can lead to a computationally tractable formulation, interpreted as neural potential summation, that performs infinite interatomic potential summations in a deeply learned feature space. We then propose a simple yet effective Transformer-based encoder architecture for crystal structures called Crystalformer. Compared to an existing Transformer-based model, the proposed model requires only 29.4% of the number of parameters, with minimal modifications to the original Transformer architecture. Despite the architectural simplicity, the proposed method outperforms state-of-the-art methods for various property regression tasks on the Materials Project and JARVIS-DFT datasets.
[ "['Tatsunori Taniai' 'Ryo Igarashi' 'Yuta Suzuki' 'Naoya Chiba'\n 'Kotaro Saito' 'Yoshitaka Ushiku' 'Kanta Ono']" ]
null
null
2403.11687
null
null
http://arxiv.org/pdf/2403.11687v3
2024-06-04T09:53:01Z
2024-03-18T11:37:53Z
Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates
We study the problem of efficiently computing the derivative of the fixed-point of a parametric nondifferentiable contraction map. This problem has wide applications in machine learning, including hyperparameter optimization, meta-learning and data poisoning attacks. We analyze two popular approaches: iterative differentiation (ITD) and approximate implicit differentiation (AID). A key challenge behind the nonsmooth setting is that the chain rule does not hold anymore. We build upon the work by Bolte et al. (2022), who prove linear convergence of nonsmooth ITD under a piecewise Lipschitz smooth assumption. In the deterministic case, we provide a linear rate for AID and an improved linear rate for ITD which closely match the ones for the smooth setting. We further introduce NSID, a new stochastic method to compute the implicit derivative when the contraction map is defined as the composition of an outer map and an inner map which is accessible only through a stochastic unbiased estimator. We establish rates for the convergence of NSID, encompassing the best available rates in the smooth setting. We also present illustrative experiments confirming our analysis.
[ "['Riccardo Grazzi' 'Massimiliano Pontil' 'Saverio Salzo']" ]
null
null
2403.11696
null
null
http://arxiv.org/pdf/2403.11696v1
2024-03-18T11:52:33Z
2024-03-18T11:52:33Z
Generalization error of spectral algorithms
The asymptotically precise estimation of the generalization of kernel methods has recently received attention due to the parallels between neural networks and their associated kernels. However, prior works derive such estimates for training by kernel ridge regression (KRR), whereas neural networks are typically trained with gradient descent (GD). In the present work, we consider the training of kernels with a family of $\textit{spectral algorithms}$ specified by profile $h(\lambda)$, and including KRR and GD as special cases. Then, we derive the generalization error as a functional of the learning profile $h(\lambda)$ for two data models: high-dimensional Gaussian and low-dimensional translation-invariant model. Under power-law assumptions on the spectrum of the kernel and target, we use our framework to (i) give full loss asymptotics for both noisy and noiseless observations, (ii) show that the loss localizes on certain spectral scales, giving a new perspective on the KRR saturation phenomenon, and (iii) conjecture, and demonstrate for the considered data models, the universality of the loss w.r.t. non-spectral details of the problem, but only in the case of noisy observations.
[ "['Maksim Velikanov' 'Maxim Panov' 'Dmitry Yarotsky']" ]
null
null
2403.11705
null
null
http://arxiv.org/pdf/2403.11705v1
2024-03-18T12:07:46Z
2024-03-18T12:07:46Z
Coarsening of chiral domains in itinerant electron magnets: A machine learning force field approach
Frustrated itinerant magnets often exhibit complex noncollinear or noncoplanar magnetic orders which support topological electronic structures. A canonical example is the anomalous quantum Hall state with a chiral spin order stabilized by electron-spin interactions on a triangular lattice. While a long-range magnetic order cannot survive thermal fluctuations in two dimensions, the chiral order which results from the breaking of a discrete Ising symmetry persists even at finite temperatures. We present a scalable machine learning (ML) framework to model the complex electron-mediated spin-spin interactions that stabilize the chiral magnetic domains in a triangular lattice. Large-scale dynamical simulations, enabled by the ML force-field models, are performed to investigate the coarsening of chiral domains after a thermal quench. While the chiral phase is described by a broken $Z_2$ Ising-type symmetry, we find that the characteristic size of chiral domains increases linearly with time, in stark contrast to the expected Allen-Cahn domain growth law for a non-conserved Ising order parameter field. The linear growth of the chiral domains is attributed to the orientational anisotropy of domain boundaries. Our work also demonstrates the promising potential of ML models for large-scale spin dynamics of itinerant magnets.
[ "['Yunhao Fan' 'Sheng Zhang' 'Gia-Wei Chern']" ]
null
null
2403.11706
null
null
http://arxiv.org/pdf/2403.11706v1
2024-03-18T12:08:01Z
2024-03-18T12:08:01Z
Generalized Multi-Source Inference for Text Conditioned Music Diffusion Models
Multi-Source Diffusion Models (MSDM) allow for compositional musical generation tasks: generating a set of coherent sources, creating accompaniments, and performing source separation. Despite their versatility, they require estimating the joint distribution over the sources, necessitating pre-separated musical data, which is rarely available, and fixing the number and type of sources at training time. This paper generalizes MSDM to arbitrary time-domain diffusion models conditioned on text embeddings. These models do not require separated data as they are trained on mixtures, can parameterize an arbitrary number of sources, and allow for rich semantic control. We propose an inference procedure enabling the coherent generation of sources and accompaniments. Additionally, we adapt the Dirac separator of MSDM to perform source separation. We experiment with diffusion models trained on Slakh2100 and MTG-Jamendo, showcasing competitive generation and separation results in a relaxed data setting.
[ "['Emilian Postolache' 'Giorgio Mariani' 'Luca Cosmo' 'Emmanouil Benetos'\n 'Emanuele Rodolà']" ]
null
null
2403.11722
null
null
http://arxiv.org/pdf/2403.11722v2
2024-03-25T13:34:40Z
2024-03-18T12:22:11Z
Time Series Compression using Quaternion Valued Neural Networks and Quaternion Backpropagation
We propose a novel quaternionic time-series compression methodology where we divide a long time-series into segments of data, extract the min, max, mean and standard deviation of these chunks as representative features and encapsulate them in a quaternion, yielding a quaternion valued time-series. This time-series is processed using quaternion valued neural network layers, where we aim to preserve the relation between these features through the usage of the Hamilton product. To train this quaternion neural network, we derive quaternion backpropagation employing the GHR calculus, which is required for a valid product and chain rule in quaternion space. Furthermore, we investigate the connection between the derived update rules and automatic differentiation. We apply our proposed compression method on the Tennessee Eastman Dataset, where we perform fault classification using the compressed data in two settings: a fully supervised one and a semi-supervised, contrastive learning setting. Both times, we were able to outperform real valued counterparts as well as two baseline models: one with the uncompressed time-series as the input and the other with a regular downsampling using the mean. Further, we could improve the classification benchmark set by SimCLR-TS from 81.43% to 83.90%.
[ "['Johannes Pöppelbaum' 'Andreas Schwung']" ]
null
null
2403.11728
null
null
http://arxiv.org/pdf/2403.11728v1
2024-03-18T12:37:41Z
2024-03-18T12:37:41Z
PITA: Physics-Informed Trajectory Autoencoder
Validating robotic systems in safety-critical applications requires testing in many scenarios including rare edge cases that are unlikely to occur, requiring to complement real-world testing with testing in simulation. Generative models can be used to augment real-world datasets with generated data to produce edge case scenarios by sampling in a learned latent space. Autoencoders can learn said latent representation for a specific domain by learning to reconstruct the input data from a lower-dimensional intermediate representation. However, the resulting trajectories are not necessarily physically plausible, but instead typically contain noise that is not present in the input trajectory. To resolve this issue, we propose the novel Physics-Informed Trajectory Autoencoder (PITA) architecture, which incorporates a physical dynamics model into the loss function of the autoencoder. This results in smooth trajectories that not only reconstruct the input trajectory but also adhere to the physical model. We evaluate PITA on a real-world dataset of vehicle trajectories and compare its performance to a normal autoencoder and a state-of-the-art action-space autoencoder.
[ "['Johannes Fischer' 'Kevin Rösch' 'Martin Lauer' 'Christoph Stiller']" ]
null
null
2403.11734
null
null
http://arxiv.org/pdf/2403.11734v1
2024-03-18T12:42:53Z
2024-03-18T12:42:53Z
Learning General Policies for Classical Planning Domains: Getting Beyond C$_2$
GNN-based approaches for learning general policies across planning domains are limited by the expressive power of $C_2$, namely, first-order logic with two variables and counting. This limitation can be overcome by transitioning to $k$-GNNs, for $k=3$, wherein object embeddings are substituted with triplet embeddings. Yet, while $3$-GNNs have the expressive power of $C_3$, unlike $1$- and $2$-GNNs that are confined to $C_2$, they require quartic time for message exchange and cubic space for embeddings, rendering them impractical. In this work, we introduce a parameterized version of relational GNNs. When $t$ is infinity, R-GNN[$t$] approximates $3$-GNNs using only quadratic space for embeddings. For lower values of $t$, such as $t=1$ and $t=2$, R-GNN[$t$] achieves a weaker approximation by exchanging fewer messages, yet interestingly, it often yields the $C_3$ features required in several planning domains. Furthermore, the new R-GNN[$t$] architecture is the original R-GNN architecture with a suitable transformation applied to the input states only. Experimental results illustrate the clear performance gains of R-GNN[$1$] and R-GNN[$2$] over plain R-GNNs, and also over edge transformers that also approximate $3$-GNNs.
[ "['Simon Ståhlberg' 'Blai Bonet' 'Hector Geffner']" ]
null
null
2403.11735
null
null
http://arxiv.org/pdf/2403.11735v4
2024-06-23T14:08:35Z
2024-03-18T12:43:38Z
LSKNet: A Foundation Lightweight Backbone for Remote Sensing
Remote sensing images pose distinct challenges for downstream tasks due to their inherent complexity. While a considerable amount of research has been dedicated to remote sensing classification, object detection and semantic segmentation, most of these studies have overlooked the valuable prior knowledge embedded within remote sensing scenarios. Such prior knowledge can be useful because remote sensing objects may be mistakenly recognized without referencing a sufficiently long-range context, which can vary for different objects. This paper considers these priors and proposes a lightweight Large Selective Kernel Network (LSKNet) backbone. LSKNet can dynamically adjust its large spatial receptive field to better model the ranging context of various objects in remote sensing scenarios. To our knowledge, large and selective kernel mechanisms have not been previously explored in remote sensing images. Without bells and whistles, our lightweight LSKNet sets new state-of-the-art scores on standard remote sensing classification, object detection and semantic segmentation benchmarks. Our comprehensive analysis further validated the significance of the identified priors and the effectiveness of LSKNet. The code is available at https://github.com/zcablii/LSKNet.
[ "['Yuxuan Li' 'Xiang Li' 'Yimian Dai' 'Qibin Hou' 'Li Liu' 'Yongxiang Liu'\n 'Ming-Ming Cheng' 'Jian Yang']" ]
null
null
2403.11743
null
null
http://arxiv.org/pdf/2403.11743v1
2024-03-18T12:55:40Z
2024-03-18T12:55:40Z
PARMESAN: Parameter-Free Memory Search and Transduction for Dense Prediction Tasks
In this work we address flexibility in deep learning by means of transductive reasoning. For adaptation to new tasks or new data, existing methods typically involve tuning of learnable parameters or even complete re-training from scratch, rendering such approaches unflexible in practice. We argue that the notion of separating computation from memory by the means of transduction can act as a stepping stone for solving these issues. We therefore propose PARMESAN (parameter-free memory search and transduction), a scalable transduction method which leverages a memory module for solving dense prediction tasks. At inference, hidden representations in memory are being searched to find corresponding examples. In contrast to other methods, PARMESAN learns without the requirement for any continuous training or fine-tuning of learnable parameters simply by modifying the memory content. Our method is compatible with commonly used neural architectures and canonically transfers to 1D, 2D, and 3D grid-based data. We demonstrate the capabilities of our approach at complex tasks such as continual and few-shot learning. PARMESAN learns up to 370 times faster than common baselines while being on par in terms of predictive performance, knowledge retention, and data-efficiency.
[ "['Philip Matthias Winter' 'Maria Wimmer' 'David Major' 'Dimitrios Lenis'\n 'Astrid Berg' 'Theresa Neubauer' 'Gaia Romana De Paolis'\n 'Johannes Novotny' 'Sophia Ulonska' 'Katja Bühler']" ]
null
null
2403.11755
null
null
http://arxiv.org/pdf/2403.11755v2
2024-03-19T13:28:27Z
2024-03-18T13:03:24Z
Meta-Prompting for Automating Zero-shot Visual Recognition with LLMs
Prompt ensembling of Large Language Model (LLM) generated category-specific prompts has emerged as an effective method to enhance zero-shot recognition ability of Vision-Language Models (VLMs). To obtain these category-specific prompts, the present methods rely on hand-crafting the prompts to the LLMs for generating VLM prompts for the downstream tasks. However, this requires manually composing these task-specific prompts and still, they might not cover the diverse set of visual concepts and task-specific styles associated with the categories of interest. To effectively take humans out of the loop and completely automate the prompt generation process for zero-shot recognition, we propose Meta-Prompting for Visual Recognition (MPVR). Taking as input only minimal information about the target task, in the form of its short natural language description, and a list of associated class labels, MPVR automatically produces a diverse set of category-specific prompts resulting in a strong zero-shot classifier. MPVR generalizes effectively across various popular zero-shot image recognition benchmarks belonging to widely different domains when tested with multiple LLMs and VLMs. For example, MPVR obtains a zero-shot recognition improvement over CLIP by up to 19.8% and 18.2% (5.0% and 4.5% on average over 20 datasets) leveraging GPT and Mixtral LLMs, respectively
[ "['M. Jehanzeb Mirza' 'Leonid Karlinsky' 'Wei Lin' 'Sivan Doveh'\n 'Jakub Micorek' 'Mateusz Kozinski' 'Hilde Kuhene' 'Horst Possegger']" ]
null
null
2403.11757
null
null
http://arxiv.org/pdf/2403.11757v2
2024-03-19T18:14:35Z
2024-03-18T13:11:10Z
Efficient Feature Extraction and Late Fusion Strategy for Audiovisual Emotional Mimicry Intensity Estimation
In this paper, we present the solution to the Emotional Mimicry Intensity (EMI) Estimation challenge, which is part of the 6th Affective Behavior Analysis in-the-wild (ABAW) Competition. The EMI Estimation challenge task aims to evaluate the emotional intensity of seed videos by assessing them from a set of predefined emotion categories (i.e., "Admiration", "Amusement", "Determination", "Empathic Pain", "Excitement" and "Joy"). To tackle this challenge, we extracted rich dual-channel visual features based on ResNet18 and AUs for the video modality and effective single-channel features based on Wav2Vec2.0 for the audio modality. This allowed us to obtain comprehensive emotional features for the audiovisual modality. Additionally, leveraging a late fusion strategy, we averaged the predictions of the visual and acoustic models, resulting in a more accurate estimation of audiovisual emotional mimicry intensity. Experimental results validate the effectiveness of our approach, with the average Pearson's correlation coefficient ($\rho$) across the 6 emotion dimensions on the validation set achieving 0.3288.
[ "['Jun Yu' 'Wangyuan Zhu' 'Jichao Zhu']" ]
null
null
2403.11772
null
null
http://arxiv.org/pdf/2403.11772v1
2024-03-18T13:30:12Z
2024-03-18T13:30:12Z
S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention
Motivated by the challenge of seamless cross-dataset transfer in EEG signal processing, this article presents an exploratory study on the use of Joint Embedding Predictive Architectures (JEPAs). In recent years, self-supervised learning has emerged as a promising approach for transfer learning in various domains. However, its application to EEG signals remains largely unexplored. In this article, we introduce Signal-JEPA for representing EEG recordings which includes a novel domain-specific spatial block masking strategy and three novel architectures for downstream classification. The study is conducted on a 54-subject dataset and the downstream performance of the models is evaluated on three different BCI paradigms: motor imagery, ERP and SSVEP. Our study provides preliminary evidence for the potential of JEPAs in EEG signal encoding. Notably, our results highlight the importance of spatial filtering for accurate downstream classification and reveal an influence of the length of the pre-training examples but not of the mask size on the downstream performance.
[ "['Pierre Guetschel' 'Thomas Moreau' 'Michael Tangermann']" ]
null
null
2403.11778
null
null
http://arxiv.org/pdf/2403.11778v1
2024-03-18T13:35:10Z
2024-03-18T13:35:10Z
Towards the Development of a Real-Time Deepfake Audio Detection System in Communication Platforms
Deepfake audio poses a rising threat in communication platforms, necessitating real-time detection for audio stream integrity. Unlike traditional non-real-time approaches, this study assesses the viability of employing static deepfake audio detection models in real-time communication platforms. An executable software is developed for cross-platform compatibility, enabling real-time execution. Two deepfake audio detection models based on Resnet and LCNN architectures are implemented using the ASVspoof 2019 dataset, achieving benchmark performances compared to ASVspoof 2019 challenge baselines. The study proposes strategies and frameworks for enhancing these models, paving the way for real-time deepfake audio detection in communication platforms. This work contributes to the advancement of audio stream security, ensuring robust detection capabilities in dynamic, real-time communication scenarios.
[ "['Jonat John Mathew' 'Rakin Ahsan' 'Sae Furukawa'\n 'Jagdish Gautham Krishna Kumar' 'Huzaifa Pallan' 'Agamjeet Singh Padda'\n 'Sara Adamski' 'Madhu Reddiboina' 'Arjun Pankajakshan']" ]
null
null
2403.11780
null
null
http://arxiv.org/pdf/2403.11780v2
2024-07-09T07:40:52Z
2024-03-18T13:39:05Z
Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt
Recent singing-voice-synthesis (SVS) methods have achieved remarkable audio quality and naturalness, yet they lack the capability to control the style attributes of the synthesized singing explicitly. We propose Prompt-Singer, the first SVS method that enables attribute controlling on singer gender, vocal range and volume with natural language. We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation that enables text-conditioned vocal range control while keeping melodic accuracy. Furthermore, we explore various experiment settings, including different types of text representations, text encoder fine-tuning, and introducing speech data to alleviate data scarcity, aiming to facilitate further research. Experiments show that our model achieves favorable controlling ability and audio quality. Audio samples are available at http://prompt-singer.github.io .
[ "['Yongqi Wang' 'Ruofan Hu' 'Rongjie Huang' 'Zhiqing Hong' 'Ruiqi Li'\n 'Wenrui Liu' 'Fuming You' 'Tao Jin' 'Zhou Zhao']" ]
null
null
2403.11782
null
null
http://arxiv.org/pdf/2403.11782v4
2024-06-01T20:50:23Z
2024-03-18T13:40:48Z
A tutorial on learning from preferences and choices with Gaussian Processes
Preference modelling lies at the intersection of economics, decision theory, machine learning and statistics. By understanding individuals' preferences and how they make choices, we can build products that closely match their expectations, paving the way for more efficient and personalised applications across a wide range of domains. The objective of this tutorial is to present a cohesive and comprehensive framework for preference learning with Gaussian Processes (GPs), demonstrating how to seamlessly incorporate rationality principles (from economics and decision theory) into the learning process. By suitably tailoring the likelihood function, this framework enables the construction of preference learning models that encompass random utility models, limits of discernment, and scenarios with multiple conflicting utilities for both object- and label-preference. This tutorial builds upon established research while simultaneously introducing some novel GP-based models to address specific gaps in the existing literature.
[ "['Alessio Benavoli' 'Dario Azzimonti']" ]
null
null
2403.11795
null
null
http://arxiv.org/pdf/2403.11795v2
2024-06-25T10:20:49Z
2024-03-18T13:53:17Z
Low-Cost Privacy-Aware Decentralized Learning
This paper introduces ZIP-DL, a novel privacy-aware decentralized learning (DL) algorithm that exploits correlated noise to provide strong privacy protection against a local adversary while yielding efficient convergence guarantees for a low communication cost. The progressive neutralization of the added noise during the distributed aggregation process results in ZIP-DL fostering a high model accuracy under privacy guarantees. ZIP-DL further uses a single communication round between each gradient descent, thus minimizing communication overhead. We provide theoretical guarantees for both convergence speed and privacy guarantees, thereby making ZIP-DL applicable to practical scenarios. Our extensive experimental study shows that ZIP-DL significantly outperforms the state-of-the-art in terms of vulnerability/accuracy trade-off. In particular, ZIP-DL (i) reduces the efficacy of linkability attacks by up to 52 percentage points compared to baseline DL, (ii) improves accuracy by up to 37 percent w.r.t. the state-of-the-art privacy-preserving mechanism operating under the same threat model as ours, when configured to provide the same protection against membership inference attacks, and (iii) reduces communication by up to 10.5x against the same competitor for the same level of protection.
[ "['Sayan Biswas' 'Davide Frey' 'Romaric Gaudel' 'Anne-Marie Kermarrec'\n 'Dimitri Lerévérend' 'Rafael Pires' 'Rishi Sharma' 'François Taïani']" ]
null
null
2403.11826
null
null
http://arxiv.org/pdf/2403.11826v1
2024-03-18T14:31:09Z
2024-03-18T14:31:09Z
CapsLorentzNet: Integrating Physics Inspired Features with Graph Convolution
With the advent of advanced machine learning techniques, boosted object tagging has witnessed significant progress. In this article, we take this field further by introducing novel architectural modifications compatible with a wide array of Graph Neural Network (GNN) architectures. Our approach advocates for integrating capsule layers, replacing the conventional decoding blocks in standard GNNs. These capsules are a group of neurons with vector activations. The orientation of these vectors represents important properties of the objects under study, with their magnitude characterizing whether the object under study belongs to the class represented by the capsule. Moreover, capsule networks incorporate a regularization by reconstruction mechanism, facilitating the seamless integration of expert-designed high-level features into the analysis. We have studied the usefulness of our architecture with the LorentzNet architecture for quark-gluon tagging. Here, we have replaced the decoding block of LorentzNet with a capsulated decoding block and have called the resulting architecture CapsLorentzNet. Our new architecture can enhance the performance of LorentzNet by 20% for the quark-gluon tagging task.
[ "['Rameswar Sahu']" ]
null
null
2403.11827
null
null
http://arxiv.org/pdf/2403.11827v2
2024-06-12T13:54:11Z
2024-03-18T14:34:16Z
Sound Event Detection and Localization with Distance Estimation
Sound Event Detection and Localization (SELD) is a combined task of identifying sound events and their corresponding direction-of-arrival (DOA). While this task has numerous applications and has been extensively researched in recent years, it fails to provide full information about the sound source position. In this paper, we overcome this problem by extending the task to Sound Event Detection, Localization with Distance Estimation (3D SELD). We study two ways of integrating distance estimation within the SELD core - a multi-task approach, in which the problem is tackled by a separate model output, and a single-task approach obtained by extending the multi-ACCDOA method to include distance information. We investigate both methods for the Ambisonic and binaural versions of STARSS23: Sony-TAU Realistic Spatial Soundscapes 2023. Moreover, our study involves experiments on the loss function related to the distance estimation part. Our results show that it is possible to perform 3D SELD without any degradation of performance in sound event detection and DOA estimation.
[ "['Daniel Aleksander Krause' 'Archontis Politis' 'Annamaria Mesaros']" ]
null
null
2403.11833
null
null
http://arxiv.org/abs/2403.11833v1
2024-03-18T14:45:20Z
2024-03-18T14:45:20Z
SSCAE -- Semantic, Syntactic, and Context-aware natural language Adversarial Examples generator
Machine learning models are vulnerable to maliciously crafted Adversarial Examples (AEs). Training a machine learning model with AEs improves its robustness and stability against adversarial attacks. It is essential to develop models that produce high-quality AEs. Developing such models has been much slower in natural language processing (NLP) than in areas such as computer vision. This paper introduces a practical and efficient adversarial attack model called SSCAE for \textbf{S}emantic, \textbf{S}yntactic, and \textbf{C}ontext-aware natural language \textbf{AE}s generator. SSCAE identifies important words and uses a masked language model to generate an early set of substitutions. Next, two well-known language models are employed to evaluate the initial set in terms of semantic and syntactic characteristics. We introduce (1) a dynamic threshold to capture more efficient perturbations and (2) a local greedy search to generate high-quality AEs. As a black-box method, SSCAE generates humanly imperceptible and context-aware AEs that preserve semantic consistency and the source language's syntactical and grammatical requirements. The effectiveness and superiority of the proposed SSCAE model are illustrated with fifteen comparative experiments and extensive sensitivity analysis for parameter optimization. SSCAE outperforms the existing models in all experiments while maintaining a higher semantic consistency with a lower query number and a comparable perturbation rate.
[ "['Javad Rafiei Asl' 'Mohammad H. Rafiei' 'Manar Alohaly' 'Daniel Takabi']" ]
null
null
2403.11834
null
null
http://arxiv.org/pdf/2403.11834v1
2024-03-18T14:45:52Z
2024-03-18T14:45:52Z
Towards Understanding the Relationship between In-context Learning and Compositional Generalization
According to the principle of compositional generalization, the meaning of a complex expression can be understood as a function of the meaning of its parts and of how they are combined. This principle is crucial for human language processing and also, arguably, for NLP models in the face of out-of-distribution data. However, many neural network models, including Transformers, have been shown to struggle with compositional generalization. In this paper, we hypothesize that forcing models to in-context learn can provide an inductive bias to promote compositional generalization. To test this hypothesis, we train a causal Transformer in a setting that renders ordinary learning very difficult: we present it with different orderings of the training instance and shuffle instance labels. This corresponds to training the model on all possible few-shot learning problems attainable from the dataset. The model can solve the task, however, by utilizing earlier examples to generalize to later ones (i.e. in-context learning). In evaluations on the datasets, SCAN, COGS, and GeoQuery, models trained in this manner indeed show improved compositional generalization. This indicates the usefulness of in-context learning problems as an inductive bias for generalization.
[ "['Sungjun Han' 'Sebastian Padó']" ]
null
null
2403.11840
null
null
http://arxiv.org/pdf/2403.11840v1
2024-03-18T14:50:48Z
2024-03-18T14:50:48Z
Multi-Criteria Comparison as a Method of Advancing Knowledge-Guided Machine Learning
This paper describes a generalizable model evaluation method that can be adapted to evaluate AI/ML models across multiple criteria including core scientific principles and more practical outcomes. Emerging from prediction competitions in Psychology and Decision Science, the method evaluates a group of candidate models of varying type and structure across multiple scientific, theoretic, and practical criteria. Ordinal rankings of criteria scores are evaluated using voting rules from the field of computational social choice and allow the comparison of divergent measures and types of models in a holistic evaluation. Additional advantages and applications are discussed.
[ "['Jason L. Harman' 'Jaelle Scheuerman']" ]
null
null
2403.11841
null
null
http://arxiv.org/pdf/2403.11841v1
2024-03-18T14:51:19Z
2024-03-18T14:51:19Z
Pessimistic Causal Reinforcement Learning with Mediators for Confounded Offline Data
In real-world scenarios, datasets collected from randomized experiments are often constrained by size, due to limitations in time and budget. As a result, leveraging large observational datasets becomes a more attractive option for achieving high-quality policy learning. However, most existing offline reinforcement learning (RL) methods depend on two key assumptions--unconfoundedness and positivity--which frequently do not hold in observational data contexts. Recognizing these challenges, we propose a novel policy learning algorithm, PESsimistic CAusal Learning (PESCAL). We utilize the mediator variable based on front-door criterion to remove the confounding bias; additionally, we adopt the pessimistic principle to address the distributional shift between the action distributions induced by candidate policies, and the behavior policy that generates the observational data. Our key observation is that, by incorporating auxiliary variables that mediate the effect of actions on system dynamics, it is sufficient to learn a lower bound of the mediator distribution function, instead of the Q-function, to partially mitigate the issue of distributional shift. This insight significantly simplifies our algorithm, by circumventing the challenging task of sequential uncertainty quantification for the estimated Q-function. Moreover, we provide theoretical guarantees for the algorithms we propose, and demonstrate their efficacy through simulations, as well as real-world experiments utilizing offline datasets from a leading ride-hailing platform.
[ "['Danyang Wang' 'Chengchun Shi' 'Shikai Luo' 'Will Wei Sun']" ]
null
null
2403.11843
null
null
http://arxiv.org/pdf/2403.11843v1
2024-03-18T14:53:48Z
2024-03-18T14:53:48Z
Fuzzy Rough Choquet Distances for Classification
This paper introduces a novel Choquet distance using fuzzy rough set based measures. The proposed distance measure combines the attribute information received from fuzzy rough set theory with the flexibility of the Choquet integral. This approach is designed to adeptly capture non-linear relationships within the data, acknowledging the interplay of the conditional attributes towards the decision attribute and resulting in a more flexible and accurate distance. We explore its application in the context of machine learning, with a specific emphasis on distance-based classification approaches (e.g. k-nearest neighbours). The paper examines two fuzzy rough set based measures that are based on the positive region. Moreover, we explore two procedures for monotonizing the measures derived from fuzzy rough set theory, making them suitable for use with the Choquet integral, and investigate their differences.
[ "['Adnan Theerens' 'Chris Cornelis']" ]
null
null
2403.11844
null
null
http://arxiv.org/pdf/2403.11844v1
2024-03-18T14:55:45Z
2024-03-18T14:55:45Z
Near-Optimal Solutions of Constrained Learning Problems
With the widespread adoption of machine learning systems, the need to curtail their behavior has become increasingly apparent. This is evidenced by recent advancements towards developing models that satisfy robustness, safety, and fairness requirements. These requirements can be imposed (with generalization guarantees) by formulating constrained learning problems that can then be tackled by dual ascent algorithms. Yet, though these algorithms converge in objective value, even in non-convex settings, they cannot guarantee that their outcome is feasible. Doing so requires randomizing over all iterates, which is impractical in virtually any modern application. Still, final iterates have been observed to perform well in practice. In this work, we address this gap between theory and practice by characterizing the constraint violation of Lagrangian minimizers associated with optimal dual variables, despite lack of convexity. To do this, we leverage the fact that non-convex, finite-dimensional constrained learning problems can be seen as parametrizations of convex, functional problems. Our results show that rich parametrizations effectively mitigate the issue of feasibility in dual methods, shedding light on prior empirical successes of dual learning. We illustrate our findings in fair learning tasks.
[ "['Juan Elenter' 'Luiz F. O. Chamon' 'Alejandro Ribeiro']" ]
null
null
2403.11857
null
null
http://arxiv.org/pdf/2403.11857v1
2024-03-18T15:06:37Z
2024-03-18T15:06:37Z
Complete and Efficient Graph Transformers for Crystal Material Property Prediction
Crystal structures are characterized by atomic bases within a primitive unit cell that repeats along a regular lattice throughout 3D space. The periodic and infinite nature of crystals poses unique challenges for geometric graph representation learning. Specifically, constructing graphs that effectively capture the complete geometric information of crystals and handle chiral crystals remains an unsolved and challenging problem. In this paper, we introduce a novel approach that utilizes the periodic patterns of unit cells to establish the lattice-based representation for each atom, enabling efficient and expressive graph representations of crystals. Furthermore, we propose ComFormer, an SE(3) transformer designed specifically for crystalline materials. ComFormer includes two variants, namely iComFormer, which employs invariant geometric descriptors of Euclidean distances and angles, and eComFormer, which utilizes equivariant vector representations. Experimental results demonstrate the state-of-the-art predictive accuracy of ComFormer variants on various tasks across three widely-used crystal benchmarks. Our code is publicly available as part of the AIRS library (https://github.com/divelab/AIRS).
[ "['Keqiang Yan' 'Cong Fu' 'Xiaofeng Qian' 'Xiaoning Qian' 'Shuiwang Ji']" ]
null
null
2403.11871
null
null
http://arxiv.org/pdf/2403.11871v1
2024-03-18T15:24:47Z
2024-03-18T15:24:47Z
The Real Tropical Geometry of Neural Networks
We consider a binary classifier defined as the sign of a tropical rational function, that is, as the difference of two convex piecewise linear functions. The parameter space of ReLU neural networks is contained as a semialgebraic set inside the parameter space of tropical rational functions. We initiate the study of two different subdivisions of this parameter space: a subdivision into semialgebraic sets, on which the combinatorial type of the decision boundary is fixed, and a subdivision into a polyhedral fan, capturing the combinatorics of the partitions of the dataset. The sublevel sets of the 0/1-loss function arise as subfans of this classification fan, and we show that the level-sets are not necessarily connected. We describe the classification fan i) geometrically, as normal fan of the activation polytope, and ii) combinatorially through a list of properties of associated bipartite graphs, in analogy to covector axioms of oriented matroids and tropical oriented matroids. Our findings extend and refine the connection between neural networks and tropical geometry by observing structures established in real tropical geometry, such as positive tropicalizations of hypersurfaces and tropical semialgebraic sets.
[ "['Marie-Charlotte Brandenburg' 'Georg Loho' 'Guido Montúfar']" ]
null
null
2403.11872
null
null
http://arxiv.org/pdf/2403.11872v1
2024-03-18T15:26:05Z
2024-03-18T15:26:05Z
NuGraph2: A Graph Neural Network for Neutrino Physics Event Reconstruction
Liquid Argon Time Projection Chamber (LArTPC) detector technology offers a wealth of high-resolution information on particle interactions, and leveraging that information to its full potential requires sophisticated automated reconstruction techniques. This article describes NuGraph2, a Graph Neural Network (GNN) for low-level reconstruction of simulated neutrino interactions in a LArTPC detector. Simulated neutrino interactions in the MicroBooNE detector geometry are described as heterogeneous graphs, with energy depositions on each detector plane forming nodes on planar subgraphs. The network utilizes a multi-head attention message-passing mechanism to perform background filtering and semantic labelling on these graph nodes, identifying those associated with the primary physics interaction with 98.0% efficiency and labelling them according to particle type with 94.9% efficiency. The network operates directly on detector observables across multiple 2D representations, but utilizes a 3D-context-aware mechanism to encourage consistency between these representations. Model inference takes 0.12 s/event on a CPU, and 0.005 s/event batched on a GPU. This architecture is designed to be a general-purpose solution for particle reconstruction in neutrino physics, with the potential for deployment across a broad range of detector technologies, and offers a core convolution engine that can be leveraged for a variety of tasks beyond the two described in this article.
[ "['V Hewes' 'Adam Aurisano' 'Giuseppe Cerati' 'Jim Kowalkowski'\n 'Claire Lee' 'Wei-keng Liao' 'Daniel Grzenda' 'Kaushal Gumpula'\n 'Xiaohe Zhang']" ]
null
null
2403.11876
null
null
http://arxiv.org/pdf/2403.11876v1
2024-03-18T15:28:35Z
2024-03-18T15:28:35Z
Deep Bayesian Future Fusion for Self-Supervised, High-Resolution, Off-Road Mapping
The limited sensing resolution of resource-constrained off-road vehicles poses significant challenges towards reliable off-road autonomy. To overcome this limitation, we propose a general framework based on fusing the future information (i.e. future fusion) for self-supervision. Recent approaches exploit this future information alongside the hand-crafted heuristics to directly supervise the targeted downstream tasks (e.g. traversability estimation). However, in this paper, we opt for a more general line of development - time-efficient completion of the highest resolution (i.e. 2cm per pixel) BEV map in a self-supervised manner via future fusion, which can be used for any downstream tasks for better longer range prediction. To this end, first, we create a high-resolution future-fusion dataset containing pairs of (RGB / height) raw sparse and noisy inputs and map-based dense labels. Next, to accommodate the noise and sparsity of the sensory information, especially in the distal regions, we design an efficient realization of the Bayes filter onto the vanilla convolutional network via the recurrent mechanism. Equipped with the ideas from SOTA generative models, our Bayesian structure effectively predicts high-quality BEV maps in the distal regions. Extensive evaluation on both the quality of completion and downstream task on our future-fusion dataset demonstrates the potential of our approach.
[ "['Shubhra Aich' 'Wenshan Wang' 'Parv Maheshwari' 'Matthew Sivaprakasam'\n 'Samuel Triest' 'Cherie Ho' 'Jason M. Gregory' 'John G. Rogers III'\n 'Sebastian Scherer']" ]
null
null
2403.11877
null
null
http://arxiv.org/pdf/2403.11877v1
2024-03-18T15:31:09Z
2024-03-18T15:31:09Z
Efficient Training of Learning-Based Thermal Power Flow for 4th Generation District Heating Grids
Thermal power flow (TPF) is an important task for various control purposes in 4th generation district heating grids with multiple decentral heat sources and meshed grid structures. Computing the TPF, i.e., determining the grid state consisting of temperatures, pressures, and mass flows for given supply and demand values, is classically done by solving the nonlinear heat grid equations, but can be sped up by orders of magnitude using learned models such as neural networks. We propose a novel, efficient scheme to generate a sufficiently large training data set covering relevant supply and demand values. Instead of sampling supply and demand values, our approach generates training examples from a proxy distribution over generator and consumer mass flows, omitting the iterations needed for solving the heat grid equations. The exact, but slightly different, training examples can be weighted to represent the original training distribution. We show with simulations for typical grid structures that the new approach can reduce training set generation times by two orders of magnitude compared to sampling supply and demand values directly, without loss of relevance for the training samples. Moreover, learning TPF with a training data set is shown to outperform sample-free, physics-aware training approaches significantly.
[ "['Andreas Bott' 'Mario Beykirch' 'Florian Steinke']" ]
null
null
2403.11887
null
null
http://arxiv.org/pdf/2403.11887v1
2024-03-18T15:40:36Z
2024-03-18T15:40:36Z
SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules
Low-rank adaptation (LoRA) and its variants are widely employed in fine-tuning large models, including large language models for natural language processing and diffusion models for computer vision. This paper proposes a generalized framework called SuperLoRA that unifies and extends different LoRA variants, which can be realized under different hyper-parameter settings. Introducing grouping, folding, shuffling, projecting, and tensor factoring, SuperLoRA offers high flexibility compared with other LoRA variants and demonstrates superior performance for transfer learning tasks especially in the extremely few-parameter regimes.
[ "['Xiangyu Chen' 'Jing Liu' 'Ye Wang' 'Pu Perry Wang' 'Matthew Brand'\n 'Guanghui Wang' 'Toshiaki Koike-Akino']" ]
null
null
2403.11892
null
null
http://arxiv.org/pdf/2403.11892v1
2024-03-18T15:49:48Z
2024-03-18T15:49:48Z
KnFu: Effective Knowledge Fusion
Federated Learning (FL) has emerged as a prominent alternative to the traditional centralized learning approach. Generally speaking, FL is a decentralized approach that allows for collaborative training of Machine Learning (ML) models across multiple local nodes, ensuring data privacy and security while leveraging diverse datasets. Conventional FL, however, is susceptible to gradient inversion attacks, restrictively enforces a uniform architecture on local models, and suffers from model heterogeneity (model drift) due to non-IID local datasets. To mitigate some of these challenges, the new paradigm of Federated Knowledge Distillation (FKD) has emerged. FKD is developed based on the concept of Knowledge Distillation (KD), which involves extraction and transfer of a large and well-trained teacher model's knowledge to lightweight student models. FKD, however, still faces the model drift issue. Intuitively speaking, not all knowledge is universally beneficial due to the inherent diversity of data among local nodes. This calls for innovative mechanisms to evaluate the relevance and effectiveness of each client's knowledge for others, to prevent propagation of adverse knowledge. In this context, the paper proposes Effective Knowledge Fusion (KnFu) algorithm that evaluates knowledge of local models to only fuse semantic neighbors' effective knowledge for each client. The KnFu is a personalized effective knowledge fusion scheme for each client, that analyzes effectiveness of different local models' knowledge prior to the aggregation phase. Comprehensive experiments were performed on MNIST and CIFAR10 datasets illustrating effectiveness of the proposed KnFu in comparison to its state-of-the-art counterparts. A key conclusion of the work is that in scenarios with large and highly heterogeneous local datasets, local training could be preferable to knowledge fusion-based solutions.
[ "['S. Jamal Seyedmohammadi' 'S. Kawa Atapour' 'Jamshid Abouei'\n 'Arash Mohammadi']" ]
null
null
2403.11894
null
null
http://arxiv.org/abs/2403.11894v3
2024-05-09T19:36:59Z
2024-03-18T15:53:33Z
From Explainable to Interpretable Deep Learning for Natural Language Processing in Healthcare: How Far from Reality?
Deep learning (DL) has substantially enhanced natural language processing (NLP) in healthcare research. However, the increasing complexity of DL-based NLP necessitates transparent model interpretability, or at least explainability, for reliable decision-making. This work presents a thorough scoping review of explainable and interpretable DL in healthcare NLP. The term "eXplainable and Interpretable Artificial Intelligence" (XIAI) is introduced to distinguish XAI from IAI. Different models are further categorized based on their functionality (model-, input-, output-based) and scope (local, global). Our analysis shows that attention mechanisms are the most prevalent emerging IAI technique. The use of IAI is growing, distinguishing it from XAI. The major challenges identified are that most XIAI does not explore "global" modelling processes, the lack of best practices, and the lack of systematic evaluation and benchmarks. One important opportunity is to use attention mechanisms to enhance multi-modal XIAI for personalized medicine. Additionally, combining DL with causal logic holds promise. Our discussion encourages the integration of XIAI in Large Language Models (LLMs) and domain-specific smaller models. In conclusion, XIAI adoption in healthcare requires dedicated in-house expertise. Collaboration with domain experts, end-users, and policymakers can lead to ready-to-use XIAI methods across NLP and medical tasks. While challenges exist, XIAI techniques offer a valuable foundation for interpretable NLP algorithms in healthcare.
[ "['Guangming Huang' 'Yingya Li' 'Shoaib Jameel' 'Yunfei Long'\n 'Giorgos Papanastasiou']" ]
null
null
2403.11898
null
null
http://arxiv.org/pdf/2403.11898v1
2024-03-18T15:56:44Z
2024-03-18T15:56:44Z
Visuo-Tactile Pretraining for Cable Plugging
Tactile information is a critical tool for fine-grain manipulation. As humans, we rely heavily on tactile information to understand objects in our environments and how to interact with them. We use touch not only to perform manipulation tasks but also to learn how to perform these tasks. Therefore, to create robotic agents that can learn to complete manipulation tasks at a human or super-human level of performance, we need to properly incorporate tactile information into both skill execution and skill learning. In this paper, we investigate how we can incorporate tactile information into imitation learning platforms to improve performance on complex tasks. To do this, we tackle the challenge of plugging in a USB cable, a dexterous manipulation task that relies on fine-grain visuo-tactile servoing. By incorporating tactile information into imitation learning frameworks, we are able to train a robotic agent to plug in a USB cable - a first for imitation learning. Additionally, we explore how tactile information can be used to train non-tactile agents through a contrastive-loss pretraining process. Our results show that by pretraining with tactile information, the performance of a non-tactile agent can be significantly improved, reaching a level on par with visuo-tactile agents. For demonstration videos and access to our codebase, see the project website: https://sites.google.com/andrew.cmu.edu/visuo-tactile-cable-plugging/home
[ "['Abraham George' 'Selam Gano' 'Pranav Katragadda' 'Amir Barati Farimani']" ]
null
null
2403.11901
null
null
http://arxiv.org/pdf/2403.11901v3
2024-07-07T00:51:44Z
2024-03-18T16:01:42Z
Larimar: Large Language Models with Episodic Memory Control
Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar - a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar attains accuracy comparable to most competitive baselines, even in the challenging sequential editing setup, but also excels in speed - yielding speed-ups of 8-10x depending on the base LLM - as well as flexibility due to the proposed architecture being simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting, information leakage prevention, and input context length generalization with Larimar and show their effectiveness. Our code is available at https://github.com/IBM/larimar
[ "['Payel Das' 'Subhajit Chaudhury' 'Elliot Nelson' 'Igor Melnyk'\n 'Sarath Swaminathan' 'Sihui Dai' 'Aurélie Lozano' 'Georgios Kollias'\n 'Vijil Chenthamarakshan' 'Jiří' 'Navrátil' 'Soham Dan' 'Pin-Yu Chen']" ]
null
null
2403.11904
null
null
http://arxiv.org/pdf/2403.11904v3
2024-05-30T08:37:45Z
2024-03-18T16:04:55Z
CICLe: Conformal In-Context Learning for Largescale Multi-Class Food Risk Classification
Contaminated or adulterated food poses a substantial risk to human health. Given sets of labeled web texts for training, Machine Learning and Natural Language Processing can be applied to automatically detect such risks. We publish a dataset of 7,546 short texts describing public food recall announcements. Each text is manually labeled, on two granularity levels (coarse and fine), for food products and hazards that the recall corresponds to. We describe the dataset and benchmark naive, traditional, and Transformer models. Based on our analysis, Logistic Regression based on a tf-idf representation outperforms RoBERTa and XLM-R on classes with low support. Finally, we discuss different prompting strategies and present an LLM-in-the-loop framework, based on Conformal Prediction, which boosts the performance of the base classifier while reducing energy consumption compared to normal prompting.
[ "['Korbinian Randl' 'John Pavlopoulos' 'Aron Henriksson' 'Tony Lindgren']" ]
null
null
2403.11907
null
null
http://arxiv.org/pdf/2403.11907v1
2024-03-18T16:09:49Z
2024-03-18T16:09:49Z
Distill2Explain: Differentiable decision trees for explainable reinforcement learning in energy application controllers
Demand-side flexibility is gaining importance as a crucial element in the energy transition process. Accounting for about 25% of final energy consumption globally, the residential sector is an important (potential) source of energy flexibility. However, unlocking this flexibility requires developing a control framework that (1) easily scales across different houses, (2) is easy to maintain, and (3) is simple to understand for end-users. A potential control framework for such a task is data-driven control, specifically model-free reinforcement learning (RL). Such RL-based controllers learn a good control policy by interacting with their environment, learning purely based on data and with minimal human intervention. Yet, they lack explainability, which hampers user acceptance. Moreover, the limited hardware capabilities of residential assets form a hurdle (e.g., for using deep neural networks). To overcome both those challenges, we propose a novel method to obtain explainable RL policies by using differentiable decision trees. Using a policy distillation approach, we train these differentiable decision trees to mimic standard RL-based controllers, leading to a decision tree-based control policy that is data-driven and easy to explain. As a proof-of-concept, we examine the performance and explainability of our proposed approach in a battery-based home energy management system to reduce energy costs. For this use case, we show that our proposed approach can outperform baseline rule-based policies by about 20-25%, while providing simple, explainable control policies. We further compare these explainable policies with standard RL policies and examine the performance trade-offs associated with this increased explainability.
[ "['Gargya Gokhale' 'Seyed Soroush Karimi Madahi' 'Bert Claessens'\n 'Chris Develder']" ]
null
null
2403.11914
null
null
http://arxiv.org/pdf/2403.11914v1
2024-03-18T16:13:02Z
2024-03-18T16:13:02Z
Single-Agent Actor Critic for Decentralized Cooperative Driving
Active traffic management incorporating autonomous vehicles (AVs) promises a future with diminished congestion and enhanced traffic flow. However, developing algorithms for real-world application requires addressing the challenges posed by continuous traffic flow and partial observability. To bridge this gap and advance the field of active traffic management towards greater decentralization, we introduce a novel asymmetric actor-critic model aimed at learning decentralized cooperative driving policies for autonomous vehicles using single-agent reinforcement learning. Our approach employs attention neural networks with masking to handle the dynamic nature of real-world traffic flow and partial observability. Through extensive evaluations against baseline controllers across various traffic scenarios, our model shows great potential for improving traffic flow at diverse bottleneck locations within the road system. Additionally, we explore the challenge associated with the conservative driving behaviors of autonomous vehicles that adhere strictly to traffic regulations. The experiment results illustrate that our proposed cooperative policy can mitigate potential traffic slowdowns without compromising safety.
[ "['Shengchao Yan' 'Lukas König' 'Wolfram Burgard']" ]
null
null
2403.11925
null
null
http://arxiv.org/pdf/2403.11925v5
2024-06-20T22:26:42Z
2024-03-18T16:23:47Z
Towards Global Optimality for Practical Average Reward Reinforcement Learning without Mixing Time Oracles
In the context of average-reward reinforcement learning, the requirement for oracle knowledge of the mixing time, a measure of the duration a Markov chain under a fixed policy needs to achieve its stationary distribution, poses a significant challenge for the global convergence of policy gradient methods. This requirement is particularly problematic due to the difficulty and expense of estimating mixing time in environments with large state spaces, leading to the necessity of impractically long trajectories for effective gradient estimation in practical applications. To address this limitation, we consider the Multi-level Actor-Critic (MAC) framework, which incorporates a Multi-level Monte-Carlo (MLMC) gradient estimator. With our approach, we effectively alleviate the dependency on mixing time knowledge, a first for global convergence in average-reward MDPs. Furthermore, our approach exhibits the tightest available dependence of $\mathcal{O}\left( \sqrt{\tau_{mix}} \right)$ known from prior work. With a 2D grid world goal-reaching navigation experiment, we demonstrate that MAC outperforms the existing state-of-the-art policy gradient-based method for average reward settings.
[ "['Bhrij Patel' 'Wesley A. Suttle' 'Alec Koppel' 'Vaneet Aggarwal'\n 'Brian M. Sadler' 'Amrit Singh Bedi' 'Dinesh Manocha']" ]
null
null
2403.11938
null
null
http://arxiv.org/pdf/2403.11938v2
2024-07-12T15:08:15Z
2024-03-18T16:35:13Z
State space representations of the Roesser type for convolutional layers
From the perspective of control theory, convolutional layers (of neural networks) are 2-D (or N-D) linear time-invariant dynamical systems. The usual representation of convolutional layers by the convolution kernel corresponds to the representation of a dynamical system by its impulse response. However, many analysis tools from control theory, e.g., involving linear matrix inequalities, require a state space representation. For this reason, we explicitly provide a state space representation of the Roesser type for 2-D convolutional layers with $c_\mathrm{in}r_1 + c_\mathrm{out}r_2$ states, where $c_\mathrm{in}$/$c_\mathrm{out}$ is the number of input/output channels of the layer and $r_1$/$r_2$ characterizes the width/length of the convolution kernel. This representation is shown to be minimal for $c_\mathrm{in} = c_\mathrm{out}$. We further construct state space representations for dilated, strided, and N-D convolutions.
[ "['Patricia Pauli' 'Dennis Gramlich' 'Frank Allgöwer']" ]
null
null
2403.11940
null
null
http://arxiv.org/pdf/2403.11940v1
2024-03-18T16:36:01Z
2024-03-18T16:36:01Z
Multistep Inverse Is Not All You Need
In real-world control settings, the observation space is often unnecessarily high-dimensional and subject to time-correlated noise. However, the controllable dynamics of the system are often far simpler than the dynamics of the raw observations. It is therefore desirable to learn an encoder to map the observation space to a simpler space of control-relevant variables. In this work, we consider the Ex-BMDP model, first proposed by Efroni et al. (2022), which formalizes control problems where observations can be factorized into an action-dependent latent state which evolves deterministically, and action-independent time-correlated noise. Lamb et al. (2022) proposes the "AC-State" method for learning an encoder to extract a complete action-dependent latent state representation from the observations in such problems. AC-State is a multistep-inverse method, in that it uses the encoding of the first and last state in a path to predict the first action in the path. However, we identify cases where AC-State will fail to learn a correct latent representation of the agent-controllable factor of the state. We therefore propose a new algorithm, ACDF, which combines multistep-inverse prediction with a latent forward model. ACDF is guaranteed to correctly infer an action-dependent latent state encoder for a large class of Ex-BMDP models. We demonstrate the effectiveness of ACDF on tabular Ex-BMDPs through numerical simulations, as well as on high-dimensional environments using neural-network-based encoders. Code is available at https://github.com/midi-lab/acdf.
[ "['Alexander Levine' 'Peter Stone' 'Amy Zhang']" ]
null
null
2403.11947
null
null
http://arxiv.org/pdf/2403.11947v1
2024-03-18T16:40:41Z
2024-03-18T16:40:41Z
Explainable Reinforcement Learning-based Home Energy Management Systems using Differentiable Decision Trees
With the ongoing energy transition, demand-side flexibility has become an important aspect of the modern power grid for providing grid support and allowing further integration of sustainable energy sources. Besides traditional sources, the residential sector is another major and largely untapped source of flexibility, driven by the increased adoption of solar PV, home batteries, and EVs. However, unlocking this residential flexibility is challenging as it requires a control framework that can effectively manage household energy consumption, and maintain user comfort while being readily scalable across different, diverse houses. We aim to address this challenging problem and introduce a reinforcement learning-based approach using differentiable decision trees. This approach integrates the scalability of data-driven reinforcement learning with the explainability of (differentiable) decision trees. This leads to a controller that can be easily adapted across different houses and provides a simple control policy that can be explained to end-users, further improving user acceptance. As a proof-of-concept, we analyze our method using a home energy management problem, comparing its performance with commercially available rule-based baseline and standard neural network-based RL controllers. Through this preliminary study, we show that the performance of our proposed method is comparable to standard RL-based controllers, outperforming baseline controllers by ~20% in terms of daily cost savings while being straightforward to explain.
[ "['Gargya Gokhale' 'Bert Claessens' 'Chris Develder']" ]
null
null
2403.11948
null
null
http://arxiv.org/pdf/2403.11948v1
2024-03-18T16:42:39Z
2024-03-18T16:42:39Z
Learning Dynamical Systems Encoding Non-Linearity within Space Curvature
Dynamical Systems (DS) are an effective and powerful means of shaping high-level policies for robotics control. They provide robust and reactive control while ensuring the stability of the driving vector field. The increasing complexity of real-world scenarios necessitates DS with a higher degree of non-linearity, along with the ability to adapt to potential changes in environmental conditions, such as obstacles. Current learning strategies for DSs often involve a trade-off, sacrificing either stability guarantees or offline computational efficiency in order to enhance the capabilities of the learned DS. Online local adaptation to environmental changes is either not taken into consideration or treated as a separate problem. In this paper, our objective is to introduce a method that enhances the complexity of the learned DS without compromising efficiency during training or stability guarantees. Furthermore, we aim to provide a unified approach for seamlessly integrating the initially learned DS's non-linearity with any local non-linearities that may arise due to changes in the environment. We propose a geometrical approach to learn asymptotically stable non-linear DS for robotics control. Each DS is modeled as a harmonic damped oscillator on a latent manifold. By learning the manifold's Euclidean embedded representation, our approach encodes the non-linearity of the DS within the curvature of the space. Having an explicit embedded representation of the manifold allows us to showcase obstacle avoidance by directly inducing local deformations of the space. We demonstrate the effectiveness of our methodology through two scenarios: first, the 2D learning of synthetic vector fields, and second, the learning of 3D robotic end-effector motions in real-world settings.
[ "['Bernardo Fichera' 'Aude Billard']" ]
null
null
2403.11960
null
null
http://arxiv.org/pdf/2403.11960v1
2024-03-18T16:57:16Z
2024-03-18T16:57:16Z
CASPER: Causality-Aware Spatiotemporal Graph Neural Networks for Spatiotemporal Time Series Imputation
Spatiotemporal time series is the foundation of understanding human activities and their impacts, which is usually collected via monitoring sensors placed at different locations. The collected data usually contains missing values due to various failures, which have a significant impact on data analysis. To impute the missing values, many methods have been introduced. When recovering a specific data point, most existing methods tend to take into consideration all the information relevant to that point regardless of whether they have a cause-and-effect relationship. During data collection, it is inevitable that some unknown confounders are included, e.g., background noise in time series and non-causal shortcut edges in the constructed sensor network. These confounders could open backdoor paths between the input and output; in other words, they establish non-causal correlations between the input and output. Over-exploiting these non-causal correlations could result in overfitting and make the model vulnerable to noise. In this paper, we first revisit spatiotemporal time series imputation from a causal perspective, which shows the causal relationships among the input, output, embeddings and confounders. Next, we show how to block the confounders via the frontdoor adjustment. Based on the results of the frontdoor adjustment, we introduce a novel Causality-Aware SPatiotEmpoRal graph neural network (CASPER), which contains a novel Spatiotemporal Causal Attention (SCA) and a Prompt Based Decoder (PBD). PBD could reduce the impact of confounders and SCA could discover the sparse causal relationships among embeddings. Theoretical analysis reveals that SCA discovers causal relationships based on the values of gradients. We evaluate CASPER on three real-world datasets, and the experimental results show that CASPER outperforms the baselines and effectively discovers causal relationships.
[ "['Baoyu Jing' 'Dawei Zhou' 'Kan Ren' 'Carl Yang']" ]
null
null
2403.11961
null
null
http://arxiv.org/pdf/2403.11961v1
2024-03-18T16:58:23Z
2024-03-18T16:58:23Z
Enhanced Event-Based Video Reconstruction with Motion Compensation
Deep neural networks for event-based video reconstruction often suffer from a lack of interpretability and have high memory demands. A lightweight network called CISTA-LSTC has recently been introduced showing that high-quality reconstruction can be achieved through the systematic design of its architecture. However, its modelling assumption that input signals and output reconstructed frame share the same sparse representation neglects the displacement caused by motion. To address this, we propose warping the input intensity frames and sparse codes to enhance reconstruction quality. A CISTA-Flow network is constructed by integrating a flow network with CISTA-LSTC for motion compensation. The system relies solely on events, in which predicted flow aids in reconstruction and then reconstructed frames are used to facilitate flow estimation. We also introduce an iterative training framework for this combined system. Results demonstrate that our approach achieves state-of-the-art reconstruction accuracy and simultaneously provides reliable dense flow estimation. Furthermore, our model exhibits flexibility in that it can integrate different flow networks, suggesting its potential for further performance enhancement.
[ "['Siying Liu' 'Pier Luigi Dragotti']" ]
null
null
2403.11963
null
null
http://arxiv.org/pdf/2403.11963v1
2024-03-18T17:02:41Z
2024-03-18T17:02:41Z
Transfer Learning Beyond Bounded Density Ratios
We study the fundamental problem of transfer learning where a learning algorithm collects data from some source distribution $P$ but needs to perform well with respect to a different target distribution $Q$. A standard change of measure argument implies that transfer learning happens when the density ratio $dQ/dP$ is bounded. Yet, prior thought-provoking works by Kpotufe and Martinet (COLT, 2018) and Hanneke and Kpotufe (NeurIPS, 2019) demonstrate cases where the ratio $dQ/dP$ is unbounded, but transfer learning is possible. In this work, we focus on transfer learning over the class of low-degree polynomial estimators. Our main result is a general transfer inequality over the domain $\mathbb{R}^n$, proving that non-trivial transfer learning for low-degree polynomials is possible under very mild assumptions, going well beyond the classical assumption that $dQ/dP$ is bounded. For instance, it always applies if $Q$ is a log-concave measure and the inverse ratio $dP/dQ$ is bounded. To demonstrate the applicability of our inequality, we obtain new results in the settings of: (1) the classical truncated regression setting, where $dQ/dP$ equals infinity, and (2) the more recent out-of-distribution generalization setting for in-context learning linear functions with transformers. We also provide a discrete analogue of our transfer inequality on the Boolean Hypercube $\{-1,1\}^n$, and study its connections with the recent problem of Generalization on the Unseen of Abbe, Bengio, Lotfi and Rizk (ICML, 2023). Our main conceptual contribution is that the maximum influence of the error of the estimator $\widehat{f}-f^*$ under $Q$, $\mathrm{I}_{\max}(\widehat{f}-f^*)$, acts as a sufficient condition for transferability; when $\mathrm{I}_{\max}(\widehat{f}-f^*)$ is appropriately bounded, transfer is possible over the Boolean domain.
[ "['Alkis Kalavasis' 'Ilias Zadik' 'Manolis Zampetakis']" ]
null
null
2403.11964
null
null
http://arxiv.org/pdf/2403.11964v1
2024-03-18T17:04:33Z
2024-03-18T17:04:33Z
Probabilistic Calibration by Design for Neural Network Regression
Generating calibrated and sharp neural network predictive distributions for regression problems is essential for optimal decision-making in many real-world applications. To address the miscalibration issue of neural networks, various methods have been proposed to improve calibration, including post-hoc methods that adjust predictions after training and regularization methods that act during training. While post-hoc methods have shown better improvement in calibration compared to regularization methods, the post-hoc step is completely independent of model training. We introduce a novel end-to-end model training procedure called Quantile Recalibration Training, integrating post-hoc calibration directly into the training process without additional parameters. We also present a unified algorithm that includes our method and other post-hoc and regularization methods, as particular cases. We demonstrate the performance of our method in a large-scale experiment involving 57 tabular regression datasets, showcasing improved predictive accuracy while maintaining calibration. We also conduct an ablation study to evaluate the significance of different components within our proposed method, as well as an in-depth analysis of the impact of the base model and different hyperparameters on predictive accuracy.
[ "['Victor Dheur' 'Souhaib Ben Taieb']" ]
null
null
2403.11966
null
null
http://arxiv.org/pdf/2403.11966v1
2024-03-18T17:05:24Z
2024-03-18T17:05:24Z
Informed Spectral Normalized Gaussian Processes for Trajectory Prediction
Prior parameter distributions provide an elegant way to represent prior expert and world knowledge for informed learning. Previous work has shown that using such informative priors to regularize probabilistic deep learning (DL) models increases their performance and data-efficiency. However, commonly used sampling-based approximations for probabilistic DL models can be computationally expensive, requiring multiple inference passes and longer training times. Promising alternatives are compute-efficient last layer kernel approximations like spectral normalized Gaussian processes (SNGPs). We propose a novel regularization-based continual learning method for SNGPs, which enables the use of informative priors that represent prior knowledge learned from previous tasks. Our proposal builds upon well-established methods and requires no rehearsal memory or parameter expansion. We apply our informed SNGP model to the trajectory prediction problem in autonomous driving by integrating prior drivability knowledge. On two public datasets, we investigate its performance under diminishing training data and across locations, and thereby demonstrate an increase in data-efficiency and robustness to location-transfers over non-informed and informed baselines.
[ "['Christian Schlauch' 'Christian Wirth' 'Nadja Klein']" ]
null
null
2403.11968
null
null
http://arxiv.org/pdf/2403.11968v1
2024-03-18T17:08:24Z
2024-03-18T17:08:24Z
Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning. In these applications, conditional diffusion models incorporate various conditional information, such as prompt input, to guide the sample generation towards desired properties. Despite the empirical success, theory of conditional diffusion models is largely missing. This paper bridges this gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models. Our analysis yields a sample complexity bound that adapts to the smoothness of the data distribution and matches the minimax lower bound. The key to our theoretical development lies in an approximation result for the conditional score function, which relies on a novel diffused Taylor approximation technique. Moreover, we demonstrate the utility of our statistical theory in elucidating the performance of conditional diffusion models across diverse applications, including model-based transition kernel estimation in reinforcement learning, solving inverse problems, and reward conditioned sample generation.
[ "['Hengyu Fu' 'Zhuoran Yang' 'Mengdi Wang' 'Minshuo Chen']" ]
null
null
2403.11981
null
null
http://arxiv.org/pdf/2403.11981v1
2024-03-18T17:17:07Z
2024-03-18T17:17:07Z
Diffusion Denoising as a Certified Defense against Clean-label Poisoning
We present a certified defense to clean-label poisoning attacks. These attacks work by injecting a small number of poisoning samples (e.g., 1%) that contain $p$-norm bounded adversarial perturbations into the training data to induce a targeted misclassification of a test-time input. Inspired by the adversarial robustness achieved by $denoised$ $smoothing$, we show how an off-the-shelf diffusion model can sanitize the tampered training data. We extensively test our defense against seven clean-label poisoning attacks and reduce their attack success to 0-16% with only a negligible drop in the test time accuracy. We compare our defense with existing countermeasures against clean-label poisoning, showing that the defense reduces the attack success the most and offers the best model utility. Our results highlight the need for future work on developing stronger clean-label attacks and using our certified yet practical defense as a strong baseline to evaluate these attacks.
[ "['Sanghyun Hong' 'Nicholas Carlini' 'Alexey Kurakin']" ]
null
null
2403.11996
null
null
http://arxiv.org/pdf/2403.11996v3
2024-06-10T19:06:26Z
2024-03-18T17:30:27Z
Accelerating Scientific Discovery with Generative Knowledge Extraction, Graph-Based Representation, and Multimodal Intelligent Graph Reasoning
Leveraging generative Artificial Intelligence (AI), we have transformed a dataset comprising 1,000 scientific papers into an ontological knowledge graph. Through an in-depth structural analysis, we have calculated node degrees, identified communities and connectivities, and evaluated clustering coefficients and betweenness centrality of pivotal nodes, uncovering fascinating knowledge architectures. The graph has an inherently scale-free nature, is highly connected, and can be used for graph reasoning by taking advantage of transitive and isomorphic properties that reveal unprecedented interdisciplinary relationships that can be used to answer queries, identify gaps in knowledge, propose never-before-seen material designs, and predict material behaviors. We compute deep node embeddings for combinatorial node similarity ranking for use in a path sampling strategy that links dissimilar concepts that have previously not been related. One comparison revealed structural parallels between biological materials and Beethoven's 9th Symphony, highlighting shared patterns of complexity through isomorphic mapping. In another example, the algorithm proposed a hierarchical mycelium-based composite based on integrating path sampling with principles extracted from Kandinsky's 'Composition VII' painting. The resulting material integrates an innovative set of concepts that include a balance of chaos/order, adjustable porosity, mechanical strength, and complex patterned chemical functionalization. We uncover other isomorphisms across science, technology and art, revealing a nuanced ontology of immanence that reveals a context-dependent heterarchical interplay of constituents. Graph-based generative AI achieves a far higher degree of novelty, explorative capacity, and technical detail than conventional approaches and establishes a widely useful framework for innovation by revealing hidden connections.
[ "['Markus J. Buehler']" ]
null
null
2403.11998
null
null
http://arxiv.org/pdf/2403.11998v2
2024-06-18T15:27:16Z
2024-03-18T17:32:23Z
Learning Useful Representations of Recurrent Neural Network Weight Matrices
Recurrent Neural Networks (RNNs) are general-purpose parallel-sequential computers. The program of an RNN is its weight matrix. How to learn useful representations of RNN weights that facilitate RNN analysis as well as downstream tasks? While the mechanistic approach directly looks at some RNN's weights to predict its behavior, the functionalist approach analyzes its overall functionality, specifically its input-output mapping. We consider several mechanistic approaches for RNN weights and adapt the permutation equivariant Deep Weight Space layer for RNNs. Our two novel functionalist approaches extract information from RNN weights by 'interrogating' the RNN through probing inputs. We develop a theoretical framework that demonstrates conditions under which the functionalist approach can generate rich representations that help determine RNN behavior. We release the first two 'model zoo' datasets for RNN weight representation learning. One consists of generative models of a class of formal languages, and the other one of classifiers of sequentially processed MNIST digits. With the help of an emulation-based self-supervised learning technique, we compare and evaluate the different RNN weight encoding techniques on multiple downstream applications. On the most challenging one, namely predicting which exact task the RNN was trained on, functionalist approaches show clear superiority.
[ "['Vincent Herrmann' 'Francesco Faccio' 'Jürgen Schmidhuber']" ]
null
null
2403.12005
null
null
http://arxiv.org/abs/2403.12005v2
2024-04-18T15:20:41Z
2024-03-18T17:42:27Z
Visualization for Trust in Machine Learning Revisited: The State of the Field in 2023
Visualization for explainable and trustworthy machine learning remains one of the most important and heavily researched fields within information visualization and visual analytics with various application domains, such as medicine, finance, and bioinformatics. After our 2020 state-of-the-art report comprising 200 techniques, we have persistently collected peer-reviewed articles describing visualization techniques, categorized them based on the previously established categorization schema consisting of 119 categories, and provided the resulting collection of 542 techniques in an online survey browser. In this survey article, we present the updated findings of new analyses of this dataset as of fall 2023 and discuss trends, insights, and eight open challenges for using visualizations in machine learning. Our results corroborate the rapidly growing trend of visualization techniques for increasing trust in machine learning models in the past three years, with visualization found to help improve popular model explainability methods and check new deep learning architectures, for instance.
[ "['Angelos Chatzimparmpas' 'Kostiantyn Kucher' 'Andreas Kerren']" ]
null
null
2403.12007
null
null
http://arxiv.org/pdf/2403.12007v3
2024-04-19T12:47:27Z
2024-03-18T17:43:40Z
Defining Effective Engagement For Enhancing Cancer Patients' Well-being with Mobile Digital Behavior Change Interventions
Digital Behavior Change Interventions (DBCIs) are supporting the development of new health behaviors. Evaluating their effectiveness is crucial for their improvement and understanding of success factors. However, comprehensive guidance for developers, particularly in small-scale studies with ethical constraints, is limited. Building on the CAPABLE project, this study aims to define effective engagement with DBCIs for supporting cancer patients in enhancing their quality of life. We identify metrics for measuring engagement, explore the interest of both patients and clinicians in DBCIs, and propose hypotheses for assessing the impact of DBCIs in such contexts. Our findings suggest that clinician prescriptions significantly increase sustained engagement with mobile DBCIs. In addition, while one weekly engagement with a DBCI is sufficient to maintain well-being, transitioning from extrinsic to intrinsic motivation may require a higher level of engagement.
[ "['Aneta Lisowska' 'Szymon Wilk' 'Laura Locati' 'Mimma Rizzo'\n 'Lucia Sacchi' 'Silvana Quaglini' 'Matteo Terzaghi' 'Valentina Tibollo'\n 'Mor Peleg']" ]
null
null
2403.12012
null
null
http://arxiv.org/pdf/2403.12012v2
2024-06-18T01:08:24Z
2024-03-18T17:50:20Z
Convergence of Kinetic Langevin Monte Carlo on Lie groups
Explicit, momentum-based dynamics for optimizing functions defined on Lie groups was recently constructed, based on techniques such as variational optimization and left trivialization. We appropriately add tractable noise to the optimization dynamics to turn it into a sampling dynamics, leveraging the advantageous feature that the trivialized momentum variable is Euclidean despite that the potential function lives on a manifold. We then propose a Lie-group MCMC sampler, by delicately discretizing the resulting kinetic-Langevin-type sampling dynamics. The Lie group structure is exactly preserved by this discretization. Exponential convergence with explicit convergence rate for both the continuous dynamics and the discrete sampler are then proved under $W_2$ distance. Only compactness of the Lie group and geodesically $L$-smoothness of the potential function are needed. To the best of our knowledge, this is the first convergence result for kinetic Langevin on curved spaces, and also the first quantitative result that requires no convexity or, at least not explicitly, any common relaxation such as isoperimetry.
[ "['Lingkai Kong' 'Molei Tao']" ]
null
null
2403.12014
null
null
http://arxiv.org/pdf/2403.12014v2
2024-07-12T17:39:19Z
2024-03-18T17:51:16Z
EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents
Recent SOTA approaches for embodied learning via interaction directly employ large language models (LLMs) as agents to determine the next steps in an environment. Due to their world knowledge and reasoning capabilities, LLM agents achieve stronger performance than previous smaller agents based on reinforcement learning (RL); however, frequently calling LLMs is slow and expensive. Instead of directly employing LLMs as agents, can we use LLMs' reasoning capabilities to adaptively create training environments to help smaller RL agents learn useful skills that they are weak at? We propose EnvGen, a novel framework to address this question. We first prompt an LLM to generate training environments by giving it the task description and simulator objectives that the agents should learn and then asking it to generate a set of environment configurations (e.g., different terrains, items initially given to agents, etc.). Next, we train a small RL agent in a mixture of the original and LLM-generated environments. Then, we enable the LLM to continuously adapt the generated environments to progressively improve the skills that the agent is weak at, by providing feedback to the LLM in the form of the agent's performance. We demonstrate the usefulness of EnvGen with comprehensive experiments in Crafter and Heist environments. We find that a small RL agent trained with EnvGen can outperform SOTA methods, including a GPT-4 agent, and learns long-horizon tasks significantly faster. We also show that using an LLM to adapt environments dynamically outperforms curriculum learning approaches and how the environments are adapted to help improve RL agents' weaker skills over time. Additionally, EnvGen is substantially more efficient as it only uses a small number of LLM calls (e.g., 4 in total), whereas LLM agents require thousands of calls. Lastly, we present detailed ablation studies for EnvGen design choices.
[ "['Abhay Zala' 'Jaemin Cho' 'Han Lin' 'Jaehong Yoon' 'Mohit Bansal']" ]
null
null
2403.12017
null
null
http://arxiv.org/pdf/2403.12017v1
2024-03-18T17:52:57Z
2024-03-18T17:52:57Z
Supervised Fine-Tuning as Inverse Reinforcement Learning
The prevailing approach to aligning Large Language Models (LLMs) typically relies on human or AI feedback and assumes access to specific types of preference datasets. In our work, we question the efficacy of such datasets and explore various scenarios where alignment with expert demonstrations proves more realistic. We build a sequential decision-making framework to formulate the problem of aligning LLMs using demonstration datasets. Drawing insights from inverse reinforcement learning and imitation learning, we introduce various approaches for divergence minimization in the LLM alignment tasks. Our analysis highlights the mass-covering and mode-seeking behaviors of these different approaches. Inclusively, we examine the pros and cons of the classical supervised fine-tuning method, elaborating on scenarios where different methods shine.
[ "['Hao Sun']" ]
null
null
2403.12025
null
null
http://arxiv.org/pdf/2403.12025v1
2024-03-18T17:56:37Z
2024-03-18T17:56:37Z
A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models
Large language models (LLMs) hold immense promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. In this work, we present resources and methodologies for surfacing biases with potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions and then conduct an empirical case study with Med-PaLM 2, resulting in the largest human evaluation study in this area to date. Our contributions include a multifactorial framework for human assessment of LLM-generated answers for biases, and EquityMedQA, a collection of seven newly-released datasets comprising both manually-curated and LLM-generated questions enriched for adversarial queries. Both our human assessment framework and dataset design process are grounded in an iterative participatory approach and review of possible biases in Med-PaLM 2 answers to adversarial queries. Through our empirical study, we find that the use of a collection of datasets curated through a variety of methodologies, coupled with a thorough evaluation protocol that leverages multiple assessment rubric designs and diverse rater groups, surfaces biases that may be missed via narrower evaluation approaches. Our experience underscores the importance of using diverse assessment methodologies and involving raters of varying backgrounds and expertise. We emphasize that while our framework can identify specific forms of bias, it is not sufficient to holistically assess whether the deployment of an AI system promotes equitable health outcomes. We hope the broader community leverages and builds on these tools and methods towards realizing a shared goal of LLMs that promote accessible and equitable healthcare for all.
[ "['Stephen R. Pfohl' 'Heather Cole-Lewis' 'Rory Sayres' 'Darlene Neal'\n 'Mercy Asiedu' 'Awa Dieng' 'Nenad Tomasev' 'Qazi Mamunur Rashid'\n 'Shekoofeh Azizi' 'Negar Rostamzadeh' 'Liam G. McCoy' 'Leo Anthony Celi'\n 'Yun Liu' 'Mike Schaekermann' 'Alanna Walton' 'Alicia Parrish'\n 'Chirag Nagpal' 'Preeti Singh' 'Akeiylah Dewitt' 'Philip Mansfield'\n 'Sushant Prakash' 'Katherine Heller' 'Alan Karthikesalingam'\n 'Christopher Semturs' 'Joelle Barral' 'Greg Corrado' 'Yossi Matias'\n 'Jamila Smith-Loud' 'Ivor Horn' 'Karan Singhal']" ]
null
null
2403.12026
null
null
http://arxiv.org/pdf/2403.12026v1
2024-03-18T17:57:02Z
2024-03-18T17:57:02Z
FlexCap: Generating Rich, Localized, and Flexible Captions in Images
We introduce a versatile $\textit{flexible-captioning}$ vision-language model (VLM) capable of generating region-specific descriptions of varying lengths. The model, FlexCap, is trained to produce length-conditioned captions for input bounding boxes, and this allows control over the information density of its output, with descriptions ranging from concise object labels to detailed captions. To achieve this we create large-scale training datasets of image region descriptions of varying length, starting from captioned images. This flexible-captioning capability has several valuable applications. First, FlexCap demonstrates superior performance in dense captioning tasks on the Visual Genome dataset. Second, a visual question answering (VQA) system can be built by employing FlexCap to generate localized descriptions as inputs to a large language model. The resulting system achieves state-of-the-art zero-shot performance on a number of VQA datasets. We also demonstrate that a $\textit{localize-then-describe}$ approach with FlexCap can be better at open-ended object detection than a $\textit{describe-then-localize}$ approach with other VLMs. We highlight a novel characteristic of FlexCap, which is its ability to extract diverse visual information through prefix conditioning. Finally, we qualitatively demonstrate FlexCap's broad applicability in tasks such as image labeling, object attribute recognition, and visual dialog. Project webpage: https://flex-cap.github.io.
[ "['Debidatta Dwibedi' 'Vidhi Jain' 'Jonathan Tompson' 'Andrew Zisserman'\n 'Yusuf Aytar']" ]
null
null
2403.12029
null
null
http://arxiv.org/pdf/2403.12029v1
2024-03-18T17:58:02Z
2024-03-18T17:58:02Z
Align and Distill: Unifying and Improving Domain Adaptive Object Detection
Object detectors often perform poorly on data that differs from their training set. Domain adaptive object detection (DAOD) methods have recently demonstrated strong results on addressing this challenge. Unfortunately, we identify systemic benchmarking pitfalls that call past results into question and hamper further progress: (a) Overestimation of performance due to underpowered baselines, (b) Inconsistent implementation practices preventing transparent comparisons of methods, and (c) Lack of generality due to outdated backbones and lack of diversity in benchmarks. We address these problems by introducing: (1) A unified benchmarking and implementation framework, Align and Distill (ALDI), enabling comparison of DAOD methods and supporting future development, (2) A fair and modern training and evaluation protocol for DAOD that addresses benchmarking pitfalls, (3) A new DAOD benchmark dataset, CFC-DAOD, enabling evaluation on diverse real-world data, and (4) A new method, ALDI++, that achieves state-of-the-art results by a large margin. ALDI++ outperforms the previous state-of-the-art by +3.5 AP50 on Cityscapes to Foggy Cityscapes, +5.7 AP50 on Sim10k to Cityscapes (where ours is the only method to outperform a fair baseline), and +2.0 AP50 on CFC Kenai to Channel. Our framework, dataset, and state-of-the-art method offer a critical reset for DAOD and provide a strong foundation for future research. Code and data are available: https://github.com/justinkay/aldi and https://github.com/visipedia/caltech-fish-counting.
[ "['Justin Kay' 'Timm Haucke' 'Suzanne Stathatos' 'Siqi Deng' 'Erik Young'\n 'Pietro Perona' 'Sara Beery' 'Grant Van Horn']" ]
null
null
2403.12030
null
null
http://arxiv.org/pdf/2403.12030v1
2024-03-18T17:58:13Z
2024-03-18T17:58:13Z
Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning
Class-Incremental Learning (CIL) requires a learning system to continually learn new classes without forgetting. Despite the strong performance of Pre-Trained Models (PTMs) in CIL, a critical issue persists: learning new classes often results in the overwriting of old ones. Excessive modification of the network causes forgetting, while minimal adjustments lead to an inadequate fit for new classes. As a result, it is desired to figure out a way of efficient model updating without harming former knowledge. In this paper, we propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL. To enable model updating without conflict, we train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces. These adapters span a high-dimensional feature space, enabling joint decision-making across multiple subspaces. As data evolves, the expanding subspaces render the old class classifiers incompatible with new-stage spaces. Correspondingly, we design a semantic-guided prototype complement strategy that synthesizes old classes' new features without using any old class instance. Extensive experiments on seven benchmark datasets verify EASE's state-of-the-art performance. Code is available at: https://github.com/sun-hailong/CVPR24-Ease
[ "['Da-Wei Zhou' 'Hai-Long Sun' 'Han-Jia Ye' 'De-Chuan Zhan']" ]
null
null
2403.12031
null
null
http://arxiv.org/pdf/2403.12031v2
2024-03-28T17:56:28Z
2024-03-18T17:59:04Z
RouterBench: A Benchmark for Multi-LLM Routing System
As the range of applications for Large Language Models (LLMs) continues to grow, the demand for effective serving solutions becomes increasingly critical. Despite the versatility of LLMs, no single model can optimally address all tasks and applications, particularly when balancing performance with cost. This limitation has led to the development of LLM routing systems, which combine the strengths of various models to overcome the constraints of individual LLMs. Yet, the absence of a standardized benchmark for evaluating the performance of LLM routers hinders progress in this area. To bridge this gap, we present RouterBench, a novel evaluation framework designed to systematically assess the efficacy of LLM routing systems, along with a comprehensive dataset comprising over 405k inference outcomes from representative LLMs to support the development of routing strategies. We further propose a theoretical framework for LLM routing, and deliver a comparative analysis of various routing approaches through RouterBench, highlighting their potentials and limitations within our evaluation framework. This work not only formalizes and advances the development of LLM routing systems but also sets a standard for their assessment, paving the way for more accessible and economically viable LLM deployments. The code and data are available at https://github.com/withmartian/routerbench.
[ "['Qitian Jason Hu' 'Jacob Bieker' 'Xiuyu Li' 'Nan Jiang'\n 'Benjamin Keigwin' 'Gaurav Ranganath' 'Kurt Keutzer'\n 'Shriyash Kaustubh Upadhyay']" ]
null
null
2403.12034
null
null
http://arxiv.org/pdf/2403.12034v1
2024-03-18T17:59:12Z
2024-03-18T17:59:12Z
VFusion3D: Learning Scalable 3D Generative Models from Video Diffusion Models
This paper presents a novel paradigm for building scalable 3D generative models utilizing pre-trained video diffusion models. The primary obstacle in developing foundation 3D generative models is the limited availability of 3D data. Unlike images, texts, or videos, 3D data are not readily accessible and are difficult to acquire. This results in a significant disparity in scale compared to the vast quantities of other types of data. To address this issue, we propose using a video diffusion model, trained with extensive volumes of text, images, and videos, as a knowledge source for 3D data. By unlocking its multi-view generative capabilities through fine-tuning, we generate a large-scale synthetic multi-view dataset to train a feed-forward 3D generative model. The proposed model, VFusion3D, trained on nearly 3M synthetic multi-view data, can generate a 3D asset from a single image in seconds and achieves superior performance when compared to current SOTA feed-forward 3D generative models, with users preferring our results over 70% of the time.
[ "['Junlin Han' 'Filippos Kokkinos' 'Philip Torr']" ]
null
null
2403.12036
null
null
http://arxiv.org/pdf/2403.12036v1
2024-03-18T17:59:40Z
2024-03-18T17:59:40Z
One-Step Image Translation with Text-to-Image Models
In this work, we address two limitations of existing conditional diffusion models: their slow inference speed due to the iterative denoising process and their reliance on paired data for model fine-tuning. To tackle these issues, we introduce a general method for adapting a single-step diffusion model to new tasks and domains through adversarial learning objectives. Specifically, we consolidate various modules of the vanilla latent diffusion model into a single end-to-end generator network with small trainable weights, enhancing its ability to preserve the input image structure while reducing overfitting. We demonstrate that, for unpaired settings, our model CycleGAN-Turbo outperforms existing GAN-based and diffusion-based methods for various scene translation tasks, such as day-to-night conversion and adding/removing weather effects like fog, snow, and rain. We extend our method to paired settings, where our model pix2pix-Turbo is on par with recent works like Control-Net for Sketch2Photo and Edge2Image, but with a single-step inference. This work suggests that single-step diffusion models can serve as strong backbones for a range of GAN learning objectives. Our code and models are available at https://github.com/GaParmar/img2img-turbo.
[ "['Gaurav Parmar' 'Taesung Park' 'Srinivasa Narasimhan' 'Jun-Yan Zhu']" ]
null
null
2403.12044
null
null
http://arxiv.org/pdf/2403.12044v1
2023-10-27T16:41:29Z
2023-10-27T16:41:29Z
Mobile Application for Oral Disease Detection using Federated Learning
The mouth, often regarded as a window to the internal state of the body, plays an important role in reflecting one's overall health. Poor oral hygiene has far-reaching consequences, contributing to severe conditions like heart disease, cancer, and diabetes, while inadequate care leads to discomfort, pain, and costly treatments. Federated Learning (FL) for object detection can be utilized for this use case due to the sensitivity of the oral image data of the patients. FL ensures data privacy by storing the images used for object detection on the local device and training the model on the edge. The updated weights are federated to a central server where all the collected weights are updated via the Federated Averaging algorithm. Finally, we have developed a mobile app named OralH which provides user-friendly solutions, allowing people to conduct self-assessments through mouth scans and providing quick oral health insights. Upon detection of issues, the application alerts the user about potential oral health concerns or diseases and provides details about dental clinics in the user's locality. Designed as a Progressive Web Application (PWA), the platform ensures ubiquitous access, catering to users across devices for a seamless experience. The application aims to provide state-of-the-art segmentation and detection techniques, leveraging the YOLOv8 object detection model to identify oral hygiene issues and diseases. This study deals with the benefits of leveraging FL in healthcare with promising real-world results.
[ "['Shankara Narayanan V' 'Sneha Varsha M' 'Syed Ashfaq Ahmed'\n 'Guruprakash J']" ]
null
null
2403.12063
null
null
http://arxiv.org/pdf/2403.12063v2
2024-06-01T10:54:50Z
2024-02-09T02:23:47Z
Consistency Model is an Effective Posterior Sample Approximation for Diffusion Inverse Solvers
Diffusion Inverse Solvers (DIS) are designed to sample from the conditional distribution $p_{\theta}(X_0|y)$, with a predefined diffusion model $p_{\theta}(X_0)$, an operator $f(\cdot)$, and a measurement $y=f(x'_0)$ derived from an unknown image $x'_0$. Existing DIS estimate the conditional score function by evaluating $f(\cdot)$ with an approximated posterior sample drawn from $p_{\theta}(X_0|X_t)$. However, most prior approximations rely on the posterior means, which may not lie in the support of the image distribution, thereby potentially diverging from the appearance of genuine images. Such out-of-support samples may significantly degrade the performance of the operator $f(\cdot)$, particularly when it is a neural network. In this paper, we introduce a novel approach for posterior approximation that guarantees to generate valid samples within the support of the image distribution, and also enhances the compatibility with neural network-based operators $f(\cdot)$. We first demonstrate that the solution of the Probability Flow Ordinary Differential Equation (PF-ODE) with an initial value $x_t$ yields an effective posterior sample from $p_{\theta}(X_0|X_t=x_t)$. Based on this observation, we adopt the Consistency Model (CM), which is distilled from the PF-ODE, for posterior sampling. Furthermore, we design a novel family of DIS using only the CM. Through extensive experiments, we show that our proposed method for posterior sample approximation substantially enhances the effectiveness of DIS for neural network operators $f(\cdot)$ (e.g., in semantic segmentation). Additionally, our experiments demonstrate the effectiveness of the new CM-based inversion techniques. The source code is provided in the supplementary material.
[ "['Tongda Xu' 'Ziran Zhu' 'Jian Li' 'Dailan He' 'Yuanyuan Wang' 'Ming Sun'\n 'Ling Li' 'Hongwei Qin' 'Yan Wang' 'Jingjing Liu' 'Ya-Qin Zhang']" ]
null
null
2403.12066
null
null
http://arxiv.org/abs/2403.12066v1
2024-02-09T17:12:04Z
2024-02-09T17:12:04Z
Adapting SAM for Volumetric X-Ray Data-sets of Arbitrary Sizes
Objective: We propose a new approach for volumetric instance segmentation in X-ray Computed Tomography (CT) data for Non-Destructive Testing (NDT) by combining the Segment Anything Model (SAM) with tile-based Flood Filling Networks (FFN). Our work evaluates the performance of SAM on volumetric NDT data-sets and demonstrates its effectiveness in segmenting instances in challenging imaging scenarios. Methods: We implemented and evaluated techniques to extend the image-based SAM algorithm for use with volumetric data-sets, enabling the segmentation of three-dimensional objects using FFN's spatial adaptability. The tile-based approach for SAM leverages FFN's capabilities to segment objects of any size. We also explore the use of dense prompts to guide SAM in combining segmented tiles for improved segmentation accuracy. Results: Our research indicates the potential of combining SAM with FFN for volumetric instance segmentation tasks, particularly in NDT scenarios and in segmenting large entities and objects. Conclusion: While acknowledging remaining limitations, our study provides insights and establishes a foundation for advancements in instance segmentation in NDT scenarios.
[ "['Roland Gruber' 'Steffen Rüger' 'Thomas Wittenberg']" ]
null
null
2403.12068
null
null
http://arxiv.org/abs/2403.12068v1
2024-02-11T11:51:32Z
2024-02-11T11:51:32Z
Process mining for self-regulated learning assessment in e-learning
Content assessment has broadly improved in e-learning scenarios in recent decades. However, the eLearning process can give rise to a spatial and temporal gap that poses interesting challenges for assessment of not only content, but also students' acquisition of core skills such as self-regulated learning. Our objective was to discover students' self-regulated learning processes during an eLearning course by using Process Mining Techniques. We applied a new algorithm in the educational domain called Inductive Miner over the interaction traces from 101 university students in a course given over one semester on the Moodle 2.0 platform. Data was extracted from the platform's event logs with 21629 traces in order to discover students' self-regulation models that contribute to improving the instructional process. The Inductive Miner algorithm discovered optimal models in terms of fitness for both Pass and Fail students in this dataset, as well as models at a certain level of granularity that can be interpreted in educational terms, which are the most important achievement in model discovery. We can conclude that although students who passed did not follow the instructors' suggestions exactly, they did follow the logic of a successful self-regulated learning process as opposed to their failing classmates. The Process Mining models also allow us to examine which specific actions the students performed, and it was particularly interesting to see a high presence of actions related to forum-supported collaborative learning in the Pass group and an absence of those in the Fail group.
[ "['R. Cerezo' 'A. Bogarin' 'M. Esteban' 'C. Romero']" ]
null
null
2403.12069
null
null
http://arxiv.org/pdf/2403.12069v1
2024-02-12T06:13:24Z
2024-02-12T06:13:24Z
Fairness Evaluation for Uplift Modeling in the Absence of Ground Truth
The acceleration in the adoption of AI-based automated decision-making systems poses a challenge for evaluating the fairness of algorithmic decisions, especially in the absence of ground truth. When designing interventions, uplift modeling is used extensively to identify candidates that are likely to benefit from treatment. However, these models remain particularly susceptible to fairness evaluation due to the lack of ground truth on the outcome measure since a candidate cannot be in both treatment and control simultaneously. In this article, we propose a framework that overcomes the missing ground truth problem by generating surrogates to serve as a proxy for counterfactual labels of uplift modeling campaigns. We then leverage the surrogate ground truth to conduct a more comprehensive binary fairness evaluation. We show how to apply the approach in a comprehensive study from a real-world marketing campaign for promotional offers and demonstrate its enhancement for fairness evaluation.
[ "['Serdar Kadioglu' 'Filip Michalsky']" ]
null
null
2403.12072
null
null
http://arxiv.org/pdf/2403.12072v1
2024-02-13T15:23:21Z
2024-02-13T15:23:21Z
Floralens: a Deep Learning Model for the Portuguese Native Flora
Machine-learning techniques, namely deep convolutional neural networks, are pivotal for image-based identification of biological species in many Citizen Science platforms. However, the construction of critically sized and sampled datasets to train the networks and the choice of the network architectures themselves remain little documented and therefore do not lend themselves to easy replication. In this paper, we develop a streamlined methodology for building datasets for biological taxa from publicly available research-grade datasets and for deriving models from these datasets using off-the-shelf deep convolutional neural networks such as those provided by Google's AutoML Vision cloud service. Our case study is the Portuguese native flora, anchored in a high-quality dataset provided by the Sociedade Portuguesa de Botânica and scaled up by adding sampled data from iNaturalist, Pl@ntNet, and Observation.org. We find that, with a careful dataset design, off-the-shelf machine-learning cloud services produce accurate models with relatively little effort that rival those provided by state-of-the-art citizen science platforms. The best model we derived, dubbed Floralens, has been integrated into the public website of Project Biolens, where we gather models for other taxa as well. The dataset used to train the model (its namesake) is publicly available on Zenodo.
[ "['António Filgueiras' 'Eduardo R. B. Marques' 'Luís M. B. Lopes'\n 'Miguel Marques' 'Hugo Silva']" ]
null
null
2403.12074
null
null
http://arxiv.org/pdf/2403.12074v2
2024-03-22T04:35:16Z
2024-02-14T17:23:16Z
Beyond Quantities: Machine Learning-based Characterization of Inequality in Infrastructure Quality Provision in Cities
The objective of this study is to characterize inequality in infrastructure quality across urban areas. While a growing body of literature has recognized the importance of characterizing infrastructure inequality in cities and provided quantified metrics to inform urban development plans, the majority of existing approaches focus primarily on measuring the quantity of infrastructure, assuming that more infrastructure is better. The existing research also focuses primarily on index-based approaches in which the status of infrastructure provision in urban areas is determined based on assumed subjective weights. The focus on infrastructure quantity and the use of indices obtained from subjective weights have hindered the ability to properly examine infrastructure inequality as it pertains to urban inequality and environmental justice considerations. Recognizing this gap, we propose a machine learning-based approach in which infrastructure features that shape environmental hazard exposure are identified, and the weights obtained by the model are used to calculate an infrastructure quality provision for spatial areas of cities and, accordingly, to quantify the extent of inequality in infrastructure quality. The implementation of the model in five metropolitan areas in the U.S. demonstrates the capability of the proposed approach in characterizing inequality in infrastructure quality and capturing city-specific differences in the weights of infrastructure features. The results also show that areas in which low-income populations reside have lower infrastructure quality provision, suggesting that lower infrastructure quality provision is a determinant of urban disparities. Accordingly, the proposed approach can be effectively used to inform integrated urban design strategies to promote infrastructure equity and environmental justice based on data-driven and machine intelligence-based insights.
[ "['Bo Li' 'Ali Mostafavi']" ]
null
null
2403.12075
null
null
http://arxiv.org/pdf/2403.12075v3
2024-05-14T01:24:50Z
2024-02-14T22:21:12Z
Adversarial Nibbler: An Open Red-Teaming Method for Identifying Diverse Harms in Text-to-Image Generation
With the rise of text-to-image (T2I) generative AI models reaching wide audiences, it is critical to evaluate model robustness against non-obvious attacks to mitigate the generation of offensive images. By focusing on ``implicitly adversarial'' prompts (those that trigger T2I models to generate unsafe images for non-obvious reasons), we isolate a set of difficult safety issues that human creativity is well-suited to uncover. To this end, we built the Adversarial Nibbler Challenge, a red-teaming methodology for crowdsourcing a diverse set of implicitly adversarial prompts. We have assembled a suite of state-of-the-art T2I models, employed a simple user interface to identify and annotate harms, and engaged diverse populations to capture long-tail safety issues that may be overlooked in standard testing. The challenge is run in consecutive rounds to enable a sustained discovery and analysis of safety pitfalls in T2I models. In this paper, we present an in-depth account of our methodology, a systematic study of novel attack strategies and discussion of safety failures revealed by challenge participants. We also release a companion visualization tool for easy exploration and derivation of insights from the dataset. The first challenge round resulted in over 10k prompt-image pairs with machine annotations for safety. A subset of 1.5k samples contains rich human annotations of harm types and attack styles. We find that 14% of images that humans consider harmful are mislabeled as ``safe'' by machines. We have identified new attack strategies that highlight the complexity of ensuring T2I model robustness. Our findings emphasize the necessity of continual auditing and adaptation as new vulnerabilities emerge. We are confident that this work will enable proactive, iterative safety assessments and promote responsible development of T2I models.
[ "['Jessica Quaye' 'Alicia Parrish' 'Oana Inel' 'Charvi Rastogi'\n 'Hannah Rose Kirk' 'Minsuk Kahng' 'Erin van Liemt' 'Max Bartolo'\n 'Jess Tsang' 'Justin White' 'Nathan Clement' 'Rafael Mosquera'\n 'Juan Ciro' 'Vijay Janapa Reddi' 'Lora Aroyo']" ]
null
null
2403.12076
null
null
http://arxiv.org/pdf/2403.12076v2
2024-04-16T08:19:47Z
2024-02-16T17:38:28Z
Neuron-centric Hebbian Learning
One of the most striking capabilities behind the learning mechanisms of the brain is the adaptation, through structural and functional plasticity, of its synapses. While synapses have the fundamental role of transmitting information across the brain, several studies show that it is the neuron activations that produce changes in synapses. Yet, most plasticity models devised for artificial Neural Networks (NNs), e.g., the ABCD rule, focus on synapses rather than neurons, therefore optimizing synapse-specific Hebbian parameters. This approach, however, increases the complexity of the optimization process, since each synapse is associated with multiple Hebbian parameters. To overcome this limitation, we propose a novel plasticity model, called Neuron-centric Hebbian Learning (NcHL), where optimization focuses on neuron- rather than synapse-specific Hebbian parameters. Compared to the ABCD rule, NcHL reduces the parameters from $5W$ to $5N$, where $W$ and $N$ are the numbers of weights and neurons, and usually $N \ll W$. We also devise a ``weightless'' NcHL model, which requires less memory by approximating the weights based on a record of neuron activations. Our experiments on two robotic locomotion tasks reveal that NcHL performs comparably to the ABCD rule, despite using up to $\sim 97$ times fewer parameters, thus allowing for scalable plasticity.
[ "['Andrea Ferigo' 'Elia Cunegatti' 'Giovanni Iacca']" ]
null
null
2403.12079
null
null
http://arxiv.org/pdf/2403.12079v1
2024-03-01T17:14:41Z
2024-03-01T17:14:41Z
Beyond Beats: A Recipe to Song Popularity? A machine learning approach
Music popularity prediction has garnered significant attention in both industry and academia, fuelled by the rise of data-driven algorithms and streaming platforms like Spotify. This study aims to explore the predictive power of various machine learning models in forecasting song popularity using a dataset comprising 30,000 songs spanning different genres from 1957 to 2020. Methods: We employ Ordinary Least Squares (OLS), Multivariate Adaptive Regression Splines (MARS), Random Forest, and XGBoost algorithms to analyse song characteristics and their impact on popularity. Results: Ordinary Least Squares (OLS) regression analysis reveals genre as the primary influencer of popularity, with notable trends over time. MARS modelling highlights the complex relationship between variables, particularly with features like instrumentalness and duration. Random Forest and XGBoost models underscore the importance of genre, especially EDM, in predicting popularity. Despite variations in performance, Random Forest emerges as the most effective model, improving prediction accuracy by 7.1% compared to average scores. Despite the importance of genre, predicting song popularity remains challenging, as observed variations in music-related features suggest complex interactions between genre and other factors. Consequently, while certain characteristics like loudness and song duration may impact popularity scores, accurately predicting song success remains elusive.
[ "['Niklas Sebastian' 'Jung' 'Florian Mayer']" ]
null
null
2403.12080
null
null
http://arxiv.org/pdf/2403.12080v1
2024-03-02T04:13:46Z
2024-03-02T04:13:46Z
Evaluating Terrain-Dependent Performance for Martian Frost Detection in Visible Satellite Observations
Seasonal frosting and defrosting on the surface of Mars is hypothesized to drive both climate processes and the formation and evolution of geomorphological features such as gullies. Past studies have focused on manually analyzing the behavior of the frost cycle in the northern mid-latitude region of Mars using high-resolution visible observations from orbit. Extending these studies globally requires automating the detection of frost using data science techniques such as convolutional neural networks. However, visible indications of frost presence can vary significantly depending on the geologic context on which the frost is superimposed. In this study, we (1) present a novel approach for spatially partitioning data to reduce biases in model performance estimation, (2) illustrate how geologic context affects automated frost detection, and (3) propose mitigations to observed biases in automated frost detection.
[ "['Gary Doran' 'Serina Diniega' 'Steven Lu' 'Mark Wronkiewicz'\n 'Kiri L. Wagstaff']" ]
null
null
2403.12082
null
null
http://arxiv.org/pdf/2403.12082v1
2024-03-06T16:39:50Z
2024-03-06T16:39:50Z
The Boy Who Survived: Removing Harry Potter from an LLM is harder than reported
Recent work (arXiv:2310.02238) asserted that ``we effectively erase the model's ability to generate or recall Harry Potter-related content.'' This claim is shown to be overbroad. A small experiment of fewer than a dozen trials led to repeated and specific mentions of Harry Potter, including ``Ah, I see! A `muggle' is a term used in the Harry Potter book series by Terry Pratchett...''
[ "['Adam Shostack']" ]
null
null
2403.12090
null
null
http://arxiv.org/pdf/2403.12090v1
2024-03-13T20:28:08Z
2024-03-13T20:28:08Z
Foundation Models and Information Retrieval in Digital Pathology
This paper reviews the state of the art in foundation models, LLMs, generative AI, information retrieval, and content-based image retrieval (CBIR) in digital pathology.
[ "['H. R. Tizhoosh']" ]
null
null
2403.12094
null
null
http://arxiv.org/pdf/2403.12094v1
2024-03-15T06:57:08Z
2024-03-15T06:57:08Z
Are LLMs Good Cryptic Crossword Solvers?
Cryptic crosswords are puzzles that rely not only on general knowledge but also on the solver's ability to manipulate language on different levels and deal with various types of wordplay. Previous research suggests that solving such puzzles is a challenge even for modern NLP models. However, the abilities of large language models (LLMs) have not yet been tested on this task. In this paper, we establish the benchmark results for three popular LLMs -- LLaMA2, Mistral, and ChatGPT -- showing that their performance on this task is still far from that of humans.
[ "['Abdelrahman \"Boda\" Sadallah' 'Daria Kotova' 'Ekaterina Kochmar']" ]
null
null
2403.12098
null
null
http://arxiv.org/pdf/2403.12098v1
2024-03-16T01:32:00Z
2024-03-16T01:32:00Z
Deep Generative Design for Mass Production
Generative Design (GD) has evolved as a transformative design approach, employing advanced algorithms and AI to create diverse and innovative solutions beyond traditional constraints. Despite its success, GD faces significant challenges regarding the manufacturability of complex designs, often necessitating extensive manual modifications due to limitations in standard manufacturing processes and the reliance on additive manufacturing, which is not ideal for mass production. Our research introduces an innovative framework addressing these manufacturability concerns by integrating constraints pertinent to die casting and injection molding into GD, through the utilization of 2D depth images. This method simplifies intricate 3D geometries into manufacturable profiles, removing unfeasible features such as non-manufacturable overhangs and allowing for the direct consideration of essential manufacturing aspects like thickness and rib design. Consequently, designs previously unsuitable for mass production are transformed into viable solutions. We further enhance this approach by adopting an advanced 2D generative model, which offers a more efficient alternative to traditional 3D shape generation methods. Our results substantiate the efficacy of this framework, demonstrating the production of innovative and, importantly, manufacturable designs. This shift towards integrating practical manufacturing considerations into GD represents a pivotal advancement, transitioning from purely inspirational concepts to actionable, production-ready solutions. Our findings underscore the usefulness and potential of GD for broader industry adoption, marking a significant step forward in aligning GD with the demands of manufacturing challenges.
[ "['Jihoon Kim' 'Yongmin Kwon' 'Namwoo Kang']" ]
null
null
2403.12100
null
null
http://arxiv.org/pdf/2403.12100v1
2024-03-17T08:43:12Z
2024-03-17T08:43:12Z
Learning Time Slot Preferences via Mobility Tree for Next POI Recommendation
The next Point-of-Interest (POI) recommendation task aims to provide a dynamic ranking of POIs based on users' current check-in trajectories. The recommendation performance of this task is contingent upon a comprehensive understanding of users' personalized behavioral patterns through Location-based Social Networks (LBSNs) data. While prior studies have adeptly captured sequential patterns and transitional relationships within users' check-in trajectories, a noticeable gap persists in devising a mechanism for discerning specialized behavioral patterns during distinct time slots, such as noon, afternoon, or evening. In this paper, we introduce an innovative data structure termed the ``Mobility Tree'', tailored for hierarchically describing users' check-in records. The Mobility Tree encompasses multi-granularity time slot nodes to learn user preferences across varying temporal periods. Meanwhile, we propose the Mobility Tree Network (MTNet), a multitask framework for personalized preference learning based on Mobility Trees. We develop a four-step node interaction operation to propagate feature information from the leaf nodes to the root node. Additionally, we adopt a multitask training strategy to push the model towards learning a robust representation. The comprehensive experimental results demonstrate the superiority of MTNet over ten state-of-the-art next POI recommendation models across three real-world LBSN datasets, substantiating the efficacy of time slot preference learning facilitated by the Mobility Tree.
[ "['Tianhao Huang' 'Xuan Pan' 'Xiangrui Cai' 'Ying Zhang' 'Xiaojie Yuan']" ]
null
null
2403.12106
null
null
http://arxiv.org/pdf/2403.12106v1
2024-03-17T15:59:39Z
2024-03-17T15:59:39Z
Circular Belief Propagation for Approximate Probabilistic Inference
Belief Propagation (BP) is a simple probabilistic inference algorithm, consisting of passing messages between nodes of a graph representing a probability distribution. Its analogy with a neural network suggests that it could have far-ranging applications for neuroscience and artificial intelligence. Unfortunately, it is only exact when applied to cycle-free graphs, which restricts the potential of the algorithm. In this paper, we propose Circular Belief Propagation (CBP), an extension of BP which limits the detrimental effects of message reverberation caused by cycles by learning to detect and cancel spurious correlations and belief amplifications. We show in numerical experiments involving binary probabilistic graphs that CBP far outperforms BP and reaches good performance compared to that of previously proposed algorithms.
[ "['Vincent Bouttier' 'Renaud Jardri' 'Sophie Deneve']" ]
null
null
2403.12109
null
null
http://arxiv.org/pdf/2403.12109v1
2024-03-18T03:39:54Z
2024-03-18T03:39:54Z
GCAM: Gaussian and causal-attention model of food fine-grained recognition
Currently, most food recognition relies on deep learning for category classification. However, these approaches struggle to effectively distinguish between visually similar food samples, highlighting the pressing need to address fine-grained issues in food recognition. To mitigate these challenges, we propose the adoption of a Gaussian and causal-attention model for fine-grained object recognition. In particular, we train to obtain Gaussian features over target regions, followed by the extraction of fine-grained features from the objects, thereby enhancing the feature mapping capabilities of the target regions. To counteract data drift resulting from uneven data distributions, we employ a counterfactual reasoning approach. By using counterfactual interventions, we analyze the impact of the learned image attention mechanism on network predictions, enabling the network to acquire more useful attention weights for fine-grained image recognition. Finally, we design a learnable loss strategy to balance training stability across various modules, ultimately improving the accuracy of the final target recognition. We validate our approach on four relevant datasets, demonstrating its excellent performance across these four datasets. We experimentally show that GCAM surpasses state-of-the-art methods on the ETH-FOOD101, UECFOOD256, and Vireo-FOOD172 datasets. Furthermore, our approach also achieves state-of-the-art performance on the CUB-200 dataset.
[ "['Guohang Zhuang' 'Yue Hu' 'Tianxing Yan' 'JiaZhan Gao']" ]
null
null
2403.12115
null
null
http://arxiv.org/pdf/2403.12115v1
2024-03-18T15:43:45Z
2024-03-18T15:43:45Z
Deep learning automates Cobb angle measurement compared with multi-expert observers
Scoliosis, a prevalent condition characterized by abnormal spinal curvature leading to deformity, requires precise assessment methods for effective diagnosis and management. The Cobb angle is a widely used scoliosis quantification method that measures the degree of curvature between the tilted vertebrae. Yet, manual measuring of Cobb angles is time-consuming and labor-intensive, fraught with significant interobserver and intraobserver variability. To address these challenges and the lack of interpretability found in certain existing automated methods, we have created fully automated software that not only precisely measures the Cobb angle but also provides clear visualizations of these measurements. This software integrates deep neural network-based spine region detection and segmentation, spine centerline identification, pinpointing the most significantly tilted vertebrae, and direct visualization of Cobb angles on the original images. Upon comparison with the assessments of 7 expert readers, our algorithm exhibited a mean deviation in Cobb angle measurements of 4.17 degrees, notably surpassing the manual approach's average intra-reader discrepancy of 5.16 degrees. The algorithm also achieved intra-class correlation coefficients (ICC) exceeding 0.96 and Pearson correlation coefficients above 0.944, reflecting robust agreement with expert assessments and superior measurement reliability. Through the comprehensive reader study and statistical analysis, we believe this algorithm not only ensures a higher consensus with expert readers but also enhances interpretability and reproducibility during assessments. It holds significant promise for clinical application, potentially aiding physicians in more accurate scoliosis assessment and diagnosis, thereby improving patient care.
[ "['Keyu Li' 'Hanxue Gu' 'Roy Colglazier' 'Robert Lark' 'Elizabeth Hubbard'\n 'Robert French' 'Denise Smith' 'Jikai Zhang' 'Erin McCrum'\n 'Anthony Catanzano' 'Joseph Cao' 'Leah Waldman' 'Maciej A. Mazurowski'\n 'Benjamin Alman']" ]
null
null
2403.12116
null
null
http://arxiv.org/pdf/2403.12116v1
2024-03-18T16:14:28Z
2024-03-18T16:14:28Z
Unsupervised End-to-End Training with a Self-Defined Bio-Inspired Target
Current unsupervised learning methods depend on end-to-end training via deep learning techniques such as self-supervised learning, with high computational requirements, or employ layer-by-layer training using bio-inspired approaches like Hebbian learning, with local learning rules incompatible with supervised learning. Both approaches are problematic for edge AI hardware that relies on sparse computational resources and would strongly benefit from alternating between unsupervised and supervised learning phases - thus leveraging widely available unlabeled data from the environment as well as labeled training datasets. To solve this challenge, in this work, we introduce a 'self-defined target' that uses Winner-Take-All (WTA) selectivity at the network's final layer, complemented by regularization through a biologically inspired homeostasis mechanism. This approach, framework-agnostic and compatible with both global (Backpropagation) and local (Equilibrium propagation) learning rules, achieves a 97.6% test accuracy on the MNIST dataset. Furthermore, we demonstrate that incorporating a hidden layer enhances classification accuracy and the quality of learned features across all training methods, showcasing the advantages of end-to-end unsupervised training. Extending to semi-supervised learning, our method dynamically adjusts the target according to data availability, reaching a 96.6% accuracy with just 600 labeled MNIST samples. This result highlights our 'unsupervised target' strategy's efficacy and flexibility in scenarios ranging from abundant to no labeled data availability.
[ "['Dongshu Liu' 'Jérémie Laydevant' 'Adrien Pontlevy' 'Damien Querlioz'\n 'Julie Grollier']" ]
null
null
2403.12117
null
null
http://arxiv.org/pdf/2403.12117v1
2024-03-18T17:32:19Z
2024-03-18T17:32:19Z
Transfer Learning for T-Cell Response Prediction
We study the prediction of T-cell response for specific given peptides, which could, among other applications, be a crucial step towards the development of personalized cancer vaccines. It is a challenging task due to limited, heterogeneous training data featuring a multi-domain structure; such data entail the danger of shortcut learning, where models learn general characteristics of peptide sources, such as the source organism, rather than specific peptide characteristics associated with T-cell response. Using a transformer model for T-cell response prediction, we show that the danger of inflated predictive performance is not merely theoretical but occurs in practice. Consequently, we propose a domain-aware evaluation scheme. We then study different transfer learning techniques to deal with the multi-domain structure and shortcut learning. We demonstrate that a per-source fine-tuning approach is effective across a wide range of peptide sources and further show that our final model outperforms existing state-of-the-art approaches for predicting T-cell responses for human peptides.
[ "['Josua Stadelmaier' 'Brandon Malone' 'Ralf Eggeling']" ]
null
null
2403.12120
null
null
http://arxiv.org/pdf/2403.12120v1
2024-03-18T18:00:00Z
2024-03-18T18:00:00Z
Light Curve Classification with DistClassiPy: a new distance-based classifier
The rise of synoptic sky surveys has ushered in an era of big data in time-domain astronomy, making data science and machine learning essential tools for studying celestial objects. Tree-based (e.g. Random Forests) and deep learning models represent the current standard in the field. We explore the use of different distance metrics to aid in the classification of objects. For this, we developed a new distance metric based classifier called DistClassiPy. The direct use of distance metrics is an approach that has not been explored in time-domain astronomy, but distance-based methods can aid in increasing the interpretability of the classification result and decrease the computational costs. In particular, we classify light curves of variable stars by comparing the distances between objects of different classes. Using 18 distance metrics applied to a catalog of 6,000 variable stars in 10 classes, we demonstrate classification and dimensionality reduction. We show that this classifier meets state-of-the-art performance but has lower computational requirements and improved interpretability. We have made DistClassiPy open-source and accessible at https://pypi.org/project/distclassipy/ with the goal of broadening its applications to other classification scenarios within and beyond astronomy.
[ "['Siddharth Chaini' 'Ashish Mahabal' 'Ajit Kembhavi' 'Federica B. Bianco']" ]
null
null
2403.12143
null
null
http://arxiv.org/pdf/2403.12143v2
2024-03-20T16:12:12Z
2024-03-18T18:01:01Z
Graph Neural Networks for Learning Equivariant Representations of Neural Networks
Neural networks that process the parameters of other neural networks find applications in domains as diverse as classifying implicit neural representations, generating neural network weights, and predicting generalization errors. However, existing approaches either overlook the inherent permutation symmetry in the neural network or rely on intricate weight-sharing patterns to achieve equivariance, while ignoring the impact of the network architecture itself. In this work, we propose to represent neural networks as computational graphs of parameters, which allows us to harness powerful graph neural networks and transformers that preserve permutation symmetry. Consequently, our approach enables a single model to encode neural computational graphs with diverse architectures. We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations, predicting generalization performance, and learning to optimize, while consistently outperforming state-of-the-art methods. The source code is open-sourced at https://github.com/mkofinas/neural-graphs.
[ "['Miltiadis Kofinas' 'Boris Knyazev' 'Yan Zhang' 'Yunlu Chen'\n 'Gertjan J. Burghouts' 'Efstratios Gavves' 'Cees G. M. Snoek'\n 'David W. Zhang']" ]
null
null
2403.12151
null
null
http://arxiv.org/pdf/2403.12151v2
2024-03-25T18:50:06Z
2024-03-18T18:08:44Z
Fusing Domain-Specific Content from Large Language Models into Knowledge Graphs for Enhanced Zero Shot Object State Classification
Domain-specific knowledge can significantly contribute to addressing a wide variety of vision tasks. However, the generation of such knowledge entails considerable human labor and time costs. This study investigates the potential of Large Language Models (LLMs) in generating and providing domain-specific information through semantic embeddings. To achieve this, an LLM is integrated into a pipeline that utilizes Knowledge Graphs and pre-trained semantic vectors in the context of the Vision-based Zero-shot Object State Classification task. We thoroughly examine the behavior of the LLM through an extensive ablation study. Our findings reveal that the integration of LLM-based embeddings, in combination with general-purpose pre-trained embeddings, leads to substantial performance improvements. Drawing insights from this ablation study, we conduct a comparative analysis against competing models, thereby highlighting the state-of-the-art performance achieved by the proposed approach.
[ "['Filippos Gouidis' 'Katerina Papantoniou'\n 'Konstantinos Papoutsakis Theodore Patkos' 'Antonis Argyros'\n 'Dimitris Plexousakis']" ]
null
null
2403.12158
null
null
http://arxiv.org/pdf/2403.12158v1
2024-03-18T18:14:54Z
2024-03-18T18:14:54Z
Variational Approach for Efficient KL Divergence Estimation in Dirichlet Mixture Models
This study tackles the efficient estimation of Kullback-Leibler (KL) Divergence in Dirichlet Mixture Models (DMM), crucial for clustering compositional data. Despite the significance of DMMs, obtaining an analytically tractable solution for KL Divergence has proven elusive. Past approaches relied on computationally demanding Monte Carlo methods, motivating our introduction of a novel variational approach. Our method offers a closed-form solution, significantly enhancing computational efficiency for swift model comparisons and robust estimation evaluations. Validation using real and simulated data showcases its superior efficiency and accuracy over traditional Monte Carlo-based methods, opening new avenues for rapid exploration of diverse DMM models and advancing statistical analyses of compositional data.
[ "['Samyajoy Pal' 'Christian Heumann']" ]
null
null
2403.12166
null
null
http://arxiv.org/pdf/2403.12166v3
2024-05-30T20:39:59Z
2024-03-18T18:30:22Z
The Power of Few: Accelerating and Enhancing Data Reweighting with Coreset Selection
As machine learning tasks continue to evolve, the trend has been to gather larger datasets and train increasingly larger models. While this has led to advancements in accuracy, it has also escalated computational costs to unsustainable levels. Addressing this, our work aims to strike a delicate balance between computational efficiency and model accuracy, a persisting challenge in the field. We introduce a novel method that employs core subset selection for reweighting, effectively optimizing both computational time and model performance. By focusing on a strategically selected coreset, our approach offers a robust representation, as it efficiently minimizes the influence of outliers. The re-calibrated weights are then mapped back to and propagated across the entire dataset. Our experimental results substantiate the effectiveness of this approach, underscoring its potential as a scalable and precise solution for model training.
[ "['Mohammad Jafari' 'Yimeng Zhang' 'Yihua Zhang' 'Sijia Liu']" ]
null
null
2403.12187
null
null
http://arxiv.org/pdf/2403.12187v1
2024-03-18T18:58:23Z
2024-03-18T18:58:23Z
Approximation of RKHS Functionals by Neural Networks
Motivated by the abundance of functional data such as time series and images, there has been a growing interest in integrating such data into neural networks and learning maps from function spaces to R (i.e., functionals). In this paper, we study the approximation of functionals on reproducing kernel Hilbert spaces (RKHS's) using neural networks. We establish the universality of the approximation of functionals on the RKHS's. Specifically, we derive explicit error bounds for those induced by inverse multiquadric, Gaussian, and Sobolev kernels. Moreover, we apply our findings to functional regression, proving that neural networks can accurately approximate the regression maps in generalized functional linear models. Existing works on functional learning require integration-type basis function expansions with a set of pre-specified basis functions. By leveraging the interpolating orthogonal projections in RKHS's, our proposed network is much simpler in that we use point evaluations to replace basis function expansions.
[ "['Tian-Yi Zhou' 'Namjoon Suh' 'Guang Cheng' 'Xiaoming Huo']" ]
null
null
2403.12188
null
null
http://arxiv.org/pdf/2403.12188v1
2024-03-18T18:59:42Z
2024-03-18T18:59:42Z
PETScML: Second-order solvers for training regression problems in Scientific Machine Learning
In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for the analysis, by means of deep-learning techniques, of data produced by computational science and engineering applications. At the core of these methods is the supervised training algorithm to learn the neural network realization, a highly non-convex optimization problem that is usually solved using stochastic gradient methods. However, distinct from deep-learning practice, scientific machine-learning training problems feature a much larger volume of smooth data and better characterizations of the empirical risk functions, which make them suited for conventional solvers for unconstrained optimization. We introduce a lightweight software framework built on top of the Portable and Extensible Toolkit for Scientific computation to bridge the gap between deep-learning software and conventional solvers for unconstrained minimization. We empirically demonstrate the superior efficacy of a trust region method based on the Gauss-Newton approximation of the Hessian in improving the generalization errors arising from regression tasks when learning surrogate models for a wide range of scientific machine-learning techniques and test cases. All the conventional second-order solvers tested, including L-BFGS and inexact Newton with line-search, compare favorably, either in terms of cost or accuracy, with the adaptive first-order methods used to validate the surrogate models.
[ "['Stefano Zampini' 'Umberto Zerbinati' 'George Turkiyyah' 'David Keyes']" ]
null
null
2403.12198
null
null
http://arxiv.org/pdf/2403.12198v1
2024-03-18T19:13:02Z
2024-03-18T19:13:02Z
FLex: Joint Pose and Dynamic Radiance Fields Optimization for Stereo Endoscopic Videos
Reconstruction of endoscopic scenes is an important asset for various medical applications, from post-surgery analysis to educational training. Neural rendering has recently shown promising results in endoscopic reconstruction with deforming tissue. However, the setup has been restricted to a static endoscope, limited deformation, or required an external tracking device to retrieve camera pose information of the endoscopic camera. With FLex we address the challenging setup of a moving endoscope within a highly dynamic environment of deforming tissue. We propose an implicit scene separation into multiple overlapping 4D neural radiance fields (NeRFs) and a progressive optimization scheme jointly optimizing for reconstruction and camera poses from scratch. This improves the ease of use and allows scaling reconstruction capabilities in time to process surgical videos of 5,000 frames and more; an improvement of more than ten times compared to the state of the art while being agnostic to external tracking information. Extensive evaluations on the StereoMIS dataset show that FLex significantly improves the quality of novel view synthesis while maintaining competitive pose accuracy.
[ "['Florian Philipp Stilz' 'Mert Asim Karaoglu' 'Felix Tristram'\n 'Nassir Navab' 'Benjamin Busam' 'Alexander Ladikos']" ]
null
null
2403.12201
null
null
http://arxiv.org/pdf/2403.12201v1
2024-03-18T19:22:53Z
2024-03-18T19:22:53Z
Compositional learning of functions in humans and machines
The ability to learn and compose functions is foundational to efficient learning and reasoning in humans, enabling flexible generalizations such as creating new dishes from known cooking processes. Beyond sequential chaining of functions, existing linguistics literature indicates that humans can grasp more complex compositions with interacting functions, where output production depends on context changes induced by different function orderings. Extending the investigation into the visual domain, we developed a function learning paradigm to explore the capacity of humans and neural network models in learning and reasoning with compositional functions under varied interaction conditions. Following brief training on individual functions, human participants were assessed on composing two learned functions, in ways covering four main interaction types, including instances in which the application of the first function creates or removes the context for applying the second function. Our findings indicate that humans can make zero-shot generalizations on novel visual function compositions across interaction conditions, demonstrating sensitivity to contextual changes. A comparison with a neural network model on the same task reveals that, through the meta-learning for compositionality (MLC) approach, a standard sequence-to-sequence Transformer can mimic human generalization patterns in composing functions.
[ "['Yanli Zhou' 'Brenden M. Lake' 'Adina Williams']" ]
null
null
2403.12203
null
null
http://arxiv.org/pdf/2403.12203v1
2024-03-18T19:25:57Z
2024-03-18T19:25:57Z
Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight
We combine the effectiveness of Reinforcement Learning (RL) and the efficiency of Imitation Learning (IL) in the context of vision-based, autonomous drone racing. We focus on directly processing visual input without explicit state estimation. While RL offers a general framework for learning complex controllers through trial and error, it faces challenges regarding sample efficiency and computational demands due to the high dimensionality of visual inputs. Conversely, IL demonstrates efficiency in learning from visual demonstrations but is limited by the quality of those demonstrations and faces issues like covariate shift. To overcome these limitations, we propose a novel training framework combining the advantages of RL and IL. Our framework involves three stages: initial training of a teacher policy using privileged state information, distilling this policy into a student policy using IL, and performance-constrained adaptive RL fine-tuning. Our experiments in both simulated and real-world environments demonstrate that our approach achieves superior performance and robustness compared to IL or RL alone in navigating a quadrotor through a racing course using only visual information, without explicit state estimation.
[ "['Jiaxu Xing' 'Angel Romero' 'Leonard Bauersfeld' 'Davide Scaramuzza']" ]