categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2406.18072
null
null
http://arxiv.org/pdf/2406.18072v1
2024-06-26T05:03:00Z
2024-06-26T05:03:00Z
Learning for Bandits under Action Erasures
We consider a novel multi-arm bandit (MAB) setup, where a learner needs to communicate the actions to distributed agents over erasure channels, while the rewards for the actions are directly available to the learner through external sensors. In our model, while the distributed agents know if an action is erased, the central learner does not (there is no feedback), and thus does not know whether the observed reward resulted from the desired action or not. We propose a scheme that can work on top of any (existing or future) MAB algorithm and make it robust to action erasures. Our scheme results in a worst-case regret over action-erasure channels that is at most a factor of $O(1/\sqrt{1-\epsilon})$ away from the no-erasure worst-case regret of the underlying MAB algorithm, where $\epsilon$ is the erasure probability. We also propose a modification of the successive arm elimination algorithm and prove that its worst-case regret is $\tilde{O}(\sqrt{KT}+K/(1-\epsilon))$, which we prove is optimal by providing a matching lower bound.
Osama Hanna, Merve Karakas, Lin F. Yang, Christina Fragouli
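The abstract above describes a setting where the learner's chosen action may be erased with probability $\epsilon$ on its way to the agent, while the learner observes rewards with no erasure feedback. The following is a minimal Python sketch of that interaction loop only (not the paper's robustification scheme); the Bernoulli arm means, the erasure probability, the UCB1 stand-in learner, and the assumption that an erased action leaves the agent replaying its last executed action are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

K, T, eps = 5, 10_000, 0.3            # arms, horizon, erasure probability (assumed)
true_means = rng.uniform(0, 1, K)     # hypothetical Bernoulli arm means

last_executed = 0                     # assumed agent behaviour when the new action is erased
counts, sums = np.zeros(K), np.zeros(K)

for t in range(T):
    # learner picks an arm (simple UCB1 as a stand-in for "any MAB algorithm")
    if t < K:
        chosen = t
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
        chosen = int(np.argmax(ucb))

    # the action crosses an erasure channel; the learner gets no feedback about erasure
    executed = chosen if rng.random() > eps else last_executed
    last_executed = executed

    # reward comes from the *executed* arm, but the learner credits the *chosen* arm
    reward = float(rng.random() < true_means[executed])
    counts[chosen] += 1
    sums[chosen] += reward
```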
null
null
2406.18108
null
null
http://arxiv.org/pdf/2406.18108v1
2024-06-26T06:48:11Z
2024-06-26T06:48:11Z
Token-Weighted RNN-T for Learning from Flawed Data
ASR models are commonly trained with the cross-entropy criterion to increase the probability of a target token sequence. While optimizing the probability of all tokens in the target sequence is sensible, one may want to de-emphasize tokens that reflect transcription errors. In this work, we propose a novel token-weighted RNN-T criterion that augments the RNN-T objective with token-specific weights. The new objective is used for mitigating accuracy loss from transcription errors in the training data, which naturally appear in two settings: pseudo-labeling and human annotation errors. Experimental results show that using our method for semi-supervised learning with pseudo-labels leads to a consistent accuracy improvement, up to 38% relative. We also analyze the accuracy degradation resulting from different levels of WER in the reference transcription, and show that token-weighted RNN-T is suitable for overcoming this degradation, recovering 64%-99% of the accuracy loss.
Gil Keren, Wei Zhou, Ozlem Kalinli
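The abstract above augments a sequence objective with token-specific weights. A minimal sketch of the general idea is shown below using a per-token cross-entropy rather than the paper's RNN-T objective; the shapes, the confidence-based weighting, and all tensor names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def token_weighted_ce(logits, targets, token_weights):
    """Cross-entropy in which each target token carries its own weight.

    Illustrates token-specific weighting in general; the paper applies such
    weights inside the RNN-T objective, which is not reproduced here.
    logits:        (batch, seq_len, vocab)
    targets:       (batch, seq_len) integer token ids
    token_weights: (batch, seq_len), e.g. pseudo-label confidences in [0, 1]
    """
    per_token = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none"
    )                                        # (batch, seq_len)
    return (token_weights * per_token).sum() / token_weights.sum()

# hypothetical usage: down-weight tokens with low pseudo-label confidence
logits = torch.randn(2, 6, 100)
targets = torch.randint(0, 100, (2, 6))
confidence = torch.rand(2, 6)
loss = token_weighted_ce(logits, targets, confidence)
```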
null
null
2406.18119
null
null
http://arxiv.org/pdf/2406.18119v1
2024-06-26T07:16:18Z
2024-06-26T07:16:18Z
Robust personnel rostering: how accurate should absenteeism predictions be?
Disruptions to personnel rosters caused by absenteeism often necessitate last-minute adjustments to the employees' working hours. A common strategy to mitigate the impact of such changes is to assign employees to reserve shifts: special on-call duties during which an employee can be called in to cover for an absent employee. To maximize roster robustness, we assume a predict-then-optimize approach that uses absence predictions from a machine learning model to schedule an adequate number of reserve shifts. In this paper we propose a methodology to evaluate the robustness of rosters generated by the predict-then-optimize approach, assuming the machine learning model will make predictions at a predetermined prediction performance level. Instead of training and testing machine learning models, our methodology simulates the predictions based on a characterization of model performance. We show how this methodology can be applied to identify the minimum performance level needed for the model to outperform simple non-data-driven robust rostering policies. In a computational study on a nurse rostering problem, we demonstrate how the predict-then-optimize approach outperforms non-data-driven policies under reasonable performance requirements, particularly when employees possess interchangeable skills.
Martina Doneda, Pieter Smet, Giuliana Carello, Ettore Lanzarone, Greet Vanden Berghe
null
null
2406.18120
null
null
http://arxiv.org/pdf/2406.18120v2
2024-07-12T18:22:26Z
2024-06-26T07:19:51Z
ArzEn-LLM: Code-Switched Egyptian Arabic-English Translation and Speech Recognition Using LLMs
Motivated by the widespread increase in the phenomenon of code-switching between Egyptian Arabic and English in recent times, this paper explores the intricacies of machine translation (MT) and automatic speech recognition (ASR) systems, focusing on translating code-switched Egyptian Arabic-English to either English or Egyptian Arabic. Our goal is to present the methodologies employed in developing these systems, utilizing large language models such as Llama and Gemma. In the field of ASR, we explore the utilization of the Whisper model for code-switched Egyptian Arabic recognition, detailing our experimental procedures including data preprocessing and training techniques. Through the implementation of a consecutive speech-to-text translation system that integrates ASR with MT, we aim to overcome challenges posed by limited resources and the unique characteristics of the Egyptian Arabic dialect. Evaluation against established metrics showcases promising results, with our methodologies yielding a significant improvement of $56\%$ in English translation over the state-of-the-art and $9.3\%$ in Arabic translation. Since code-switching is deeply inherent in spoken languages, it is crucial that ASR systems can effectively handle this phenomenon. This capability is essential for enabling seamless interaction in various domains, including business negotiations, cultural exchanges, and academic discourse. Our models and code are available as open-source resources. Code: http://github.com/ahmedheakl/arazn-llm, Models: http://huggingface.co/collections/ahmedheakl/arazn-llm-662ceaf12777656607b9524e.
Ahmed Heakl, Youssef Zaghloul, Mennatullah Ali, Rania Hossam, Walid Gomaa
null
null
2406.18125
null
null
http://arxiv.org/pdf/2406.18125v2
2024-07-12T18:19:28Z
2024-06-26T07:25:18Z
ResumeAtlas: Revisiting Resume Classification with Large-Scale Datasets and Large Language Models
The increasing reliance on online recruitment platforms coupled with the adoption of AI technologies has highlighted the critical need for efficient resume classification methods. However, challenges such as small datasets, lack of standardized resume templates, and privacy concerns hinder the accuracy and effectiveness of existing classification models. In this work, we address these challenges by presenting a comprehensive approach to resume classification. We curated a large-scale dataset of 13,389 resumes from diverse sources and employed Large Language Models (LLMs) such as BERT and Gemma1.1 2B for classification. Our results demonstrate significant improvements over traditional machine learning approaches, with our best model achieving a top-1 accuracy of 92% and a top-5 accuracy of 97.5%. These findings underscore the importance of dataset quality and advanced model architectures in enhancing the accuracy and robustness of resume classification systems, thus advancing the field of online recruitment practices.
Ahmed Heakl, Youssef Mohamed, Noran Mohamed, Aly Elsharkawy, Ahmed Zaky
null
null
2406.18129
null
null
http://arxiv.org/pdf/2406.18129v1
2024-06-26T07:31:16Z
2024-06-26T07:31:16Z
CTS: Sim-to-Real Unsupervised Domain Adaptation on 3D Detection
Simulation data can be accurately labeled and have been expected to improve the performance of data-driven algorithms, including object detection. However, due to the various domain inconsistencies from simulation to reality (sim-to-real), cross-domain object detection algorithms usually suffer from dramatic performance drops. While numerous unsupervised domain adaptation (UDA) methods have been developed to address cross-domain tasks between real-world datasets, progress in sim-to-real remains limited. This paper presents a novel Complex-to-Simple (CTS) framework to transfer models from labeled simulation (source) to unlabeled reality (target) domains. Based on a two-stage detector, the novelty of this work is threefold: 1) developing fixed-size anchor heads and RoI augmentation to address size bias and feature diversity between two domains, thereby improving the quality of pseudo-labels; 2) developing a novel corner-format representation of aleatoric uncertainty (AU) for the bounding box, to uniformly quantify pseudo-label quality; 3) developing a noise-aware mean teacher domain adaptation method based on AU, as well as object-level and frame-level sampling strategies, to mitigate the impact of noisy labels. Experimental results demonstrate that our proposed approach significantly enhances the sim-to-real domain adaptation capability of 3D object detection models, outperforming state-of-the-art cross-domain algorithms, which are usually developed for real-to-real UDA tasks.
Meiying Zhang, Weiyuan Peng, Guangyao Ding, Chenyang Lei, Chunlin Ji, Qi Hao
null
null
2406.18131
null
null
http://arxiv.org/pdf/2406.18131v1
2024-06-26T07:32:47Z
2024-06-26T07:32:47Z
Sequential Disentanglement by Extracting Static Information From A Single Sequence Element
One of the fundamental representation learning tasks is unsupervised sequential disentanglement, where latent codes of inputs are decomposed to a single static factor and a sequence of dynamic factors. To extract this latent information, existing methods condition the static and dynamic codes on the entire input sequence. Unfortunately, these models often suffer from information leakage, i.e., the dynamic vectors encode both static and dynamic information, or vice versa, leading to a non-disentangled representation. Attempts to alleviate this problem via reducing the dynamic dimension and auxiliary loss terms gain only partial success. Instead, we propose a novel and simple architecture that mitigates information leakage by offering a simple and effective subtraction inductive bias while conditioning on a single sample. Remarkably, the resulting variational framework is simpler in terms of required loss terms, hyperparameters, and data augmentation. We evaluate our method on multiple data-modality benchmarks including general time series, video, and audio, and we show beyond state-of-the-art results on generation and prediction tasks in comparison to several strong baselines.
Nimrod Berman, Ilan Naiman, Idan Arbiv, Gal Fadlon, Omri Azencot
null
null
2406.18137
null
null
http://arxiv.org/pdf/2406.18137v1
2024-06-26T07:41:41Z
2024-06-26T07:41:41Z
Sparse deep neural networks for nonparametric estimation in high-dimensional sparse regression
Generalization theory has been established for sparse deep neural networks under the high-dimensional regime. Beyond generalization, parameter estimation is also important since it is crucial for variable selection and interpretability of deep neural networks. Current theoretical studies concerning parameter estimation mainly focus on two-layer neural networks, which is due to the fact that the convergence of parameter estimation heavily relies on the regularity of the Hessian matrix, while the Hessian matrix of deep neural networks is highly singular. To avoid the unidentifiability of deep neural networks in parameter estimation, we propose to conduct nonparametric estimation of partial derivatives with respect to inputs. We first show that model convergence of sparse deep neural networks is guaranteed in that the sample complexity only grows with the logarithm of the number of parameters or the input dimension when the $\ell_{1}$-norm of parameters is well constrained. Then by bounding the norm and the divergence of partial derivatives, we establish that the convergence rate of nonparametric estimation of partial derivatives scales as $\mathcal{O}(n^{-1/4})$, a rate which is slower than the model convergence rate $\mathcal{O}(n^{-1/2})$. To the best of our knowledge, this study combines nonparametric estimation and parametric sparse deep neural networks for the first time. As nonparametric estimation of partial derivatives is of great significance for nonlinear variable selection, the current results show a promising future for the interpretability of deep neural networks.
Dongya Wu, Xin Li
null
null
2406.18145
null
null
http://arxiv.org/pdf/2406.18145v2
2024-07-12T01:36:06Z
2024-06-26T07:53:48Z
Beyond Statistical Estimation: Differentially Private Individual Computation via Shuffling
In data-driven applications, preserving user privacy while enabling valuable computations remains a critical challenge. Technologies like Differential Privacy (DP) have been pivotal in addressing these concerns. The shuffle model of DP requires no trusted curators and can achieve high utility by leveraging the privacy amplification effect yielded from shuffling. These benefits have led to significant interest in the shuffle model. However, the computation tasks in the shuffle model are limited to statistical estimation, making the shuffle model inapplicable to real-world scenarios in which each user requires a personalized output. This paper introduces a novel paradigm termed Private Individual Computation (PIC), expanding the shuffle model to support a broader range of permutation-equivariant computations. PIC enables personalized outputs while preserving privacy, and enjoys privacy amplification through shuffling. We propose a concrete protocol that realizes PIC. By using one-time public keys, our protocol enables users to receive their outputs without compromising anonymity, which is essential for privacy amplification. Additionally, we present an optimal randomizer, the Minkowski Response, designed for the PIC model to enhance utility. We formally prove the security and privacy properties of the PIC protocol. Theoretical analysis and empirical evaluations demonstrate PIC's capability in handling non-statistical computation tasks, and the efficacy of PIC and the Minkowski randomizer in achieving superior utility compared to existing solutions.
Shaowei Wang, Changyu Dong, Xiangfu Song, Jin Li, Zhili Zhou, Di Wang, Han Wu
null
null
2406.18156
null
null
http://arxiv.org/pdf/2406.18156v1
2024-06-26T08:14:23Z
2024-06-26T08:14:23Z
FedAQ: Communication-Efficient Federated Edge Learning via Joint Uplink and Downlink Adaptive Quantization
Federated learning (FL) is a powerful machine learning paradigm which leverages the data as well as the computational resources of clients, while protecting clients' data privacy. However, the substantial model size and frequent aggregation between the server and clients result in significant communication overhead, making it challenging to deploy FL in resource-limited wireless networks. In this work, we aim to mitigate the communication overhead by using quantization. Previous research on quantization has primarily focused on the uplink communication, employing either fixed-bit quantization or adaptive quantization methods. In this work, we introduce a holistic approach by joint uplink and downlink adaptive quantization to reduce the communication overhead. In particular, we optimize the learning convergence by determining the optimal uplink and downlink quantization bit-length, with a communication energy constraint. Theoretical analysis shows that the optimal quantization levels depend on the range of model gradients or weights. Based on this insight, we propose a decreasing-trend quantization for the uplink and an increasing-trend quantization for the downlink, which aligns with the change of the model parameters during the training process. Experimental results show that the proposed joint uplink and downlink adaptive quantization strategy can save up to 66.7% energy compared with the existing schemes.
Linping Qu, Shenghui Song, Chi-Ying Tsui
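The abstract above adapts the quantization bit-length separately for uplink and downlink. A minimal sketch of stochastic uniform quantization with a round-dependent bit-length is given below; the linear bit schedules (decreasing for uplink, increasing for downlink), their ranges, and the function names are illustrative assumptions rather than the paper's optimized schedule.

```python
import numpy as np

def stochastic_quantize(x, num_bits, rng):
    """Uniform stochastic (unbiased) quantization of a vector to 2**num_bits levels."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    levels = 2 ** num_bits - 1
    scaled = (x - lo) / (hi - lo) * levels
    floor = np.floor(scaled)
    # round up with probability equal to the fractional part (keeps the estimate unbiased)
    quantized = floor + (rng.random(x.shape) < (scaled - floor))
    return lo + quantized / levels * (hi - lo)

def uplink_bits(round_idx, total_rounds, max_bits=8, min_bits=2):
    """Assumed decreasing-trend schedule for client-to-server messages."""
    frac = round_idx / max(total_rounds - 1, 1)
    return int(round(max_bits - frac * (max_bits - min_bits)))

def downlink_bits(round_idx, total_rounds, max_bits=8, min_bits=2):
    """Assumed increasing-trend schedule for server-to-client messages."""
    frac = round_idx / max(total_rounds - 1, 1)
    return int(round(min_bits + frac * (max_bits - min_bits)))

rng = np.random.default_rng(0)
grad = rng.normal(size=1000)
q = stochastic_quantize(grad, uplink_bits(0, 100), rng)
```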
null
null
2406.18164
null
null
http://arxiv.org/pdf/2406.18164v1
2024-06-26T08:24:44Z
2024-06-26T08:24:44Z
NeBuLa: A discourse aware Minecraft Builder
When engaging in collaborative tasks, humans efficiently exploit the semantic structure of a conversation to optimize verbal and nonverbal interactions. But in recent "language to code" or "language to action" models, this information is lacking. We show how incorporating the prior discourse and nonlinguistic context of a conversation situated in a nonlinguistic environment can improve the "language to action" component of such interactions. We fine-tune an LLM to predict actions based on prior context; our model, NeBuLa, doubles the net-action F1 score over the baseline on the task of Jayannavar et al. (2020). We also investigate our model's ability to construct shapes and understand location descriptions using a synthetic dataset.
Akshay Chaturvedi, Kate Thompson, Nicholas Asher
null
null
2406.18179
null
null
http://arxiv.org/pdf/2406.18179v1
2024-06-26T08:53:26Z
2024-06-26T08:53:26Z
DeepExtremeCubes: Integrating Earth system spatio-temporal data for impact assessment of climate extremes
With climate extremes' rising frequency and intensity, robust analytical tools are crucial to predict their impacts on terrestrial ecosystems. Machine learning techniques show promise but require well-structured, high-quality, and curated analysis-ready datasets. Earth observation datasets comprehensively monitor ecosystem dynamics and responses to climatic extremes, yet the data complexity can challenge the effectiveness of machine learning models. Despite recent progress in applying deep learning to ecosystem monitoring, there is a need for datasets specifically designed to analyse the impacts of compound heatwave and drought extremes. Here, we introduce the DeepExtremeCubes database, tailored to map around these extremes, focusing on persistent natural vegetation. It comprises over 40,000 spatially sampled small data cubes (i.e. minicubes) globally, with a spatial coverage of 2.5 by 2.5 km. Each minicube includes (i) Sentinel-2 L2A images, (ii) ERA5-Land variables and a generated extreme event cube covering 2016 to 2022, and (iii) ancillary land cover and topography maps. The paper aims to (1) streamline data accessibility, structuring, pre-processing, and enhance scientific reproducibility, and (2) facilitate biosphere dynamics forecasting in response to compound extremes.
Chaonan Ji, Tonio Fincke, Vitus Benson, Gustau Camps-Valls, Miguel-Angel Fernandez-Torres, Fabian Gans, Guido Kraemer, Francesco Martinuzzi, David Montero, Karin Mora, Oscar J. Pellicer-Valero, Claire Robin, Maximilian Soechting, Melanie Weynants, Miguel D. Mahecha
null
null
2406.18187
null
null
http://arxiv.org/pdf/2406.18187v1
2024-06-26T09:03:52Z
2024-06-26T09:03:52Z
Selective Prompting Tuning for Personalized Conversations with LLMs
In conversational AI, personalizing dialogues with persona profiles and contextual understanding is essential. Despite large language models' (LLMs) improved response coherence, effective persona integration remains a challenge. In this work, we first study two common approaches for personalizing LLMs: textual prompting and direct fine-tuning. We observed that textual prompting often struggles to yield responses that are similar to the ground truths in datasets, while direct fine-tuning tends to produce repetitive or overly generic replies. To alleviate those issues, we propose Selective Prompt Tuning (SPT), which softly prompts LLMs for personalized conversations in a selective way. Concretely, SPT initializes a set of soft prompts and uses a trainable dense retriever to adaptively select suitable soft prompts for LLMs according to different input contexts, where the prompt retriever is dynamically updated through feedback from the LLMs. Additionally, we propose context-prompt contrastive learning and prompt fusion learning to encourage the SPT to enhance the diversity of personalized conversations. Experiments on the CONVAI2 dataset demonstrate that SPT significantly enhances response diversity by up to 90%, along with improvements in other critical performance indicators. Those results highlight the efficacy of SPT in fostering engaging and personalized dialogue generation. The SPT model code (https://github.com/hqsiswiliam/SPT) is publicly available for further exploration.
Qiushi Huang, Xubo Liu, Tom Ko, Bo Wu, Wenwu Wang, Yu Zhang, Lilian Tang
null
null
2406.18219
null
null
http://arxiv.org/pdf/2406.18219v1
2024-06-26T10:07:57Z
2024-06-26T10:07:57Z
A Closer Look into Mixture-of-Experts in Large Language Models
Mixture-of-experts (MoE) is gaining increasing attention due to its unique properties and remarkable performance, especially for language tasks. By sparsely activating a subset of parameters for each token, MoE architecture could increase the model size without sacrificing computational efficiency, achieving a better trade-off between performance and training costs. However, the underlying mechanism of MoE still lacks further exploration, and its modularization degree remains questionable. In this paper, we make an initial attempt to understand the inner workings of MoE-based large language models. Concretely, we comprehensively study the parametric and behavioral features of three recent MoE-based models and reveal some intriguing observations, including (1) Neurons act like fine-grained experts. (2) The router of MoE usually selects experts with larger output norms. (3) The expert diversity increases as the layer increases, while the last layer is an outlier. Based on the observations, we also provide suggestions for a broad spectrum of MoE practitioners, such as router design and expert allocation. We hope this work could shed light on future research on the MoE framework and other modular architectures. Code is available at https://github.com/kamanphoebe/Look-into-MoEs.
Ka Man Lo, Zeyu Huang, Zihan Qiu, Zili Wang, Jie Fu
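The abstract above studies routers in MoE layers, observing that the router usually selects experts with larger output norms. Below is a minimal sketch of a standard top-k softmax router of the kind such analyses inspect; the layer sizes, top-2 routing, and class names are illustrative assumptions, not the studied models' architectures.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Minimal mixture-of-experts layer with a top-k softmax router."""

    def __init__(self, d_model=64, d_hidden=128, num_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        self.k = k

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, num_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)      # renormalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = TopKMoE()
y = moe(torch.randn(10, 64))   # expert selections and output norms can then be inspected
```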
null
null
2406.18220
null
null
http://arxiv.org/abs/2406.18220v1
2024-06-26T10:08:24Z
2024-06-26T10:08:24Z
Guiding Video Prediction with Explicit Procedural Knowledge
We propose a general way to integrate procedural knowledge of a domain into deep learning models. We apply it to the case of video prediction, building on top of object-centric deep models and show that this leads to a better performance than using data-driven models alone. We develop an architecture that facilitates latent space disentanglement in order to use the integrated procedural knowledge, and establish a setup that allows the model to learn the procedural interface in the latent space using the downstream task of video prediction. We contrast the performance to a state-of-the-art data-driven approach and show that problems where purely data-driven approaches struggle can be handled by using knowledge about the domain, providing an alternative to simply collecting more data.
Patrick Takenaka, Johannes Maucher, Marco F. Huber
null
null
2406.18247
null
null
http://arxiv.org/pdf/2406.18247v1
2024-06-26T10:49:26Z
2024-06-26T10:49:26Z
Generative artificial intelligence in ophthalmology: multimodal retinal images for the diagnosis of Alzheimer's disease with convolutional neural networks
Background/Aim. This study aims to predict Amyloid Positron Emission Tomography (AmyloidPET) status with multimodal retinal imaging and convolutional neural networks (CNNs) and to improve the performance through pretraining with synthetic data. Methods. Fundus autofluorescence, optical coherence tomography (OCT), and OCT angiography images from 328 eyes of 59 AmyloidPET positive subjects and 108 AmyloidPET negative subjects were used for classification. Denoising Diffusion Probabilistic Models (DDPMs) were trained to generate synthetic images and unimodal CNNs were pretrained on synthetic data and finetuned on real data or trained solely on real data. Multimodal classifiers were developed to combine predictions of the four unimodal CNNs with patient metadata. Class activation maps of the unimodal classifiers provided insight into the network's attention to inputs. Results. DDPMs generated diverse, realistic images without memorization. Pretraining unimodal CNNs with synthetic data improved AUPR at most from 0.350 to 0.579. Integration of metadata in multimodal CNNs improved AUPR from 0.486 to 0.634, which was the best overall classifier. Class activation maps highlighted relevant retinal regions which correlated with AD. Conclusion. Our method for generating and leveraging synthetic data has the potential to improve AmyloidPET prediction from multimodal retinal imaging. A DDPM can generate realistic and unique multimodal synthetic retinal images. Our best performing unimodal and multimodal classifiers were not pretrained on synthetic data; however, pretraining with synthetic data slightly improved classification performance for two out of the four modalities.
I. R. Slootweg, M. Thach, K. R. Curro-Tafili, F. D. Verbraak, F. H. Bouwman, Y. A. L. Pijnenburg, J. F. Boer, J. H. P. de Kwisthout, L. Bagheriye, P. J. González
null
null
2406.18249
null
null
http://arxiv.org/pdf/2406.18249v1
2024-06-26T10:51:44Z
2024-06-26T10:51:44Z
Foundational Models for Pathology and Endoscopy Images: Application for Gastric Inflammation
The integration of artificial intelligence (AI) in medical diagnostics represents a significant advancement in managing upper gastrointestinal (GI) cancer, a major cause of global cancer mortality. Specifically for gastric cancer (GC), chronic inflammation causes changes in the mucosa such as atrophy, intestinal metaplasia (IM), dysplasia and ultimately cancer. Early detection through endoscopic regular surveillance is essential for better outcomes. Foundation models (FM), which are machine or deep learning models trained on diverse data and applicable to broad use cases, offer a promising solution to enhance the accuracy of endoscopy and its subsequent pathology image analysis. This review explores the recent advancements, applications, and challenges associated with FM in endoscopy and pathology imaging. We started by elucidating the core principles and architectures underlying these models, including their training methodologies and the pivotal role of large-scale data in developing their predictive capabilities. Moreover, this work discusses emerging trends and future research directions, emphasizing the integration of multimodal data, the development of more robust and equitable models, and the potential for real-time diagnostic support. This review aims to provide a roadmap for researchers and practitioners in navigating the complexities of incorporating FM into clinical practice for prevention/management of GC cases, thereby improving patient outcomes.
Hamideh Kerdegari, Kyle Higgins, Dennis Veselkov, Ivan Laponogov, Inese Polaka, Miguel Coimbra, Junior Andrea Pescino, Marcis Leja, Mario Dinis-Ribeiro, Tania Fleitas Kanonnikoff, Kirill Veselkov
null
null
2406.18279
null
null
http://arxiv.org/pdf/2406.18279v1
2024-06-26T12:05:49Z
2024-06-26T12:05:49Z
CAS: Confidence Assessments of classification algorithms for Semantic segmentation of EO data
Confidence assessments of semantic segmentation algorithms in remote sensing are important. It is a desirable property of models to a priori know if they produce an incorrect output. Evaluations of the confidence assigned to the estimates of models for the task of classification in Earth Observation (EO) are crucial as they can be used to achieve improved semantic segmentation performance and prevent high error rates during inference and deployment. The model we develop, the Confidence Assessments of classification algorithms for Semantic segmentation (CAS) model, performs confidence evaluations at both the segment and pixel levels, and outputs both labels and confidence. The outcome of this work has important applications. The main application is the evaluation of EO Foundation Models on semantic segmentation downstream tasks, in particular land cover classification using satellite Copernicus Sentinel-2 data. The evaluation shows that the proposed model is effective and outperforms other alternative baseline models.
Nikolaos Dionelis, Nicolas Longepe
null
null
2406.18293
null
null
http://arxiv.org/pdf/2406.18293v1
2024-06-26T12:23:54Z
2024-06-26T12:23:54Z
Combining Automated Optimisation of Hyperparameters and Reward Shape
There has been significant progress in deep reinforcement learning (RL) in recent years. Nevertheless, finding suitable hyperparameter configurations and reward functions remains challenging even for experts, and performance heavily relies on these design choices. Also, most RL research is conducted on known benchmarks where knowledge about these choices already exists. However, novel practical applications often pose complex tasks for which no prior knowledge about good hyperparameters and reward functions is available, thus necessitating their derivation from scratch. Prior work has examined automatically tuning either hyperparameters or reward functions individually. We demonstrate empirically that an RL algorithm's hyperparameter configurations and reward function are often mutually dependent, meaning neither can be fully optimised without appropriate values for the other. We then propose a methodology for the combined optimisation of hyperparameters and the reward function. Furthermore, we include a variance penalty as an optimisation objective to improve the stability of learned policies. We conducted extensive experiments using Proximal Policy Optimisation and Soft Actor-Critic on four environments. Our results show that combined optimisation significantly improves over baseline performance in half of the environments and achieves competitive performance in the others, with only a minor increase in computational costs. This suggests that combined optimisation should be best practice.
Julian Dierkes, Emma Cramer, Holger H. Hoos, Sebastian Trimpe
null
null
2406.18295
null
null
http://arxiv.org/pdf/2406.18295v1
2024-06-26T12:27:06Z
2024-06-26T12:27:06Z
Evaluating and Benchmarking Foundation Models for Earth Observation and Geospatial AI
When we are primarily interested in solving several problems jointly with a given prescribed high performance accuracy for each target application, then Foundation Models should for most cases be used rather than problem-specific models. We focus on the specific Computer Vision application of Foundation Models for Earth Observation (EO) and geospatial AI. These models can solve important problems we are tackling, including for example land cover classification, crop type mapping, flood segmentation, building density estimation, and road regression segmentation. In this paper, we show that for a limited number of labelled data, Foundation Models achieve improved performance compared to problem-specific models. In this work, we also present our proposed evaluation benchmark for Foundation Models for EO. Benchmarking the generalization performance of Foundation Models is important as it has become difficult to standardize a fair comparison across the many different models that have been proposed recently. We present the results using our evaluation benchmark for EO Foundation Models and show that Foundation Models are label efficient in the downstream tasks and help us solve problems we are tackling in EO and remote sensing.
Nikolaos Dionelis, Casper Fibaek, Luke Camilleri, Andreas Luyts, Jente Bosmans, Bertrand Le Saux
null
null
2406.18309
null
null
http://arxiv.org/pdf/2406.18309v1
2024-06-26T12:50:07Z
2024-06-26T12:50:07Z
Automated Immunophenotyping Assessment for Diagnosing Childhood Acute Leukemia using Set-Transformers
Acute Leukemia is the most common hematologic malignancy in children and adolescents. A key methodology in the diagnostic evaluation of this malignancy is immunophenotyping based on Multiparameter Flow Cytometry (FCM). However, this approach is manual, and thus time-consuming and subjective. To alleviate this situation, we propose in this paper the FCM-Former, a machine learning, self-attention based FCM-diagnostic tool, automating the immunophenotyping assessment in Childhood Acute Leukemia. The FCM-Former is trained in a supervised manner, by directly using flow cytometric data. Our FCM-Former achieves an accuracy of 96.5% assigning lineage to each sample among 960 cases of acute B-cell lymphoblastic, acute T-cell lymphoblastic, and acute myeloid leukemia (B-ALL, T-ALL, AML). To the best of our knowledge, the FCM-Former is the first work that automates the immunophenotyping assessment with FCM data in diagnosing pediatric Acute Leukemia.
Elpiniki Maria Lygizou, Michael Reiter, Margarita Maurer-Granofszky, Michael Dworzak, Radu Grosu
null
null
2406.18310
null
null
http://arxiv.org/pdf/2406.18310v1
2024-06-26T12:50:10Z
2024-06-26T12:50:10Z
Spatial-temporal Hierarchical Reinforcement Learning for Interpretable Pathology Image Super-Resolution
Pathology images are essential for accurately interpreting lesion cells in cytopathology screening, but acquiring high-resolution digital slides requires specialized equipment and long scanning times. Though super-resolution (SR) techniques can alleviate this problem, existing deep learning models recover pathology images in a black-box manner, which can lead to untruthful biological details and misdiagnosis. Additionally, current methods allocate the same computational resources to recover each pixel of a pathology image, leading to sub-optimal recovery due to the large variation across pathology images. In this paper, we propose the first hierarchical reinforcement learning framework, named Spatial-Temporal hierARchical Reinforcement Learning (STAR-RL), mainly for addressing the aforementioned issues in the pathology image super-resolution problem. We reformulate the SR problem as a Markov decision process of interpretable operations and adopt a hierarchical recovery mechanism at the patch level to avoid sub-optimal recovery. Specifically, a higher-level spatial manager is proposed to pick out the most corrupted patch for the lower-level patch worker. Moreover, a higher-level temporal manager is introduced to evaluate the selected patch and determine whether the optimization should be stopped earlier, thereby avoiding the over-processing problem. Under the guidance of the spatial-temporal managers, the lower-level patch worker processes the selected patch with pixel-wise interpretable actions at each time step. Experimental results on medical images degraded by different kernels show the effectiveness of STAR-RL. Furthermore, STAR-RL improves tumor diagnosis by a large margin and shows generalizability under various degradations. The source code is available at https://github.com/CUHK-AIM-Group/STAR-RL.
Wenting Chen, Jie Liu, Tommy W. S. Chow, Yixuan Yuan
null
null
2406.18311
null
null
http://arxiv.org/pdf/2406.18311v2
2024-06-30T03:49:06Z
2024-06-26T12:50:13Z
Online Learning of Multiple Tasks and Their Relationships : Testing on Spam Email Data and EEG Signals Recorded in Construction Fields
This paper examines an online multi-task learning (OMTL) method, which processes data sequentially to predict labels across related tasks. The framework learns task weights and their relatedness concurrently. Unlike previous models that assumed static task relatedness, our approach treats tasks as initially independent, updating their relatedness iteratively using newly calculated weight vectors. We introduced three rules to update the task relatedness matrix: OMTLCOV, OMTLLOG, and OMTLVON, and compared them against a conventional method (CMTL) that uses a fixed relatedness value. Performance evaluations on three datasets (a spam dataset and two EEG datasets from construction workers under varying conditions) demonstrated that our OMTL methods outperform CMTL, improving accuracy by 1% to 3% on EEG data, and maintaining low error rates around 12% on the spam dataset.
Yixin Jin, Wenjing Zhou, Meiqi Wang, Meng Li, Xintao Li, Tianyu Hu
null
null
2406.18314
null
null
http://arxiv.org/pdf/2406.18314v1
2024-06-26T12:54:41Z
2024-06-26T12:54:41Z
ContactNet: Geometric-Based Deep Learning Model for Predicting Protein-Protein Interactions
Deep learning approaches achieved significant progress in predicting protein structures. These methods are often applied to protein-protein interactions (PPIs) yet require Multiple Sequence Alignment (MSA) which is unavailable for various interactions, such as antibody-antigen. Computational docking methods are capable of sampling accurate complex models, but also produce thousands of invalid configurations. The design of scoring functions for identifying accurate models is a long-standing challenge. We develop a novel attention-based Graph Neural Network (GNN), ContactNet, for classifying PPI models obtained from docking algorithms into accurate and incorrect ones. When trained on docked antigen and modeled antibody structures, ContactNet doubles the accuracy of current state-of-the-art scoring functions, achieving accurate models among its Top-10 at 43% of the test cases. When applied to unbound antibodies, its Top-10 accuracy increases to 65%. This performance is achieved without MSA and the approach is applicable to other types of interactions, such as host-pathogens or general PPIs.
Matan Halfon, Tomer Cohen, Raanan Fattal, Dina Schneidman-Duhovny
null
null
2406.18316
null
null
http://arxiv.org/pdf/2406.18316v1
2024-06-26T12:59:37Z
2024-06-26T12:59:37Z
Trade-off between Gradient Measurement Efficiency and Expressivity in Deep Quantum Neural Networks
Quantum neural networks (QNNs) require an efficient training algorithm to achieve practical quantum advantages. A promising approach is the use of gradient-based optimization algorithms, where gradients are estimated through quantum measurements. However, it is generally difficult to efficiently measure gradients in QNNs because the quantum state collapses upon measurement. In this work, we prove a general trade-off between gradient measurement efficiency and expressivity in a wide class of deep QNNs, elucidating the theoretical limits and possibilities of efficient gradient estimation. This trade-off implies that a more expressive QNN requires a higher measurement cost in gradient estimation, whereas we can increase gradient measurement efficiency by reducing the QNN expressivity to suit a given task. We further propose a general QNN ansatz called the stabilizer-logical product ansatz (SLPA), which can reach the upper limit of the trade-off inequality by leveraging the symmetric structure of the quantum circuit. In learning an unknown symmetric function, the SLPA drastically reduces the quantum resources required for training while maintaining accuracy and trainability compared to a well-designed symmetric circuit based on the parameter-shift method. Our results not only reveal a theoretical understanding of efficient training in QNNs but also provide a standard and broadly applicable efficient QNN design.
Koki Chinzei, Shinichiro Yamano, Quoc Hoan Tran, Yasuhiro Endo, Hirotaka Oshima
null
null
2406.18327
null
null
http://arxiv.org/pdf/2406.18327v1
2024-06-26T13:14:24Z
2024-06-26T13:14:24Z
Multi-modal Evidential Fusion Network for Trusted PET/CT Tumor Segmentation
Accurate segmentation of tumors in PET/CT images is important in computer-aided diagnosis and treatment of cancer. The key issue of such a segmentation problem lies in the effective integration of complementary information from PET and CT images. However, the quality of PET and CT images varies widely in clinical settings, which leads to uncertainty in the modality information extracted by networks. To take the uncertainty into account in multi-modal information fusion, this paper proposes a novel Multi-modal Evidential Fusion Network (MEFN) comprising a Cross-Modal Feature Learning (CFL) module and a Multi-modal Trusted Fusion (MTF) module. The CFL module reduces the domain gap upon modality conversion and highlights common tumor features, thereby alleviating the needs of the segmentation module to handle modality specificity. The MTF module utilizes mutual attention mechanisms and an uncertainty calibrator to fuse modality features based on modality uncertainty and then fuses the segmentation results under the guidance of Dempster-Shafer Theory. Besides, a new uncertainty perceptual loss is introduced to force the model to focus on uncertain features and hence improve its ability to extract trusted modality information. Extensive comparative experiments are conducted on two publicly available PET/CT datasets to evaluate the performance of our proposed method, whose results demonstrate that our MEFN significantly outperforms state-of-the-art methods with improvements of 2.15% and 3.23% in DSC scores on the AutoPET dataset and the Hecktor dataset, respectively. More importantly, our model can provide radiologists with credible uncertainty of the segmentation results for their decision in accepting or rejecting the automatic segmentation results, which is particularly important for clinical applications. Our code will be available at https://github.com/QPaws/MEFN.
Yuxuan Qi, Li Lin, Jiajun Wang, Jingya Zhang, Bin Zhang
null
null
2406.18328
null
null
http://arxiv.org/pdf/2406.18328v2
2024-06-28T12:45:52Z
2024-06-26T13:16:40Z
PDFA Distillation via String Probability Queries
Probabilistic deterministic finite automata (PDFA) are discrete event systems modeling conditional probabilities over languages: given an already seen sequence of tokens they return the probability of tokens of interest to appear next. These types of models have gained interest in the domain of explainable machine learning, where they are used as surrogate models for neural networks trained as language models. In this work we present an algorithm to distill PDFA from neural networks. Our algorithm is a derivative of the L# algorithm and capable of learning PDFA from a new type of query, in which the algorithm infers conditional probabilities from the probability that the queried string occurs. We show its effectiveness on a recent public dataset by distilling PDFA from a set of trained neural networks.
Robert Baumgartner, Sicco Verwer
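The query type described above infers next-token conditional probabilities from string-probability queries. A minimal sketch of that relationship is below; the hard-coded two-symbol toy oracle is an illustrative assumption standing in for the trained network being distilled.

```python
def string_probability(tokens):
    """Hypothetical teacher oracle: probability that the model produces this prefix.

    Stand-in for querying a trained language model or PDFA; here a tiny
    hard-coded distribution over strings of 'a' and 'b'.
    """
    p = 1.0
    for tok in tokens:
        p *= 0.7 if tok == "a" else 0.3
    return p

def conditional_next_probability(prefix, token):
    """P(next = token | prefix), inferred from string-probability queries."""
    # for a prefix-probability oracle, the conditional is the ratio of prefix probabilities
    return string_probability(prefix + [token]) / string_probability(prefix)

print(conditional_next_probability(["a", "b"], "a"))   # 0.7 for this toy oracle
```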
null
null
2406.18330
null
null
http://arxiv.org/pdf/2406.18330v1
2024-06-26T13:18:42Z
2024-06-26T13:18:42Z
Molecular Diffusion Models with Virtual Receptors
Machine learning approaches to Structure-Based Drug Design (SBDD) have proven quite fertile over the last few years. In particular, diffusion-based approaches to SBDD have shown great promise. We present a technique which expands on this diffusion approach in two crucial ways. First, we address the size disparity between the drug molecule and the target/receptor, which makes learning more challenging and inference slower. We do so through the notion of a Virtual Receptor, which is a compressed version of the receptor; it is learned so as to preserve key aspects of the structural information of the original receptor, while respecting the relevant group equivariance. Second, we incorporate a protein language embedding used originally in the context of protein folding. We experimentally demonstrate the contributions of both the virtual receptors and the protein embeddings: in practice, they lead to both better performance, as well as significantly faster computations.
Matan Halfon, Eyal Rozenberg, Ehud Rivlin, Daniel Freedman
null
null
2406.18332
null
null
http://arxiv.org/pdf/2406.18332v2
2024-07-12T13:16:16Z
2024-06-26T13:21:00Z
Early Classification of Time Series: Taxonomy and Benchmark
In many situations, the measurements of a studied phenomenon are provided sequentially, and the prediction of its class needs to be made as early as possible so as not to incur too high a time penalty, but not too early and risk paying the cost of misclassification. This problem has been particularly studied in the case of time series, and is known as Early Classification of Time Series (ECTS). Although it has been the subject of a growing body of literature, there is still a lack of a systematic, shared evaluation protocol to compare the relative merits of the various existing methods. This document begins by situating these methods within a principle-based taxonomy. It defines dimensions for organizing their evaluation, and then reports the results of a very extensive set of experiments along these dimensions involving nine state-of-the-art ECTS algorithms. In addition, these and other experiments can be carried out using an open-source library in which most of the existing ECTS algorithms have been implemented (see https://github.com/ML-EDM/ml_edm).
Aurélien Renault, Alexis Bondu, Antoine Cornuéjols, Vincent Lemaire
null
null
2406.18334
null
null
http://arxiv.org/pdf/2406.18334v1
2024-06-26T13:21:24Z
2024-06-26T13:21:24Z
Efficient and Accurate Explanation Estimation with Distribution Compression
Exact computation of various machine learning explanations requires numerous model evaluations and in extreme cases becomes impractical. The computational cost of approximation increases with an ever-increasing size of data and model parameters. Many heuristics have been proposed to approximate post-hoc explanations efficiently. This paper shows that the standard i.i.d. sampling used in a broad spectrum of algorithms for explanation estimation leads to an approximation error worthy of improvement. To this end, we introduce Compress Then Explain (CTE), a new paradigm for more efficient and accurate explanation estimation. CTE uses distribution compression through kernel thinning to obtain a data sample that best approximates the marginal distribution. We show that CTE improves the estimation of removal-based local and global explanations with negligible computational overhead. It often achieves an on-par explanation approximation error using 2-3x fewer samples, i.e. requiring 2-3x fewer model evaluations. CTE is a simple, yet powerful, plug-in for any explanation method that now relies on i.i.d. sampling.
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek
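The CTE idea above replaces i.i.d. background sampling with a compressed sample that approximates the marginal distribution. The sketch below uses greedy kernel herding as a simple stand-in for kernel thinning, and a mean-prediction feature effect as a stand-in for a removal-based explainer; both substitutions, the RBF bandwidth, and the toy model are assumptions for illustration only.

```python
import numpy as np

def rbf(a, b, bandwidth=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def herd_compress(X, m, bandwidth=1.0):
    """Greedily pick m points whose kernel mean tracks the full data (kernel herding)."""
    K = rbf(X, X, bandwidth)
    target = K.mean(axis=1)            # kernel mean embedding evaluated at each point
    chosen, acc = [], np.zeros(len(X))
    for _ in range(m):
        scores = target - acc / max(len(chosen), 1)
        scores[chosen] = -np.inf       # do not pick the same point twice
        j = int(np.argmax(scores))
        chosen.append(j)
        acc += K[:, j]
    return X[chosen]

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
background = herd_compress(X, m=100)   # compressed background instead of an i.i.d. subsample

# stand-in "explanation": effect of zeroing each feature, averaged over the background
def toy_model(Z):
    return Z @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])

effects = [toy_model(background).mean() -
           toy_model(np.where(np.eye(5)[j][None, :] == 1, 0.0, background)).mean()
           for j in range(5)]
```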
null
null
2406.18345
null
null
http://arxiv.org/pdf/2406.18345v1
2024-06-26T13:42:11Z
2024-06-26T13:42:11Z
EmT: A Novel Transformer for Generalized Cross-subject EEG Emotion Recognition
Integrating prior knowledge of neurophysiology into neural network architecture enhances the performance of emotion decoding. While numerous techniques emphasize learning spatial and short-term temporal patterns, there has been limited emphasis on capturing the vital long-term contextual information associated with emotional cognitive processes. In order to address this discrepancy, we introduce a novel transformer model called emotion transformer (EmT). EmT is designed to excel in both generalized cross-subject EEG emotion classification and regression tasks. In EmT, EEG signals are transformed into a temporal graph format, creating a sequence of EEG feature graphs using a temporal graph construction module (TGC). A novel residual multi-view pyramid GCN module (RMPG) is then proposed to learn dynamic graph representations for each EEG feature graph within the series, and the learned representations of each graph are fused into one token. Furthermore, we design a temporal contextual transformer module (TCT) with two types of token mixers to learn the temporal contextual information. Finally, the task-specific output module (TSO) generates the desired outputs. Experiments on four publicly available datasets show that EmT achieves higher results than the baseline methods for both EEG emotion classification and regression tasks. The code is available at https://github.com/yi-ding-cs/EmT.
Yi Ding, Chengxuan Tong, Shuailei Zhang, Muyun Jiang, Yong Li, Kevin Lim Jun Liang, Cuntai Guan
null
null
2406.18351
null
null
http://arxiv.org/pdf/2406.18351v1
2024-06-26T13:52:47Z
2024-06-26T13:52:47Z
Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control
Reinforcement learning (RL) has proven to perform well and to be general-purpose in inventory control (IC). However, further improvement of RL algorithms in the IC domain is impeded by two limitations of online experience. First, online experience is expensive to acquire in real-world applications. Given the low sample efficiency of RL algorithms, it would take extensive time to train an RL policy to convergence. Second, online experience may not reflect the true demand due to the lost-sales phenomenon typical in IC, which makes the learning process more challenging. To address the above challenges, we propose a decision framework that combines reinforcement learning with a feedback graph (RLFG) and intrinsically motivated exploration (IME) to boost sample efficiency. In particular, we first take advantage of the inherent properties of lost-sales IC problems and design the feedback graph (FG) specially for lost-sales IC problems to generate abundant side experiences that aid RL updates. Then we conduct a rigorous theoretical analysis of how the designed FG reduces the sample complexity of RL methods. Based on the theoretical insights, we design an intrinsic reward to direct the RL agent to explore the state-action space with more side experiences, further exploiting FG's power. Experimental results demonstrate that our method greatly improves the sample efficiency of applying RL in IC. Our code is available at https://anonymous.4open.science/r/RLIMFG4IC-811D/
Zifan Liu, Xinran Li, Shibo Chen, Gen Li, Jiashuo Jiang, Jun Zhang
null
null
2406.18354
null
null
http://arxiv.org/pdf/2406.18354v1
2024-06-26T13:54:59Z
2024-06-26T13:54:59Z
Kolmogorov-Arnold Graph Neural Networks
Graph neural networks (GNNs) excel in learning from network-like data but often lack interpretability, making their application challenging in domains requiring transparent decision-making. We propose the Graph Kolmogorov-Arnold Network (GKAN), a novel GNN model leveraging spline-based activation functions on edges to enhance both accuracy and interpretability. Our experiments on five benchmark datasets demonstrate that GKAN outperforms state-of-the-art GNN models in node classification, link prediction, and graph classification tasks. In addition to the improved accuracy, GKAN's design inherently provides clear insights into the model's decision-making process, eliminating the need for post-hoc explainability techniques. This paper discusses the methodology, performance, and interpretability of GKAN, highlighting its potential for applications in domains where interpretability is crucial.
Gianluca De Carlo, Andrea Mastropietro, Aris Anagnostopoulos
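GKAN above places learnable spline-based activation functions on graph edges. The sketch below shows a learnable univariate activation built from radial basis functions as a simplified stand-in for B-splines, applied feature-wise inside a one-layer mean-aggregation GNN; the basis choice, grid, layer shape, and class names are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """Per-feature learnable univariate function: a sum of RBFs on a fixed grid.

    A simplified stand-in for the spline-based activations used in KAN-style layers.
    """

    def __init__(self, num_features, grid_size=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, grid_size))
        self.coef = nn.Parameter(torch.zeros(num_features, grid_size))
        self.width = (x_max - x_min) / grid_size

    def forward(self, x):                                  # x: (nodes, num_features)
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        return (basis * self.coef).sum(-1)                 # (nodes, num_features)

class KANGraphLayer(nn.Module):
    """Mean-aggregate neighbour features, then apply learnable activations and a linear mix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.act = LearnableActivation(in_dim)
        self.mix = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                             # adj: dense (nodes, nodes)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h = adj @ x / deg                                  # mean of neighbour features
        return self.mix(self.act(h))

layer = KANGraphLayer(4, 8)
x = torch.randn(6, 4)
adj = (torch.rand(6, 6) > 0.5).float()
out = layer(x, adj)
```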
null
null
2406.18370
null
null
http://arxiv.org/pdf/2406.18370v1
2024-06-26T14:13:50Z
2024-06-26T14:13:50Z
Learning pure quantum states (almost) without regret
We initiate the study of quantum state tomography with minimal regret. A learner has sequential oracle access to an unknown pure quantum state, and in each round selects a pure probe state. Regret is incurred if the unknown state is measured orthogonal to this probe, and the learner's goal is to minimise the expected cumulative regret over $T$ rounds. The challenge is to find a balance between the most informative measurements and measurements incurring minimal regret. We show that the cumulative regret scales as $\Theta(\operatorname{polylog} T)$ using a new tomography algorithm based on a median of means least squares estimator. This algorithm employs measurements biased towards the unknown state and produces online estimates that are optimal (up to logarithmic terms) in the number of observed samples.
Josep Lumbreras, Mikhail Terekhov, Marco Tomamichel
null
null
2406.18380
null
null
http://arxiv.org/pdf/2406.18380v2
2024-07-01T07:13:08Z
2024-06-26T14:21:21Z
KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning
In recent years, Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations. Most GNNs typically consist of a sequence of neighborhood aggregation (a.k.a., message passing) layers. Within each of these layers, the representation of each node is updated from an aggregation and transformation of its neighbours' representations at the previous layer. The upper bound for the expressive power of message passing GNNs was reached through the use of MLPs as a transformation, due to their universal approximation capabilities. However, MLPs suffer from well-known limitations, which recently motivated the introduction of Kolmogorov-Arnold Networks (KANs). KANs rely on the Kolmogorov-Arnold representation theorem, rendering them a promising alternative to MLPs. In this work, we compare the performance of KANs against that of MLPs in graph learning tasks. We perform extensive experiments on node classification, graph classification and graph regression datasets. Our preliminary results indicate that while KANs are on-par with MLPs in classification tasks, they seem to have a clear advantage in the graph regression tasks. Code is available at https://github.com/RomanBresson/KAGNN.
Roman Bresson, Giannis Nikolentzos, George Panagopoulos, Michail Chatzianastasis, Jun Pang, Michalis Vazirgiannis
null
null
2406.18382
null
null
http://arxiv.org/pdf/2406.18382v2
2024-07-02T08:56:48Z
2024-06-26T14:24:51Z
Adversarial Search Engine Optimization for Large Language Models
Large Language Models (LLMs) are increasingly used in applications where the model selects from competing third-party content, such as in LLM-powered search engines or chatbot plugins. In this paper, we introduce Preference Manipulation Attacks, a new class of attacks that manipulate an LLM's selections to favor the attacker. We demonstrate that carefully crafted website content or plugin documentations can trick an LLM to promote the attacker products and discredit competitors, thereby increasing user traffic and monetization. We show this leads to a prisoner's dilemma, where all parties are incentivized to launch attacks, but the collective effect degrades the LLM's outputs for everyone. We demonstrate our attacks on production LLM search engines (Bing and Perplexity) and plugin APIs (for GPT-4 and Claude). As LLMs are increasingly used to rank third-party content, we expect Preference Manipulation Attacks to emerge as a significant threat.
Fredrik Nestaas, Edoardo Debenedetti, Florian Tramèr
null
null
2406.18387
null
null
http://arxiv.org/pdf/2406.18387v2
2024-07-15T10:15:56Z
2024-06-26T14:29:05Z
DoubleTake: Geometry Guided Depth Estimation
Estimating depth from a sequence of posed RGB images is a fundamental computer vision task, with applications in augmented reality, path planning etc. Prior work typically makes use of previous frames in a multi view stereo framework, relying on matching textures in a local neighborhood. In contrast, our model leverages historical predictions by giving the latest 3D geometry data as an extra input to our network. This self-generated geometric hint can encode information from areas of the scene not covered by the keyframes and it is more regularized when compared to individual predicted depth maps for previous frames. We introduce a Hint MLP which combines cost volume features with a hint of the prior geometry, rendered as a depth map from the current camera location, together with a measure of the confidence in the prior geometry. We demonstrate that our method, which can run at interactive speeds, achieves state-of-the-art estimates of depth and 3D scene reconstruction in both offline and incremental evaluation scenarios.
Mohamed Sayed, Filippo Aleotti, Jamie Watson, Zawar Qureshi, Guillermo Garcia-Hernando, Gabriel Brostow, Sara Vicente, Michael Firman
null
null
2406.18397
null
null
http://arxiv.org/pdf/2406.18397v2
2024-07-05T07:59:57Z
2024-06-26T14:44:24Z
Second Maximum of a Gaussian Random Field and Exact (t-)Spacing test
In this article, we introduce the novel concept of the second maximum of a Gaussian random field on a Riemannian submanifold. This second maximum serves as a powerful tool for characterizing the distribution of the maximum. By utilizing an ad-hoc Kac Rice formula, we derive the explicit form of the maximum's distribution, conditioned on the second maximum and some regressed component of the Riemannian Hessian. This approach results in an exact test, based on the evaluation of spacing between these maxima, which we refer to as the spacing test. We investigate the applicability of this test in detecting sparse alternatives within Gaussian symmetric tensors, continuous sparse deconvolution, and two-layered neural networks with smooth rectifiers. Our theoretical results are supported by numerical experiments, which illustrate the calibration and power of the proposed tests. More generally, this test can be applied to any Gaussian random field on a Riemannian manifold, and we provide a general framework for the application of the spacing test in continuous sparse kernel regression. Furthermore, when the variance-covariance function of the Gaussian random field is known up to a scaling factor, we derive an exact Studentized version of our test, coined the $t$-spacing test. This test is perfectly calibrated under the null hypothesis and has high power for detecting sparse alternatives.
[ "['Jean-Marc Azaïs' 'Federico Dalmao' 'Yohann De Castro']" ]
null
null
2406.18400
null
null
http://arxiv.org/pdf/2406.18400v1
2024-06-26T14:49:54Z
2024-06-26T14:49:54Z
Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers
Large Language Models (LLMs) have the capacity to store and recall facts. Through experimentation with open-source models, we observe that this ability to retrieve facts can be easily manipulated by changing contexts, even without altering their factual meanings. These findings highlight that LLMs might behave like an associative memory model where certain tokens in the contexts serve as clues to retrieving facts. We mathematically explore this property by studying how transformers, the building blocks of LLMs, can complete such memory tasks. We study a simple latent concept association problem with a one-layer transformer and we show theoretically and empirically that the transformer gathers information using self-attention and uses the value matrix for associative memory.
[ "['Yibo Jiang' 'Goutham Rajendran' 'Pradeep Ravikumar' 'Bryon Aragam']" ]
null
null
2406.18417
null
null
http://arxiv.org/pdf/2406.18417v1
2024-06-26T15:11:15Z
2024-06-26T15:11:15Z
Towards diffusion models for large-scale sea-ice modelling
We make the first steps towards diffusion models for unconditional generation of multivariate and Arctic-wide sea-ice states. While the primary aim of diffusion in latent space is to reduce computational costs, latent diffusion models also offer the possibility to integrate physical knowledge into the generation process. We tailor latent diffusion models to sea-ice physics with a censored Gaussian distribution in data space to generate data that follows the physical bounds of the modelled variables. Our latent diffusion models reach similar scores to the diffusion model trained in data space, but they smooth the generated fields, an artifact caused by the latent mapping. While enforcing physical bounds cannot reduce the smoothing, it improves the representation of the marginal ice zone. Therefore, for large-scale Earth system modelling, latent diffusion models can have many advantages compared to diffusion in data space if the significant barrier of smoothing can be resolved.
[ "['Tobias Sebastian Finn' 'Charlotte Durand' 'Alban Farchi' 'Marc Bocquet'\n 'Julien Brajard']" ]
null
null
2406.18418
null
null
http://arxiv.org/pdf/2406.18418v1
2024-06-26T15:11:26Z
2024-06-26T15:11:26Z
Differential error feedback for communication-efficient decentralized learning
Communication-constrained algorithms for decentralized learning and optimization rely on local updates coupled with the exchange of compressed signals. In this context, differential quantization is an effective technique to mitigate the negative impact of compression by leveraging correlations between successive iterates. In addition, the use of error feedback, which consists of incorporating the compression error into subsequent steps, is a powerful mechanism to compensate for the bias caused by the compression. Under error feedback, performance guarantees in the literature have so far focused on algorithms employing a fusion center or a special class of contractive compressors that cannot be implemented with a finite number of bits. In this work, we propose a new decentralized communication-efficient learning approach that blends differential quantization with error feedback. The approach is specifically tailored for decentralized learning problems where agents have individual risk functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task relatedness models such as multitask smoothness and coupled optimization. We show that, under some general conditions on the compression noise, and for sufficiently small step-sizes $\mu$, the resulting communication-efficient strategy is stable both in terms of mean-square error and average bit rate: by reducing $\mu$, it is possible to keep the estimation errors small (on the order of $\mu$) without increasing indefinitely the bit rate as $\mu \rightarrow 0$. The results establish that, in the small step-size regime and with a finite number of bits, it is possible to attain the performance achievable in the absence of compression.
[ "['Roula Nassif' 'Stefan Vlaski' 'Marco Carpentiero' 'Vincenzo Matta'\n 'Ali H. Sayed']" ]
null
null
2406.18420
null
null
http://arxiv.org/pdf/2406.18420v1
2024-06-26T15:15:15Z
2024-06-26T15:15:15Z
Mixture of Experts in a Mixture of RL settings
Mixtures of Experts (MoEs) have gained prominence in (self-)supervised learning due to their enhanced inference efficiency, adaptability to distributed training, and modularity. Previous research has illustrated that MoEs can significantly boost Deep Reinforcement Learning (DRL) performance by expanding the network's parameter count while reducing dormant neurons, thereby enhancing the model's learning capacity and ability to deal with non-stationarity. In this work, we shed more light on MoEs' ability to deal with non-stationarity and investigate MoEs in DRL settings with "amplified" non-stationarity via multi-task training, providing further evidence that MoEs improve learning capacity. In contrast to previous work, our multi-task results allow us to better understand the underlying causes for the beneficial effect of MoE in DRL training, the impact of the various MoE components, and insights into how best to incorporate them in actor-critic-based DRL networks. Finally, we also confirm results from previous work.
[ "['Timon Willi' 'Johan Obando-Ceron' 'Jakob Foerster' 'Karolina Dziugaite'\n 'Pablo Samuel Castro']" ]
null
null
2406.18423
null
null
http://arxiv.org/pdf/2406.18423v1
2024-06-26T15:18:49Z
2024-06-26T15:18:49Z
Graph Neural Networks for Emulation of Finite-Element Ice Dynamics in Greenland and Antarctic Ice Sheets
Although numerical models provide accurate solutions for ice sheet dynamics based on physics laws, they come with intensive computational demands to solve partial differential equations. In recent years, convolutional neural networks (CNNs) have been widely used as statistical emulators for those numerical models. However, since CNNs operate on regular grids, they cannot represent the refined meshes and computational efficiency of finite-element numerical models. Therefore, instead of CNNs, this study adopts an equivariant graph convolutional network (EGCN) as an emulator for ice sheet dynamics modeling. EGCN reproduces ice thickness and velocity changes in the Helheim Glacier, Greenland, and Pine Island Glacier, Antarctica, with 260 times and 44 times faster computation time, respectively. Compared to the traditional CNN and graph convolutional network, EGCN shows outstanding accuracy in thickness prediction near fast ice streams by preserving the equivariance to the translation and rotation of graphs.
[ "['Younghyun Koo' 'Maryam Rahnemoonfar']" ]
null
null
2406.18445
null
null
http://arxiv.org/pdf/2406.18445v1
2024-06-26T15:50:13Z
2024-06-26T15:50:13Z
An Autotuning-based Optimization Framework for Mixed-kernel SVM Classifications in Smart Pixel Datasets and Heterojunction Transistors
Support Vector Machine (SVM) is a state-of-the-art classification method widely used in science and engineering due to its high accuracy, its ability to deal with high-dimensional data, and its flexibility in modeling diverse sources of data. In this paper, we propose an autotuning-based optimization framework to quantify the ranges of hyperparameters in SVMs and identify their optimal choices, and apply the framework to two SVMs with a mixed kernel between Sigmoid and Gaussian kernels for smart pixel datasets in high energy physics (HEP) and mixed-kernel heterojunction transistors (MKH). Our experimental results show that the optimal selection of hyperparameters in the SVMs and the kernels varies greatly for different applications and datasets, and choosing their optimal values is critical for a high classification accuracy of the mixed-kernel SVMs. Uninformed choices of the hyperparameters C and coef0 in the mixed-kernel SVMs result in severely low accuracy. The proposed framework effectively quantifies the proper ranges for these hyperparameters and identifies their optimal choices, achieving the highest accuracy of 94.6% for the HEP application and the highest average accuracy of 97.2%, with far less tuning time, for the MKH application.
[ "['Xingfu Wu' 'Tupendra Oli' 'ustin H. Qian' 'Valerie Taylor'\n 'Mark C. Hersam' 'Vinod K. Sangwan']" ]
null
null
2406.18450
null
null
http://arxiv.org/pdf/2406.18450v1
2024-06-26T15:59:13Z
2024-06-26T15:59:13Z
Preference Elicitation for Offline Reinforcement Learning
Applying reinforcement learning (RL) to real-world problems is often made challenging by the inability to interact with the environment and the difficulty of designing reward functions. Offline RL addresses the first challenge by considering access to an offline dataset of environment interactions labeled by the reward function. In contrast, Preference-based RL does not assume access to the reward function and learns it from preferences, but typically requires an online interaction with the environment. We bridge the gap between these frameworks by exploring efficient methods for acquiring preference feedback in a fully offline setup. We propose Sim-OPRL, an offline preference-based reinforcement learning algorithm, which leverages a learned environment model to elicit preference feedback on simulated rollouts. Drawing on insights from both the offline RL and the preference-based RL literature, our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy. We provide theoretical guarantees regarding the sample complexity of our approach, dependent on how well the offline data covers the optimal policy. Finally, we demonstrate the empirical performance of Sim-OPRL in different environments.
[ "['Alizée Pace' 'Bernhard Schölkopf' 'Gunnar Rätsch' 'Giorgia Ramponi']" ]
null
null
2406.18451
null
null
http://arxiv.org/pdf/2406.18451v1
2024-06-26T16:00:35Z
2024-06-26T16:00:35Z
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers
Despite extensive research on adversarial training strategies to improve robustness, the decisions of even the most robust deep learning models can still be quite sensitive to imperceptible perturbations, creating serious risks when deploying them for high-stakes real-world applications. While detecting such cases may be critical, evaluating a model's vulnerability at a per-instance level using adversarial attacks is computationally too intensive and unsuitable for real-time deployment scenarios. The input-space margin is the exact score for detecting non-robust samples, but it is intractable to compute for deep neural networks. This paper introduces the concept of margin consistency -- a property that links the input-space margins and the logit margins in robust models -- for efficient detection of vulnerable samples. First, we establish that margin consistency is a necessary and sufficient condition to use a model's logit margin as a score for identifying non-robust samples. Next, through comprehensive empirical analysis of various robustly trained models on the CIFAR10 and CIFAR100 datasets, we show that they exhibit strong margin consistency, with a strong correlation between their input-space margins and logit margins. Then, we show that we can effectively use the logit margin to confidently detect brittle decisions with such models and accurately estimate robust accuracy on an arbitrarily large test set by estimating the input margins only on a small subset. Finally, we address cases where the model is not sufficiently margin-consistent by learning a pseudo-margin from the feature representation. Our findings highlight the potential of leveraging deep representations to efficiently assess adversarial vulnerability in deployment scenarios.
[ "['Jonas Ngnawé' 'Sabyasachi Sahoo' 'Yann Pequignot' 'Frédéric Precioso'\n 'Christian Gagné']" ]
null
null
2406.18464
null
null
http://arxiv.org/pdf/2406.18464v1
2024-06-26T16:16:36Z
2024-06-26T16:16:36Z
Bayesian inverse Navier-Stokes problems: joint flow field reconstruction and parameter learning
We formulate and solve a Bayesian inverse Navier-Stokes (N-S) problem that assimilates velocimetry data in order to jointly reconstruct a 3D flow field and learn the unknown N-S parameters, including the boundary position. By hardwiring a generalised N-S problem, and regularising its unknown parameters using Gaussian prior distributions, we learn the most likely parameters in a collapsed search space. The most likely flow field reconstruction is then the N-S solution that corresponds to the learned parameters. We develop the method in the variational setting and use a stabilised Nitsche weak form of the N-S problem that permits the control of all N-S parameters. To regularise the inferred geometry, we use a viscous signed distance field (vSDF) as an auxiliary variable, which is given as the solution of a viscous Eikonal boundary value problem. We devise an algorithm that solves this inverse problem, and numerically implement it using an adjoint-consistent stabilised cut-cell finite element method. We then use this method to reconstruct magnetic resonance velocimetry (flow-MRI) data of a 3D steady laminar flow through a physical model of an aortic arch for two different Reynolds numbers and signal-to-noise ratio (SNR) levels (low/high). We find that the method can accurately i) reconstruct the low SNR data by filtering out the noise/artefacts and recovering flow features that are obscured by noise, and ii) reproduce the high SNR data without overfitting. Although the framework that we develop applies to 3D steady laminar flows in complex geometries, it readily extends to time-dependent laminar and Reynolds-averaged turbulent flows, as well as non-Newtonian (e.g. viscoelastic) fluids.
[ "['Alexandros Kontogiannis' 'Scott V. Elgersma' 'Andrew J. Sederman'\n 'Matthew P. Juniper']" ]
null
null
2406.18470
null
null
http://arxiv.org/pdf/2406.18470v2
2024-07-03T13:32:34Z
2024-06-26T16:28:24Z
UFRec: Integrating Uniformity and Frequency to Enhance Sequential Recommendations
Effective representation learning in sequential recommendation systems is pivotal for precisely capturing user interaction patterns and enhancing recommendation accuracy. Nonetheless, current methodologies largely focus on item-to-item transitions, frequently overlooking the time intervals between interactions, which are integral to understanding behavior pattern shifts. Moreover, critical interaction attributes like item frequency are often neglected. Our research indicates that sequences with more consistent time intervals and items with higher interaction frequency result in superior predictive performance. In contrast, sequences with non-uniform intervals contribute to user interest drift, and infrequently interacted items are challenging to model due to sparse data, posing unique challenges that existing methods fail to adequately address. In this study, we introduce UFRec, an innovative bidirectional enhancement method for sequential recommendations. UFRec harnesses sequence uniformity and item frequency to boost performance, particularly improving the representation of non-uniform sequences and less-frequent items. These two components synergistically enhance each other, driving holistic performance optimization in intricate sequential recommendation scenarios. Additionally, we introduce a multidimensional time module to further augment adaptability. To the best of our knowledge, UFRec is the pioneering method to exploit the properties of uniformity and frequency for feature augmentation. Through comparisons with eleven state-of-the-art models across four datasets, we demonstrate that UFRec significantly surpasses current leading models.
[ "['Yang Liu' 'Yitong Wang' 'Chenyue Feng']" ]
null
null
2406.18491
null
null
http://arxiv.org/pdf/2406.18491v1
2024-06-26T16:55:07Z
2024-06-26T16:55:07Z
Enhancing Federated Learning with Adaptive Differential Privacy and Priority-Based Aggregation
Federated learning (FL), a novel branch of distributed machine learning (ML), develops global models through a private procedure without direct access to local datasets. However, it is still possible to access the model updates (gradient updates of deep neural networks) transferred between clients and servers, potentially revealing sensitive local information to adversaries using model inversion attacks. Differential privacy (DP) offers a promising approach to addressing this issue by adding noise to the parameters. On the other hand, heterogeneities in data structure, storage, communication, and computational capabilities of devices can cause convergence problems and delays in developing the global model. A personalized weighted averaging of local parameters based on the resources of each device can yield a better aggregated model in each round. In this paper, to efficiently preserve privacy, we propose a personalized DP framework that injects noise based on clients' relative impact factors and aggregates parameters while considering heterogeneities and adjusting properties. To fulfill the DP requirements, we first analyze the convergence boundary of the FL algorithm when impact factors are personalized and fixed throughout the learning process. We then further study the convergence property considering time-varying (adaptive) impact factors.
[ "['Mahtab Talaei' 'Iman Izadi']" ]
null
null
2406.18505
null
null
http://arxiv.org/pdf/2406.18505v1
2024-06-26T17:14:45Z
2024-06-26T17:14:45Z
Mental Modeling of Reinforcement Learning Agents by Language Models
Can emergent language models faithfully model the intelligence of decision-making agents? Though modern language models already exhibit some reasoning ability and can, in principle, express any probability distribution over tokens, it remains underexplored how the world knowledge these pretrained models have memorized can be utilized to comprehend an agent's behaviour in the physical world. This study empirically examines, for the first time, how well large language models (LLMs) can build a mental model of agents, termed agent mental modelling, by reasoning about an agent's behaviour and its effect on states from agent interaction history. This research may unveil the potential of leveraging LLMs for elucidating RL agent behaviour, addressing a key challenge in eXplainable reinforcement learning (XRL). To this end, we propose specific evaluation metrics and test them on selected RL task datasets of varying complexity, reporting findings on agent mental model establishment. Our results disclose that LLMs are not yet capable of fully mental modelling agents through inference alone without further innovations. This work thus provides new insights into the capabilities and limitations of modern LLMs.
[ "['Wenhao Lu' 'Xufeng Zhao' 'Josua Spisak' 'Jae Hee Lee' 'Stefan Wermter']" ]
null
null
2406.18518
null
null
http://arxiv.org/pdf/2406.18518v1
2024-06-26T17:49:11Z
2024-06-26T17:49:11Z
APIGen: Automated Pipeline for Generating Verifiable and Diverse Function-Calling Datasets
The advancement of function-calling agent models requires diverse, reliable, and high-quality datasets. This paper presents APIGen, an automated data generation pipeline designed to synthesize verifiable high-quality datasets for function-calling applications. We leverage APIGen and collect 3,673 executable APIs across 21 different categories to generate diverse function-calling datasets in a scalable and structured manner. Each entry in our dataset is verified through three hierarchical stages: format checking, actual function executions, and semantic verification, ensuring its reliability and correctness. We demonstrate that models trained with our curated datasets, even with only 7B parameters, can achieve state-of-the-art performance on the Berkeley Function-Calling Benchmark, outperforming multiple GPT-4 models. Moreover, our 1B model achieves exceptional performance, surpassing GPT-3.5-Turbo and Claude-3 Haiku. We release a dataset containing 60,000 high-quality entries, aiming to advance the field of function-calling agent domains. The dataset is available on Huggingface: https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k and the project homepage: https://apigen-pipeline.github.io/
[ "['Zuxin Liu' 'Thai Hoang' 'Jianguo Zhang' 'Ming Zhu' 'Tian Lan'\n 'Shirley Kokane' 'Juntao Tan' 'Weiran Yao' 'Zhiwei Liu' 'Yihao Feng'\n 'Rithesh Murthy' 'Liangwei Yang' 'Silvio Savarese' 'Juan Carlos Niebles'\n 'Huan Wang' 'Shelby Heinecke' 'Caiming Xiong']" ]
null
null
2406.18529
null
null
http://arxiv.org/pdf/2406.18529v1
2024-06-26T17:57:13Z
2024-06-26T17:57:13Z
Confident Natural Policy Gradient for Local Planning in $q_π$-realizable Constrained MDPs
The constrained Markov decision process (CMDP) framework emerges as an important reinforcement learning approach for imposing safety or other critical objectives while maximizing cumulative reward. However, the current understanding of how to learn efficiently in a CMDP environment with a potentially infinite number of states remains under investigation, particularly when function approximation is applied to the value functions. In this paper, we address the learning problem given linear function approximation with $q_{\pi}$-realizability, where the value functions of all policies are linearly representable with a known feature map, a setting known to be more general and challenging than other linear settings. Utilizing a local-access model, we propose a novel primal-dual algorithm that, after $\tilde{O}(\text{poly}(d)\,\epsilon^{-3})$ queries, outputs with high probability a policy that strictly satisfies the constraints while nearly optimizing the value with respect to a reward function. Here, $d$ is the feature dimension and $\epsilon > 0$ is a given error. The algorithm relies on a carefully crafted off-policy evaluation procedure to evaluate the policy using historical data, which informs policy updates through policy gradients and conserves samples. To our knowledge, this is the first result achieving polynomial sample complexity for CMDP in the $q_{\pi}$-realizable setting.
[ "['Tian Tian' 'Lin F. Yang' 'Csaba Szepesvári']" ]
null
null
2406.18532
null
null
http://arxiv.org/pdf/2406.18532v1
2024-06-26T17:59:18Z
2024-06-26T17:59:18Z
Symbolic Learning Enables Self-Evolving Agents
The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing "language agents", which are complex large language model (LLM) pipelines involving both prompting techniques and tool usage methods. While language agents have demonstrated impressive capabilities for many real-world tasks, a fundamental limitation of current language agents research is that they are model-centric, or engineering-centric. That is to say, the progress on prompts, tools, and pipelines of language agents requires substantial manual engineering efforts from human experts rather than automatically learning from data. We believe the transition from model-centric, or engineering-centric, to data-centric, i.e., the ability of language agents to autonomously learn and evolve in environments, is the key for them to possibly achieve AGI. In this work, we introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own in a data-centric way using symbolic optimizers. Specifically, we consider agents as symbolic networks where learnable weights are defined by prompts, tools, and the way they are stacked together. Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning: back-propagation and gradient descent. Instead of dealing with numeric weights, agent symbolic learning works with natural language simulacrums of weights, loss, and gradients. We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks and show that agent symbolic learning enables language agents to update themselves after being created and deployed in the wild, resulting in "self-evolving agents".
[ "['Wangchunshu Zhou' 'Yixin Ou' 'Shengwei Ding' 'Long Li' 'Jialong Wu'\n 'Tiannan Wang' 'Jiamin Chen' 'Shuai Wang' 'Xiaohua Xu' 'Ningyu Zhang'\n 'Huajun Chen' 'Yuchen Eleanor Jiang']" ]
null
null
2406.18534
null
null
http://arxiv.org/pdf/2406.18534v1
2024-06-26T17:59:30Z
2024-06-26T17:59:30Z
Towards Compositionality in Concept Learning
Concept-based interpretability methods offer a lens into the internals of foundation models by decomposing their embeddings into high-level concepts. These concept representations are most useful when they are compositional, meaning that the individual concepts compose to explain the full sample. We show that existing unsupervised concept extraction methods find concepts which are not compositional. To automatically discover compositional concept representations, we identify two salient properties of such representations, and propose Compositional Concept Extraction (CCE) for finding concepts which obey these properties. We evaluate CCE on five different datasets over image and text data. Our evaluation shows that CCE finds more compositional concept representations than baselines and yields better accuracy on four downstream classification tasks. Code and data are available at https://github.com/adaminsky/compositional_concepts .
[ "['Adam Stein' 'Aaditya Naik' 'Yinjun Wu' 'Mayur Naik' 'Eric Wong']" ]
null
null
2406.18545
null
null
http://arxiv.org/pdf/2406.18545v1
2024-05-22T20:01:31Z
2024-05-22T20:01:31Z
Visual Analysis of Prediction Uncertainty in Neural Networks for Deep Image Synthesis
Ubiquitous applications of Deep neural networks (DNNs) in different artificial intelligence systems have led to their adoption in solving challenging visualization problems in recent years. While sophisticated DNNs offer an impressive generalization, it is imperative to comprehend the quality, confidence, robustness, and uncertainty associated with their prediction. A thorough understanding of these quantities produces actionable insights that help application scientists make informed decisions. Unfortunately, the intrinsic design principles of DNNs do not provide prediction uncertainty on their own, necessitating separate formulations for robust uncertainty-aware models for diverse visualization applications. To that end, this contribution demonstrates how the prediction uncertainty and sensitivity of DNNs can be estimated efficiently using various methods and then interactively compared and contrasted for deep image synthesis tasks. Our inspection suggests that uncertainty-aware deep visualization models generate illustrations of informative and superior quality and diversity. Furthermore, prediction uncertainty improves the robustness and interpretability of deep visualization models, making them practical and convenient for various scientific domains that thrive on visual analyses.
[ "['Soumya Dutta' 'Faheem Nizar' 'Ahmad Amaan' 'Ayan Acharya']" ]
null
null
2406.18554
null
null
http://arxiv.org/pdf/2406.18554v1
2024-05-24T15:49:00Z
2024-05-24T15:49:00Z
Planted: a dataset for planted forest identification from multi-satellite time series
Protecting and restoring forest ecosystems is critical for biodiversity conservation and carbon sequestration. Forest monitoring on a global scale is essential for prioritizing and assessing conservation efforts. Satellite-based remote sensing is the only viable solution for providing global coverage, but to date, large-scale forest monitoring is limited to single modalities and single time points. In this paper, we present a dataset consisting of data from five public satellites for recognizing forest plantations and planted tree species across the globe. Each satellite modality consists of a multi-year time series. The dataset, named PlantD, includes over 2M examples of 64 tree label classes (46 genera and 40 species), distributed among 41 countries. This dataset is released to foster research in forest monitoring using multimodal, multi-scale, multi-temporal data sources. Additionally, we present initial baseline results and evaluate modality fusion and data augmentation approaches for this dataset.
[ "['Luis Miguel Pazos-Outón' 'Cristina Nader Vasconcelos' 'Anton Raichuk'\n 'Anurag Arnab' 'Dan Morris' 'Maxim Neumann']" ]
null
null
2406.18556
null
null
http://arxiv.org/pdf/2406.18556v1
2024-05-27T01:03:12Z
2024-05-27T01:03:12Z
Renal digital pathology visual knowledge search platform based on language large model and book knowledge
Large models have become mainstream, yet their applications in digital pathology still require exploration. Meanwhile, renal pathology images play an important role in the diagnosis of renal diseases. We conducted image segmentation and paired corresponding text descriptions based on 60 renal pathology books, performed clustering analysis on all image and text description features using large models, and ultimately built a retrieval system based on the semantic features of large models. Based on the above analysis, we established a knowledge base of 10,317 renal pathology images and paired corresponding text descriptions, and then evaluated the semantic feature capabilities of four large models, including GPT-2, Gemma, Llama, and Qwen, and the image-based feature capabilities of the DINOv2 large model. Furthermore, we built a semantic retrieval system to retrieve pathological images based on text descriptions, named RppD (aidp.zjsru.edu.cn).
[ "['Xiaomin Lv' 'Chong Lai' 'Liya Ding' 'Maode Lai' 'Qingrong Sun']" ]
null
null
2406.18559
null
null
http://arxiv.org/pdf/2406.18559v1
2024-05-27T17:54:51Z
2024-05-27T17:54:51Z
Revision Matters: Generative Design Guided by Revision Edits
Layout design, such as user interface or graphical layout in general, is fundamentally an iterative revision process. Through revising a design repeatedly, the designer converges on an ideal layout. In this paper, we investigate how revision edits from human designers can benefit a multimodal generative model. To do so, we curate an expert dataset that traces how human designers iteratively edit and improve a layout generation with a prompted language goal. Based on such data, we explore various supervised fine-tuning task setups on top of a Gemini multimodal backbone, a large multimodal model. Our results show that human revision plays a critical role in iterative layout refinement. While being noisy, expert revision edits lead our model to a surprisingly strong design FID score of ~10, which is close to human performance (~6). In contrast, self-revisions that fully rely on the model's own judgement lead to an echo chamber that prevents iterative improvement, and sometimes lead to generative degradation. Fortunately, we found that providing human guidance at an early stage plays a critical role in the final generation. In such a human-in-the-loop scenario, our work paves the way for iterative design revision based on pre-trained large multimodal models.
[ "['Tao Li' 'Chin-Yi Cheng' 'Amber Xie' 'Gang Li' 'Yang Li']" ]
null
null
2406.18560
null
null
http://arxiv.org/pdf/2406.18560v1
2024-05-27T19:44:29Z
2024-05-27T19:44:29Z
A Multi-resolution Low-rank Tensor Decomposition
The (efficient and parsimonious) decomposition of higher-order tensors is a fundamental problem with numerous applications in a variety of fields. Several methods have been proposed in the literature to that end, with the Tucker and PARAFAC decompositions being the most prominent ones. Inspired by the latter, in this work we propose a multi-resolution low-rank tensor decomposition to describe (approximate) a tensor in a hierarchical fashion. The central idea of the decomposition is to recast the tensor into \emph{multiple} lower-dimensional tensors to exploit the structure at different levels of resolution. The method is first explained, an alternating least squares algorithm is discussed, and preliminary simulations illustrating the potential practical relevance are provided.
[ "['Sergio Rozada' 'Antonio G. Marques']" ]
null
null
2406.18561
null
null
http://arxiv.org/pdf/2406.18561v1
2024-05-28T06:54:04Z
2024-05-28T06:54:04Z
SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching
Dataset distillation aims to synthesize a small number of images per class (IPC) from a large dataset to approximate full dataset training with minimal performance loss. While effective in very small IPC ranges, many distillation methods become less effective, even underperforming random sample selection, as IPC increases. Our examination of state-of-the-art trajectory-matching based distillation methods across various IPC scales reveals that these methods struggle to incorporate the complex, rare features of harder samples into the synthetic dataset even with the increased IPC, resulting in a persistent coverage gap between easy and hard test samples. Motivated by such observations, we introduce SelMatch, a novel distillation method that effectively scales with IPC. SelMatch uses selection-based initialization and partial updates through trajectory matching to manage the synthetic dataset's desired difficulty level tailored to IPC scales. When tested on CIFAR-10/100 and TinyImageNet, SelMatch consistently outperforms leading selection-only and distillation-only methods across subset ratios from 5% to 30%.
[ "['Yongmin Lee' 'Hye Won Chung']" ]
null
null
2406.18562
null
null
http://arxiv.org/pdf/2406.18562v1
2024-05-28T18:42:13Z
2024-05-28T18:42:13Z
Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation
Supervised learning methods have been found to exhibit inductive biases favoring simpler features. When such features are spuriously correlated with the label, this can result in suboptimal performance on minority subgroups. Despite the growing popularity of methods which learn from unlabeled data, the extent to which these representations rely on spurious features for prediction is unclear. In this work, we explore the impact of spurious features on Self-Supervised Learning (SSL) for visual representation learning. We first empirically show that commonly used augmentations in SSL can cause undesired invariances in the image space, and illustrate this with a simple example. We further show that classical approaches in combating spurious correlations, such as dataset re-sampling during SSL, do not consistently lead to invariant representations. Motivated by these findings, we propose LateTVG to remove spurious information from these representations during pre-training, by regularizing later layers of the encoder via pruning. We find that our method produces representations which outperform the baselines on several benchmarks, without the need for group or label information during SSL.
[ "['Kimia Hamidieh' 'Haoran Zhang' 'Swami Sankaranarayanan'\n 'Marzyeh Ghassemi']" ]
null
null
2406.18563
null
null
http://arxiv.org/pdf/2406.18563v1
2024-05-29T17:45:38Z
2024-05-29T17:45:38Z
Interdisciplinary Expertise to Advance Equitable Explainable AI
The field of artificial intelligence (AI) is rapidly influencing health and healthcare, but bias and poor performance persist for populations who face widespread structural oppression. Previous work has clearly outlined the need for more rigorous attention to data representativeness and model performance to advance equity and reduce bias. However, there is an opportunity to also improve the explainability of AI by leveraging best practices of social epidemiology and health equity to help us develop hypotheses for associations found. In this paper, we focus on explainable AI (XAI) and describe a framework for interdisciplinary expert panel review to discuss and critically assess AI model explanations from multiple perspectives and identify areas of bias and directions for future research. We emphasize the importance of the interdisciplinary expert panel to produce more accurate, equitable interpretations which are historically and contextually informed. Interdisciplinary panel discussions can help reduce bias, identify potential confounders, and identify opportunities for additional research where there are gaps in the literature. In turn, these insights can suggest opportunities for AI model improvement.
[ "['Chloe R. Bennett' 'Heather Cole-Lewis' 'Stephanie Farquhar'\n 'Naama Haamel' 'Boris Babenko' 'Oran Lang' 'Mat Fleck' 'Ilana Traynis'\n 'Charles Lau' 'Ivor Horn' 'Courtney Lyles']" ]
null
null
2406.18566
null
null
http://arxiv.org/pdf/2406.18566v1
2024-06-01T15:47:13Z
2024-06-01T15:47:13Z
Memorized Images in Diffusion Models share a Subspace that can be Located and Deleted
Large-scale text-to-image diffusion models excel in generating high-quality images from textual inputs, yet concerns arise as research indicates their tendency to memorize and replicate exact training samples, raising copyright infringement and privacy issues. Efforts within the text-to-image community to address memorization explore causes such as data duplication, replicated captions, or trigger tokens, proposing per-prompt inference-time or training-time mitigation strategies. In this paper, we focus on the feed-forward layers and begin by contrasting neuron activations of a set of memorized and non-memorized prompts. Experiments reveal a surprising finding: many different sets of memorized prompts significantly activate a common subspace in the model, demonstrating, for the first time, that memorization in diffusion models lies in a special subspace. Subsequently, we introduce a novel post-hoc method for editing pre-trained models, whereby memorization is mitigated through the straightforward pruning of weights in specialized subspaces, avoiding the need to disrupt the training or inference process as seen in prior research. Finally, we demonstrate the robustness of the pruned model against training data extraction attacks, thereby unveiling new avenues for a practical and one-for-all solution to memorization.
[ "['Ruchika Chavhan' 'Ondrej Bohdal' 'Yongshuo Zong' 'Da Li'\n 'Timothy Hospedales']" ]
null
null
2406.18568
null
null
http://arxiv.org/pdf/2406.18568v1
2024-06-02T13:25:44Z
2024-06-02T13:25:44Z
A Diagnostic Model for Acute Lymphoblastic Leukemia Using Metaheuristics and Deep Learning Methods
Acute lymphoblastic leukemia (ALL) severity is determined by the presence and ratios of blast cells (abnormal white blood cells) in both bone marrow and peripheral blood. Manual diagnosis of this disease is a tedious and time-consuming operation, making it difficult for professionals to accurately examine blast cell characteristics. To address this difficulty, researchers use deep learning and machine learning. In this paper, a ResNet-based feature extractor is utilized to detect ALL, along with a variety of feature selectors and classifiers. To get the best results, a variety of transfer learning models, including the ResNet, VGG, EfficientNet, and DenseNet families, are used as deep feature extractors. Following extraction, different feature selectors are used, including Genetic algorithm, PCA, ANOVA, Random Forest, Univariate, Mutual information, Lasso, XGB, Variance, and Binary ant colony. After feature qualification, a variety of classifiers are used, with MLP outperforming the others. The recommended technique is used to categorize ALL and HEM in the selected dataset, C-NMC 2019. This technique achieved an impressive 90.71% accuracy and 95.76% sensitivity for the relevant classifications, and its metrics on this dataset outperformed others.
[ "['M. Hosseinzadeh' 'P. Khoshaght' 'S. Sadeghi' 'P. Asghari' 'Z. Arabi'\n 'J. Lansky' 'P. Budinsky' 'A. Masoud Rahmani' 'S. W. Lee']" ]
null
null
2406.18572
null
null
http://arxiv.org/pdf/2406.18572v1
2024-06-03T18:08:56Z
2024-06-03T18:08:56Z
GeoReasoner: Geo-localization with Reasoning in Street Views using a Large Vision-Language Model
This work tackles the problem of geo-localization with a new paradigm using a large vision-language model (LVLM) augmented with human inference knowledge. A primary challenge here is the scarcity of data for training the LVLM - existing street-view datasets often contain numerous low-quality images lacking visual clues, and lack any reasoning inference. To address the data-quality issue, we devise a CLIP-based network to quantify the degree of street-view images being locatable, leading to the creation of a new dataset comprising highly locatable street views. To enhance reasoning inference, we integrate external knowledge obtained from real geo-localization games, tapping into valuable human inference capabilities. The data are utilized to train GeoReasoner, which undergoes fine-tuning through dedicated reasoning and location-tuning stages. Qualitative and quantitative evaluations illustrate that GeoReasoner outperforms counterpart LVLMs by more than 25% at country-level and 38% at city-level geo-localization tasks, and surpasses StreetCLIP performance while requiring fewer training resources. The data and code are available at https://github.com/lingli1996/GeoReasoner.
[ "['Ling Li' 'Yu Ye' 'Bingchuan Jiang' 'Wei Zeng']" ]
null
null
2406.18574
null
null
http://arxiv.org/pdf/2406.18574v1
2024-06-04T03:06:41Z
2024-06-04T03:06:41Z
Unsupervised Few-Shot Continual Learning for Remote Sensing Image Scene Classification
A continual learning (CL) model is desired for remote sensing image analysis because of varying camera parameters, spectral ranges, resolutions, etc. There exist some recent initiatives to develop CL techniques in this domain, but they still depend on massive labelled samples which do not fully fit remote sensing applications because ground truths are often obtained via field-based surveys. This paper addresses this problem by proposing an unsupervised flat-wide learning approach (UNISA) for unsupervised few-shot continual learning of remote sensing image scene classification, which does not depend on any labelled samples for its model updates. UNISA is developed from the idea of prototype scattering and positive sampling for learning representations, while the catastrophic forgetting problem is tackled with the flat-wide learning approach combined with a ball generator to address the data scarcity problem. Our numerical study with remote sensing image scene datasets and a hyperspectral dataset confirms the advantages of our solution. Source code for UNISA is shared publicly at https://github.com/anwarmaxsum/UNISA to allow convenient future studies and reproduction of our numerical results.
[ "[\"Muhammad Anwar Ma'sum\" 'Mahardhika Pratama' 'Ramasamy Savitha' 'Lin Liu'\n 'Habibullah' 'Ryszard Kowalczyk']" ]
null
null
2406.18575
null
null
http://arxiv.org/pdf/2406.18575v1
2024-06-04T05:45:11Z
2024-06-04T05:45:11Z
Research on Driver Facial Fatigue Detection Based on Yolov8 Model
In a society where traffic accidents frequently occur, fatigue driving has emerged as a grave issue. Fatigue driving detection technology, especially those based on the YOLOv8 deep learning model, has seen extensive research and application as an effective preventive measure. This paper discusses in depth the methods and technologies utilized in the YOLOv8 model to detect driver fatigue, elaborates on the current research status both domestically and internationally, and systematically introduces the processing methods and algorithm principles for various datasets. This study aims to provide a robust technical solution for preventing and detecting fatigue driving, thereby contributing significantly to reducing traffic accidents and safeguarding lives.
[ "['Chang Zhou' 'Yang Zhao' 'Shaobo Liu' 'Yi Zhao' 'Xingchen Li'\n 'Chiyu Cheng']" ]
null
null
2406.18580
null
null
http://arxiv.org/pdf/2406.18580v1
2024-06-05T14:03:21Z
2024-06-05T14:03:21Z
Shedding Light on Large Generative Networks: Estimating Epistemic Uncertainty in Diffusion Models
Generative diffusion models, notable for their large parameter count (exceeding 100 million) and operation within high-dimensional image spaces, pose significant challenges for traditional uncertainty estimation methods due to computational demands. In this work, we introduce an innovative framework, Diffusion Ensembles for Capturing Uncertainty (DECU), designed for estimating epistemic uncertainty for diffusion models. The DECU framework introduces a novel method that efficiently trains ensembles of conditional diffusion models by incorporating a static set of pre-trained parameters, drastically reducing the computational burden and the number of parameters that require training. Additionally, DECU employs Pairwise-Distance Estimators (PaiDEs) to accurately measure epistemic uncertainty by evaluating the mutual information between model outputs and weights in high-dimensional spaces. The effectiveness of this framework is demonstrated through experiments on the ImageNet dataset, highlighting its capability to capture epistemic uncertainty, specifically in under-sampled image classes.
[ "['Lucas Berry' 'Axel Brando' 'David Meger']" ]
null
null
2406.18583
null
null
http://arxiv.org/pdf/2406.18583v1
2024-06-05T17:53:26Z
2024-06-05T17:53:26Z
Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT
Lumina-T2X is a nascent family of Flow-based Large Diffusion Transformers that establishes a unified framework for transforming noise into various modalities, such as images and videos, conditioned on text instructions. Despite its promising capabilities, Lumina-T2X still encounters challenges including training instability, slow inference, and extrapolation artifacts. In this paper, we present Lumina-Next, an improved version of Lumina-T2X, showcasing stronger generation performance with increased training and inference efficiency. We begin with a comprehensive analysis of the Flag-DiT architecture and identify several suboptimal components, which we address by introducing the Next-DiT architecture with 3D RoPE and sandwich normalizations. To enable better resolution extrapolation, we thoroughly compare different context extrapolation methods applied to text-to-image generation with 3D RoPE, and propose Frequency- and Time-Aware Scaled RoPE tailored for diffusion transformers. Additionally, we introduce a sigmoid time discretization schedule to reduce sampling steps in solving the Flow ODE and the Context Drop method to merge redundant visual tokens for faster network evaluation, effectively boosting the overall sampling speed. Thanks to these improvements, Lumina-Next not only improves the quality and efficiency of basic text-to-image generation but also demonstrates superior resolution extrapolation capabilities and multilingual generation using decoder-based LLMs as the text encoder, all in a zero-shot manner. To further validate Lumina-Next as a versatile generative framework, we instantiate it on diverse tasks including visual recognition, multi-view, audio, music, and point cloud generation, showcasing strong performance across these domains. By releasing all codes and model weights, we aim to advance the development of next-generation generative AI capable of universal modeling.
[ "['Le Zhuo' 'Ruoyi Du' 'Han Xiao' 'Yangguang Li' 'Dongyang Liu'\n 'Rongjie Huang' 'Wenze Liu' 'Lirui Zhao' 'Fu-Yun Wang' 'Zhanyu Ma'\n 'Xu Luo' 'Zehan Wang' 'Kaipeng Zhang' 'Xiangyang Zhu' 'Si Liu'\n 'Xiangyu Yue' 'Dingning Liu' 'Wanli Ouyang' 'Ziwei Liu' 'Yu Qiao'\n 'Hongsheng Li' 'Peng Gao']" ]
null
null
2406.18588
null
null
http://arxiv.org/pdf/2406.18588v1
2024-06-07T07:32:41Z
2024-06-07T07:32:41Z
Varying Manifolds in Diffusion: From Time-varying Geometries to Visual Saliency
Deep generative models learn the data distribution, which is concentrated on a low-dimensional manifold. The geometric analysis of distribution transformation provides a better understanding of data structure and enables a variety of applications. In this paper, we study the geometric properties of the diffusion model, whose forward diffusion process and reverse generation process construct a series of distributions on manifolds which vary over time. Our key contribution is the introduction of generation rate, which corresponds to the local deformation of manifold over time around an image component. We show that the generation rate is highly correlated with intuitive visual properties, such as visual saliency, of the image component. Further, we propose an efficient and differentiable scheme to estimate the generation rate for a given image component over time, giving rise to a generation curve. The differentiable nature of our scheme allows us to control the shape of the generation curve via optimization. Using different loss functions, our generation curve matching algorithm provides a unified framework for a range of image manipulation tasks, including semantic transfer, object removal, saliency manipulation, image blending, etc. We conduct comprehensive analytical evaluations to support our findings and evaluate our framework on various manipulation tasks. The results show that our method consistently leads to better manipulation results, compared to recent baselines.
[ "['Junhao Chen' 'Manyi Li' 'Zherong Pan' 'Xifeng Gao' 'Changhe Tu']" ]
null
null
2406.18589
null
null
http://arxiv.org/pdf/2406.18589v1
2024-06-07T08:37:57Z
2024-06-07T08:37:57Z
Text-Guided Alternative Image Clustering
Traditional image clustering techniques only find a single grouping within visual data. In particular, they do not provide a possibility to explicitly define multiple types of clustering. This work explores the potential of large vision-language models to facilitate alternative image clustering. We propose Text-Guided Alternative Image Consensus Clustering (TGAICC), a novel approach that leverages user-specified interests via prompts to guide the discovery of diverse clusterings. To achieve this, it generates a clustering for each prompt, groups them using hierarchical clustering, and then aggregates them using consensus clustering. TGAICC outperforms image- and text-based baselines on four alternative image clustering benchmark datasets. Furthermore, using count-based word statistics, we are able to obtain text-based explanations of the alternative clusterings. In conclusion, our research illustrates how contemporary large vision-language models can transform explanatory data analysis, enabling the generation of insightful, customizable, and diverse image clusterings.
[ "['Andreas Stephan' 'Lukas Miklautz' 'Collin Leiber'\n 'Pedro Henrique Luz de Araujo' 'Dominik Répás' 'Claudia Plant'\n 'Benjamin Roth']" ]
null
null
2406.18591
null
null
http://arxiv.org/pdf/2406.18591v1
2024-06-07T16:28:06Z
2024-06-07T16:28:06Z
Composition Vision-Language Understanding via Segment and Depth Anything Model
We introduce a pioneering unified library that leverages the Depth Anything and Segment Anything models to augment neural comprehension in language-vision model zero-shot understanding. This library synergizes the capabilities of the Depth Anything Model (DAM), Segment Anything Model (SAM), and GPT-4V, enhancing multimodal tasks such as vision-question-answering (VQA) and composition reasoning. Through the fusion of segmentation and depth analysis at the symbolic instance level, our library provides nuanced inputs for language models, significantly advancing image interpretation. Validated across a spectrum of in-the-wild real-world images, our findings showcase progress in vision-language models through neural-symbolic integration. This novel approach melds visual and language analysis in an unprecedented manner. Overall, our library opens new directions for future research aimed at decoding the complexities of the real world through advanced multimodal technologies, and our code is available at https://github.com/AnthonyHuo/SAM-DAM-for-Compositional-Reasoning.
[ "['Mingxiao Huo' 'Pengliang Ji' 'Haotian Lin' 'Junchen Liu' 'Yixiao Wang'\n 'Yijun Chen']" ]
null
null
2406.18595
null
null
http://arxiv.org/pdf/2406.18595v1
2024-06-09T20:52:47Z
2024-06-09T20:52:47Z
Realtime Dynamic Gaze Target Tracking and Depth-Level Estimation
The integration of Transparent Displays (TD) in various applications, such as Heads-Up Displays (HUDs) in vehicles, is a burgeoning field, poised to revolutionize user experiences. However, this innovation brings forth significant challenges in realtime human-device interaction, particularly in accurately identifying and tracking a user's gaze on dynamically changing TDs. In this paper, we present a two-fold robust and efficient systematic solution for realtime gaze monitoring, comprised of: (1) a tree-based algorithm for identifying and dynamically tracking gaze targets (i.e., moving, size-changing, and overlapping 2D content) projected on a transparent display, in realtime; (2) a multi-stream self-attention architecture to estimate the depth-level of human gaze from eye tracking data, to account for the display's transparency and preventing undesired interactions with the TD. We collected a real-world eye-tracking dataset to train and test our gaze monitoring system. We present extensive results and ablation studies, including inference experiments on System on Chip (SoC) evaluation boards, demonstrating our model's scalability, precision, and realtime feasibility in both static and dynamic contexts. Our solution marks a significant stride in enhancing next-generation user-device interaction and experience, setting a new benchmark for algorithmic gaze monitoring technology in dynamic transparent displays.
[ "['Esmaeil Seraj' 'Harsh Bhate' 'Walter Talamonti']" ]
null
null
2406.18602
null
null
http://arxiv.org/pdf/2406.18602v1
2024-06-19T04:50:16Z
2024-06-19T04:50:16Z
Multi-level Phenotypic Models of Cardiovascular Disease and Obstructive Sleep Apnea Comorbidities: A Longitudinal Wisconsin Sleep Cohort Study
Cardiovascular diseases (CVDs) are notably prevalent among patients with obstructive sleep apnea (OSA), posing unique challenges in predicting CVD progression due to the intricate interactions of comorbidities. Traditional models typically lack the necessary dynamic and longitudinal scope to accurately forecast CVD trajectories in OSA patients. This study introduces a novel multi-level phenotypic model to analyze the progression and interplay of these conditions over time, utilizing data from the Wisconsin Sleep Cohort, which includes 1,123 participants followed for decades. Our methodology comprises three advanced steps: (1) Conducting feature importance analysis through tree-based models to underscore critical predictive variables like total cholesterol, low-density lipoprotein (LDL), and diabetes. (2) Developing a logistic mixed-effects model (LGMM) to track longitudinal transitions and pinpoint significant factors, which displayed a diagnostic accuracy of 0.9556. (3) Implementing t-distributed Stochastic Neighbor Embedding (t-SNE) alongside Gaussian Mixture Models (GMM) to segment patient data into distinct phenotypic clusters that reflect varied risk profiles and disease progression pathways. This phenotypic clustering revealed two main groups, with one showing a markedly increased risk of major adverse cardiovascular events (MACEs), underscored by the significant predictive role of nocturnal hypoxia and sympathetic nervous system activity from sleep data. Analysis of transitions and trajectories with t-SNE and GMM highlighted different progression rates within the cohort, with one cluster progressing more slowly towards severe CVD states than the other. This study offers a comprehensive understanding of the dynamic relationship between CVD and OSA, providing valuable tools for predicting disease onset and tailoring treatment approaches.
[ "['Duy Nguyen' 'Ca Hoang' 'Phat K. Huynh' 'Tien Truong' 'Dang Nguyen'\n 'Abhay Sharma' 'Trung Q. Le']" ]
null
null
2406.18603
null
null
http://arxiv.org/pdf/2406.18603v1
2024-06-19T10:07:19Z
2024-06-19T10:07:19Z
Confidence interval estimation of mixed oil length with conditional diffusion model
Accurately estimating the mixed oil length plays a big role in the economic benefit of oil pipeline networks. While various proposed methods have tried to predict the mixed oil length, they often exhibit an extremely high probability (around 50%) of underestimating it. This is attributed to their failure to consider the statistical variability inherent in the estimated length of mixed oil. To address such issues, we propose to use a conditional diffusion model to learn the distribution of the mixed oil length given pipeline features. Subsequently, we design a confidence interval estimation for the length of the mixed oil based on the pseudo-samples generated by the learned diffusion model. To our knowledge, we are the first to present an estimation scheme for the confidence interval of the mixed-oil length that considers statistical variability, thereby reducing the possibility of underestimating it. When employing the upper bound of the interval as a reference for excluding the mixed oil, the probability of underestimation can be as low as 5%, a substantial reduction compared to 50%. Furthermore, utilizing the mean of the generated pseudo-samples as the estimator for the mixed oil length enhances prediction accuracy by at least 10% compared to commonly used methods.
[ "['Yanfeng Yang' 'Lihong Zhang' 'Ziqi Chen' 'Miaomiao Yu' 'Lei Chen']" ]
null
null
2406.18612
null
null
http://arxiv.org/pdf/2406.18612v1
2024-06-25T13:22:13Z
2024-06-25T13:22:13Z
Optimal spanning tree reconstruction in symbolic regression
This paper investigates the problem of regression model generation. A model is a superposition of primitive functions. The model structure is described by a weighted colored graph. Each graph vertex corresponds to some primitive function. An edge corresponds to a superposition of two functions, and the weight of an edge equals the probability of that superposition. To generate an optimal model one has to reconstruct its structure from its graph adjacency matrix. The proposed algorithm reconstructs the minimum spanning tree from the weighted colored graph. This paper presents a novel solution based on the prize-collecting Steiner tree algorithm. This algorithm is compared with its alternatives.
[ "['Radoslav G. Neychev' 'Innokentiy A. Shibaev' 'Vadim V. Strijov']" ]
null
null
2406.18613
null
null
http://arxiv.org/pdf/2406.18613v1
2024-06-25T17:07:01Z
2024-06-25T17:07:01Z
Inducing Riesz and orthonormal bases in $L^2$ via composition operators
We investigate perturbations of orthonormal bases of $L^2$ via a composition operator $C_h$ induced by a mapping $h$. We provide a comprehensive characterization of the mapping $h$ required for the perturbed sequence to form an orthonormal or Riesz basis. Restricting our analysis to differentiable mappings, we reveal that all Riesz bases of the given form are induced by bi-Lipschitz mappings. In addition, we discuss implications of these results for approximation theory, highlighting the potential of using bijective neural networks to construct complete sequences with favorable approximation properties.
[ "['Yahya Saleh' 'Armin Iske']" ]
null
null
2406.18623
null
null
http://arxiv.org/pdf/2406.18623v1
2024-06-26T11:39:22Z
2024-06-26T11:39:22Z
Unbiased least squares regression via averaged stochastic gradient descent
We consider an on-line least squares regression problem with optimal solution $\theta^*$ and Hessian matrix $H$, and study a time-average stochastic gradient descent estimator of $\theta^*$. For $k \ge 2$, we provide an unbiased estimator of $\theta^*$ that is a modification of the time-average estimator, runs with an expected number of time-steps of order $k$, with $O(1/k)$ expected excess risk. The constant behind the $O$ notation depends on parameters of the regression and is a poly-logarithmic function of the smallest eigenvalue of $H$. We provide both a biased and unbiased estimator of the expected excess risk of the time-average estimator and of its unbiased counterpart, without requiring knowledge of either $H$ or $\theta^*$. We describe an "average-start" version of our estimators with similar properties. Our approach is based on randomized multilevel Monte Carlo. Our numerical experiments confirm our theoretical findings.
[ "['Nabil Kahalé']" ]
null
null
2406.18624
null
null
http://arxiv.org/pdf/2406.18624v2
2024-07-01T12:07:16Z
2024-06-26T12:50:55Z
Robust Low-Cost Drone Detection and Classification in Low SNR Environments
The proliferation of drones, or unmanned aerial vehicles (UAVs), has raised significant safety concerns due to their potential misuse in activities such as espionage, smuggling, and infrastructure disruption. This paper addresses the critical need for effective drone detection and classification systems that operate independently of UAV cooperation. We evaluate various convolutional neural networks (CNNs) for their ability to detect and classify drones using spectrogram data derived from consecutive Fourier transforms of signal components. The focus is on model robustness in low signal-to-noise ratio (SNR) environments, which is critical for real-world applications. A comprehensive dataset is provided to support future model development. In addition, we demonstrate a low-cost drone detection system using a standard computer, software-defined radio (SDR) and antenna, validated through real-world field testing. On our development dataset, all models consistently achieved an average balanced classification accuracy of >= 85% at SNR > -12dB. In the field test, these models achieved an average balanced accuracy of > 80%, depending on transmitter distance and antenna direction. Our contributions include: a publicly available dataset for model development, a comparative analysis of CNNs for drone detection under low SNR conditions, and the deployment and field evaluation of a practical, low-cost detection system.
[ "['Stefan Glüge' 'Matthias Nyfeler' 'Ahmad Aghaebrahimian'\n 'Nicola Ramagnano' 'Christof Schüpbach']" ]
null
null
2406.18627
null
null
http://arxiv.org/pdf/2406.18627v1
2024-06-26T14:47:28Z
2024-06-26T14:47:28Z
AssertionBench: A Benchmark to Evaluate Large-Language Models for Assertion Generation
Assertions have been the de facto collateral for simulation-based and formal verification of hardware designs for over a decade. The quality of hardware verification, ie, detection and diagnosis of corner-case design bugs, is critically dependent on the quality of the assertions. There has been a considerable amount of research leveraging a blend of data-driven statistical analysis and static analysis to generate high-quality assertions from hardware design source code and design execution trace data. Despite such concerted effort, all prior research struggles to scale to industrial-scale large designs, generates too many low-quality assertions, often fails to capture subtle and non-trivial design functionality, and does not produce any easy-to-comprehend explanations of the generated assertions to understand assertions' suitability to different downstream validation tasks. Recently, with the advent of Large-Language Models (LLMs), there has been a widespread effort to leverage prompt engineering to generate assertions. However, there is little effort to quantitatively establish the effectiveness and suitability of various LLMs for assertion generation. In this paper, we present AssertionBench, a novel benchmark to evaluate LLMs' effectiveness for assertion generation quantitatively. AssertioBench contains 100 curated Verilog hardware designs from OpenCores and formally verified assertions for each design generated from GoldMine and HARM. We use AssertionBench to compare state-of-the-art LLMs to assess their effectiveness in inferring functionally correct assertions for hardware designs. Our experiments demonstrate how LLMs perform relative to each other, the benefits of using more in-context exemplars in generating a higher fraction of functionally correct assertions, and the significant room for improvement for LLM-based assertion generators.
[ "['Vaishnavi Pulavarthi' 'Deeksha Nandal' 'Soham Dan' 'Debjit Pal']" ]
null
null
2406.18629
null
null
http://arxiv.org/pdf/2406.18629v1
2024-06-26T17:43:06Z
2024-06-26T17:43:06Z
Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs
Mathematical reasoning presents a significant challenge for Large Language Models (LLMs) due to the extensive and precise chain of reasoning required for accuracy. Ensuring the correctness of each reasoning step is critical. To address this, we aim to enhance the robustness and factuality of LLMs by learning from human feedback. However, Direct Preference Optimization (DPO) has shown limited benefits for long-chain mathematical reasoning, as models employing DPO struggle to identify detailed errors in incorrect answers. This limitation stems from a lack of fine-grained process supervision. We propose a simple, effective, and data-efficient method called Step-DPO, which treats individual reasoning steps as units for preference optimization rather than evaluating answers holistically. Additionally, we have developed a data construction pipeline for Step-DPO, enabling the creation of a high-quality dataset containing 10K step-wise preference pairs. We also observe that in DPO, self-generated data is more effective than data generated by humans or GPT-4, due to the latter's out-of-distribution nature. Our findings demonstrate that as few as 10K preference data pairs and fewer than 500 Step-DPO training steps can yield a nearly 3% gain in accuracy on MATH for models with over 70B parameters. Notably, Step-DPO, when applied to Qwen2-72B-Instruct, achieves scores of 70.8% and 94.0% on the test sets of MATH and GSM8K, respectively, surpassing a series of closed-source models, including GPT-4-1106, Claude-3-Opus, and Gemini-1.5-Pro. Our code, data, and models are available at https://github.com/dvlab-research/Step-DPO.
[ "['Xin Lai' 'Zhuotao Tian' 'Yukang Chen' 'Senqiao Yang' 'Xiangru Peng'\n 'Jiaya Jia']" ]
null
null
2406.18630
null
null
http://arxiv.org/pdf/2406.18630v1
2024-06-26T17:59:54Z
2024-06-26T17:59:54Z
Improving Hyperparameter Optimization with Checkpointed Model Weights
When training deep learning models, the performance depends largely on the selected hyperparameters. However, hyperparameter optimization (HPO) is often one of the most expensive parts of model design. Classical HPO methods treat this as a black-box optimization problem. However, gray-box HPO methods, which incorporate more information about the setup, have emerged as a promising direction for more efficient optimization, for example by using intermediate loss evaluations to terminate poor selections. In this work, we propose an HPO method for neural networks using logged checkpoints of the trained weights to guide future hyperparameter selections. Our method, Forecasting Model Search (FMS), embeds weights into a Gaussian process deep kernel surrogate model, using a permutation-invariant graph metanetwork to be data-efficient with the logged network weights. To facilitate reproducibility and further research, we open-source our code at https://github.com/NVlabs/forecasting-model-search.
[ "['Nikhil Mehta' 'Jonathan Lorraine' 'Steve Masson'\n 'Ramanathan Arunachalam' 'Zaid Pervaiz Bhat' 'James Lucas'\n 'Arun George Zachariah']" ]
null
null
2406.18651
null
null
http://arxiv.org/pdf/2406.18651v1
2024-06-26T18:00:03Z
2024-06-26T18:00:03Z
Contraction of Private Quantum Channels and Private Quantum Hypothesis Testing
A quantum generalized divergence by definition satisfies the data-processing inequality; as such, the relative decrease in such a divergence under the action of a quantum channel is at most one. This relative decrease is formally known as the contraction coefficient of the channel and the divergence. Interestingly, there exist combinations of channels and divergences for which the contraction coefficient is strictly less than one. Furthermore, understanding the contraction coefficient is fundamental for the study of statistical tasks under privacy constraints. To this end, here we establish upper bounds on contraction coefficients for the hockey-stick divergence under privacy constraints, where privacy is quantified with respect to the quantum local differential privacy (QLDP) framework, and we fully characterize the contraction coefficient for the trace distance under privacy constraints. With the machinery developed, we also determine an upper bound on the contraction of both the Bures distance and quantum relative entropy relative to the normalized trace distance, under QLDP constraints. Next, we apply our findings to establish bounds on the sample complexity of quantum hypothesis testing under privacy constraints. Furthermore, we study various scenarios in which the sample complexity bounds are tight, while providing order-optimal quantum channels that achieve those bounds. Lastly, we show how private quantum channels provide fairness and Holevo information stability in quantum learning settings.
[ "['Theshani Nuradha' 'Mark M. Wilde']" ]
null
null
2406.18664
null
null
http://arxiv.org/pdf/2406.18664v3
2024-07-11T07:45:04Z
2024-06-26T18:09:46Z
Evaluating Copyright Takedown Methods for Language Models
Language models (LMs) derive their capabilities from extensive training on diverse data, including potentially copyrighted material. These models can memorize and generate content similar to their training data, posing potential concerns. Therefore, model creators are motivated to develop mitigation methods that prevent generating protected content. We term this procedure copyright takedowns for LMs, noting the conceptual similarity to (but legal distinction from) the DMCA takedown. This paper introduces the first evaluation of the feasibility and side effects of copyright takedowns for LMs. We propose CoTaEval, an evaluation framework to assess the effectiveness of copyright takedown methods, the impact on the model's ability to retain uncopyrightable factual knowledge from the training data whose recitation is embargoed, and how well the model maintains its general utility and efficiency. We examine several strategies, including adding system prompts, decoding-time filtering interventions, and unlearning approaches. Our findings indicate that no tested method excels across all metrics, showing significant room for research in this unique problem setting and indicating potential unresolved challenges for live policy proposals.
[ "['Boyi Wei' 'Weijia Shi' 'Yangsibo Huang' 'Noah A. Smith' 'Chiyuan Zhang'\n 'Luke Zettlemoyer' 'Kai Li' 'Peter Henderson']" ]
null
null
2406.18665
null
null
http://arxiv.org/pdf/2406.18665v2
2024-07-01T05:38:08Z
2024-06-26T18:10:22Z
RouteLLM: Learning to Route LLMs with Preference Data
Large language models (LLMs) exhibit impressive capabilities across a wide range of tasks, yet the choice of which model to use often involves a trade-off between performance and cost. More powerful models, though effective, come with higher expenses, while less capable models are more cost-effective. To address this dilemma, we propose several efficient router models that dynamically select between a stronger and a weaker LLM during inference, aiming to optimize the balance between cost and response quality. We develop a training framework for these routers leveraging human preference data and data augmentation techniques to enhance performance. Our evaluation on widely-recognized benchmarks shows that our approach significantly reduces costs, by over a factor of two in certain cases, without compromising the quality of responses. Interestingly, our router models also demonstrate significant transfer learning capabilities, maintaining their performance even when the strong and weak models are changed at test time. This highlights the potential of these routers to provide a cost-effective yet high-performance solution for deploying LLMs.
[ "['Isaac Ong' 'Amjad Almahairi' 'Vincent Wu' 'Wei-Lin Chiang' 'Tianhao Wu'\n 'Joseph E. Gonzalez' 'M Waleed Kadous' 'Ion Stoica']" ]
null
null
2406.18671
null
null
http://arxiv.org/pdf/2406.18671v1
2024-06-26T18:14:36Z
2024-06-26T18:14:36Z
A Zero Auxiliary Knowledge Membership Inference Attack on Aggregate Location Data
Location data is frequently collected from populations and shared in aggregate form to guide policy and decision making. However, the prevalence of aggregated data also raises the privacy concern of membership inference attacks (MIAs). MIAs infer whether an individual's data contributed to the aggregate release. Although effective MIAs have been developed for aggregate location data, these require access to an extensive auxiliary dataset of individual traces over the same locations, which are collected from a similar population. This assumption is often impractical given common privacy practices surrounding location data. To measure the risk of an MIA performed by a realistic adversary, we develop the first Zero Auxiliary Knowledge (ZK) MIA on aggregate location data, which eliminates the need for an auxiliary dataset of real individual traces. Instead, we develop a novel approach in which suitable synthetic traces are generated from the released aggregate. We also develop methods to correct for bias and noise, to show that our synthetic-based attack is still applicable when privacy mechanisms are applied prior to release. Using two large-scale location datasets, we demonstrate that our ZK MIA matches the state-of-the-art Knock-Knock (KK) MIA across a wide range of settings, including popular implementations of differential privacy (DP) and suppression of small counts. Furthermore, we show that ZK MIA remains highly effective even when the adversary only knows a small fraction (10%) of their target's location history. This demonstrates that effective MIAs can be performed by realistic adversaries, highlighting the need for strong DP protection.
[ "['Vincent Guan' 'Florent Guépin' 'Ana-Maria Cretu'\n 'Yves-Alexandre de Montjoye']" ]
null
null
2406.18672
null
null
http://arxiv.org/pdf/2406.18672v1
2024-06-26T18:19:10Z
2024-06-26T18:19:10Z
A simple and improved algorithm for noisy, convex, zeroth-order optimisation
In this paper, we study the problem of noisy, convex, zeroth-order optimisation of a function $f$ over a bounded convex set $\bar{\mathcal X}\subset \mathbb{R}^d$. Given a budget $n$ of noisy queries to the function $f$ that can be allocated sequentially and adaptively, our aim is to construct an algorithm that returns a point $\hat x\in \bar{\mathcal X}$ such that $f(\hat x)$ is as small as possible. We provide a conceptually simple method inspired by the textbook center of gravity method, but adapted to the noisy and zeroth-order setting. We prove that for this method $f(\hat x) - \min_{x\in \bar{\mathcal X}} f(x)$ is of smaller order than $d^2/\sqrt{n}$ up to poly-logarithmic terms. We slightly improve upon the existing literature, where, to the best of our knowledge, the best known rate, from [Lattimore, 2024], is of order $d^{2.5}/\sqrt{n}$, albeit for a more challenging problem. Our main contribution is however conceptual, as we believe that our algorithm and its analysis bring novel ideas and are significantly simpler than existing approaches.
[ "['Alexandra Carpentier']" ]
null
null
2406.18676
null
null
http://arxiv.org/pdf/2406.18676v1
2024-06-26T18:26:53Z
2024-06-26T18:26:53Z
Understand What LLM Needs: Dual Preference Alignment for Retrieval-Augmented Generation
Retrieval-augmented generation (RAG) has demonstrated effectiveness in mitigating the hallucination problem of large language models (LLMs). However, the difficulty of aligning the retriever with the diverse knowledge preferences of LLMs poses an inevitable challenge in developing a reliable RAG system. To address this issue, we propose DPA-RAG, a universal framework designed to align diverse knowledge preferences within RAG systems. Specifically, we initially introduce a preference knowledge construction pipeline and incorporate five novel query augmentation strategies to alleviate preference data scarcity. Based on preference data, DPA-RAG accomplishes both external and internal preference alignment: 1) It jointly integrates pair-wise, point-wise, and contrastive preference alignment abilities into the reranker, achieving external preference alignment among RAG components. 2) It further introduces a pre-aligned stage before vanilla Supervised Fine-tuning (SFT), enabling LLMs to implicitly capture knowledge aligned with their reasoning preferences, achieving LLMs' internal alignment. Experimental results across four knowledge-intensive QA datasets demonstrate that DPA-RAG outperforms all baselines and seamlessly integrates both black-box and open-sourced LLM readers. Further qualitative analysis and discussions also provide empirical guidance for achieving reliable RAG systems. Our code is publicly available at https://github.com/dongguanting/DPA-RAG.
[ "['Guanting Dong' 'Yutao Zhu' 'Chenghao Zhang' 'Zechen Wang' 'Zhicheng Dou'\n 'Ji-Rong Wen']" ]
null
null
2406.18678
null
null
http://arxiv.org/pdf/2406.18678v1
2024-06-26T18:29:12Z
2024-06-26T18:29:12Z
Few-shot Personalization of LLMs with Mis-aligned Responses
As the diversity of users increases, the capability of providing personalized responses by large language models (LLMs) has become increasingly important. Existing approaches have only limited successes in LLM personalization, due to the absence of personalized learning or the reliance on shared personal data. This paper proposes a new approach for a few-shot personalization of LLMs with their mis-aligned responses (Fermi). Our key idea is to learn a set of personalized prompts for each user by progressively improving the prompts using LLMs, based on user profile (e.g., demographic information) and a few examples of previous opinions. During an iterative process of prompt improvement, we incorporate the contexts of mis-aligned responses by LLMs, which are especially crucial for the effective personalization of LLMs. In addition, we develop an effective inference method to further leverage the context of the test query and the personalized prompts. Our experimental results demonstrate that Fermi significantly improves performance across various benchmarks, compared to the best-performing baselines.
[ "['Jaehyung Kim' 'Yiming Yang']" ]
null
null
2406.18679
null
null
http://arxiv.org/pdf/2406.18679v1
2024-06-26T18:32:16Z
2024-06-26T18:32:16Z
Speakers Unembedded: Embedding-free Approach to Long-form Neural Diarization
End-to-end neural diarization (EEND) models offer significant improvements over traditional embedding-based Speaker Diarization (SD) approaches but fall short in generalizing to long-form audio with a large number of speakers. The EEND-vector-clustering method mitigates this by combining local EEND with global clustering of speaker embeddings from local windows, but this requires an additional speaker embedding framework alongside the EEND module. In this paper, we propose a novel framework applying EEND both locally and globally for long-form audio without separate speaker embeddings. This approach achieves significant relative DER reductions of 13% and 10% over the conventional 1-pass EEND on the Callhome American English and RT03-CTS datasets respectively, and marginal improvements over EEND-vector-clustering without the need for additional speaker embeddings. Furthermore, we discuss the computational complexity of our proposed framework and explore strategies for reducing processing times.
[ "['Xiang Li' 'Vivek Govindan' 'Rohit Paturi' 'Sundararajan Srinivasan']" ]
null
null
2406.18682
null
null
http://arxiv.org/pdf/2406.18682v2
2024-07-08T14:26:16Z
2024-06-26T18:39:08Z
The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm
A key concern with the concept of "alignment" is the implicit question of "alignment to what?". AI systems are increasingly used across the world, yet safety alignment is often focused on homogeneous monolingual settings. Additionally, preference training and safety measures often overfit to harms common in Western-centric datasets. Here, we explore the viability of different alignment approaches when balancing dual objectives: addressing and optimizing for a non-homogeneous set of languages and cultural preferences while minimizing both global and local harms. We collect the first set of human annotated red-teaming prompts in different languages distinguishing between global and local harm, which serve as a laboratory for understanding the reliability of alignment techniques when faced with preference distributions that are non-stationary across geographies and languages. While this setting is seldom covered by the literature to date, which primarily centers on English harm mitigation, it captures real-world interactions with AI systems around the world. We establish a new precedent for state-of-the-art alignment techniques across 6 languages with minimal degradation in general performance. Our work provides important insights into cross-lingual transfer and novel optimization approaches to safeguard AI systems designed to serve global populations.
[ "['Aakanksha' 'Arash Ahmadian' 'Beyza Ermis' 'Seraphina Goldfarb-Tarrant'\n 'Julia Kreutzer' 'Marzieh Fadaee' 'Sara Hooker']" ]
null
null
2406.18690
null
null
http://arxiv.org/pdf/2406.18690v1
2024-06-26T18:48:50Z
2024-06-26T18:48:50Z
Petal-X: Human-Centered Visual Explanations to Improve Cardiovascular Risk Communication
Cardiovascular diseases (CVDs), the leading cause of death worldwide, can be prevented in most cases through behavioral interventions. Therefore, effective communication of CVD risk and projected risk reduction by risk factor modification plays a crucial role in reducing CVD risk at the individual level. However, despite interest in refining risk estimation with improved prediction models such as SCORE2, the guidelines for presenting these risk estimations in clinical practice remained essentially unchanged in the last few years, with graphical score charts (GSCs) continuing to be one of the prevalent systems. This work describes the design and implementation of Petal-X, a novel tool to support clinician-patient shared decision-making by explaining the CVD risk contributions of different factors and facilitating what-if analysis. Petal-X relies on a novel visualization, Petal Product Plots, and a tailor-made global surrogate model of SCORE2, whose fidelity is comparable to that of the GSCs used in clinical practice. We evaluated Petal-X compared to GSCs in a controlled experiment with 88 healthcare students, all but one with experience with chronic patients. The results show that Petal-X outperforms GSC in critical tasks, such as comparing the contribution to the patient's 10-year CVD risk of each modifiable risk factor, without a significant loss of perceived transparency, trust, or intent to use. Our study provides an innovative approach to the visualization and explanation of risk in clinical practice that, due to its model-agnostic nature, could continue to support next-generation artificial intelligence risk assessment models.
[ "['Diego Rojo' 'Houda Lamqaddam' 'Lucija Gosak' 'Katrien Verbert']" ]
null
null
2406.18695
null
null
http://arxiv.org/pdf/2406.18695v1
2024-06-26T18:57:32Z
2024-06-26T18:57:32Z
Learning to Correct for QA Reasoning with Black-box LLMs
An open challenge in recent machine learning is how to improve the reasoning capability of large language models (LLMs) in a black-box setting, i.e., without access to detailed information such as output token probabilities. Existing approaches either rely on accessibility (which is often unrealistic) or involve significantly increased train- and inference-time costs. This paper addresses these limitations by proposing a novel approach, namely CoBB (Correct for improving QA reasoning of Black-Box LLMs). It uses a trained adaptation model to perform a seq2seq mapping from the often-imperfect reasonings of the original black-box LLM to the correct or improved reasonings. Specifically, the adaptation model is initialized with a relatively small open-source LLM and adapted over a collection of sub-sampled training pairs. To select representative pairs of correct and incorrect reasonings, we formulate the dataset construction as an optimization problem that minimizes the statistical divergence between the sampled subset and the entire collection, and solve it via a genetic algorithm. We then train the adaptation model over the sampled pairs by contrasting the likelihoods of correct and incorrect reasonings. Our experimental results demonstrate that CoBB significantly improves reasoning accuracy across various QA benchmarks, compared to the best-performing adaptation baselines.
[ "['Jaehyung Kim' 'Dongyoung Kim' 'Yiming Yang']" ]
null
null
2406.18701
null
null
http://arxiv.org/pdf/2406.18701v1
2024-06-26T19:10:34Z
2024-06-26T19:10:34Z
Fast Optimizer Benchmark
In this paper, we present the Fast Optimizer Benchmark (FOB), a tool designed for evaluating deep learning optimizers during their development. The benchmark supports tasks from multiple domains such as computer vision, natural language processing, and graph learning. The focus is on convenient usage, featuring human-readable YAML configurations, SLURM integration, and plotting utilities. FOB can be used together with existing hyperparameter optimization (HPO) tools as it handles training and resuming of runs. The modular design enables integration into custom pipelines, using it simply as a collection of tasks. We showcase an optimizer comparison as a usage example of our tool. FOB can be found on GitHub: https://github.com/automl/FOB.
[ "['Simon Blauth' 'Tobias Bürger' 'Zacharias Häringer' 'Jörg Franke'\n 'Frank Hutter']" ]
null
null
2406.18708
null
null
http://arxiv.org/pdf/2406.18708v1
2024-06-26T19:18:28Z
2024-06-26T19:18:28Z
Learn it or Leave it: Module Composition and Pruning for Continual Learning
In real-world environments, continual learning is essential for machine learning models, as they need to acquire new knowledge incrementally without forgetting what they have already learned. While pretrained language models have shown impressive capabilities on various static tasks, applying them to continual learning poses significant challenges, including avoiding catastrophic forgetting, facilitating knowledge transfer, and maintaining parameter efficiency. In this paper, we introduce MoCL-P, a novel lightweight continual learning method that addresses these challenges simultaneously. Unlike traditional approaches that continuously expand parameters for newly arriving tasks, MoCL-P integrates task representation-guided module composition with adaptive pruning, effectively balancing knowledge integration and computational overhead. Our evaluation across three continual learning benchmarks with up to 176 tasks shows that MoCL-P achieves state-of-the-art performance and improves parameter efficiency by up to three times, demonstrating its potential for practical applications where resource requirements are constrained.
[ "['Mingyang Wang' 'Heike Adel' 'Lukas Lange' 'Jannik Strötgen'\n 'Hinrich Schütze']" ]
null
null
2406.18725
null
null
http://arxiv.org/pdf/2406.18725v1
2024-06-26T19:48:48Z
2024-06-26T19:48:48Z
Jailbreaking LLMs with Arabic Transliteration and Arabizi
This study identifies the potential vulnerabilities of Large Language Models (LLMs) to 'jailbreak' attacks, specifically focusing on the Arabic language and its various forms. While most research has concentrated on English-based prompt manipulation, our investigation broadens the scope to investigate the Arabic language. We initially tested the AdvBench benchmark in Standardized Arabic, finding that even with prompt manipulation techniques like prefix injection, it was insufficient to provoke LLMs into generating unsafe content. However, when using Arabic transliteration and chatspeak (or arabizi), we found that unsafe content could be produced on platforms like OpenAI GPT-4 and Anthropic Claude 3 Sonnet. Our findings suggest that using Arabic and its various forms could expose information that might remain hidden, potentially increasing the risk of jailbreak attacks. We hypothesize that this exposure could be due to the model's learned connection to specific words, highlighting the need for more comprehensive safety training across all language forms.
[ "['Mansour Al Ghanim' 'Saleh Almohaimeed' 'Mengxin Zheng' 'Yan Solihin'\n 'Qian Lou']" ]
null
null
2406.18726
null
null
http://arxiv.org/pdf/2406.18726v1
2024-06-26T19:51:53Z
2024-06-26T19:51:53Z
Data-driven identification of port-Hamiltonian DAE systems by Gaussian processes
Port-Hamiltonian systems (pHS) allow for a structure-preserving modeling of dynamical systems. Coupling pHS via linear relations between input and output defines an overall pHS, which is structure preserving. However, in multiphysics applications, some subsystems do not allow for a physical pHS description, as (a) this is not available or (b) too expensive. Here, data-driven approaches can be used to deliver a pHS for such subsystems, which can then be coupled to the other subsystems in a structure-preserving way. In this work, we derive a data-driven identification approach for port-Hamiltonian differential algebraic equation (DAE) systems. The approach uses input and state space data to estimate nonlinear effort functions of pH-DAEs. As the underlying technique, we use (multi-task) Gaussian processes. This work thereby extends the current state of the art, in which only port-Hamiltonian ordinary differential equation systems could be identified via Gaussian processes. We apply this approach successfully to two applications from network design and constrained multibody system dynamics, based on pH-DAE systems of index one and three, respectively.
[ "['Peter Zaspel' 'Michael Günther']" ]
null
null
2406.18739
null
null
http://arxiv.org/pdf/2406.18739v1
2024-06-26T20:10:03Z
2024-06-26T20:10:03Z
RetroGFN: Diverse and Feasible Retrosynthesis using GFlowNets
Single-step retrosynthesis aims to predict a set of reactions that lead to the creation of a target molecule, which is a crucial task in molecular discovery. Although a target molecule can often be synthesized with multiple different reactions, it is not clear how to verify the feasibility of a reaction, because the available datasets cover only a tiny fraction of the possible solutions. Consequently, the existing models are not encouraged to explore the space of possible reactions sufficiently. In this paper, we propose a novel single-step retrosynthesis model, RetroGFN, that can explore outside the limited dataset and return a diverse set of feasible reactions by leveraging a feasibility proxy model during the training. We show that RetroGFN achieves competitive results on standard top-k accuracy while outperforming existing methods on round-trip accuracy. Moreover, we provide empirical arguments in favor of using round-trip accuracy which expands the notion of feasibility with respect to the standard top-k accuracy metric.
[ "['Piotr Gaiński' 'Michał Koziarski' 'Krzysztof Maziarz' 'Marwin Segler'\n 'Jacek Tabor' 'Marek Śmieja']" ]
null
null
2406.18741
null
null
http://arxiv.org/pdf/2406.18741v1
2024-06-26T20:12:48Z
2024-06-26T20:12:48Z
Decentralized Semantic Traffic Control in AVs Using RL and DQN for Dynamic Roadblocks
Autonomous Vehicles (AVs), furnished with sensors capable of capturing essential vehicle dynamics such as speed, acceleration, and precise location, possess the capacity to execute intelligent maneuvers, including lane changes, in anticipation of approaching roadblocks. Nevertheless, the sheer volume of sensory data and the processing necessary to derive informed decisions can often overwhelm the vehicles, rendering them unable to handle the task independently. Consequently, a common approach in traffic scenarios involves transmitting the data to servers for processing, a practice that introduces challenges, particularly in situations demanding real-time processing. In response to this challenge, we present a novel DL-based semantic traffic control system that entrusts semantic encoding responsibilities to the vehicles themselves. This system processes driving decisions obtained from a Reinforcement Learning (RL) agent, streamlining the decision-making process. Specifically, our framework envisions scenarios where abrupt roadblocks materialize due to factors such as road maintenance, accidents, or vehicle repairs, necessitating vehicles to make determinations concerning lane-keeping or lane-changing actions to navigate past these obstacles. To formulate this scenario mathematically, we employ a Markov Decision Process (MDP) and harness the Deep Q Learning (DQN) algorithm to unearth viable solutions.
[ "['Emanuel Figetakis' 'Yahuza Bello' 'Ahmed Refaey' 'Abdallah Shami']" ]
null
null
2406.18745
null
null
http://arxiv.org/pdf/2406.18745v1
2024-06-26T20:19:32Z
2024-06-26T20:19:32Z
QBI: Quantile-based Bias Initialization for Efficient Private Data Reconstruction in Federated Learning
Federated learning enables the training of machine learning models on distributed data without compromising user privacy, as data remains on personal devices and only model updates, such as gradients, are shared with a central coordinator. However, recent research has shown that the central entity can perfectly reconstruct private data from shared model updates by maliciously initializing the model's parameters. In this paper, we propose QBI, a novel bias initialization method that significantly enhances reconstruction capabilities. This is accomplished by directly solving for bias values yielding sparse activation patterns. Further, we propose PAIRS, an algorithm that builds on QBI. PAIRS can be deployed when a separate dataset from the target domain is available to further increase the percentage of data that can be fully recovered. Measured by the percentage of samples that can be perfectly reconstructed from batches of various sizes, our approach achieves significant improvements over previous methods with gains of up to 50% on ImageNet and up to 60% on the IMDB sentiment analysis text dataset. Furthermore, we establish theoretical limits for attacks leveraging stochastic gradient sparsity, providing a foundation for understanding the fundamental constraints of these attacks. We empirically assess these limits using synthetic datasets. Finally, we propose and evaluate AGGP, a defensive framework designed to prevent gradient sparsity attacks, contributing to the development of more secure and private federated learning systems.
[ "['Micha V. Nowak' 'Tim P. Bott' 'David Khachaturov' 'Frank Puppe'\n 'Adrian Krenzer' 'Amar Hekalo']" ]