Schema (column: dtype):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
id: 2403.07036 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07036v1
updated: 2024-03-11T08:13:42Z
published: 2024-03-11T08:13:42Z
title: A Converting Autoencoder Toward Low-latency and Energy-efficient DNN Inference at the Edge
Reducing inference time and energy usage while maintaining prediction accuracy has become a significant concern for deep neural networks (DNN) inference on resource-constrained edge devices. To address this problem, we propose a novel approach based on a "converting" autoencoder and lightweight DNNs. This improves upon recent work such as early-exiting frameworks and DNN partitioning. Early-exiting frameworks spend different amounts of computation power for different input data depending upon their complexity. However, they can be inefficient in real-world scenarios that deal with many hard image samples. On the other hand, DNN partitioning algorithms that utilize the computation power of both the cloud and edge devices can be affected by network delays and intermittent connections between the cloud and the edge. We present CBNet, a low-latency and energy-efficient DNN inference framework tailored for edge devices. It utilizes a "converting" autoencoder to efficiently transform hard images into easy ones, which are subsequently processed by a lightweight DNN for inference. To the best of our knowledge, such an autoencoder has not been proposed before. Our experimental results using three popular image-classification datasets on a Raspberry Pi 4, a Google Cloud instance, and an instance with Nvidia Tesla K80 GPU show that CBNet achieves up to 4.8x speedup in inference latency and 79% reduction in energy usage compared to competing techniques while maintaining similar or higher accuracy.
authors: Hasanul Mahmud, Peng Kang, Kevin Desai, Palden Lama, Sushil Prasad
id: 2403.07038 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07038v1
updated: 2024-03-11T09:54:35Z
published: 2024-03-11T09:54:35Z
title: Leveraging graph neural networks for supporting Automatic Triage of Patients
Patient triage plays a crucial role in emergency departments, ensuring timely and appropriate care based on correctly evaluating the emergency grade of patient conditions. Triage is generally performed by a human operator based on personal experience and on information gathered from the patient management process, and it can therefore produce errors in emergency-level assignment. Traditional triage methods thus heavily rely on human decisions, which can be subjective and prone to error. Recently, growing interest has focused on leveraging artificial intelligence (AI) to develop algorithms able to maximize information gathering and minimize errors in patient triage processing. We define and implement an AI-based module to manage patients' emergency code assignments in emergency departments. It uses emergency department historical data to train the medical decision process. Data containing relevant patient information, such as vital signs, symptoms, and medical history, are used to accurately classify patients into triage categories. Experimental results demonstrate that the proposed algorithm achieved high accuracy, outperforming traditional triage methods. We claim that, by using the proposed method, healthcare professionals can predict the severity index to guide patient management processing and resource allocation.
authors: Annamaria Defilippo, Pierangelo Veltri, Pietro Lio', Pietro Hiram Guzzi
id: 2403.07040 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07040v1
updated: 2024-03-11T16:04:58Z
published: 2024-03-11T16:04:58Z
title: All in One: Multi-Task Prompting for Graph Neural Networks (Extended Abstract)
This paper is an extended abstract of our original work published in KDD23, where we won the best research paper award (Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, and Jihong Guan. All in one: Multi-task prompting for graph neural networks. KDD 23) The paper introduces a novel approach to bridging the gap between pre-trained graph models and the diverse tasks they're applied to, inspired by the success of prompt learning in NLP. Recognizing the challenge of aligning pre-trained models with varied graph tasks (node level, edge level, and graph level), which can lead to negative transfer and poor performance, we propose a multi-task prompting method for graphs. This method involves unifying graph and language prompt formats, enabling NLP's prompting strategies to be adapted for graph tasks. By analyzing the task space of graph applications, we reformulate problems to fit graph-level tasks and apply meta-learning to improve prompt initialization for multiple tasks. Experiments show our method's effectiveness in enhancing model performance across different graph tasks. Beyond the original work, in this extended abstract, we further discuss the graph prompt from a bigger picture and provide some of the latest work toward this area.
authors: Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, Jihong Guan
id: 2403.07041 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07041v2
updated: 2024-05-22T21:18:43Z
published: 2024-03-11T16:26:06Z
title: Ant Colony Sampling with GFlowNets for Combinatorial Optimization
This paper introduces the Generative Flow Ant Colony Sampler (GFACS), a neural-guided probabilistic search algorithm for solving combinatorial optimization (CO). GFACS integrates generative flow networks (GFlowNets), an emerging amortized inference method, with ant colony optimization (ACO), a promising probabilistic search algorithm. Specifically, we use GFlowNets to learn a constructive policy in combinatorial spaces for enhancing ACO by providing an informed prior distribution over decision variables conditioned on input graph instances. Furthermore, we introduce a novel off-policy training algorithm for scaling conditional GFlowNets into large-scale combinatorial spaces by leveraging local search and shared energy normalization. Our experimental results demonstrate that GFACS outperforms baseline ACO algorithms in seven CO tasks and is competitive with problem-specific heuristics for vehicle routing problems.
authors: Minsu Kim, Sanghyeok Choi, Hyeonah Kim, Jiwoo Son, Jinkyoo Park, Yoshua Bengio
id: 2403.07059 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07059v2
updated: 2024-03-14T11:02:26Z
published: 2024-03-11T18:00:06Z
title: Better than classical? The subtle art of benchmarking quantum machine learning models
Benchmarking models via classical simulations is one of the main ways to judge ideas in quantum machine learning before noise-free hardware is available. However, the huge impact of the experimental design on the results, the small scales within reach today, as well as narratives influenced by the commercialisation of quantum technologies make it difficult to gain robust insights. To facilitate better decision-making we develop an open-source package based on the PennyLane software framework and use it to conduct a large-scale study that systematically tests 12 popular quantum machine learning models on 6 binary classification tasks used to create 160 individual datasets. We find that overall, out-of-the-box classical machine learning models outperform the quantum classifiers. Moreover, removing entanglement from a quantum model often results in as good or better performance, suggesting that "quantumness" may not be the crucial ingredient for the small learning tasks considered here. Our benchmarks also unlock investigations beyond simplistic leaderboard comparisons, and we identify five important questions for quantum model design that follow from our results.
authors: Joseph Bowles, Shahnawaz Ahmed, Maria Schuld
id: 2403.07066 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07066v1
updated: 2024-03-11T18:00:47Z
published: 2024-03-11T18:00:47Z
title: Re-Simulation-based Self-Supervised Learning for Pre-Training Foundation Models
Self-Supervised Learning (SSL) is at the core of training modern large machine learning models, providing a scheme for learning powerful representations that can be used in a variety of downstream tasks. However, SSL strategies must be adapted to the type of training data and downstream tasks required. We propose RS3L, a novel simulation-based SSL strategy that employs a method of re-simulation to drive data augmentation for contrastive learning. By intervening in the middle of the simulation process and re-running simulation components downstream of the intervention, we generate multiple realizations of an event, thus producing a set of augmentations covering all physics-driven variations available in the simulator. Using experiments from high-energy physics, we explore how this strategy may enable the development of a foundation model; we show how RS3L pre-training enables powerful performance in downstream tasks such as discrimination of a variety of objects and uncertainty mitigation. In addition to our results, we make the RS3L dataset publicly available for further studies on how to improve SSL strategies.
authors: Philip Harris, Michael Kagan, Jeffrey Krupa, Benedikt Maier, Nathaniel Woodward
id: 2403.07072 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07072v1
updated: 2024-03-11T18:03:02Z
published: 2024-03-11T18:03:02Z
title: Explainable Learning with Gaussian Processes
The field of explainable artificial intelligence (XAI) attempts to develop methods that provide insight into how complicated machine learning methods make predictions. Many methods of explanation have focused on the concept of feature attribution, a decomposition of the model's prediction into individual contributions corresponding to each input feature. In this work, we explore the problem of feature attribution in the context of Gaussian process regression (GPR). We take a principled approach to defining attributions under model uncertainty, extending the existing literature. We show that although GPR is a highly flexible and non-parametric approach, we can derive interpretable, closed-form expressions for the feature attributions. When using integrated gradients as an attribution method, we show that the attributions of a GPR model also follow a Gaussian process distribution, which quantifies the uncertainty in attribution arising from uncertainty in the model. We demonstrate, both through theory and experimentation, the versatility and robustness of this approach. We also show that, when applicable, the exact expressions for GPR attributions are both more accurate and less computationally expensive than the approximations currently used in practice. The source code for this project is freely available under MIT license at https://github.com/KurtButler/2024_attributions_paper.
authors: Kurt Butler, Guanchao Feng, Petar M. Djuric
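The record above describes closed-form integrated-gradients attributions for Gaussian process regression. As a rough illustration of the underlying idea only (not the paper's closed-form expressions), the sketch below applies a numeric integrated-gradients approximation to the posterior mean of a small RBF-kernel GP; the kernel length-scale, noise level, and finite-difference scheme are illustrative assumptions.

```python
# Illustrative sketch: integrated gradients on a GP posterior mean,
# approximated with a Riemann sum and central finite differences.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_posterior_mean(x, X, alpha, ell=1.0):
    """Posterior mean f(x) = k(x, X) @ alpha, with alpha = (K + s2*I)^-1 y."""
    return rbf(x[None, :], X, ell)[0] @ alpha

def integrated_gradients(f, x, baseline, steps=256):
    """Numeric IG: (x - baseline) * average gradient along the straight path."""
    path = baseline + np.linspace(0, 1, steps)[:, None] * (x - baseline)
    grads = np.zeros_like(x)
    eps = 1e-5
    for t in path:
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            grads[i] += (f(t + e) - f(t - e)) / (2 * eps)
    return (x - baseline) * grads / steps

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))                     # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)  # only feature 0 matters
K = rbf(X, X) + 0.01 * np.eye(30)
alpha = np.linalg.solve(K, y)

x = np.array([0.8, -0.3])
attr = integrated_gradients(lambda z: gp_posterior_mean(z, X, alpha), x, np.zeros(2))
print(attr)  # attribution mass should concentrate on feature 0
```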
id: 2403.07078 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/2403.07078v1
updated: 2024-03-11T18:11:00Z
published: 2024-03-11T18:11:00Z
title: Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
We review current and emerging knowledge-informed and brain-inspired cognitive systems for realizing adversarial defenses, eXplainable Artificial Intelligence (XAI), and zero-shot or few-shot learning. Data-driven deep learning models have achieved remarkable performance and demonstrated capabilities surpassing human experts in many applications. Yet, their inability to exploit domain knowledge leads to serious performance limitations in practical applications. In particular, deep learning systems are exposed to adversarial attacks, which can trick them into making glaringly incorrect decisions. Moreover, complex data-driven models typically lack interpretability or explainability, i.e., their decisions cannot be understood by human subjects. Furthermore, models are usually trained on standard datasets with a closed-world assumption. Hence, they struggle to generalize to unseen cases during inference in practical open-world environments, thus raising the zero- or few-shot generalization problem. Although many conventional solutions exist, explicit domain knowledge, brain-inspired neural networks and cognitive architectures offer powerful new dimensions towards alleviating these problems. Prior knowledge is represented in appropriate forms and incorporated in deep learning frameworks to improve performance. Brain-inspired cognition methods use computational models that mimic the human mind to enhance intelligent behavior in artificial agents and autonomous robots. Ultimately, these models achieve better explainability, higher adversarial robustness and data-efficient learning, and can, in turn, provide insights for cognitive science and neuroscience, that is, to deepen human understanding of how the brain works in general, and how it handles these problems.
authors: Fuseinin Mumuni, Alhassan Mumuni
id: 2403.07092 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/2403.07092v1
updated: 2024-03-11T18:36:55Z
published: 2024-03-11T18:36:55Z
title: A cascaded deep network for automated tumor detection and segmentation in clinical PET imaging of diffuse large B-cell lymphoma
Accurate detection and segmentation of diffuse large B-cell lymphoma (DLBCL) from PET images has important implications for estimation of total metabolic tumor volume, radiomics analysis, surgical intervention and radiotherapy. Manual segmentation of tumors in whole-body PET images is time-consuming, labor-intensive and operator-dependent. In this work, we develop and validate a fast and efficient three-step cascaded deep learning model for automated detection and segmentation of DLBCL tumors from PET images. As compared to a single end-to-end network for segmentation of tumors in whole-body PET images, our three-step model is more effective (improves 3D Dice score from 58.9% to 78.1%) since each of its specialized modules, namely the slice classifier, the tumor detector and the tumor segmentor, can be trained independently to a high degree of skill to carry out a specific task, rather than a single network with suboptimal performance on overall segmentation.
authors: Shadab Ahamed, Natalia Dubljevic, Ingrid Bloise, Claire Gowdy, Patrick Martineau, Don Wilson, Carlos F. Uribe, Arman Rahmim, Fereshteh Yousefirizi
id: 2403.07094 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07094v1
updated: 2024-03-11T18:40:47Z
published: 2024-03-11T18:40:47Z
title: FALCON: FLOP-Aware Combinatorial Optimization for Neural Network Pruning
The increasing computational demands of modern neural networks present deployment challenges on resource-constrained devices. Network pruning offers a solution to reduce model size and computational cost while maintaining performance. However, most current pruning methods focus primarily on improving sparsity by reducing the number of nonzero parameters, often neglecting other deployment costs such as inference time, which are closely related to the number of floating-point operations (FLOPs). In this paper, we propose FALCON, a novel combinatorial-optimization-based framework for network pruning that jointly takes into account model accuracy (fidelity), FLOPs, and sparsity constraints. A main building block of our approach is an integer linear program (ILP) that simultaneously handles FLOP and sparsity constraints. We present a novel algorithm to approximately solve the ILP. We propose a novel first-order method for our optimization framework which makes use of our ILP solver. Using problem structure (e.g., the low-rank structure of approx. Hessian), we can address instances with millions of parameters. Our experiments demonstrate that FALCON achieves superior accuracy compared to other pruning approaches within a fixed FLOP budget. For instance, for ResNet50 with 20% of the total FLOPs retained, our approach improves the accuracy by 48% relative to state-of-the-art. Furthermore, in gradual pruning settings with re-training between pruning steps, our framework outperforms existing pruning methods, emphasizing the significance of incorporating both FLOP and sparsity constraints for effective network pruning.
authors: Xiang Meng, Wenyu Chen, Riade Benbaki, Rahul Mazumder
id: 2403.07095 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07095v2
updated: 2024-06-25T13:46:24Z
published: 2024-03-11T18:44:36Z
title: Overcoming the Paradox of Certified Training with Gaussian Smoothing
Training neural networks with high certified accuracy against adversarial examples remains an open problem despite significant efforts. While certification methods can effectively leverage tight convex relaxations for bound computation, in training, these methods perform worse than looser relaxations. Prior work hypothesized that this is caused by the discontinuity and perturbation sensitivity of the loss surface induced by these tighter relaxations. In this work, we show theoretically that Gaussian Loss Smoothing can alleviate both issues. We confirm this empirically by proposing a certified training method combining PGPE, an algorithm computing gradients of a smoothed loss, with different convex relaxations. When using this training method, we observe that tighter bounds indeed lead to strictly better networks. While scaling PGPE training remains challenging due to high computational cost, we show that by using a not theoretically sound, yet much cheaper smoothing approximation, we obtain better certified accuracies than state-of-the-art methods when training on the same network architecture. Our results clearly demonstrate the promise of Gaussian Loss Smoothing for training certifiably robust neural networks.
authors: Stefan Balauca, Mark Niklas Müller, Yuhao Mao, Maximilian Baader, Marc Fischer, Martin Vechev
id: 2403.07105 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/2403.07105v1
updated: 2024-03-11T18:57:45Z
published: 2024-03-11T18:57:45Z
title: A slice classification neural network for automated classification of axial PET/CT slices from a multi-centric lymphoma dataset
Automated slice classification is clinically relevant since it can be incorporated into medical image segmentation workflows as a preprocessing step that would flag slices with a higher probability of containing tumors, thereby directing physicians' attention to the important slices. In this work, we train a ResNet-18 network to classify axial slices of lymphoma PET/CT images (collected from two institutions) depending on whether the slice intercepted a tumor (positive slice) in the 3D image or not (negative slice). Various instances of the network were trained on 2D axial datasets created in different ways: (i) slice-level split and (ii) patient-level split; inputs of different types were used: (i) only PET slices and (ii) concatenated PET and CT slices; and different training strategies were employed: (i) center-aware (CAW) and (ii) center-agnostic (CAG). Model performances were compared using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC), and various binary classification metrics. We observe and describe a performance overestimation in the case of slice-level split as compared to the patient-level split training. The model trained using patient-level split data with the network input containing only PET slices in the CAG training regime was the best performing/generalizing model on a majority of metrics. Our models were additionally more closely compared using the sensitivity metric on the positive slices from their respective test sets.
authors: Shadab Ahamed, Yixi Xu, Ingrid Bloise, Joo H. O, Carlos F. Uribe, Rahul Dodhia, Juan L. Ferres, Arman Rahmim
id: 2403.07113 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07113v1
updated: 2024-03-11T19:06:04Z
published: 2024-03-11T19:06:04Z
title: Class Imbalance in Object Detection: An Experimental Diagnosis and Study of Mitigation Strategies
Object detection, a pivotal task in computer vision, is frequently hindered by dataset imbalances, particularly the under-explored issue of foreground-foreground class imbalance. This lack of attention to foreground-foreground class imbalance becomes even more pronounced in the context of single-stage detectors. This study introduces a benchmarking framework utilizing the YOLOv5 single-stage detector to address the problem of foreground-foreground class imbalance. We crafted a novel 10-class long-tailed dataset from the COCO dataset, termed COCO-ZIPF, tailored to reflect common real-world detection scenarios with a limited number of object classes. Against this backdrop, we scrutinized three established techniques: sampling, loss weighting, and data augmentation. Our comparative analysis reveals that sampling and loss reweighting methods, while shown to be beneficial in two-stage detector settings, do not translate as effectively in improving YOLOv5's performance on the COCO-ZIPF dataset. On the other hand, data augmentation methods, specifically mosaic and mixup, significantly enhance the model's mean Average Precision (mAP), by introducing more variability and complexity into the training data. (Code available: https://github.com/craston/object_detection_cib)
authors: Nieves Crasto
id: 2403.07128 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07128v1
updated: 2024-03-11T19:51:01Z
published: 2024-03-11T19:51:01Z
title: FAX: Scalable and Differentiable Federated Primitives in JAX
We present FAX, a JAX-based library designed to support large-scale distributed and federated computations in both data center and cross-device applications. FAX leverages JAX's sharding mechanisms to enable native targeting of TPUs and state-of-the-art JAX runtimes, including Pathways. FAX embeds building blocks for federated computations as primitives in JAX. This enables three key benefits. First, FAX computations can be translated to XLA HLO. Second, FAX provides a full implementation of federated automatic differentiation, greatly simplifying the expression of federated computations. Last, FAX computations can be interpreted out to existing production cross-device federated compute systems. We show that FAX provides an easily programmable, performant, and scalable framework for federated computations in the data center. FAX is available at https://github.com/google-research/google-research/tree/master/fax .
authors: Keith Rush, Zachary Charles, Zachary Garrett
id: 2403.07132 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07132v1
updated: 2024-03-11T20:02:17Z
published: 2024-03-11T20:02:17Z
title: A New Machine Learning Dataset of Bulldog Nostril Images for Stenosis Degree Classification
Brachycephaly, a conformation trait in some dog breeds, causes BOAS, a respiratory disorder that affects the health and welfare of the dogs with various symptoms. In this paper, a new annotated dataset composed of 190 images of bulldogs' nostrils is presented. Three degrees of stenosis are approximately equally represented in the dataset: mild, moderate and severe stenosis. The dataset also comprises a small quantity of non-stenotic nostril images. To the best of our knowledge, this is the first image dataset addressing this problem. Furthermore, deep learning is investigated as an alternative to automatically infer stenosis degree using nostril images. In this work, several neural networks were tested: ResNet50, MobileNetV3, DenseNet201, SwinV2 and MaxViT. For this evaluation, the problem was modeled in two different ways: first, as a three-class classification problem (mild or open, moderate, and severe); second, as a binary classification problem, with severe stenosis as target. For the multiclass classification, a maximum median f-score of 53.77% was achieved by the MobileNetV3. For binary classification, a maximum median f-score of 72.08% was reached by ResNet50, indicating that the problem is challenging but possibly tractable.
authors: Gabriel Toshio Hirokawa Higa, Joyce Katiuccia Medeiros Ramos Carvalho, Paolo Brito Pascoalini Zanoni, Gisele Braziliano de Andrade, Hemerson Pistori
id: 2403.07134 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07134v2
updated: 2024-06-04T16:57:16Z
published: 2024-03-11T20:04:03Z
title: COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization
Post-training quantization (PTQ) has emerged as a practical approach to compress large neural networks, making them highly efficient for deployment. However, effectively reducing these models to their low-bit counterparts without compromising the original accuracy remains a key challenge. In this paper, we propose an innovative PTQ algorithm termed COMQ, which sequentially conducts coordinate-wise minimization of the layer-wise reconstruction errors. We consider the widely used integer quantization, where every quantized weight can be decomposed into a shared floating-point scalar and an integer bit-code. Within a fixed layer, COMQ treats all the scaling factor(s) and bit-codes as the variables of the reconstruction error. Every iteration improves this error along a single coordinate while keeping all other variables constant. COMQ is easy to use and requires no hyper-parameter tuning. It instead involves only dot products and rounding operations. We update these variables in a carefully designed greedy order, significantly enhancing the accuracy. COMQ achieves remarkable results in quantizing 4-bit Vision Transformers, with a negligible loss of less than 1% in Top-1 accuracy. In 4-bit INT quantization of convolutional neural networks, COMQ maintains near-lossless accuracy with a minimal drop of merely 0.3% in Top-1 accuracy.
authors: Aozhong Zhang, Zi Yang, Naigang Wang, Yingyong Qin, Jack Xin, Xin Li, Penghang Yin
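To make the coordinate-wise idea in the COMQ abstract concrete, here is a minimal, hypothetical sketch: a single weight vector is quantized as a shared floating-point scale times integer codes, and each code is updated in turn to reduce the layer reconstruction error on calibration activations. The update order, bit width, and initialization are simplifications for illustration, not the paper's exact algorithm.

```python
# Sketch: coordinate-wise minimization of ||X w - s * X q||^2 over integer
# codes q and a shared scale s (one output channel of a layer).
import numpy as np

def comq_like_quantize(X, w, bits=4, iters=10):
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    s = np.max(np.abs(w)) / hi + 1e-12           # initial scale (assumption)
    q = np.clip(np.round(w / s), lo, hi)
    for _ in range(iters):
        for j in range(w.size):                  # one coordinate at a time
            # residual with coordinate j's contribution removed
            r = X @ w - s * (X @ q) + s * X[:, j] * q[j]
            denom = s * (X[:, j] @ X[:, j]) + 1e-12
            q[j] = np.clip(np.round((X[:, j] @ r) / denom), lo, hi)
        z = X @ q                                # refit the shared scale
        s = (z @ (X @ w)) / (z @ z + 1e-12)
    return s, q

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))    # calibration activations
w = rng.normal(size=64)           # full-precision weights
s, q = comq_like_quantize(X, w)
err = np.linalg.norm(X @ w - s * X @ q) / np.linalg.norm(X @ w)
print(f"relative reconstruction error: {err:.3f}")
```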
id: 2403.07136 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07136v1
updated: 2024-03-11T20:05:48Z
published: 2024-03-11T20:05:48Z
title: On the Limited Representational Power of Value Functions and its Links to Statistical (In)Efficiency
Identifying the trade-offs between model-based and model-free methods is a central question in reinforcement learning. Value-based methods offer substantial computational advantages and are sometimes just as statistically efficient as model-based methods. However, focusing on the core problem of policy evaluation, we show that information about the transition dynamics may be impossible to represent in the space of value functions. We explore this through a series of case studies focused on structures that arise in many important problems. In several cases, there is no information loss and value-based methods are as statistically efficient as model-based ones. In other closely-related examples, information loss is severe and value-based methods are severely outperformed. A deeper investigation points to the limitations of representational power as the driver of the inefficiency, as opposed to failure in algorithm design.
authors: David Cheikhi, Daniel Russo
id: 2403.07137 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07137v1
updated: 2024-03-11T20:07:05Z
published: 2024-03-11T20:07:05Z
title: Exploring Cluster Analysis in Nelore Cattle Visual Score Attribution
Assessing the biotype of cattle through human visual inspection is a very common and important practice in precision cattle breeding. This paper presents the results of a correlation analysis between scores produced by humans for Nelore cattle and a variety of measurements that can be derived from images or other instruments. It also presents a study using the k-means algorithm to generate new ways of clustering a batch of cattle using the measurements that most correlate with the animal's body weight and visual scores.
authors: Alexandre de Oliveira Bezerra, Rodrigo Goncalves Mateus, Vanessa Ap. de Moraes Weber, Fabricio de Lima Weber, Yasmin Alves de Arruda, Rodrigo da Costa Gomes, Gabriel Toshio Hirokawa Higa, Hemerson Pistori
id: 2403.07142 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07142v1
updated: 2024-03-11T20:23:59Z
published: 2024-03-11T20:23:59Z
title: One Category One Prompt: Dataset Distillation using Diffusion Models
The extensive amounts of data required for training deep neural networks pose significant challenges on storage and transmission fronts. Dataset distillation has emerged as a promising technique to condense the information of massive datasets into a much smaller yet representative set of synthetic samples. However, traditional dataset distillation approaches often struggle to scale effectively with high-resolution images and more complex architectures due to the limitations in bi-level optimization. Recently, several works have proposed exploiting knowledge distillation with decoupled optimization schemes to scale up dataset distillation. Although these methods effectively address the scalability issue, they rely on extensive image augmentations requiring the storage of soft labels for augmented images. In this paper, we introduce Dataset Distillation using Diffusion Models (D3M) as a novel paradigm for dataset distillation, leveraging recent advancements in generative text-to-image foundation models. Our approach utilizes textual inversion, a technique for fine-tuning text-to-image generative models, to create concise and informative representations for large datasets. By employing these learned text prompts, we can efficiently store and infer new samples for introducing data variability within a fixed memory budget. We show the effectiveness of our method through extensive experiments across various computer vision benchmark datasets with different memory budgets.
authors: Ali Abbasi, Ashkan Shahbazi, Hamed Pirsiavash, Soheil Kolouri
id: 2403.07143 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07143v2
updated: 2024-05-22T22:53:18Z
published: 2024-03-11T20:28:23Z
title: New Perspectives in Online Contract Design
This work studies the repeated principal-agent problem from an online learning perspective. The principal's goal is to learn the optimal contract that maximizes her utility through repeated interactions, without prior knowledge of the agent's type (i.e., the agent's cost and production functions). This work contains three technical results. First, learning linear contracts with binary outcomes is equivalent to dynamic pricing with an unknown demand curve. Second, learning an approximately optimal contract with identical agents can be accomplished with a polynomial sample complexity scheme. Third, learning the optimal contract with heterogeneous agents can be reduced to Lipschitz bandits under mild regularity conditions. The technical results demonstrate that the one-dimensional effort model, the default model for principal-agent problems in economics which seems largely ignored in recent works from the computer science community, may possibly be the more suitable choice when studying contract design from a learning perspective.
authors: Shiliang Zuo
id: 2403.07148 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07148v1
updated: 2024-03-11T20:35:52Z
published: 2024-03-11T20:35:52Z
title: Stochastic Extragradient with Random Reshuffling: Improved Convergence for Variational Inequalities
The Stochastic Extragradient (SEG) method is one of the most popular algorithms for solving finite-sum min-max optimization and variational inequality problems (VIPs) appearing in various machine learning tasks. However, existing convergence analyses of SEG focus on its with-replacement variants, while practical implementations of the method randomly reshuffle components and sequentially use them. Unlike the well-studied with-replacement variants, SEG with Random Reshuffling (SEG-RR) lacks established theoretical guarantees. In this work, we provide a convergence analysis of SEG-RR for three classes of VIPs: (i) strongly monotone, (ii) affine, and (iii) monotone. We derive conditions under which SEG-RR achieves a faster convergence rate than the uniform with-replacement sampling SEG. In the monotone setting, our analysis of SEG-RR guarantees convergence to an arbitrary accuracy without large batch sizes, a strong requirement needed in the classical with-replacement SEG. As a byproduct of our results, we provide convergence guarantees for Shuffle Once SEG (shuffles the data only at the beginning of the algorithm) and the Incremental Extragradient (does not shuffle the data). We supplement our analysis with experiments validating empirically the superior performance of SEG-RR over the classical with-replacement sampling SEG.
authors: Konstantinos Emmanouilidis, René Vidal, Nicolas Loizou
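A small numerical sketch of the method discussed above, under illustrative assumptions: stochastic extragradient with random reshuffling (SEG-RR) applied to a finite-sum bilinear (affine) saddle-point problem, with same-sample extrapolation and update steps and a hand-picked constant step size.

```python
# Sketch: SEG-RR on min_x max_y (1/n) sum_i [x^T A_i y + b_i^T x + c_i^T y].
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 5
A = rng.normal(size=(n, d, d))
b = rng.normal(size=(n, d))
c = rng.normal(size=(n, d))
Abar, bbar, cbar = A.mean(0), b.mean(0), c.mean(0)
# Saddle point of the averaged problem: Abar y* + bbar = 0, Abar^T x* + cbar = 0.
y_star = np.linalg.solve(Abar, -bbar)
x_star = np.linalg.solve(Abar.T, -cbar)

def F(i, x, y):
    """Operator of component i: (grad_x, -grad_y)."""
    return A[i] @ y + b[i], -(A[i].T @ x + c[i])

x, y = np.zeros(d), np.zeros(d)
gamma = 0.02                              # illustrative constant step size
for epoch in range(2000):
    perm = rng.permutation(n)             # reshuffle once per epoch
    for i in perm:                        # sequential pass, without replacement
        gx, gy = F(i, x, y)               # extrapolation step
        xh, yh = x - gamma * gx, y - gamma * gy
        gx, gy = F(i, xh, yh)             # update step at the extrapolated point
        x, y = x - gamma * gx, y - gamma * gy
# Distance to the saddle point; iterates reach a small neighborhood of it.
print(np.linalg.norm(x - x_star) + np.linalg.norm(y - y_star))
```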
id: 2403.07151 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07151v1
updated: 2024-03-11T20:39:32Z
published: 2024-03-11T20:39:32Z
title: Don't Forget What I did?: Assessing Client Contributions in Federated Learning
Federated Learning (FL) is a collaborative machine learning (ML) approach, where multiple clients participate in training an ML model without exposing their private data. Fair and accurate assessment of client contributions is an important problem in FL to facilitate incentive allocation and to encourage diverse clients to participate in unified model training. Existing methods for assessing client contribution adopt cooperative game-theoretic concepts, such as Shapley values, but under simplified assumptions. In this paper, we propose a history-aware game-theoretic framework, called FLContrib, to assess client contributions when a subset of (potentially non-i.i.d.) clients participate in each epoch of FL training. By exploiting the FL training process and the linearity of the Shapley value, we develop FLContrib to yield a historical timeline of client contributions as FL training progresses over epochs. Additionally, to assess client contribution under a limited computational budget, we propose a scheduling procedure that considers two-sided fairness criteria to perform expensive Shapley value computation only in a subset of training epochs. In experiments, we demonstrate a controlled trade-off between the correctness and efficiency of client contributions assessed via FLContrib. To demonstrate the benefits of history-aware client contributions, we apply FLContrib to detect dishonest clients conducting data poisoning in FL training.
authors: Bishwamittra Ghosh, Debabrota Basu, Fu Huazhu, Wang Yuan, Renuga Kanagavelu, Jiang Jin Peng, Liu Yong, Goh Siow Mong Rick, Wei Qingsong
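The framework above builds on Shapley values; as a toy illustration of the game-theoretic core (not FLContrib itself), the snippet below computes exact Shapley values for a hypothetical set-valued utility over three clients. The per-client gains and the synergy term are invented for the example; FLContrib additionally exploits the linearity of the Shapley value to build a timeline over epochs.

```python
# Exact Shapley values: phi_c = sum over subsets S of weighted marginal gains.
from itertools import combinations
from math import factorial

def shapley(clients, utility):
    """Exponential in the number of clients; fine for small toy examples."""
    n = len(clients)
    phi = {c: 0.0 for c in clients}
    for c in clients:
        rest = [x for x in clients if x != c]
        for k in range(n):
            for S in combinations(rest, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[c] += w * (utility(set(S) | {c}) - utility(set(S)))
    return phi

# Toy utility: accuracy gain of a model aggregated from a client subset,
# faked here with fixed per-client gains plus a synergy bonus.
gains = {"c1": 0.10, "c2": 0.05, "c3": 0.02}
def utility(S):
    return sum(gains[c] for c in S) + (0.03 if {"c1", "c2"} <= S else 0.0)

print(shapley(list(gains), utility))  # the synergy is split between c1 and c2
```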
id: 2403.07179 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07179v1
updated: 2024-03-11T21:44:54Z
published: 2024-03-11T21:44:54Z
title: 3M-Diffusion: Latent Multi-Modal Diffusion for Text-Guided Generation of Molecular Graphs
Generating molecules with desired properties is a critical task with broad applications in drug discovery and materials design. Inspired by recent advances in large language models, there is a growing interest in using natural language descriptions of molecules to generate molecules with the desired properties. Most existing methods focus on generating molecules that precisely match the text description. However, practical applications call for methods that generate diverse, and ideally novel, molecules with the desired properties. We propose 3M-Diffusion, a novel multi-modal molecular graph generation method, to address this challenge. 3M-Diffusion first encodes molecular graphs into a graph latent space aligned with text descriptions. It then reconstructs the molecular structure and atomic attributes based on the given text descriptions using the molecule decoder, and learns a probabilistic mapping from the text space to the latent molecular graph space using a diffusion model. The results of our extensive experiments on several datasets demonstrate that 3M-Diffusion can generate high-quality, novel and diverse molecular graphs that semantically match the textual description provided.
authors: Huaisheng Zhu, Teng Xiao, Vasant G Honavar
id: 2403.07183 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07183v2
updated: 2024-06-15T05:23:06Z
published: 2024-03-11T21:51:39Z
title: Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews
We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM). Our maximum likelihood model leverages expert-written and AI-generated reference texts to accurately and efficiently examine real-world LLM-use at the corpus level. We apply this approach to a case study of scientific peer review in AI conferences that took place after the release of ChatGPT: ICLR 2024, NeurIPS 2023, CoRL 2023 and EMNLP 2023. Our results suggest that between 6.5% and 16.9% of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates. The circumstances in which generated text occurs offer insight into user behavior: the estimated fraction of LLM-generated text is higher in reviews which report lower confidence, were submitted close to the deadline, and from reviewers who are less likely to respond to author rebuttals. We also observe corpus-level trends in generated text which may be too subtle to detect at the individual level, and discuss the implications of such trends on peer review. We call for future interdisciplinary work to examine how LLM use is changing our information and knowledge practices.
authors: Weixin Liang, Zachary Izzo, Yaohui Zhang, Haley Lepp, Hancheng Cao, Xuandong Zhao, Lingjiao Chen, Haotian Ye, Sheng Liu, Zhi Huang, Daniel A. McFarland, James Y. Zou
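A minimal sketch of the corpus-level maximum-likelihood idea described above, under strong simplifying assumptions: documents are scored by log-likelihood under two fixed reference distributions (here one-dimensional Gaussians standing in for expert-written and AI-generated text), and the mixture fraction alpha is estimated by grid-search MLE rather than the paper's full procedure.

```python
# Sketch: MLE of alpha maximizing sum_i log(alpha*p_ai(x_i) + (1-alpha)*p_human(x_i)).
import numpy as np

def estimate_alpha(logp_ai, logp_human, grid=np.linspace(0.0, 1.0, 1001)):
    best_alpha, best_ll = 0.0, -np.inf
    for a in grid:
        if a in (0.0, 1.0):
            ll = np.sum(logp_human if a == 0.0 else logp_ai)
        else:
            # stable log(a*exp(la) + (1-a)*exp(lh)); shared constants cancel
            m = np.maximum(logp_ai, logp_human)
            ll = np.sum(m + np.log(a * np.exp(logp_ai - m)
                                   + (1 - a) * np.exp(logp_human - m)))
        if ll > best_ll:
            best_alpha, best_ll = a, ll
    return best_alpha

# Synthetic check: 15% of "documents" drawn from the AI distribution.
rng = np.random.default_rng(0)
n = 5000
is_ai = rng.random(n) < 0.15
x = np.where(is_ai, rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))
logp_ai = -0.5 * (x - 1.0) ** 2   # log N(1,1), up to a shared constant
logp_human = -0.5 * x ** 2        # log N(0,1), up to a shared constant
print(estimate_alpha(logp_ai, logp_human))  # should be close to 0.15
```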
id: 2403.07185 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07185v1
updated: 2024-03-11T21:54:52Z
published: 2024-03-11T21:54:52Z
title: Uncertainty in Graph Neural Networks: A Survey
Graph Neural Networks (GNNs) have been extensively used in various real-world applications. However, the predictive uncertainty of GNNs stemming from diverse sources such as inherent randomness in data and model training errors can lead to unstable and erroneous predictions. Therefore, identifying, quantifying, and utilizing uncertainty are essential to enhance the performance of the model for the downstream tasks as well as the reliability of the GNN predictions. This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty, with an emphasis on its integration in graph learning. We compare and summarize existing graph uncertainty theory and methods, alongside the corresponding downstream tasks. Thereby, we bridge the gap between theory and practice while connecting different GNN communities. Moreover, our work provides valuable insights into promising directions in this field.
authors: Fangxin Wang, Yuqing Liu, Kay Liu, Yibo Wang, Sourav Medya, Philip S. Yu
id: 2403.07187 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07187v2
updated: 2024-05-24T03:44:20Z
published: 2024-03-11T22:00:39Z
title: UPS: Efficiently Building Foundation Models for PDE Solving via Cross-Modal Adaptation
We present Unified PDE Solvers (UPS), a data- and compute-efficient approach to developing unified neural operators for diverse families of spatiotemporal PDEs from various domains, dimensions, and resolutions. UPS embeds different PDEs into a shared representation space and processes them using a FNO-transformer architecture. Rather than training the network from scratch, which is data-demanding and computationally expensive, we warm-start the transformer from pretrained LLMs and perform explicit alignment to reduce the modality gap while improving data and compute efficiency. The cross-modal UPS achieves state-of-the-art results on a wide range of 1D and 2D PDE families from PDEBench, outperforming existing unified models using 4 times less data and 26 times less compute. Meanwhile, it is capable of few-shot transfer to unseen PDE families and coefficients.
authors: Junhong Shen, Tanya Marwah, Ameet Talwalkar
id: 2403.07191 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07191v1
updated: 2024-03-11T22:24:14Z
published: 2024-03-11T22:24:14Z
title: $\mathbf{(N,K)}$-Puzzle: A Cost-Efficient Testbed for Benchmarking Reinforcement Learning Algorithms in Generative Language Model
Recent advances in reinforcement learning (RL) algorithms aim to enhance the performance of language models at scale. Yet, there is a noticeable absence of a cost-effective and standardized testbed tailored to evaluating and comparing these algorithms. To bridge this gap, we present a generalized version of the 24-Puzzle: the $(N,K)$-Puzzle, which challenges language models to reach a target value $K$ with $N$ integers. We evaluate the effectiveness of established RL algorithms such as Proximal Policy Optimization (PPO), alongside novel approaches like Identity Policy Optimization (IPO) and Direct Policy Optimization (DPO).
authors: Yufeng Zhang, Liyu Chen, Boyi Liu, Yingxiang Yang, Qiwen Cui, Yunzhe Tao, Hongxia Yang
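The $(N,K)$-Puzzle generalizes the 24-Puzzle: reach a target value $K$ with $N$ given integers. A small brute-force checker makes the task concrete; the exact rules here (each number used exactly once, operations +, -, *, / over exact rationals) are an assumption about the intended setup.

```python
# Sketch: decide whether `target` is reachable from `nums` via +, -, *, /.
from fractions import Fraction
from itertools import combinations

def reachable(nums, target):
    vals = [Fraction(v) for v in nums]
    def solve(xs):
        if len(xs) == 1:
            return xs[0] == target
        for i, j in combinations(range(len(xs)), 2):
            a, b = xs[i], xs[j]
            rest = [xs[k] for k in range(len(xs)) if k not in (i, j)]
            cands = {a + b, a - b, b - a, a * b}   # combine two numbers
            if b != 0:
                cands.add(a / b)
            if a != 0:
                cands.add(b / a)
            if any(solve(rest + [c]) for c in cands):
                return True
        return False
    return solve(vals)

print(reachable([4, 7, 8, 8], 24))   # True: (7 - 8/8) * 4 = 24
print(reachable([1, 1, 1, 1], 24))   # False
```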
id: 2403.07194 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/2403.07194v1
updated: 2024-02-10T09:31:39Z
published: 2024-02-10T09:31:39Z
title: Improving prediction of students' performance in intelligent tutoring systems using attribute selection and ensembles of different multimodal data sources
The aim of this study was to predict university students' learning performance using different sources of data from an Intelligent Tutoring System. We collected and preprocessed data from 40 students from different multimodal sources: learning strategies from system logs, emotions from face recording videos, interaction zones from eye tracking, and test performance from final knowledge evaluation. Our objective was to test whether the prediction could be improved by using attribute selection and classification ensembles. We carried out three experiments by applying six classification algorithms to numerical and discretized preprocessed multimodal data. The results show that the best predictions were produced using ensembles and selecting the best attributes approach with numerical data.
authors: W. Chango, R. Cerezo, M. Sanchez-Santillan, R. Azevedo, C. Romero
id: 2403.07201 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07201v1
updated: 2024-03-11T22:58:11Z
published: 2024-03-11T22:58:11Z
title: A multi-cohort study on prediction of acute brain dysfunction states using selective state space models
Assessing acute brain dysfunction (ABD), including delirium and coma in the intensive care unit (ICU), is a critical challenge due to its prevalence and severe implications for patient outcomes. Current diagnostic methods rely on infrequent clinical observations, which can only determine a patient's ABD status after onset. Our research attempts to solve these problems by harnessing Electronic Health Records (EHR) data to develop automated methods for ABD prediction for patients in the ICU. Existing models solely predict a single state (e.g., either delirium or coma), require at least 24 hours of observation data to make predictions, do not dynamically predict fluctuating ABD conditions during ICU stay (typically a one-time prediction), and rely on small, proprietary single-hospital datasets. Our research fills these gaps in the existing literature by dynamically predicting delirium, coma, and mortality for 12-hour intervals throughout an ICU stay and validating on two public datasets. Our research also introduces the concept of dynamically predicting critical transitions from non-ABD to ABD and between different ABD states in real time, which could be clinically more informative for the hospital staff. We compared the predictive performance of two state-of-the-art neural network models, the MAMBA selective state space model and the Longformer Transformer model. Using the MAMBA model, we achieved a mean area under the receiver operating characteristic curve (AUROC) of 0.95 on outcome prediction of ABD for 12-hour intervals. The model achieves a mean AUROC of 0.79 when predicting transitions between ABD states. Our study uses a curated dataset from the University of Florida Health Shands Hospital for internal validation and two publicly available datasets, MIMIC-IV and eICU, for external validation, demonstrating robustness across ICU stays from 203 hospitals and 140,945 patients.
authors: Brandon Silva, Miguel Contreras, Sabyasachi Bandyopadhyay, Yuanfang Ren, Ziyuan Guan, Jeremy Balch, Kia Khezeli, Tezcan Ozrazgat Baslanti, Ben Shickel, Azra Bihorac, Parisa Rashidi
id: 2403.07207 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07207v1
updated: 2024-03-11T23:21:26Z
published: 2024-03-11T23:21:26Z
title: Tracking Dynamic Gaussian Density with a Theoretically Optimal Sliding Window Approach
Dynamic density estimation is ubiquitous in many applications, including computer vision and signal processing. One popular method to tackle this problem is the "sliding window" kernel density estimator. There exist various implementations of this method that use heuristically defined weight sequences for the observed data. The weight sequence, however, is a key aspect of the estimator affecting the tracking performance significantly. In this work, we study the exact mean integrated squared error (MISE) of "sliding window" Gaussian Kernel Density Estimators for evolving Gaussian densities. We provide a principled guide for choosing the optimal weight sequence by theoretically characterizing the exact MISE, which can be formulated as constrained quadratic programming. We present empirical evidence with synthetic datasets to show that our weighting scheme indeed improves the tracking performance compared to heuristic approaches.
authors: Yinsong Wang, Yu Ding, Shahin Shahrampour
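To ground the "sliding window" estimator discussed above: the sketch below evaluates a weighted Gaussian kernel density estimate over the last m observations of a drifting stream. The uniform and geometric weight sequences shown are the heuristic choices the paper argues against; its MISE-optimal weights would come from a constrained quadratic program, which is not reproduced here, and the bandwidth is an arbitrary choice.

```python
# Sketch: weighted "sliding window" Gaussian KDE over recent observations.
import numpy as np

def sliding_window_kde(x_grid, window, weights, h=0.3):
    """Evaluate sum_j w_j * N(x; window_j, h^2) on x_grid (weights normalized)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    diffs = (x_grid[:, None] - np.asarray(window)[None, :]) / h
    kernels = np.exp(-0.5 * diffs**2) / (h * np.sqrt(2 * np.pi))
    return kernels @ w

# Track a drifting Gaussian; newer samples get larger weight.
rng = np.random.default_rng(0)
m = 200
stream = [rng.normal(loc=3.0 * t / 1000, scale=1.0) for t in range(1000)]
window = np.array(stream[-m:])                # oldest first
uniform = np.full(m, 1.0 / m)
geometric = 0.98 ** np.arange(m)[::-1]        # heuristic recency weighting
x = np.linspace(-2, 6, 200)
f_uni = sliding_window_kde(x, window, uniform)
f_geo = sliding_window_kde(x, window, geometric)
# The recency-weighted peak typically tracks the drifting mean (~3) better.
print(x[np.argmax(f_uni)], x[np.argmax(f_geo)])
```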
id: 2403.07213 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07213v1
updated: 2024-03-11T23:52:46Z
published: 2024-03-11T23:52:46Z
title: Which LLM to Play? Convergence-Aware Online Model Selection with Time-Increasing Bandits
Web-based applications such as chatbots, search engines and news recommendations continue to grow in scale and complexity with the recent surge in the adoption of LLMs. Online model selection has thus garnered increasing attention due to the need to choose the best model among a diverse set while balancing task reward and exploration cost. Organizations face decisions like whether to employ a costly API-based LLM or a locally finetuned small LLM, weighing cost against performance. Traditional selection methods often evaluate every candidate model before choosing one, an approach that is becoming impractical given the rising costs of training and finetuning LLMs. Moreover, it is undesirable to allocate excessive resources towards exploring poor-performing models. While some recent works leverage online bandit algorithms to manage such exploration-exploitation trade-off in model selection, they tend to overlook the increasing-then-converging trend in model performances as the model is iteratively finetuned, leading to less accurate predictions and suboptimal model selections. In this paper, we propose a time-increasing bandit algorithm TI-UCB, which effectively predicts the increase of model performances due to finetuning and efficiently balances exploration and exploitation in model selection. To further capture the converging points of models, we develop a change detection mechanism by comparing consecutive increase predictions. We theoretically prove that our algorithm achieves a logarithmic regret upper bound in a typical increasing bandit setting, which implies a fast convergence rate. The advantage of our method is also empirically validated through extensive experiments on classification model selection and online selection of LLMs. Our results highlight the importance of utilizing the increasing-then-converging pattern for more efficient and economic model selection in the deployment of LLMs.
authors: Yu Xia, Fang Kong, Tong Yu, Liya Guo, Ryan A. Rossi, Sungchul Kim, Shuai Li
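The following toy simulation illustrates the setting described above rather than the TI-UCB algorithm itself: arms (models) whose mean reward increases and then converges as they are pulled (finetuned), played by a vanilla UCB1 policy. The reward curves and noise level are invented for the example.

```python
# Sketch: vanilla UCB1 on arms with increasing-then-converging mean rewards.
import numpy as np

rng = np.random.default_rng(0)
plateaus = [0.60, 0.75, 0.70]     # eventual performance of each model
rates = [0.05, 0.01, 0.20]        # how fast finetuning improves each model
pulls = np.zeros(3)
totals = np.zeros(3)

def pull(arm):
    # Mean reward grows with the number of times the arm has been pulled.
    mean = plateaus[arm] * (1 - np.exp(-rates[arm] * (pulls[arm] + 1)))
    return mean + 0.05 * rng.normal()

for t in range(1, 2001):
    if t <= 3:
        arm = t - 1               # initialize: pull each arm once
    else:
        ucb = totals / pulls + np.sqrt(2 * np.log(t) / pulls)
        arm = int(np.argmax(ucb))
    r = pull(arm)
    pulls[arm] += 1
    totals[arm] += r

# Vanilla UCB can under-explore the slowly improving (but ultimately best)
# arm 1, which is the failure mode TI-UCB is designed to address.
print(pulls)
```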
id: 2403.07216 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07216v1
updated: 2024-03-12T00:08:54Z
published: 2024-03-12T00:08:54Z
title: Adaptive Gain Scheduling using Reinforcement Learning for Quadcopter Control
The paper presents a technique using reinforcement learning (RL) to adapt the control gains of a quadcopter controller. Specifically, we employed Proximal Policy Optimization (PPO) to train a policy which adapts the gains of a cascaded feedback controller in-flight. The primary goal of this controller is to minimize tracking error while following a specified trajectory. The paper's key objective is to analyze the effectiveness of the adaptive gain policy and compare it to the performance of a static gain control algorithm, where the Integral Squared Error and Integral Time Squared Error are used as metrics. The results show that the adaptive gain scheme achieves over 40% decrease in tracking error as compared to the static gain controller.
authors: Mike Timmerman, Aryan Patel, Tim Reinhart
id: 2403.07218 (categories, doi, year, venue: null)
link: http://arxiv.org/abs/2403.07218v2
updated: 2024-06-27T10:49:08Z
published: 2024-03-12T00:25:14Z
title: SoK: Can Trajectory Generation Combine Privacy and Utility?
While location trajectories represent a valuable data source for analyses and location-based services, they can reveal sensitive information, such as political and religious preferences. Differentially private publication mechanisms have been proposed to allow for analyses under rigorous privacy guarantees. However, the traditional protection schemes suffer from a limiting privacy-utility trade-off and are vulnerable to correlation and reconstruction attacks. Synthetic trajectory data generation and release represent a promising alternative to protection algorithms. While initial proposals achieve remarkable utility, they fail to provide rigorous privacy guarantees. This paper proposes a framework for designing a privacy-preserving trajectory publication approach by defining five design goals, particularly stressing the importance of choosing an appropriate Unit of Privacy. Based on this framework, we briefly discuss the existing trajectory protection approaches, emphasising their shortcomings. This work focuses on the systematisation of the state-of-the-art generative models for trajectories in the context of the proposed framework. We find that no existing solution satisfies all requirements. Thus, we perform an experimental study evaluating the applicability of six sequential generative models to the trajectory domain. Finally, we conclude that a generative trajectory model providing semantic guarantees remains an open research question and propose concrete next steps for future research.
authors: Erik Buchholz, Alsharif Abuadbba, Shuo Wang, Surya Nepal, Salil S. Kanhere
id: 2403.07221 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07221v1
updated: 2024-03-12T00:26:16Z
published: 2024-03-12T00:26:16Z
title: LookupFFN: Making Transformers Compute-lite for CPU inference
While GPU clusters are the de facto choice for training large deep neural network (DNN) models today, several reasons including ease of workflow, security and cost have led to efforts investigating whether CPUs may be viable for inference in routine use in many sectors of the industry. But the imbalance between the compute capabilities of GPUs and CPUs is huge. Motivated by these considerations, we study a module which is a workhorse within modern DNN architectures, GEMM based Feed Forward Networks (FFNs), and assess the extent to which it can be made compute- (or FLOP-) lite. Specifically, we propose an alternative formulation (we call it LookupFFN) to GEMM based FFNs inspired by the recent studies of using Locality Sensitive Hashing (LSH) to approximate FFNs. Our formulation recasts most essential operations as a memory look-up, leveraging the trade-off between the two resources on any platform: compute and memory (which CPUs offer in abundance). For RoBERTa language model pretraining, our formulation achieves similar performance compared to GEMM based FFNs, while dramatically reducing the required FLOP. Our development is complemented with a detailed hardware profiling of strategies that will maximize efficiency -- not just on contemporary hardware but on products that will be offered in the near/medium term future. Code is available at https://github.com/mlpen/LookupFFN.
authors: Zhanpeng Zeng, Michael Davies, Pranav Pulijala, Karthikeyan Sankaralingam, Vikas Singh
id: 2403.07230 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07230v1
updated: 2024-03-12T00:58:19Z
published: 2024-03-12T00:58:19Z
title: Curry-DPO: Enhancing Alignment using Curriculum Learning & Ranked Preferences
Direct Preference Optimization (DPO) is an effective technique that leverages pairwise preference data (usually one chosen and rejected response pair per user prompt) to align LLMs to human preferences. In practice, multiple responses can exist for a given prompt with varying quality relative to each other. With availability of such quality ratings for multiple responses, we propose utilizing these responses to create multiple preference pairs for a given prompt. Our work focuses on systematically using the constructed multiple preference pairs in DPO training via a curriculum learning methodology. In particular, we order these multiple pairs of preference data from easy to hard (emulating curriculum training) according to various criteria. We show detailed comparisons of our proposed approach to the standard single-pair DPO setting. Our method, which we call Curry-DPO, consistently shows increased performance gains on MT-Bench, Vicuna, WizardLM, and the UltraFeedback test set, highlighting its effectiveness. More specifically, Curry-DPO achieves a score of 7.43 on MT-Bench with the Zephyr-7B model, outperforming the majority of existing LLMs with similar parameter size. Curry-DPO also achieves the highest adjusted win rates on Vicuna, WizardLM, and UltraFeedback test datasets (90.7%, 87.1%, and 87.9% respectively) in our experiments, with notable gains of up to 7.5% when compared to the standard DPO technique.
authors: Pulkit Pattnaik, Rishabh Maheshwary, Kelechi Ogueji, Vikas Yadav, Sathwik Tejaswi Madhusudhan
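As a sketch of the curriculum construction described above (one plausible ordering criterion, not necessarily the paper's exact recipe): given several rated responses per prompt, pair the best response with each worse one and order the pairs from large to small rating gap, i.e., easy to hard.

```python
# Sketch: build easy-to-hard preference pairs for curriculum-style DPO.
def curriculum_pairs(responses):
    """`responses` is a list of (text, quality_score) for one prompt.
    A larger score gap is treated as an 'easier' pair (an assumption)."""
    ranked = sorted(responses, key=lambda r: r[1], reverse=True)
    chosen = ranked[0]  # pair the best response with each worse one
    pairs = [(chosen[0], rej[0], chosen[1] - rej[1]) for rej in ranked[1:]]
    pairs.sort(key=lambda p: p[2], reverse=True)  # big gap first: easy -> hard
    return [(c, r) for c, r, _ in pairs]

responses = [("resp A", 9.0), ("resp B", 7.5), ("resp C", 4.0), ("resp D", 2.0)]
for chosen, rejected in curriculum_pairs(responses):
    print(chosen, ">", rejected)
```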
id: 2403.07232 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07232v1
updated: 2024-03-12T01:00:52Z
published: 2024-03-12T01:00:52Z
title: Tractable Joint Prediction and Planning over Discrete Behavior Modes for Urban Driving
Significant progress has been made in training multimodal trajectory forecasting models for autonomous driving. However, effectively integrating these models with downstream planners and model-based control approaches is still an open problem. Although these models have conventionally been evaluated for open-loop prediction, we show that they can be used to parameterize autoregressive closed-loop models without retraining. We consider recent trajectory prediction approaches which leverage learned anchor embeddings to predict multiple trajectories, finding that these anchor embeddings can parameterize discrete and distinct modes representing high-level driving behaviors. We propose to perform fully reactive closed-loop planning over these discrete latent modes, allowing us to tractably model the causal interactions between agents at each step. We validate our approach on a suite of more dynamic merging scenarios, finding that our approach avoids the "frozen robot problem" which is pervasive in conventional planners. Our approach also outperforms the previous state-of-the-art in CARLA on challenging dense traffic scenarios when evaluated at realistic speeds.
authors: Adam Villaflor, Brian Yang, Huangyuan Su, Katerina Fragkiadaki, John Dolan, Jeff Schneider
id: 2403.07241 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07241v1
updated: 2024-03-12T01:47:17Z
published: 2024-03-12T01:47:17Z
title: Calibrating Multi-modal Representations: A Pursuit of Group Robustness without Annotations
Fine-tuning pre-trained vision-language models, like CLIP, has yielded success on diverse downstream tasks. However, several pain points persist for this paradigm: (i) directly tuning entire pre-trained models becomes both time-intensive and computationally costly. Additionally, these tuned models tend to become highly specialized, limiting their practicality for real-world deployment; (ii) recent studies indicate that pre-trained vision-language classifiers may overly depend on spurious features -- patterns that correlate with the target in training data, but are not related to the true labeling function; and (iii) existing studies on mitigating the reliance on spurious features, largely based on the assumption that we can identify such features, do not provide definitive assurance for real-world applications. As a piloting study, this work focuses on exploring mitigating the reliance on spurious features for CLIP without using any group annotation. To this end, we systematically study the existence of spurious correlation on CLIP and CLIP+ERM. We first, following recent work on Deep Feature Reweighting (DFR), verify that last-layer retraining can greatly improve group robustness on pretrained CLIP. In view of these findings, we advocate a lightweight representation calibration method for fine-tuning CLIP, by first generating a calibration set using the pretrained CLIP, and then calibrating representations of samples within this set through contrastive learning, all without the need for group labels. Extensive experiments and in-depth visualizations on several benchmarks validate the effectiveness of our proposals, largely reducing reliance on spurious features and significantly boosting model generalization.
authors: Chenyu You, Yifei Min, Weicheng Dai, Jasjeet S. Sekhon, Lawrence Staib, James S. Duncan
id: 2403.07245 (categories, doi, year, venue: null)
link: http://arxiv.org/pdf/2403.07245v3
updated: 2024-06-10T13:55:22Z
published: 2024-03-12T02:05:06Z
title: Dataset Condensation for Time Series Classification via Dual Domain Matching
Time series data has been demonstrated to be crucial in various research fields. The management of large quantities of time series data presents challenges in terms of deep learning tasks, particularly for training a deep neural network. Recently, a technique named \textit{Dataset Condensation} has emerged as a solution to this problem. This technique generates a smaller synthetic dataset that has comparable performance to the full real dataset in downstream tasks such as classification. However, previous methods are primarily designed for image and graph datasets, and directly adapting them to the time series dataset leads to suboptimal performance due to their inability to effectively leverage the rich information inherent in time series data, particularly in the frequency domain. In this paper, we propose a novel framework named Dataset \textit{\textbf{Cond}}ensation for \textit{\textbf{T}}ime \textit{\textbf{S}}eries \textit{\textbf{C}}lassification via Dual Domain Matching (\textbf{CondTSC}) which focuses on the time series classification dataset condensation task. Different from previous methods, our proposed framework aims to generate a condensed dataset that matches the surrogate objectives in both the time and frequency domains. Specifically, CondTSC incorporates multi-view data augmentation, dual domain training, and dual surrogate objectives to enhance the dataset condensation process in the time and frequency domains. Through extensive experiments, we demonstrate the effectiveness of our proposed framework, which outperforms other baselines and learns a condensed synthetic dataset that exhibits desirable characteristics such as conforming to the distribution of the original data.
[ "['Zhanyu Liu' 'Ke Hao' 'Guanjie Zheng' 'Yanwei Yu']" ]
null
null
2403.07247
null
null
http://arxiv.org/pdf/2403.07247v1
2024-03-12T02:09:39Z
2024-03-12T02:09:39Z
GuideGen: A Text-guided Framework for Joint CT Volume and Anatomical structure Generation
The annotation burden and extensive labor for gathering a large medical dataset with images and corresponding labels are rarely cost-effective and highly intimidating. This results in a lack of abundant training data that undermines downstream tasks and partially contributes to the challenge image analysis faces in the medical field. As a workaround, given the recent success of generative neural models, it is now possible to synthesize image datasets at a high fidelity guided by external constraints. This paper explores this possibility and presents \textbf{GuideGen}: a pipeline that jointly generates CT images and tissue masks for abdominal organs and colorectal cancer conditioned on a text prompt. First, we introduce a Volumetric Mask Sampler to fit the discrete distribution of mask labels and generate low-resolution 3D tissue masks. Second, our Conditional Image Generator autoregressively generates CT slices conditioned on a corresponding mask slice to incorporate both style information and anatomical guidance. This pipeline guarantees high fidelity and variability as well as exact alignment between generated CT volumes and tissue masks. Both qualitative and quantitative experiments on 3D abdominal CTs demonstrate the high performance of our proposed pipeline, thereby proving our method can serve as a dataset generator and provide potential benefits to downstream tasks. It is hoped that our work will offer a promising solution for the multimodal generation of CT volumes and their anatomical masks. Our source code is publicly available at https://github.com/OvO1111/JointImageGeneration.
[ "['Linrui Dai' 'Rongzhao Zhang' 'Zhongzhen Huang' 'Xiaofan Zhang']" ]
null
null
2403.07255
null
null
http://arxiv.org/pdf/2403.07255v1
2024-03-12T02:24:37Z
2024-03-12T02:24:37Z
Deep Learning-Assisted Parallel Interference Cancellation for Grant-Free NOMA in Machine-Type Communication
In this paper, we present a novel approach for joint activity detection (AD), channel estimation (CE), and data detection (DD) in uplink grant-free non-orthogonal multiple access (NOMA) systems. Our approach employs an iterative and parallel interference removal strategy inspired by parallel interference cancellation (PIC), enhanced with deep learning to jointly tackle the AD, CE, and DD problems. Based on this approach, we develop three PIC frameworks, each of which is designed for either coherent or non-coherent schemes. The first framework performs joint AD and CE using received pilot signals in the coherent scheme. Building upon this framework, the second framework utilizes both the received pilot and data signals for CE, further enhancing the performance of AD, CE, and DD in the coherent scheme. The third framework is designed to accommodate the non-coherent scheme involving a small number of data bits, which simultaneously performs AD and DD. Through joint loss functions and interference cancellation modules, our approach supports end-to-end training, contributing to enhanced performance of AD, CE, and DD for both coherent and non-coherent schemes. Simulation results demonstrate the superiority of our approach over traditional techniques, exhibiting enhanced performance of AD, CE, and DD while maintaining lower computational complexity.
[ "['Yongjeong Oh' 'Jaehong Jo' 'Byonghyo Shim' 'Yo-Seb Jeon']" ]
null
null
2403.07261
null
null
http://arxiv.org/pdf/2403.07261v1
2024-03-12T02:38:36Z
2024-03-12T02:38:36Z
Disentangling Policy from Offline Task Representation Learning via Adversarial Data Augmentation
Offline meta-reinforcement learning (OMRL) proficiently allows an agent to tackle novel tasks while solely relying on a static dataset. For precise and efficient task identification, existing OMRL research suggests learning separate task representations that are incorporated into the policy input, thus forming a context-based meta-policy. A major approach to training task representations is to adopt contrastive learning using multi-task offline data. The dataset typically encompasses interactions from various policies (i.e., the behavior policies), thus providing a plethora of contextual information regarding different tasks. Nonetheless, amassing data from a substantial number of policies is not only impractical but also often unattainable in realistic settings. Instead, we resort to a more constrained yet practical scenario, where multi-task data collection occurs with a limited number of policies. We observed that learned task representations from previous OMRL methods tend to correlate spuriously with the behavior policy instead of reflecting the essential characteristics of the task, resulting in unfavorable out-of-distribution generalization. To alleviate this issue, we introduce a novel algorithm to disentangle the impact of the behavior policy from task representation learning through a process called adversarial data augmentation. Specifically, the objective of adversarial data augmentation is not merely to generate data analogous to the offline data distribution; instead, it aims to create adversarial examples designed to confound learned task representations and lead to incorrect task identification. Our experiments show that learning from such adversarial samples significantly enhances the robustness and effectiveness of the task identification process and realizes satisfactory out-of-distribution generalization.
[ "['Chengxing Jia' 'Fuxiang Zhang' 'Yi-Chen Li' 'Chen-Xiao Gao' 'Xu-Hui Liu'\n 'Lei Yuan' 'Zongzhang Zhang' 'Yang Yu']" ]
null
null
2403.07262
null
null
http://arxiv.org/pdf/2403.07262v2
2024-05-30T15:04:42Z
2024-03-12T02:43:41Z
A2PO: Towards Effective Offline Reinforcement Learning from an Advantage-aware Perspective
Offline reinforcement learning endeavors to leverage offline datasets to craft effective agent policies without online interaction, imposing proper conservative constraints with the support of behavior policies to tackle the out-of-distribution problem. However, existing works often suffer from the constraint conflict issue when offline datasets are collected from multiple behavior policies, i.e., different behavior policies may exhibit inconsistent actions with distinct returns across the state space. To remedy this issue, recent advantage-weighted methods prioritize samples with high advantage values for agent training while inevitably ignoring the diversity of the behavior policies. In this paper, we introduce a novel Advantage-Aware Policy Optimization (A2PO) method to explicitly construct advantage-aware policy constraints for offline learning under mixed-quality datasets. Specifically, A2PO employs a conditional variational auto-encoder to disentangle the action distributions of intertwined behavior policies by modeling the advantage values of all training data as conditional variables. Then the agent can follow such disentangled action distribution constraints to optimize the advantage-aware policy towards high advantage values. Extensive experiments conducted on both the single-quality and mixed-quality datasets of the D4RL benchmark demonstrate that A2PO yields results superior to its counterparts. Our code will be made publicly available.
[ "['Yunpeng Qing' 'Shunyu liu' 'Jingyuan Cong' 'Kaixuan Chen' 'Yihe Zhou'\n 'Mingli Song']" ]
null
null
2403.07263
null
null
http://arxiv.org/pdf/2403.07263v1
2024-03-12T02:45:24Z
2024-03-12T02:45:24Z
Adaptive Bounding Box Uncertainties via Two-Step Conformal Prediction
Quantifying a model's predictive uncertainty is essential for safety-critical applications such as autonomous driving. We consider quantifying such uncertainty for multi-object detection. In particular, we leverage conformal prediction to obtain uncertainty intervals with guaranteed coverage for object bounding boxes. One challenge in doing so is that bounding box predictions are conditioned on the object's class label. Thus, we develop a novel two-step conformal approach that propagates uncertainty in predicted class labels into the uncertainty intervals for the bounding boxes. This broadens the validity of our conformal coverage guarantees to include incorrectly classified objects, ensuring their usefulness when maximal safety assurances are required. Moreover, we investigate novel ensemble and quantile regression formulations to ensure the bounding box intervals are adaptive to object size, leading to a more balanced coverage across sizes. Validating our two-step approach on real-world datasets for 2D bounding box localization, we find that desired coverage levels are satisfied with actionably tight predictive uncertainty intervals.
[ "['Alexander Timans' 'Christoph-Nikolas Straehle' 'Kaspar Sakmann'\n 'Eric Nalisnick']" ]
null
null
2403.07264
null
null
http://arxiv.org/pdf/2403.07264v1
2024-03-12T02:47:00Z
2024-03-12T02:47:00Z
Near-Interpolators: Rapid Norm Growth and the Trade-Off between Interpolation and Generalization
We study the generalization capability of nearly-interpolating linear regressors: $\boldsymbol{\beta}$'s whose training error $\tau$ is positive but small, i.e., below the noise floor. Under a random matrix theoretic assumption on the data distribution and an eigendecay assumption on the data covariance matrix $\boldsymbol{\Sigma}$, we demonstrate that any near-interpolator exhibits rapid norm growth: for $\tau$ fixed, $\boldsymbol{\beta}$ has squared $\ell_2$-norm $\mathbb{E}[\|\boldsymbol{\beta}\|_{2}^{2}] = \Omega(n^{\alpha})$ where $n$ is the number of samples and $\alpha > 1$ is the exponent of the eigendecay, i.e., $\lambda_i(\boldsymbol{\Sigma}) \sim i^{-\alpha}$. This implies that existing data-independent norm-based bounds are necessarily loose. On the other hand, in the same regime we precisely characterize the asymptotic trade-off between interpolation and generalization. Our characterization reveals that larger norm scaling exponents $\alpha$ correspond to worse trade-offs between interpolation and generalization. We verify empirically that a similar phenomenon holds for nearly-interpolating shallow neural networks.
[ "['Yutong Wang' 'Rishi Sonthalia' 'Wei Hu']" ]
null
null
2403.07271
null
null
http://arxiv.org/pdf/2403.07271v1
2024-03-12T03:00:15Z
2024-03-12T03:00:15Z
Anderson acceleration for iteratively reweighted $\ell_1$ algorithm
The iteratively reweighted $\ell_1$ (IRL1) algorithm is a common algorithm for solving sparse optimization problems with nonconvex and nonsmooth regularization. The development of its accelerated variants, often employing Nesterov acceleration, has sparked significant interest. Nevertheless, the convergence and complexity analysis of these accelerated algorithms consistently poses substantial challenges. Recently, Anderson acceleration has gained prominence owing to its exceptional performance for speeding up fixed-point iterations, with numerous recent studies applying it to gradient-based algorithms. Motivated by the powerful impact of Anderson acceleration, we propose an Anderson-accelerated IRL1 algorithm and establish its local linear convergence rate. We extend this convergence result, typically observed in smooth settings, to a nonsmooth scenario. Importantly, our theoretical results do not depend on the Kurdyka-Łojasiewicz condition, a necessary condition in existing Nesterov acceleration-based algorithms. Furthermore, to ensure global convergence, we introduce a globally convergent Anderson-accelerated IRL1 algorithm by incorporating a classical nonmonotone line search condition. Experimental results indicate that our algorithm outperforms existing Nesterov acceleration-based algorithms.
[ "['Kexin Li']" ]
null
null
2403.07282
null
null
http://arxiv.org/pdf/2403.07282v1
2024-03-12T03:26:58Z
2024-03-12T03:26:58Z
Enhancing Transfer Learning with Flexible Nonparametric Posterior Sampling
Transfer learning has recently shown significant performance across various tasks involving deep neural networks. In these transfer learning scenarios, the prior distribution for downstream data becomes crucial in Bayesian model averaging (BMA). While previous works proposed the prior over the neural network parameters centered around the pre-trained solution, such strategies have limitations when dealing with distribution shifts between upstream and downstream data. This paper introduces nonparametric transfer learning (NPTL), a flexible posterior sampling method to address the distribution shift issue within the context of nonparametric learning. The nonparametric learning (NPL) method is a recent approach that employs a nonparametric prior for posterior sampling, efficiently accounting for model misspecification, which makes it suitable for transfer learning scenarios that may involve distribution shift between upstream and downstream tasks. Through extensive empirical validations, we demonstrate that our approach surpasses other baselines in BMA performance.
[ "['Hyungi Lee' 'Giung Nam' 'Edwin Fong' 'Juho Lee']" ]
null
null
2403.07283
null
null
http://arxiv.org/pdf/2403.07283v1
2024-03-12T03:30:04Z
2024-03-12T03:30:04Z
A Framework for Cost-Effective and Self-Adaptive LLM Shaking and Recovery Mechanism
As Large Language Models (LLMs) gain great success in real-world applications, an increasing number of users are seeking to develop and deploy their customized LLMs through cloud services. Nonetheless, in some specific domains, there are still concerns regarding cost and trade-offs between privacy issues and accuracy. In this study, we introduce a cost-effective and self-adaptive LLM shaking tuning and recovery mechanism, named CypherTalk. With carefully designed horizontal and vertical shaking operators, we can achieve comparable accuracy results with SOTA privacy-preserving LLM schemes using Cryptography-based or Differential Privacy-based methods. Experiments also show that with the CypherTalk framework, users can achieve reliable accuracy when using optimized shaking operator settings. To the best of our knowledge, this is the first work that considers cost and the trade-off between model utility and privacy in LLM scenarios.
[ "['Zhiyu Chen' 'Yu Li' 'Suochao Zhang' 'Jingbo Zhou' 'Jiwen Zhou'\n 'Chenfu Bao' 'Dianhai Yu']" ]
null
null
2403.07294
null
null
http://arxiv.org/pdf/2403.07294v2
2024-06-07T04:36:34Z
2024-03-12T03:54:25Z
Graph Data Condensation via Self-expressive Graph Structure Reconstruction
With the increasing demands of training graph neural networks (GNNs) on large-scale graphs, graph data condensation has emerged as a critical technique to relieve the storage and time costs during the training phase. It aims to condense the original large-scale graph to a much smaller synthetic graph while preserving the essential information necessary for efficiently training a downstream GNN. However, existing methods concentrate either on optimizing node features exclusively or endeavor to independently learn node features and the graph structure generator. They could not explicitly leverage the information of the original graph structure and failed to construct an interpretable graph structure for the synthetic dataset. To address these issues, we introduce a novel framework named \textbf{G}raph Data \textbf{C}ondensation via \textbf{S}elf-expressive Graph Structure \textbf{R}econstruction (\textbf{GCSR}). Our method stands out by (1) explicitly incorporating the original graph structure into the condensing process and (2) capturing the nuanced interdependencies between the condensed nodes by reconstructing an interpretable self-expressive graph structure. Extensive experiments and comprehensive analysis validate the efficacy of the proposed method across diverse GNN models and datasets. Our code is available at \url{https://github.com/zclzcl0223/GCSR}.
[ "['Zhanyu Liu' 'Chaolv Zeng' 'Guanjie Zheng']" ]
null
null
2403.07300
null
null
http://arxiv.org/pdf/2403.07300v2
2024-05-23T04:47:29Z
2024-03-12T04:04:38Z
CALF: Aligning LLMs for Time Series Forecasting via Cross-modal Fine-Tuning
Deep learning (e.g., Transformer) has been widely and successfully used in multivariate time series forecasting (MTSF). Unlike existing methods that focus on training models from a single modal of time series input, large language model (LLM)-based MTSF methods with cross-modal text and time series input have recently shown great superiority, especially with limited temporal data. However, current LLM-based MTSF methods usually focus on adapting and fine-tuning LLMs, while neglecting the distribution discrepancy between textual and temporal input tokens, thus leading to sub-optimal performance. To address this issue, we propose a novel Cross-Modal LLM Fine-Tuning (CALF) framework for MTSF by reducing the distribution discrepancy between textual and temporal data, which mainly consists of the temporal target branch with temporal input and the textual source branch with aligned textual input. To reduce the distribution discrepancy, we develop the cross-modal match module to first align cross-modal input distributions. Additionally, to minimize the modality distribution gap in both feature and output spaces, feature regularization loss is developed to align the intermediate features between the two branches for better weight updates, while output consistency loss is introduced to allow the output representations of both branches to correspond effectively. Thanks to the modality alignment, CALF establishes state-of-the-art performance for both long-term and short-term forecasting tasks with low computational complexity, while exhibiting favorable few-shot and zero-shot abilities similar to those of LLMs. Code is available at \url{https://github.com/Hank0626/LLaTA}.
[ "['Peiyuan Liu' 'Hang Guo' 'Tao Dai' 'Naiqi Li' 'Jigang Bao' 'Xudong Ren'\n 'Yong Jiang' 'Shu-Tao Xia']" ]
null
null
2403.07308
null
null
http://arxiv.org/pdf/2403.07308v1
2024-03-12T04:29:43Z
2024-03-12T04:29:43Z
Verification-Aided Learning of Neural Network Barrier Functions with Termination Guarantees
Barrier functions are a general framework for establishing a safety guarantee for a system. However, there is no general method for finding these functions. To address this shortcoming, recent approaches use self-supervised learning techniques to learn these functions using training data that are periodically generated by a verification procedure, leading to a verification-aided learning framework. Despite its immense potential in automating barrier function synthesis, the verification-aided learning framework does not have termination guarantees and may suffer from a low success rate of finding a valid barrier function in practice. In this paper, we propose a holistic approach to address these drawbacks. With a convex formulation of the barrier function synthesis, we propose to first learn an empirically well-behaved NN basis function and then apply a fine-tuning algorithm that exploits the convexity and counterexamples from the verification failure to find a valid barrier function with finite-step termination guarantees: if there exist valid barrier functions, the fine-tuning algorithm is guaranteed to find one in a finite number of iterations. We demonstrate that our fine-tuning method can significantly boost the performance of the verification-aided learning framework on examples of different scales and using various neural network verifiers.
[ "['Shaoru Chen' 'Lekan Molu' 'Mahyar Fazlyab']" ]
null
null
2403.07309
null
null
http://arxiv.org/pdf/2403.07309v1
2024-03-12T04:36:41Z
2024-03-12T04:36:41Z
Reinforced Sequential Decision-Making for Sepsis Treatment: The POSNEGDM Framework with Mortality Classifier and Transformer
Sepsis, a life-threatening condition triggered by the body's exaggerated response to infection, demands urgent intervention to prevent severe complications. Existing machine learning methods for managing sepsis struggle in offline scenarios, exhibiting suboptimal performance with survival rates below 50%. This paper introduces the POSNEGDM -- "Reinforcement Learning with Positive and Negative Demonstrations for Sequential Decision-Making" framework utilizing an innovative transformer-based model and a feedback reinforcer to replicate expert actions while considering individual patient characteristics. A mortality classifier with 96.7% accuracy guides treatment decisions towards positive outcomes. The POSNEGDM framework significantly improves patient survival, saving 97.39% of patients, outperforming established machine learning algorithms (Decision Transformer and Behavioral Cloning) with survival rates of 33.4% and 43.5%, respectively. Additionally, ablation studies underscore the critical role of the transformer-based decision maker and the integration of a mortality classifier in enhancing overall survival rates. In summary, our proposed approach presents a promising avenue for enhancing sepsis treatment outcomes, contributing to improved patient care and reduced healthcare costs.
[ "['Dipesh Tamboli' 'Jiayu Chen' 'Kiran Pranesh Jotheeswaran' 'Denny Yu'\n 'Vaneet Aggarwal']" ]
null
null
2403.07310
null
null
http://arxiv.org/pdf/2403.07310v2
2024-03-19T15:20:31Z
2024-03-12T04:38:05Z
How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance
Group imbalance has been a known problem in empirical risk minimization (ERM), where the achieved high average accuracy is accompanied by low accuracy in a minority group. Despite algorithmic efforts to improve the minority group accuracy, a theoretical generalization analysis of ERM on individual groups remains elusive. By formulating the group imbalance problem with the Gaussian Mixture Model, this paper quantifies the impact of individual groups on the sample complexity, the convergence rate, and the average and group-level testing performance. Although our theoretical framework is centered on binary classification using a one-hidden-layer neural network, to the best of our knowledge, we provide the first theoretical analysis of the group-level generalization of ERM in addition to the commonly studied average generalization performance. Sample insights from our theoretical results include that when all group-level covariances are in the medium regime and all means are close to zero, the learning performance is most desirable in the sense of a small sample complexity, a fast training rate, and a high average and group-level testing accuracy. Moreover, we show that increasing the fraction of the minority group in the training data does not necessarily improve the generalization performance of the minority group. Our theoretical results are validated on both synthetic and empirical datasets, such as CelebA and CIFAR-10 in image classification.
[ "['Hongkang Li' 'Shuai Zhang' 'Yihua Zhang' 'Meng Wang' 'Sijia Liu'\n 'Pin-Yu Chen']" ]
null
null
2403.07311
null
null
http://arxiv.org/pdf/2403.07311v7
2024-07-02T14:52:16Z
2024-03-12T04:47:29Z
Knowledge Graph Large Language Model (KG-LLM) for Link Prediction
The task of multi-hop link prediction within knowledge graphs (KGs) stands as a challenge in the field of knowledge graph analysis, as it requires the model to reason through and understand all intermediate connections before making a prediction. In this paper, we introduce the Knowledge Graph Large Language Model (KG-LLM), a novel framework that leverages large language models (LLMs) for knowledge graph tasks. We first convert structured knowledge graph data into natural language and then use these natural language prompts to fine-tune LLMs to enhance multi-hop link prediction in KGs. By converting the KG to natural language prompts, our framework is designed to learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM Framework, we fine-tune three leading LLMs within this framework, including Flan-T5, LLaMa2 and Gemma. Further, we explore the framework's potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Experimental results show that KG-LLM significantly improves the models' generalization capabilities, leading to more accurate predictions in unfamiliar scenarios.
[ "['Dong Shu' 'Tianle Chen' 'Mingyu Jin' 'Chong Zhang' 'Mengnan Du'\n 'Yongfeng Zhang']" ]
null
null
2403.07314
null
null
http://arxiv.org/pdf/2403.07314v1
2024-03-12T05:00:38Z
2024-03-12T05:00:38Z
Customizable Avatars with Dynamic Facial Action Coded Expressions (CADyFACE) for Improved User Engagement
Customizable 3D avatar-based facial expression stimuli may improve user engagement in behavioral biomarker discovery and therapeutic intervention for autism, Alzheimer's disease, facial palsy, and more. However, there is a lack of customizable avatar-based stimuli with Facial Action Coding System (FACS) action unit (AU) labels. Therefore, this study focuses on (1) FACS-labeled, customizable avatar-based expression stimuli for maintaining subjects' engagement, (2) learning-based measurements that quantify subjects' facial responses to such stimuli, and (3) validation of constructs represented by stimulus-measurement pairs. We propose Customizable Avatars with Dynamic Facial Action Coded Expressions (CADyFACE) labeled with AUs by a certified FACS expert. To measure subjects' AUs in response to CADyFACE, we propose a novel Beta-guided Correlation and Multi-task Expression learning neural network (BeCoME-Net) for multi-label AU detection. The beta-guided correlation loss encourages feature correlation with AUs while discouraging correlation with subject identities for improved generalization. We train BeCoME-Net for unilateral and bilateral AU detection and compare with state-of-the-art approaches. To assess construct validity of CADyFACE and BeCoME-Net, twenty healthy adult volunteers complete expression recognition and mimicry tasks in an online feasibility study while webcam-based eye-tracking and video are collected. We test validity of multiple constructs, including face preference during recognition and AUs during mimicry.
[ "['Megan A. Witherow' 'Crystal Butler' 'Winston J. Shields' 'Furkan Ilgin'\n 'Norou Diawara' 'Janice Keener' 'John W. Harrington'\n 'Khan M. Iftekharuddin']" ]
null
null
2403.07320
null
null
http://arxiv.org/pdf/2403.07320v1
2024-03-12T05:09:25Z
2024-03-12T05:09:25Z
Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding
Neural compression has brought tremendous progress in designing lossy compressors with good rate-distortion (RD) performance at low complexity. Thus far, neural compression design involves transforming the source to a latent vector, which is then rounded to integers and entropy coded. While this approach has been shown to be optimal in a one-shot sense on certain sources, we show that it is highly sub-optimal on i.i.d. sequences, and in fact always recovers scalar quantization of the original source sequence. We demonstrate that the sub-optimality is due to the choice of quantization scheme in the latent space, and not the transform design. By employing lattice quantization instead of scalar quantization in the latent space, we demonstrate that Lattice Transform Coding (LTC) is able to recover optimal vector quantization at various dimensions and approach the asymptotically-achievable rate-distortion function at reasonable complexity. On general vector sources, LTC improves upon standard neural compressors in one-shot coding performance. LTC also enables neural compressors that perform block coding on i.i.d. vector sources, which yields coding gain over optimal one-shot coding.
[ "['Eric Lei' 'Hamed Hassani' 'Shirin Saeedi Bidokhti']" ]
null
null
2403.07322
null
null
http://arxiv.org/pdf/2403.07322v3
2024-07-05T16:11:43Z
2024-03-12T05:15:42Z
A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models
Knowledge tracing (KT) plays a crucial role in predicting students' future performance by analyzing their historical learning processes. Deep neural networks (DNNs) have shown great potential in solving the KT problem. However, there still exist some important challenges when applying deep learning techniques to model the KT process. The first challenge lies in incorporating the individual information of each question into the modeling. This is crucial because, despite questions sharing the same knowledge component (KC), students' knowledge acquisition on homogeneous questions can vary significantly. The second challenge lies in interpreting the prediction results from existing deep learning-based KT models. In real-world applications, while it may not be necessary to have complete transparency and interpretability of the model parameters, it is crucial to present the model's prediction results in a manner that teachers find interpretable. This helps teachers accept the rationale behind the prediction results and utilize them to design teaching activities and tailored learning strategies for students. However, the inherent black-box nature of deep learning techniques often poses a hurdle for teachers to fully embrace the model's prediction results. To address these challenges, we propose a Question-centric Multi-experts Contrastive Learning framework for KT called Q-MCKT. We have provided all the datasets and code on our website at https://github.com/rattlesnakey/Q-MCKT.
[ "['Hengyuan Zhang' 'Zitao Liu' 'Chenming Shang' 'Dawei Li' 'Yong Jiang']" ]
null
null
2403.07329
null
null
http://arxiv.org/pdf/2403.07329v1
2024-03-12T05:29:48Z
2024-03-12T05:29:48Z
Unknown Domain Inconsistency Minimization for Domain Generalization
The objective of domain generalization (DG) is to enhance the transferability of the model learned from a source domain to unobserved domains. To prevent overfitting to a specific domain, Sharpness-Aware Minimization (SAM) reduces the source domain's loss sharpness. Although SAM variants have delivered significant improvements in DG, we highlight that there is still potential for improvement in generalizing to unknown domains through exploration of the data space. This paper introduces an objective rooted in both parameter and data perturbed regions for domain generalization, coined Unknown Domain Inconsistency Minimization (UDIM). UDIM reduces the loss landscape inconsistency between the source domain and unknown domains. As unknown domains are inaccessible, these domains are empirically crafted by perturbing instances from the source domain dataset. In particular, by aligning the loss landscape acquired in the source domain to the loss landscape of perturbed domains, we expect to achieve generalization grounded on these flat minima for the unknown domains. Theoretically, we validate that merging SAM optimization with the UDIM objective establishes an upper bound for the true objective of the DG task. In an empirical aspect, UDIM consistently outperforms SAM variants across multiple DG benchmark datasets. Notably, UDIM shows statistically significant improvements in scenarios with more restrictive domain information, underscoring UDIM's generalization capability in unseen domains. Our code is available at \url{https://github.com/SJShin-AI/UDIM}.
[ "['Seungjae Shin' 'HeeSun Bae' 'Byeonghu Na' 'Yoon-Yeong Kim'\n 'Il-Chul Moon']" ]
null
null
2403.07339
null
null
http://arxiv.org/pdf/2403.07339v1
2024-03-12T05:44:27Z
2024-03-12T05:44:27Z
IM-Unpack: Training and Inference with Arbitrarily Low Precision Integers
GEneral Matrix Multiply (GEMM) is a central operation in deep learning and corresponds to the largest chunk of the compute footprint. Therefore, improving its efficiency is an active topic of ongoing research. A popular strategy is the use of low bit-width integers to approximate the original entries in a matrix. This allows efficiency gains, but often requires sophisticated techniques to control the rounding error incurred. In this work, we first verify that when the low bit-width restriction is removed, integers are sufficient for all GEMMs in a variety of Transformer-based models -- for {\em both} training and inference stages -- and can achieve parity with their floating point counterparts. No sophisticated techniques are needed. We find that while a large majority of entries in matrices (encountered in such models) can be easily represented by {\em low} bit-width integers, the existence of a few heavy hitter entries makes it difficult to achieve efficiency gains via the exclusive use of low bit-width GEMMs alone. To address this issue, we develop a simple algorithm, Integer Matrix Unpacking (IM-Unpack), to {\em unpack} a matrix with large integer entries into a larger matrix whose entries all lie within the representable range of arbitrarily low bit-width integers. This allows {\em equivalence} with the original GEMM, i.e., the exact result can be obtained using purely low bit-width integer GEMMs. This comes at the cost of additional operations -- we show that for many popular models, this overhead is quite small.
[ "['Zhanpeng Zeng' 'Karthikeyan Sankaralingam' 'Vikas Singh']" ]
null
null
2403.07353
null
null
http://arxiv.org/pdf/2403.07353v2
2024-03-13T04:43:23Z
2024-03-12T06:22:10Z
Graph Unlearning with Efficient Partial Retraining
Graph Neural Networks (GNNs) have achieved remarkable success in various real-world applications. However, GNNs may be trained on undesirable graph data, which can degrade their performance and reliability. To enable trained GNNs to efficiently unlearn unwanted data, a desirable solution is retraining-based graph unlearning, which partitions the training graph into subgraphs and trains sub-models on them, allowing fast unlearning through partial retraining. However, the graph partition process causes information loss in the training graph, resulting in the low model utility of sub-GNN models. In this paper, we propose GraphRevoker, a novel graph unlearning framework that better maintains the model utility of unlearnable GNNs. Specifically, we preserve the graph property with graph property-aware sharding and effectively aggregate the sub-GNN models for prediction with graph contrastive sub-model aggregation. We conduct extensive experiments to demonstrate the superiority of our proposed approach.
[ "['Jiahao Zhang' 'Lin Wang' 'Shijie Wang' 'Wenqi Fan']" ]
null
null
2403.07356
null
null
http://arxiv.org/pdf/2403.07356v1
2024-03-12T06:29:54Z
2024-03-12T06:29:54Z
Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning
Continual learning requires a model to adapt to ongoing changes in the data distribution, and often to the set of tasks to be performed. It is rare, however, that the data and task changes are completely unpredictable. Given a description of an overarching goal or data theme, which we call a realm, humans can often guess what concepts are associated with it. We show here that the combination of a large language model and an image generation model can similarly provide useful premonitions as to how a continual learning challenge might develop over time. We use the large language model to generate text descriptions of semantically related classes that might potentially appear in the data stream in future. These descriptions are then rendered using Stable Diffusion to generate new labelled image samples. The resulting synthetic dataset is employed for supervised pre-training, but is discarded prior to commencing continual learning, along with the pre-training classification head. We find that the backbone of our pre-trained networks can learn representations useful for the downstream continual learning problem, thus becoming a valuable input to any existing continual learning method. Although there are complexities arising from the domain gap between real and synthetic images, we show that pre-training models in this manner improves multiple Class Incremental Learning (CIL) methods on fine-grained image classification benchmarks. Supporting code can be found at https://github.com/cl-premonition/premonition.
[ "['Mark D. McDonnell' 'Dong Gong' 'Ehsan Abbasnejad' 'Anton van den Hengel']" ]
null
null
2403.07362
null
null
http://arxiv.org/pdf/2403.07362v4
2024-07-09T03:59:01Z
2024-03-12T06:50:32Z
Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning
The trustworthy machine learning (ML) community is increasingly recognizing the crucial need for models capable of selectively 'unlearning' data points after training. This leads to the problem of machine unlearning (MU), aiming to eliminate the influence of chosen data points on model performance, while still maintaining the model's utility post-unlearning. Despite various MU methods for data influence erasure, evaluations have largely focused on random data forgetting, ignoring the vital inquiry into which subset should be chosen to truly gauge the authenticity of unlearning performance. To tackle this issue, we introduce a new evaluative angle for MU from an adversarial viewpoint. We propose identifying the data subset that presents the most significant challenge for influence erasure, i.e., pinpointing the worst-case forget set. Utilizing a bi-level optimization principle, we amplify unlearning challenges at the upper optimization level to emulate worst-case scenarios, while simultaneously engaging in standard training and unlearning at the lower level, achieving a balance between data influence erasure and model utility. Our proposal offers a worst-case evaluation of MU's resilience and effectiveness. Through extensive experiments across different datasets (including CIFAR-10, 100, CelebA, Tiny ImageNet, and ImageNet) and models (including both image classifiers and generative models), we expose critical pros and cons in existing (approximate) unlearning strategies. Our results illuminate the complex challenges of MU in practice, guiding the future development of more accurate and robust unlearning algorithms. The code is available at https://github.com/OPTML-Group/Unlearn-WorstCase.
[ "['Chongyu Fan' 'Jiancheng Liu' 'Alfred Hero' 'Sijia Liu']" ]
null
null
2403.07366
null
null
http://arxiv.org/pdf/2403.07366v1
2024-03-12T07:01:57Z
2024-03-12T07:01:57Z
Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors
Test-time adaptation (TTA) fine-tunes pre-trained deep neural networks for unseen test data. The primary challenge of TTA is limited access to the entire test dataset during online updates, causing error accumulation. To mitigate it, TTA methods have utilized the model output's entropy as a confidence metric that aims to determine which samples have a lower likelihood of causing error. Through experimental studies, however, we observed the unreliability of entropy as a confidence metric for TTA under biased scenarios and theoretically revealed that it stems from the neglect of the influence of latent disentangled factors of data on predictions. Building upon these findings, we introduce a novel TTA method named Destroy Your Object (DeYO), which leverages a newly proposed confidence metric named Pseudo-Label Probability Difference (PLPD). PLPD quantifies the influence of the shape of an object on prediction by measuring the difference between predictions before and after applying an object-destructive transformation. DeYO consists of sample selection and sample weighting, which employ entropy and PLPD concurrently. For robust adaptation, DeYO prioritizes samples that dominantly incorporate shape information when making predictions. Our extensive experiments demonstrate the consistent superiority of DeYO over baseline methods across various scenarios, including biased and wild. Project page is publicly available at https://whitesnowdrop.github.io/DeYO/.
[ "['Jonghyun Lee' 'Dahuin Jung' 'Saehyung Lee' 'Junsung Park' 'Juhyeon Shin'\n 'Uiwon Hwang' 'Sungroh Yoon']" ]
null
null
2403.07378
null
null
http://arxiv.org/pdf/2403.07378v4
2024-05-28T13:41:26Z
2024-03-12T07:31:18Z
SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression
The advancements in Large Language Models (LLMs) have been hindered by their substantial sizes, which necessitate LLM compression methods for practical deployment. Singular Value Decomposition (SVD) offers a promising solution for LLM compression. However, state-of-the-art SVD-based LLM compression methods have two key limitations: truncating smaller singular values may lead to higher compression loss, and the compressed weights are not updated after SVD truncation. In this work, we propose SVD-LLM, a new SVD-based LLM compression method that addresses the limitations of existing methods. SVD-LLM incorporates a truncation-aware data whitening strategy to ensure a direct mapping between singular values and compression loss. Moreover, SVD-LLM adopts a layer-wise closed-form model parameter update strategy to compensate for accuracy degradation under high compression ratios. We evaluate SVD-LLM on a total of 10 datasets and eight models from three different LLM families at four different scales. Our results demonstrate the superiority of SVD-LLM over state-of-the-art methods, especially at high model compression ratios.
[ "['Xin Wang' 'Yu Zheng' 'Zhongwei Wan' 'Mi Zhang']" ]
null
null
2403.07379
null
null
http://arxiv.org/pdf/2403.07379v2
2024-06-24T04:53:34Z
2024-03-12T07:32:47Z
Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy
We propose a fresh take on understanding the mechanisms of neural networks by analyzing the rich directional structure of optimization trajectories, represented by their pointwise parameters. Towards this end, we introduce some natural notions of the complexity of optimization trajectories, both qualitative and quantitative, which hallmark the directional nature of optimization in neural networks: when is there redundancy, and when exploration. We use them to reveal the inherent nuance and interplay involved between various optimization choices, such as momentum and weight decay. Further, the trajectory perspective helps us see the effect of scale on regularizing the directional nature of trajectories, and as a by-product, we also observe an intriguing heterogeneity of Q,K,V dynamics in the middle attention layers in LLMs, which is homogenized by scale. Importantly, we put the significant directional redundancy observed to the test by demonstrating that training only scalar batchnorm parameters from some point onwards in training matches the performance of training the entire network, which thus exhibits the potential of hybrid optimization schemes geared towards efficiency.
[ "['Sidak Pal Singh' 'Bobby He' 'Thomas Hofmann' 'Bernhard Schölkopf']" ]
null
null
2403.07384
null
null
http://arxiv.org/pdf/2403.07384v1
2024-03-12T07:45:33Z
2024-03-12T07:45:33Z
SmallToLarge (S2L): Scalable Data Selection for Fine-tuning Large Language Models by Summarizing Training Trajectories of Small Models
Despite the effectiveness of data selection for large language models (LLMs) during pretraining and instruction fine-tuning phases, improving data efficiency in supervised fine-tuning (SFT) for specialized domains poses significant challenges due to the complexity of fine-tuning data. To bridge this gap, we introduce an effective and scalable data selection method for SFT, SmallToLarge (S2L), which leverages training trajectories from small models to guide the data selection for larger models. We demonstrate through extensive experiments that S2L significantly improves data efficiency in SFT for mathematical problem-solving, reducing the training data to just 11% of the original MathInstruct dataset (Yue et al., 2023) to match full dataset performance while outperforming state-of-the-art data selection algorithms by an average of 4.7% across 6 in- and out-domain evaluation datasets. Remarkably, selecting only 50K data for SFT, S2L achieves a 32.7% accuracy on the most challenging MATH (Hendrycks et al., 2021) benchmark, improving Phi-2 (Li et al., 2023b) by 16.6%. In clinical text summarization on the MIMIC-III dataset (Johnson et al., 2016), S2L again outperforms training on the full dataset using only 50% of the data. Notably, S2L can perform data selection using a reference model 40x smaller than the target model, proportionally reducing the cost of data selection.
[ "['Yu Yang' 'Siddhartha Mishra' 'Jeffrey N Chiang' 'Baharan Mirzasoleiman']" ]
null
null
2403.07404
null
null
http://arxiv.org/pdf/2403.07404v1
2024-03-12T08:33:26Z
2024-03-12T08:33:26Z
Accelerated Inference and Reduced Forgetting: The Dual Benefits of Early-Exit Networks in Continual Learning
Driven by the demand for energy-efficient employment of deep neural networks, early-exit methods have experienced a notable increase in research attention. These strategies allow for swift predictions by making decisions early in the network, thereby conserving computation time and resources. However, so far the early-exit networks have only been developed for stationary data distributions, which restricts their application in real-world scenarios with continuous non-stationary data. This study aims to explore the continual learning of the early-exit networks. We adapt existing continual learning methods to fit with early-exit architectures and investigate their behavior in the continual setting. We notice that early network layers exhibit reduced forgetting and can outperform standard networks even when using significantly fewer resources. Furthermore, we analyze the impact of task-recency bias on early-exit inference and propose Task-wise Logits Correction (TLC), a simple method that equalizes this bias and improves the network performance for every given compute budget in the class-incremental setting. We assess the accuracy and computational cost of various continual learning techniques enhanced with early-exits and TLC across standard class-incremental learning benchmarks such as 10 split CIFAR100 and ImageNetSubset and show that TLC can achieve the accuracy of the standard methods using less than 70% of their computations. Moreover, at full computational budget, our method outperforms the accuracy of the standard counterparts by up to 15 percentage points. Our research underscores the inherent synergy between early-exit networks and continual learning, emphasizing their practical utility in resource-constrained environments.
[ "['Filip Szatkowski' 'Fei Yang' 'Bartłomiej Twardowski' 'Tomasz Trzciński'\n 'Joost van de Weijer']" ]
null
null
2403.07413
null
null
http://arxiv.org/pdf/2403.07413v1
2024-03-12T08:40:21Z
2024-03-12T08:40:21Z
Learning-Augmented Algorithms with Explicit Predictors
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data. These approaches have demonstrated an enhancement in performance when the predictions are accurate, while also ensuring robustness by providing worst-case guarantees when predictions fail. In this paper we focus on online problems; prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box (to get the predictions it was trained for). In contrast, in this work, we unpack the predictor and integrate the learning problem it gives rise to into the algorithmic challenge. In particular we allow the predictor to learn as it receives larger parts of the input, with the ultimate goal of designing online learning algorithms specifically tailored for the algorithmic task at hand. Adopting this perspective, we focus on a number of fundamental problems, including caching and scheduling, which have been well-studied in the black-box setting. For each of the problems we consider, we introduce new algorithms that take advantage of explicit learning algorithms which we carefully design towards optimizing the overall performance. We demonstrate the potential of our approach by deriving performance bounds which improve over those established in previous work.
[ "['Marek Elias' 'Haim Kaplan' 'Yishay Mansour' 'Shay Moran']" ]
null
null
2403.07431
null
null
http://arxiv.org/pdf/2403.07431v1
2024-03-12T09:15:12Z
2024-03-12T09:15:12Z
Knowledge Transfer across Multiple Principal Component Analysis Studies
Transfer learning has aroused great interest in the statistical community. In this article, we focus on knowledge transfer for unsupervised learning tasks in contrast to the supervised learning tasks in the literature. Given the transferable source populations, we propose a two-step transfer learning algorithm to extract useful information from multiple source principal component analysis (PCA) studies, thereby enhancing estimation accuracy for the target PCA task. In the first step, we integrate the shared subspace information across multiple studies by a proposed method named as Grassmannian barycenter, instead of directly performing PCA on the pooled dataset. The proposed Grassmannian barycenter method enjoys robustness and computational advantages in more general cases. Then the resulting estimator for the shared subspace from the first step is further utilized to estimate the target private subspace in the second step. Our theoretical analysis credits the gain of knowledge transfer between PCA studies to the enlarged eigenvalue gap, which is different from the existing supervised transfer learning tasks where sparsity plays the central role. In addition, we prove that the bilinear forms of the empirical spectral projectors have asymptotic normality under weaker eigenvalue gap conditions after knowledge transfer. When the set of informative sources is unknown, we endow our algorithm with the capability of useful dataset selection by solving a rectified optimization problem on the Grassmann manifold, which in turn leads to a computationally friendly rectified Grassmannian K-means procedure. In the end, extensive numerical simulation results and a real data case concerning activity recognition are reported to support our theoretical claims and to illustrate the empirical usefulness of the proposed transfer learning methods.
[ "['Zeyu Li' 'Kangxiang Qin' 'Yong He' 'Wang Zhou' 'Xinsheng Zhang']" ]
null
null
2403.07442
null
null
http://arxiv.org/pdf/2403.07442v1
2024-03-12T09:32:41Z
2024-03-12T09:32:41Z
Proxy Methods for Domain Adaptation
We study the problem of domain adaptation under distribution shift, where the shift is due to a change in the distribution of an unobserved, latent variable that confounds both the covariates and the labels. In this setting, neither the covariate shift nor the label shift assumptions apply. Our approach to adaptation employs proximal causal learning, a technique for estimating causal effects in settings where proxies of unobserved confounders are available. We demonstrate that proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables. We consider two settings, (i) Concept Bottleneck: an additional ''concept'' variable is observed that mediates the relationship between the covariates and labels; (ii) Multi-domain: training data from multiple source domains is available, where each source domain exhibits a different distribution over the latent confounder. We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings. In our experiments, we show that our approach outperforms other methods, notably those which explicitly recover the latent confounder.
[ "['Katherine Tsai' 'Stephen R. Pfohl' 'Olawale Salaudeen' 'Nicole Chiou'\n 'Matt J. Kusner' \"Alexander D'Amour\" 'Sanmi Koyejo' 'Arthur Gretton']" ]
null
null
2403.07447
null
null
http://arxiv.org/pdf/2403.07447v1
2024-03-12T09:37:22Z
2024-03-12T09:37:22Z
Ab-initio variational wave functions for the time-dependent many-electron Schrödinger equation
Describing the dynamics of many-electron quantum systems is crucial for applications such as predicting electronic structures in quantum chemistry, the properties of condensed matter systems, and the behaviors of complex materials. However, the real-time evolution of non-equilibrium quantum electronic systems poses a significant challenge for theoretical and computational approaches, due to the system's exploration of a vast configuration space. This work introduces a variational approach for fermionic time-dependent wave functions, surpassing mean-field approximations by capturing many-body correlations. The proposed methodology involves parameterizing the time-evolving quantum state, enabling the approximation of the state's evolution. To account for electron correlations, we employ time-dependent Jastrow factors and backflow transformations. We also show that we can incorporate neural networks to parameterize these functions. The time-dependent variational Monte Carlo technique is employed to efficiently compute the optimal time-dependent parameters. The approach is demonstrated in three distinct systems: the solvable harmonic interaction model, the dynamics of a diatomic molecule in intense laser fields, and a quenched quantum dot. In all cases, we show clear signatures of many-body correlations in the dynamics not captured by mean-field methods. The results showcase the ability of our variational approach to accurately capture the time evolution of quantum states, providing insight into the quantum dynamics of interacting electronic systems, beyond the capabilities of mean-field.
[ "['Jannes Nys' 'Gabriel Pescia' 'Giuseppe Carleo']" ]
null
null
2403.07454
null
null
http://arxiv.org/pdf/2403.07454v3
2024-06-22T09:26:27Z
2024-03-12T09:48:17Z
Fast, accurate and lightweight sequential simulation-based inference using Gaussian locally linear mappings
Bayesian inference for complex models with an intractable likelihood can be tackled using algorithms performing many calls to computer simulators. These approaches are collectively known as "simulation-based inference" (SBI). Recent SBI methods have made use of neural networks (NN) to provide approximate, yet expressive constructs for the unavailable likelihood function and the posterior distribution. However, the trade-off between accuracy and computational demand leaves much space for improvement. In this work, we propose an alternative that provides both approximations to the likelihood and the posterior distribution, using structured mixtures of probability distributions. Our approach produces accurate posterior inference when compared to state-of-the-art NN-based SBI methods, even for multimodal posteriors, while exhibiting a much smaller computational footprint. We illustrate our results on several benchmark models from the SBI literature and on a biological model of the translation kinetics after mRNA transfection.
[ "['Henrik Häggström' 'Pedro L. C. Rodrigues' 'Geoffroy Oudoumanessah'\n 'Florence Forbes' 'Umberto Picchini']" ]
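To illustrate why structured mixtures of distributions give cheap, closed-form posteriors, the sketch below fits a joint Gaussian mixture to simulated (parameter, data) pairs and conditions it on an observation analytically. The toy simulator and component count are our choices, not the paper's construction.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

# Simulate (theta, x) pairs from a toy model: x | theta ~ N(theta^2, 0.5^2).
rng = np.random.default_rng(1)
theta = rng.uniform(-2, 2, size=5000)
x = theta**2 + 0.5 * rng.normal(size=5000)
joint = np.column_stack([theta, x])

gmm = GaussianMixture(n_components=8, covariance_type="full").fit(joint)

def posterior_mixture(x_obs):
    """Condition the joint mixture on x = x_obs; returns per-component
    weights, means and variances of the 1-D posterior over theta."""
    weights, means, variances = [], [], []
    for pi_k, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_t, mu_x = mu[0], mu[1]
        S_tt, S_tx, S_xx = S[0, 0], S[0, 1], S[1, 1]
        # Standard Gaussian conditioning formulas.
        means.append(mu_t + S_tx / S_xx * (x_obs - mu_x))
        variances.append(S_tt - S_tx**2 / S_xx)
        # Reweight components by the marginal likelihood of x_obs.
        weights.append(pi_k * multivariate_normal.pdf(x_obs, mu_x, S_xx))
    w = np.asarray(weights)
    return w / w.sum(), np.asarray(means), np.asarray(variances)

w, m, v = posterior_mixture(1.5)
print("posterior mean:", np.dot(w, m))
```

The appeal over NN-based SBI is visible even in this toy: fitting and conditioning the mixture takes a fraction of a second and needs no GPU.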
null
null
2403.07456
null
null
http://arxiv.org/pdf/2403.07456v1
2024-03-12T09:51:05Z
2024-03-12T09:51:05Z
A tutorial on multi-view autoencoders using the multi-view-AE library
There has been a growing interest in recent years in modelling multiple modalities (or views) of data to, for example, understand the relationship between modalities or to generate missing data. Multi-view autoencoders have gained significant traction for their adaptability and versatility in modelling multi-modal data, demonstrating an ability to tailor their approach to suit the characteristics of the data at hand. However, most multi-view autoencoders have inconsistent notation and are often implemented using different coding frameworks. To address this, we present a unified mathematical framework for multi-view autoencoders, consolidating their formulations. Moreover, we offer insights into the motivation and theoretical advantages of each model. To facilitate accessibility and practical use, we extend the documentation and functionality of the previously introduced multi-view-AE library. This library offers Python implementations of numerous multi-view autoencoder models, presented within a user-friendly framework. Through benchmarking experiments, we evaluate our implementations against previous ones, demonstrating comparable or superior performance. This work aims to establish a cohesive foundation for multi-modal modelling, serving as a valuable educational resource in the field.
[ "['Ana Lawry Aguila' 'Andre Altmann']" ]
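For a flavour of what such models look like, here is a minimal two-view autoencoder with a shared latent space. This is an illustrative PyTorch sketch, not the multi-view-AE library's API, and the fusion-by-averaging choice is one common option among several.

```python
# A minimal two-view autoencoder with a shared latent space (PyTorch).
import torch
import torch.nn as nn

class MultiViewAE(nn.Module):
    def __init__(self, dims=(20, 15), latent=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, latent))
             for d in dims])
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, d))
             for d in dims])

    def forward(self, views):
        # Fuse views by averaging their latent codes (one common choice).
        z = torch.stack([enc(v) for enc, v in zip(self.encoders, views)]).mean(0)
        return [dec(z) for dec in self.decoders]

model = MultiViewAE()
views = [torch.randn(64, 20), torch.randn(64, 15)]
recons = model(views)
loss = sum(nn.functional.mse_loss(r, v) for r, v in zip(recons, views))
loss.backward()
```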
null
null
2403.07460
null
null
http://arxiv.org/pdf/2403.07460v1
2024-03-12T09:57:45Z
2024-03-12T09:57:45Z
Experimental Comparison of Ensemble Methods and Time-to-Event Analysis Models Through Integrated Brier Score and Concordance Index
Time-to-event analysis is a branch of statistics that has increased in popularity during the last decades due to its many application fields, such as predictive maintenance, customer churn prediction and population lifetime estimation. In this paper, we review and compare the performance of several prediction models for time-to-event analysis. These consist of semi-parametric and parametric statistical models, in addition to machine learning approaches. Our study is carried out on three datasets and evaluated using two different scores (the integrated Brier score and the concordance index). Moreover, we show how ensemble methods, which surprisingly have not yet been much studied in time-to-event analysis, can improve the prediction accuracy and enhance the robustness of the prediction performance. We conclude the analysis with a simulation experiment in which we evaluate the factors influencing the performance ranking of the methods using both scores.
[ "['Camila Fernandez' 'Chung Shue Chen' 'Chen Pierre Gaillard'\n 'Alonso Silva']" ]
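A compact example of computing both evaluation scores for an ensemble survival model, assuming scikit-survival's API; the dataset, field names, and time grid are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.datasets import load_gbsg2
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored, integrated_brier_score
from sksurv.preprocessing import OneHotEncoder

X, y = load_gbsg2()                       # y has fields ("cens", "time")
X = OneHotEncoder().fit_transform(X)      # encode categorical covariates
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Concordance index on the ensemble's risk scores.
cindex = concordance_index_censored(y_te["cens"], y_te["time"],
                                    rsf.predict(X_te))[0]

# Integrated Brier score over a grid inside the observed follow-up range.
times = np.percentile(y_te["time"], np.linspace(10, 80, 15))
surv = np.vstack([fn(times) for fn in rsf.predict_survival_function(X_te)])
ibs = integrated_brier_score(y_tr, y_te, surv, times)
print(f"C-index: {cindex:.3f}, IBS: {ibs:.3f}")
```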
null
null
2403.07465
null
null
http://arxiv.org/pdf/2403.07465v1
2024-03-12T10:00:06Z
2024-03-12T10:00:06Z
One for All and All for One: GNN-based Control-Flow Attestation for Embedded Devices
Control-Flow Attestation (CFA) is a security service that allows an entity (verifier) to verify the integrity of code execution on a remote computer system (prover). Existing CFA schemes suffer from impractical assumptions, such as requiring access to the prover's internal state (e.g., memory or code), the complete Control-Flow Graph (CFG) of the prover's software, large sets of measurements, or tailor-made hardware. Moreover, current CFA schemes are inadequate for attesting embedded systems due to their high computational overhead and resource usage. In this paper, we overcome the limitations of existing CFA schemes for embedded devices by introducing RAGE, a novel, lightweight CFA approach with minimal requirements. RAGE can detect Code Reuse Attacks (CRA), including control- and non-control-data attacks. It efficiently extracts features from one execution trace and leverages Unsupervised Graph Neural Networks (GNNs) to identify deviations from benign executions. The core intuition behind RAGE is to exploit the correspondence between execution trace, execution graph, and execution embeddings to eliminate the unrealistic requirement of having access to a complete CFG. We evaluate RAGE on embedded benchmarks and demonstrate that (i) it detects 40 real-world attacks on embedded software; (ii) it withstands stress-testing with synthetic return-oriented programming (ROP) and data-oriented programming (DOP) attacks on the real-world embedded software benchmark Embench, achieving 98.03% (ROP) and 91.01% (DOP) F1-score while maintaining a low false positive rate of 3.19%; and (iii) on OpenSSL, used by millions of devices, it achieves 97.49% and 84.42% F1-score for ROP and DOP attack detection, with an FPR of 5.47%.
[ "['Marco Chilese' 'Richard Mitev' 'Meni Orenbach' 'Robert Thorburn'\n 'Ahmad Atamli' 'Ahmad-Reza Sadeghi']" ]
null
null
2403.07471
null
null
http://arxiv.org/pdf/2403.07471v1
2024-03-12T10:06:48Z
2024-03-12T10:06:48Z
On the nonconvexity of some push-forward constraints and its consequences in machine learning
The push-forward operation enables one to redistribute a probability measure through a deterministic map. It plays a key role in statistics and optimization: many learning problems (notably from optimal transport, generative modeling, and algorithmic fairness) include constraints or penalties framed as push-forward conditions on the model. However, the literature lacks general theoretical insights on the (non)convexity of such constraints and its consequences for the associated learning problems. This paper aims at filling this gap. In the first part, we provide a range of sufficient and necessary conditions for the (non)convexity of two sets of functions: the maps transporting one probability measure to another, and the maps inducing equal output distributions across distinct probability measures. This highlights that, for most probability measures, these push-forward constraints are not convex. In the second part, we show how this result implies critical limitations on the design of convex optimization problems for learning generative models or group-fair predictors. This work will hopefully help researchers and practitioners better understand the critical impact of push-forward conditions on convexity.
[ "['Lucas de Lara' 'Mathis Deronzier' 'Alberto González-Sanz' 'Virgile Foy']" ]
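For concreteness, the two families of constraint sets discussed above can be written as follows; the notation is introduced here for illustration and may differ from the paper's.

```latex
% The two families of push-forward constraints, written out explicitly.
\[
  \mathcal{T}(\mu,\nu) \;=\; \{\, T : \mathcal{X}\to\mathcal{Y} \;\mid\; T_{\#}\mu = \nu \,\},
\]
\[
  \mathcal{E}(\mu_1,\mu_2) \;=\; \{\, T : \mathcal{X}\to\mathcal{Y} \;\mid\; T_{\#}\mu_1 = T_{\#}\mu_2 \,\},
\]
% where $T_{\#}\mu$ denotes the push-forward of $\mu$ by $T$, i.e.
% $(T_{\#}\mu)(B) = \mu(T^{-1}(B))$ for every measurable set $B$.
% The first set constrains a model to transport $\mu$ onto $\nu$ (optimal
% transport, generative modeling); the second asks for equal output
% distributions across groups (algorithmic fairness).
```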
null
null
2403.07472
null
null
http://arxiv.org/pdf/2403.07472v1
2024-03-12T10:08:36Z
2024-03-12T10:08:36Z
Imbalance-aware Presence-only Loss Function for Species Distribution Modeling
In the face of significant biodiversity decline, species distribution models (SDMs) are essential for understanding the impact of climate change on species habitats by connecting environmental conditions to species occurrences. Traditionally limited by a scarcity of species observations, these models have significantly improved in performance through the integration of larger datasets provided by citizen science initiatives. However, they still suffer from the strong class imbalance between species within these datasets, often resulting in the penalization of rare species--those most critical for conservation efforts. To tackle this issue, this study assesses the effectiveness of training deep learning models using a balanced presence-only loss function on large citizen science-based datasets. We demonstrate that this imbalance-aware loss function outperforms traditional loss functions across various datasets and tasks, particularly in accurately modeling rare species with limited observations.
[ "['Robin Zbinden' 'Nina van Tiel' 'Marc Rußwurm' 'Devis Tuia']" ]
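The sketch below shows one way such an imbalance-aware presence-only loss could look in PyTorch, up-weighting positives of rarely observed species by inverse frequency. The exact weighting scheme in the paper may differ; this is an illustration of the idea, not the paper's formula.

```python
import torch

def balanced_presence_only_loss(logits, presence, species_freq, eps=1e-8):
    """logits: (batch, n_species); presence: same shape, 1 where observed.
    species_freq: (n_species,) observation counts used for re-weighting."""
    p = torch.sigmoid(logits)
    w = 1.0 / (species_freq + eps)          # rare species get larger weight
    w = w / w.mean()                        # keep the loss scale comparable
    pos = -(w * presence * torch.log(p + eps)).sum()
    # Unobserved entries are treated as (pseudo-)absences, as is standard
    # in presence-only modelling.
    neg = -((1 - presence) * torch.log(1 - p + eps)).sum()
    return (pos + neg) / logits.numel()

logits = torch.randn(32, 100, requires_grad=True)
presence = (torch.rand(32, 100) < 0.02).float()   # sparse observations
freq = presence.sum(0) + 1
balanced_presence_only_loss(logits, presence, freq).backward()
```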
null
null
2403.07478
null
null
http://arxiv.org/pdf/2403.07478v1
2024-03-12T10:12:59Z
2024-03-12T10:12:59Z
Towards Graph Foundation Models for Personalization
In the realm of personalization, integrating diverse information sources such as consumption signals and content-based representations is becoming increasingly critical to build state-of-the-art solutions. In this regard, two of the biggest trends in research around this subject are Graph Neural Networks (GNNs) and Foundation Models (FMs). While GNNs emerged as a popular solution in industry for powering personalization at scale, FMs have only recently caught attention for their promising performance in personalization tasks like ranking and retrieval. In this paper, we present a graph-based foundation modeling approach tailored to personalization. Central to this approach is a Heterogeneous GNN (HGNN) designed to capture multi-hop content and consumption relationships across a range of recommendable item types. To ensure the generality required from a Foundation Model, we employ a Large Language Model (LLM) text-based featurization of nodes that accommodates all item types, and construct the graph using co-interaction signals, which inherently transcend content specificity. To facilitate practical generalization, we further couple the HGNN with an adaptation mechanism based on a two-tower (2T) architecture, which also operates agnostically to content type. This multi-stage approach ensures high scalability; while the HGNN produces general purpose embeddings, the 2T component models in a continuous space the sheer size of user-item interaction data. Our comprehensive approach has been rigorously tested and proven effective in delivering recommendations across a diverse array of products within a real-world, industrial audio streaming platform.
[ "['Andreas Damianou' 'Francesco Fabbri' 'Paul Gigioli' 'Marco De Nadai'\n 'Alice Wang' 'Enrico Palumbo' 'Mounia Lalmas']" ]
null
null
2403.07483
null
null
http://arxiv.org/pdf/2403.07483v1
2024-03-12T10:18:59Z
2024-03-12T10:18:59Z
A Deep Learning Approach to Diabetes Diagnosis
Diabetes, resulting from inadequate insulin production or utilization, causes extensive harm to the body. Existing diagnostic methods are often invasive and come with drawbacks, such as cost constraints. Although there are machine learning models like Classwise k Nearest Neighbor (CkNN) and General Regression Neural Network (GRNN), they struggle with imbalanced data and result in under-performance. Leveraging advancements in sensor technology and machine learning, we propose a non-invasive diabetes diagnosis using a Back Propagation Neural Network (BPNN) with batch normalization, incorporating data re-sampling and normalization for class balancing. Our method addresses existing challenges such as limited performance associated with traditional machine learning. Experimental results on three datasets show significant improvements in overall accuracy, sensitivity, and specificity compared to traditional methods. Notably, we achieve accuracies of 89.81% on the Pima diabetes dataset, 75.49% on the CDC BRFSS2015 dataset, and 95.28% on the Mesra Diabetes dataset. This underscores the potential of deep learning models for robust diabetes diagnosis. See project website https://steve-zeyu-zhang.github.io/DiabetesDiagnosis/
[ "['Zeyu Zhang' 'Khandaker Asif Ahmed' 'Md Rakibul Hasan' 'Tom Gedeon'\n 'Md Zakir Hossain']" ]
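A minimal PyTorch sketch of a back-propagation neural network with batch normalization, as described above; layer sizes, optimizer, and data are illustrative placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class BPNN(nn.Module):
    def __init__(self, n_features, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Linear(64, 32), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Linear(32, n_classes))

    def forward(self, x):
        return self.net(x)

model = BPNN(n_features=8)          # e.g. the 8 Pima features
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# In the paper's setting, x/y would come from a re-sampled, normalized
# dataset so that the two classes are balanced.
x, y = torch.randn(128, 8), torch.randint(0, 2, (128,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```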
null
null
2403.07485
null
null
http://arxiv.org/pdf/2403.07485v1
2024-03-12T10:21:21Z
2024-03-12T10:21:21Z
PMBO: Enhancing Black-Box Optimization through Multivariate Polynomial Surrogates
We introduce a surrogate-based black-box optimization method, termed Polynomial-model-based optimization (PMBO). The algorithm alternates polynomial approximation with Bayesian optimization steps, using Gaussian processes to model the error between the objective and its polynomial fit. We describe the algorithmic design of PMBO and compare its performance with that of several optimization methods on a set of analytic test functions. The results show that PMBO outperforms classic Bayesian optimization and is robust with respect to the choice of its correlation function family and its hyper-parameter setting, which, by contrast, must be carefully tuned in classic Bayesian optimization. Remarkably, PMBO performs comparably with state-of-the-art evolutionary algorithms such as the Covariance Matrix Adaptation -- Evolution Strategy (CMA-ES). This finding suggests that PMBO emerges as the pivotal choice among surrogate-based optimization methods when addressing low-dimensional optimization problems. Moreover, the simple nature of polynomials opens the opportunity for interpretation and analysis of the inferred surrogate model, providing a macroscopic perspective on the landscape of the objective function.
[ "['Janina Schreiber' 'Pau Batlle' 'Damar Wicaksono' 'Michael Hecht']" ]
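The core PMBO loop, polynomial approximation plus a Gaussian process on the residual error, can be sketched on a 1-D toy problem as follows. The acquisition rule, polynomial degree, and kernel here are our simplifications, not the paper's design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return np.sin(3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=8)                       # initial design
for step in range(10):
    y = objective(X)
    coeffs = np.polyfit(X, y, deg=min(4, len(X) - 1))    # polynomial fit
    resid = y - np.polyval(coeffs, X)
    gp = GaussianProcessRegressor().fit(X[:, None], resid)  # model the error

    grid = np.linspace(-3, 3, 400)
    mean, std = gp.predict(grid[:, None], return_std=True)
    # Lower-confidence-bound acquisition on (polynomial + GP residual).
    score = np.polyval(coeffs, grid) + mean - 1.0 * std
    X = np.append(X, grid[np.argmin(score)])             # next evaluation

best = X[np.argmin(objective(X))]
print("best x found:", best)
```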
null
null
2403.07486
null
null
http://arxiv.org/pdf/2403.07486v1
2024-03-12T10:21:31Z
2024-03-12T10:21:31Z
XpertAI: uncovering model strategies for sub-manifolds
In recent years, Explainable AI (XAI) methods have facilitated profound validation and knowledge extraction from ML models. While extensively studied for classification, few XAI solutions have addressed the challenges specific to regression models. In regression, explanations need to be precisely formulated to address specific user queries (e.g. distinguishing between 'Why is the output above 0?' and 'Why is the output above 50?'). They should furthermore reflect the model's behavior on the relevant data sub-manifold. In this paper, we introduce XpertAI, a framework that disentangles the prediction strategy into multiple range-specific sub-strategies and allows the formulation of precise queries about the model (the 'explanandum') as a linear combination of those sub-strategies. XpertAI is formulated generally to work alongside popular XAI attribution techniques, based on occlusion, gradient integration, or reverse propagation. Qualitative and quantitative results demonstrate the benefits of our approach.
[ "['Simon Letzgus' 'Klaus-Robert Müller' 'Grégoire Montavon']" ]
null
null
2403.07493
null
null
http://arxiv.org/pdf/2403.07493v1
2024-03-12T10:32:35Z
2024-03-12T10:32:35Z
Signed graphs in data sciences via communicability geometry
Signed graphs are an emergent way of representing data in a variety of contexts where conflicting interactions exist. These include data from biological, ecological, and social systems. Here we propose the concept of communicability geometry for signed graphs, proving that metrics in this space, such as the communicability distance and angles, are Euclidean and spherical. We then apply these metrics to solve several problems in data analysis of signed graphs in a unified way. They include the partitioning of signed graphs, dimensionality reduction, finding hierarchies of alliances in signed networks, as well as the quantification of the degree of polarization between the existing factions in systems represented by this type of graph.
[ "['Fernando Diaz-Diaz' 'Ernesto Estrada']" ]
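The communicability distance underlying this geometry has a short closed form: with G = exp(A) for a signed adjacency matrix A, the squared distance is G_pp + G_qq - 2 G_pq. The sketch below computes it for a small signed graph; the three-node example is ours.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0,  1, -1],
              [ 1,  0,  1],
              [-1,  1,  0]], dtype=float)  # signs encode allied/conflicting ties

G = expm(A)                                # communicability matrix
# Since A is symmetric, exp(A) is positive definite, so this quantity is a
# squared Euclidean distance; the maximum(0, .) guards against rounding.
sq = np.add.outer(np.diag(G), np.diag(G)) - 2 * G
xi = np.sqrt(np.maximum(sq, 0.0))
print(np.round(xi, 3))                     # pairwise communicability distances
```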
null
null
2403.07501
null
null
http://arxiv.org/abs/2403.07501v1
2024-03-12T10:38:54Z
2024-03-12T10:38:54Z
Detecting Security-Relevant Methods using Multi-label Machine Learning
To detect security vulnerabilities, static analysis tools need to be configured with security-relevant methods. Current approaches can automatically identify such methods using binary relevance machine learning approaches. However, they ignore dependencies among security-relevant methods, over-generalize and perform poorly in practice. Additionally, users have to nevertheless manually configure static analysis tools using the detected methods. Based on feedback from users and our observations, the excessive manual steps can often be tedious, error-prone and counter-intuitive. In this paper, we present Dev-Assist, an IntelliJ IDEA plugin that detects security-relevant methods using a multi-label machine learning approach that considers dependencies among labels. The plugin can automatically generate configurations for static analysis tools, run the static analysis, and show the results in IntelliJ IDEA. Our experiments reveal that Dev-Assist's machine learning approach has a higher F1-Measure than related approaches. Moreover, the plugin reduces and simplifies the manual effort required when configuring and using static analysis tools.
[ "['Oshando Johnson' 'Goran Piskachev' 'Ranjith Krishnamurthy' 'Eric Bodden']" ]
null
null
2403.07503
null
null
http://arxiv.org/pdf/2403.07503v2
2024-04-02T11:20:22Z
2024-03-12T10:42:32Z
Constrained Optimal Fuel Consumption of HEV: A Constrained Reinforcement Learning Approach
Hybrid electric vehicles (HEVs) are becoming increasingly popular because they can better combine the working characteristics of internal combustion engines and electric motors. However, the minimum fuel consumption of an HEV for a battery electrical balance case under a specific assembly condition and a specific speed curve still needs to be clarified in academia and industry. Regarding this problem, this work provides, for the first time, the mathematical expression of constrained optimal fuel consumption (COFC) from the perspective of constrained reinforcement learning (CRL). Also, two mainstream approaches of CRL, constrained variational policy optimization (CVPO) and Lagrangian-based approaches, are utilized for the first time to obtain the vehicle's minimum fuel consumption under the battery electrical balance condition. We conduct case studies on the well-known Prius TOYOTA hybrid system (THS) under the NEDC condition; we detail the key steps for implementing the CRL approaches and compare the performance of the CVPO and Lagrangian-based approaches. Our case study found that both CVPO and Lagrangian-based approaches can obtain the lowest fuel consumption while maintaining the SOC balance constraint. The CVPO approach converges stably, but the Lagrangian-based approach can obtain the lowest fuel consumption at 3.95 L/100km, though with more significant oscillations. This result verifies the effectiveness of our proposed CRL approaches to the COFC problem.
[ "['Shuchang Yan']" ]
null
null
2403.07507
null
null
http://arxiv.org/pdf/2403.07507v2
2024-05-03T14:00:58Z
2024-03-12T10:43:52Z
Reconstructions of Jupiter's magnetic field using physics informed neural networks
Magnetic sounding using data collected from the Juno mission can be used to provide constraints on Jupiter's interior. However, inwards continuation of reconstructions assuming zero electrical conductivity and a representation in spherical harmonics are limited by the enhancement of noise at small scales. Here we describe new reconstructions of Jupiter's internal magnetic field based on physics-informed neural networks and either the first 33 (PINN33) or the first 50 (PINN50) of Juno's orbits. The method can resolve local structures, and allows for weak ambient electrical currents. Our models are not hampered by noise amplification at depth, and offer a much clearer picture of the interior structure. We estimate that the dynamo boundary is at a fractional radius of 0.8. At this depth, the magnetic field is arranged into longitudinal bands, and strong local features such as the great blue spot appear to be rooted in neighbouring structures of oppositely signed flux.
[ "['Philip W. Livermore' 'Leyuan Wu' 'Longwei Chen' 'Sjoerd A. L. de Ridder']" ]
null
null
2403.07526
null
null
http://arxiv.org/pdf/2403.07526v1
2024-03-12T11:05:05Z
2024-03-12T11:05:05Z
Physics-Transfer Learning for Material Strength Screening
The strength of materials, like many problems in the natural sciences, spans multiple length and time scales, and the solution has to balance accuracy and performance. Peierls stress is one of the central concepts in crystal plasticity that measures the strength through the resistance of a dislocation to plastic flow. The determination of Peierls stress involves a multiscale nature depending on both elastic lattice responses and the energy landscape of crystal slips. Material screening by strength via the Peierls stress from first-principles calculations is computationally intractable for the nonlocal characteristics of dislocations, and not included in the state-of-the-art computational material databases. In this work, we propose a physics-transfer framework to learn the physics of crystal plasticity from empirical atomistic simulations and then predict the Peierls stress from chemically accurate density functional theory-based calculations of material parameters. Notably, the strengths of single-crystalline metals can be predicted from a few single-point calculations for the deformed lattice and on the γ surface, allowing efficient, high-throughput screening for material discovery. Uncertainty quantification is carried out to assess the accuracy of models and sources of errors, showing reduced physical and system uncertainties in the predictions by elevating the fidelity of training models. This physics-transfer framework can be generalized to other problems facing the accuracy-performance dilemma, by harnessing the hierarchy of physics in the multiscale models of materials science.
[ "['Yingjie Zhao' 'Zian Zhang' 'Zhiping Xu']" ]
null
null
2403.07536
null
null
http://arxiv.org/pdf/2403.07536v1
2024-03-12T11:19:46Z
2024-03-12T11:19:46Z
LaB-GATr: geometric algebra transformers for large biomedical surface and volume meshes
Many anatomical structures can be described by surface or volume meshes. Machine learning is a promising tool to extract information from these 3D models. However, high-fidelity meshes often contain hundreds of thousands of vertices, which creates unique challenges in building deep neural network architectures. Furthermore, patient-specific meshes may not be canonically aligned, which limits the generalisation of machine learning algorithms. We propose LaB-GATr, a transformer neural network with geometric tokenisation that can effectively learn with large-scale (bio-)medical surface and volume meshes through sequence compression and interpolation. Our method extends the recently proposed geometric algebra transformer (GATr) and thus respects all Euclidean symmetries, i.e. rotation, translation and reflection, effectively mitigating the problem of canonical alignment between patients. LaB-GATr achieves state-of-the-art results on three tasks in cardiovascular hemodynamics modelling and neurodevelopmental phenotype prediction, featuring meshes of up to 200,000 vertices. Our results demonstrate that LaB-GATr is a powerful architecture for learning with high-fidelity meshes which has the potential to enable interesting downstream applications. Our implementation is publicly available.
[ "['Julian Suk' 'Baris Imre' 'Jelmer M. Wolterink']" ]
null
null
2403.07542
null
null
http://arxiv.org/pdf/2403.07542v1
2024-03-12T11:29:40Z
2024-03-12T11:29:40Z
A Survey of Vision Transformers in Autonomous Driving: Current Trends and Future Directions
This survey explores the adaptation of visual transformer models in Autonomous Driving, a transition inspired by their success in Natural Language Processing. Surpassing traditional Recurrent Neural Networks in tasks like sequential image processing and outperforming Convolutional Neural Networks in global context capture, as evidenced in complex scene recognition, Transformers are gaining traction in computer vision. These capabilities are crucial in Autonomous Driving for real-time, dynamic visual scene processing. Our survey provides a comprehensive overview of Vision Transformer applications in Autonomous Driving, focusing on foundational concepts such as self-attention, multi-head attention, and encoder-decoder architecture. We cover applications in object detection, segmentation, pedestrian detection, lane detection, and more, comparing their architectural merits and limitations. The survey concludes with future research directions, highlighting the growing role of Vision Transformers in Autonomous Driving.
[ "['Quoc-Vinh Lai-Dang']" ]
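Since the survey builds on self-attention as its foundational operation, a minimal single-head scaled dot-product attention sketch is included here for reference; projection sizes are illustrative.

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, tokens, d_model); w_*: (d_model, d_head) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Scale by sqrt(d_head) so softmax stays well-conditioned.
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

x = torch.randn(2, 16, 64)                   # e.g. 16 image-patch tokens
w = [torch.randn(64, 32) for _ in range(3)]  # query/key/value projections
out = self_attention(x, *w)                  # (2, 16, 32)
```

Multi-head attention runs several such heads in parallel and concatenates their outputs, which is what lets Vision Transformers capture the global context the survey emphasizes.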
null
null
2403.07548
null
null
http://arxiv.org/pdf/2403.07548v2
2024-03-13T02:31:47Z
2024-03-12T11:33:48Z
Online Continual Learning For Interactive Instruction Following Agents
In learning an embodied agent executing daily tasks via language directives, the literature largely assumes that the agent learns all training data at the beginning. We argue that such a learning scenario is less realistic since a robotic agent is supposed to learn the world continuously as it explores and perceives it. To take a step towards a more realistic embodied agent learning scenario, we propose two continual learning setups for embodied agents: learning new behaviors (Behavior Incremental Learning, Behavior-IL) and learning new environments (Environment Incremental Learning, Environment-IL). For these tasks, previous 'data prior'-based continual learning methods maintain logits for past tasks. However, the stored information is often insufficiently learned, and these methods require task boundary information, which might not always be available. Here, we propose to update the logits based on confidence scores, without task boundary information during training (i.e., task-free), in a moving-average fashion, named Confidence-Aware Moving Average (CAMA). In the proposed Behavior-IL and Environment-IL setups, our simple CAMA outperforms the prior state of the art in our empirical validations by noticeable margins. The project page, including code, is https://github.com/snumprlab/cl-alfred.
[ "['Byeonghwi Kim' 'Minhyuk Seo' 'Jonghyun Choi']" ]
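The exact CAMA update is not spelled out in the abstract; the sketch below shows one plausible confidence-weighted moving-average rule, and should be read as an assumption for illustration, not the paper's formula.

```python
import torch

def cama_update(stored_logits, new_logits, beta=0.9):
    """Blend stored logits toward new ones, trusting confident predictions
    more; note that no task-boundary information is needed (task-free)."""
    conf = torch.softmax(new_logits, dim=-1).max(dim=-1).values  # (batch,)
    # High confidence -> small alpha -> stored logits move quickly toward
    # the new logits; low confidence -> stored logits are mostly kept.
    alpha = beta * (1 - conf).unsqueeze(-1)
    return alpha * stored_logits + (1 - alpha) * new_logits

stored = torch.zeros(4, 10)
new = torch.randn(4, 10)
stored = cama_update(stored, new)
```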
null
null
2403.07557
null
null
http://arxiv.org/pdf/2403.07557v1
2024-03-12T11:41:51Z
2024-03-12T11:41:51Z
SIFiD: Reassess Summary Factual Inconsistency Detection with LLM
Ensuring factual consistency between the summary and the original document is paramount in summarization tasks. Consequently, considerable effort has been dedicated to detecting inconsistencies. With the advent of Large Language Models (LLMs), recent studies have begun to leverage their advanced language understanding capabilities for inconsistency detection. However, early attempts have shown that LLMs underperform traditional models due to their limited ability to follow instructions and the absence of an effective detection methodology. In this study, we reassess summary inconsistency detection with LLMs, comparing the performances of GPT-3.5 and GPT-4. To advance research in LLM-based inconsistency detection, we propose SIFiD (Summary Inconsistency Detection with Filtered Document), which identifies key sentences within documents by either employing natural language inference or measuring semantic similarity between summaries and documents.
[ "['Jiuding Yang' 'Hui Liu' 'Weidong Guo' 'Zhuwei Rao' 'Yu Xu' 'Di Niu']" ]
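One way to realize the document-filtering step is to keep only sentences similar to the summary. The TF-IDF sketch below is a simplified stand-in for the semantic-similarity and NLI scoring the paper describes; the example texts and threshold are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

document = ["The company reported record revenue in Q3.",
            "Its CEO attributed the growth to new markets.",
            "The weather in the region was unusually mild.",
            "Profits, however, declined due to higher costs."]
summary = "Revenue grew but profits fell."

vec = TfidfVectorizer().fit(document + [summary])
sims = cosine_similarity(vec.transform(document), vec.transform([summary]))
# Only summary-relevant sentences go into the LLM's detection prompt,
# shrinking the context the model must reason over.
keep = [s for s, sim in zip(document, sims[:, 0]) if sim > 0.1]
print(keep)
```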
null
null
2403.07559
null
null
http://arxiv.org/pdf/2403.07559v2
2024-07-10T08:36:48Z
2024-03-12T11:47:12Z
Ensembling Prioritized Hybrid Policies for Multi-agent Pathfinding
Multi-Agent Reinforcement Learning (MARL) based Multi-Agent Path Finding (MAPF) has recently gained attention due to its efficiency and scalability. Several MARL-MAPF methods choose to use communication to enrich the information one agent can perceive. However, existing works still struggle in structured environments with high obstacle density and a high number of agents. To further improve the performance of the communication-based MARL-MAPF solvers, we propose a new method, Ensembling Prioritized Hybrid Policies (EPH). We first propose a selective communication block to gather richer information for better agent coordination within multi-agent environments and train the model with a Q learning-based algorithm. We further introduce three advanced inference strategies aimed at bolstering performance during the execution phase. First, we hybridize the neural policy with single-agent expert guidance for navigating conflict-free zones. Secondly, we propose Q value-based methods for prioritized resolution of conflicts as well as deadlock situations. Finally, we introduce a robust ensemble method that can efficiently collect the best out of multiple possible solutions. We empirically evaluate EPH in complex multi-agent environments and demonstrate competitive performance against state-of-the-art neural methods for MAPF. We open-source our code at https://github.com/ai4co/eph-mapf.
[ "['Huijie Tang' 'Federico Berto' 'Jinkyoo Park']" ]
null
null
2403.07562
null
null
http://arxiv.org/pdf/2403.07562v1
2024-03-12T11:50:47Z
2024-03-12T11:50:47Z
A Flexible Cell Classification for ML Projects in Jupyter Notebooks
Jupyter Notebook is an interactive development environment commonly used for rapid experimentation of machine learning (ML) solutions. Describing the ML activities performed along code cells improves the readability and understanding of Notebooks. Manual annotation of code cells is time-consuming and error-prone. Therefore, tools have been developed that classify the cells of a notebook concerning the ML activity performed in them. However, the current tools are not flexible, as they rely on hand-crafted look-up tables that map function calls of commonly used ML libraries to ML activities. These tables must be manually adjusted to account for new or changed libraries. This paper presents a more flexible approach to cell classification based on a hybrid classification approach that combines a rule-based and a decision tree classifier. We discuss the design rationales and describe the developed classifiers in detail. We implemented the new flexible cell classification approach in a tool called JupyLabel. Its evaluation and the obtained metric scores regarding precision, recall, and F1-score are discussed. Additionally, we compared JupyLabel with HeaderGen, an existing cell classification tool. We were able to show that the presented flexible cell classification approach outperforms this tool significantly.
[ "['Miguel Perez' 'Selin Aydin' 'Horst Lichter']" ]
null
null
2403.07563
null
null
http://arxiv.org/pdf/2403.07563v1
2024-03-12T11:51:55Z
2024-03-12T11:51:55Z
Learning Generalizable Feature Fields for Mobile Manipulation
An open problem in mobile manipulation is how to represent objects and scenes in a unified manner so that robots can use the representation both for navigating the environment and for manipulating objects. The latter requires capturing intricate geometry while understanding fine-grained semantics, whereas the former involves capturing the complexity inherent in an expansive physical scale. In this work, we present GeFF (Generalizable Feature Fields), a scene-level generalizable neural feature field that acts as a unified representation for both navigation and manipulation and performs in real-time. To do so, we treat generative novel view synthesis as a pre-training task, and then align the resulting rich scene priors with natural language via CLIP feature distillation. We demonstrate the effectiveness of this approach by deploying GeFF on a quadrupedal robot equipped with a manipulator. We evaluate GeFF's ability to generalize to open-set objects, as well as its running time, when performing open-vocabulary mobile manipulation in dynamic scenes.
[ "['Ri-Zhao Qiu' 'Yafei Hu' 'Ge Yang' 'Yuchen Song' 'Yang Fu' 'Jianglong Ye'\n 'Jiteng Mu' 'Ruihan Yang' 'Nikolay Atanasov' 'Sebastian Scherer'\n 'Xiaolong Wang']" ]
null
null
2403.07569
null
null
http://arxiv.org/pdf/2403.07569v1
2024-03-12T11:56:50Z
2024-03-12T11:56:50Z
Exploring Challenges in Deep Learning of Single-Station Ground Motion Records
Contemporary deep learning models have demonstrated promising results across various applications within seismology and earthquake engineering. These models rely primarily on utilizing ground motion records for tasks such as earthquake event classification, localization, earthquake early warning systems, and structural health monitoring. However, the extent to which these models effectively learn from these complex time-series signals has not been thoroughly analyzed. In this study, our objective is to evaluate the degree to which auxiliary information, such as seismic phase arrival times or seismic station distribution within a network, dominates the process of deep learning from ground motion records, potentially hindering its effectiveness. We perform a hyperparameter search on two deep learning models to assess their effectiveness in deep learning from ground motion records while also examining the impact of auxiliary information on model performance. Experimental results reveal a strong reliance on the highly correlated P and S phase arrival information. Our observations highlight a potential gap in the field, indicating an absence of robust methodologies for deep learning of single-station ground motion recordings independent of any auxiliary information.
[ "['Ümit Mert Çağlar' 'Baris Yilmaz' 'Melek Türkmen' 'Erdem Akagündüz'\n 'Salih Tileylioglu']" ]
null
null
2403.07573
null
null
http://arxiv.org/pdf/2403.07573v1
2024-03-12T12:03:16Z
2024-03-12T12:03:16Z
Towards a Dynamic Future with Adaptable Computing and Network Convergence (ACNC)
In the context of advancing 6G, a substantial paradigm shift is anticipated, highlighting comprehensive everything-to-everything interactions characterized by numerous connections and stringent adherence to Quality of Service/Experience (QoS/E) prerequisites. The imminent challenge stems from resource scarcity, prompting a deliberate transition to Computing-Network Convergence (CNC) as an auspicious approach for joint resource orchestration. While CNC-based mechanisms have garnered attention, their effectiveness in realizing future services, particularly in use cases like the Metaverse, may encounter limitations due to the continually changing nature of users, services, and resources. Hence, this paper presents the concept of Adaptable CNC (ACNC) as an autonomous Machine Learning (ML)-aided mechanism crafted for the joint orchestration of computing and network resources, catering to dynamic and voluminous user requests with stringent requirements. ACNC encompasses two primary functionalities: state recognition and context detection. Given the intricate nature of the user-service-computing-network space, the paper employs dimension reduction to generate live, holistic, abstract system states in a hierarchical structure. To address the challenges posed by dynamic changes, Continual Learning (CL) is employed, classifying the system state into contexts controlled by dedicated ML agents, enabling them to operate efficiently. These two functionalities are intricately linked within a closed loop overseen by the End-to-End (E2E) orchestrator to allocate resources. The paper introduces the components of ACNC, proposes a Metaverse scenario to exemplify ACNC's role in resource provisioning with Segment Routing v6 (SRv6), outlines ACNC's workflow, details a numerical analysis for efficiency assessment, and concludes with discussions on relevant challenges and potential avenues for future research.
[ "['Masoud Shokrnezhad' 'Hao Yu' 'Tarik Taleb' 'Richard Li' 'Kyunghan Lee'\n 'Jaeseung Song' 'Cedric Westphal']" ]
null
null
2403.07585
null
null
http://arxiv.org/pdf/2403.07585v1
2024-03-12T12:15:57Z
2024-03-12T12:15:57Z
Communication Optimization for Distributed Training: Architecture, Advances, and Opportunities
The past few years have witnessed the flourishing of large-scale deep neural network models with ever-growing parameter numbers. Training such large-scale models typically requires massive memory and computing resources that exceed those of a single GPU, necessitating distributed training. As GPU performance has rapidly evolved in recent years, computation time has shrunk, thereby increasing the proportion of communication in the overall training time. Therefore, optimizing communication for distributed training has become an urgent issue. In this article, we briefly introduce the general architecture of distributed deep neural network training and analyze relationships among Parallelization Strategy, Collective Communication Library, and Network from the perspective of communication optimization, which forms a three-layer paradigm. We then review current representative research advances with this three-layer paradigm. We find that layers in the current three-layer paradigm are relatively independent, but there is a rich design space for cross-layer collaborative optimization in distributed training scenarios. Therefore, we further advocate a communication-efficient five-layer paradigm underlining opportunities for collaboration designs and look forward to the perspectives of "Vertical", "Horizontal", "Intra-Inter" and "Host-Net" collaboration designs. We hope this article can shed some light on future research on communication optimization for distributed training.
[ "['Yunze Wei' 'Tianshuo Hu' 'Cong Liang' 'Yong Cui']" ]
null
null
2403.07586
null
null
http://arxiv.org/pdf/2403.07586v1
2024-03-12T12:16:40Z
2024-03-12T12:16:40Z
Federated Learning of Socially Appropriate Agent Behaviours in Simulated Home Environments
As social robots become increasingly integrated into daily life, ensuring their behaviours align with social norms is crucial. For their widespread open-world application, it is important to explore Federated Learning (FL) settings where individual robots can learn about their unique environments while also learning from each other's experiences. In this paper, we present a novel FL benchmark that evaluates different strategies, using multi-label regression objectives, where each client individually learns to predict the social appropriateness of different robot actions while also sharing its learning with others. Furthermore, by splitting the training data into different contexts such that each client learns incrementally across them, we present a novel Federated Continual Learning (FCL) benchmark that adapts FL-based methods to use state-of-the-art Continual Learning (CL) methods to continually learn socially appropriate agent behaviours under different contextual settings. Federated Averaging (FedAvg) of weights emerges as a robust FL strategy, while rehearsal-based FCL enables incrementally learning the social appropriateness of robot actions across contextual splits.
[ "['Saksham Checker' 'Nikhil Churamani' 'Hatice Gunes']" ]
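Federated Averaging, reported above as a robust strategy, reduces to a dataset-size-weighted average of client weights. A minimal sketch with PyTorch state dicts; the client models and sizes are placeholders.

```python
import copy
import torch

def fed_avg(client_states, client_sizes):
    """Weighted average of client state_dicts by local dataset size."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total)
                       for s, n in zip(client_states, client_sizes))
    return avg

# Four clients, each holding its own copy of the model parameters.
clients = [{"w": torch.randn(3, 3)} for _ in range(4)]
global_state = fed_avg(clients, client_sizes=[100, 50, 200, 25])
```

In a full FL round, the server would broadcast `global_state` back to the clients, which train locally before the next aggregation.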
null
null
2403.07588
null
null
http://arxiv.org/pdf/2403.07588v1
2024-03-12T12:18:55Z
2024-03-12T12:18:55Z
Visual Privacy Auditing with Diffusion Models
Image reconstruction attacks on machine learning models pose a significant risk to privacy by potentially leaking sensitive information. Although defending against such attacks using differential privacy (DP) has proven effective, determining appropriate DP parameters remains challenging. Current formal guarantees on data reconstruction success suffer from overly theoretical assumptions regarding adversary knowledge about the target data, particularly in the image domain. In this work, we empirically investigate this discrepancy and find that the practicality of these assumptions strongly depends on the domain shift between the data prior and the reconstruction target. We propose a reconstruction attack based on diffusion models (DMs) that assumes adversary access to real-world image priors and assess its implications on privacy leakage under DP-SGD. We show that (1) real-world data priors significantly influence reconstruction success, (2) current reconstruction bounds do not model the risk posed by data priors well, and (3) DMs can serve as effective auditing tools for visualizing privacy leakage.
[ "['Kristian Schwethelm' 'Johannes Kaiser' 'Moritz Knolle' 'Daniel Rueckert'\n 'Georgios Kaissis' 'Alexander Ziller']" ]
null
null
2403.07591
null
null
http://arxiv.org/pdf/2403.07591v1
2024-03-12T12:24:11Z
2024-03-12T12:24:11Z
Robustifying and Boosting Training-Free Neural Architecture Search
Neural architecture search (NAS) has become a key component of AutoML and a standard tool to automate the design of deep neural networks. Recently, training-free NAS as an emerging paradigm has successfully reduced the search costs of standard training-based NAS by estimating the true architecture performance with only training-free metrics. Nevertheless, the estimation ability of these metrics typically varies across different tasks, making it challenging to achieve robust and consistently good search performance on diverse tasks with only a single training-free metric. Meanwhile, the estimation gap between training-free metrics and the true architecture performances limits training-free NAS to achieve superior performance. To address these challenges, we propose the robustifying and boosting training-free NAS (RoBoT) algorithm which (a) employs the optimized combination of existing training-free metrics explored from Bayesian optimization to develop a robust and consistently better-performing metric on diverse tasks, and (b) applies greedy search, i.e., the exploitation, on the newly developed metric to bridge the aforementioned gap and consequently to boost the search performance of standard training-free NAS further. Remarkably, the expected performance of our RoBoT can be theoretically guaranteed, which improves over the existing training-free NAS under mild conditions with additional interesting insights. Our extensive experiments on various NAS benchmark tasks yield substantial empirical evidence to support our theoretical results.
[ "['Zhenfeng He' 'Yao Shu' 'Zhongxiang Dai' 'Bryan Kian Hsiang Low']" ]
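The RoBoT recipe, optimize a weighted combination of training-free metrics and then greedily evaluate the top-ranked architectures, can be caricatured on synthetic scores as follows. Random search stands in for the paper's Bayesian optimization, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_archs, n_metrics = 500, 3
metrics = rng.normal(size=(n_archs, n_metrics))        # training-free scores
# Hidden "true" accuracy, correlated with the metrics plus noise.
true_acc = metrics @ np.array([0.6, 0.3, 0.1]) + 0.2 * rng.normal(size=n_archs)

def top_k_by_weights(w, k):
    combined = metrics @ w                  # robust combined metric
    return np.argsort(combined)[-k:]

best_w, best_val = None, -np.inf
for _ in range(50):                         # stand-in for the BO loop
    w = rng.dirichlet(np.ones(n_metrics))   # candidate metric weights
    # Exploitation: actually evaluate (train) the top-5 ranked architectures.
    val = true_acc[top_k_by_weights(w, k=5)].max()
    if val > best_val:
        best_w, best_val = w, val

print("best weights:", np.round(best_w, 2), "best accuracy:", round(best_val, 3))
```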
null
null
2403.07603
null
null
http://arxiv.org/pdf/2403.07603v1
2024-03-12T12:40:23Z
2024-03-12T12:40:23Z
ProPML: Probability Partial Multi-label Learning
Partial Multi-label Learning (PML) is a type of weakly supervised learning where each training instance corresponds to a set of candidate labels, among which only some are true. In this paper, we introduce ProPML, a novel probabilistic approach to this problem that extends the binary cross entropy to the PML setup. In contrast to existing methods, it does not require suboptimal disambiguation and, as such, can be applied to any deep architecture. Furthermore, experiments conducted on artificial and real-world datasets indicate that ProPML outperforms existing approaches, especially for high noise in a candidate set.
[ "['Łukasz Struski' 'Adam Pardyl' 'Jacek Tabor' 'Bartosz Zieliński']" ]
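The abstract does not give the loss explicitly; below is one plausible probabilistic reading, requiring at least one candidate label to be true and all non-candidates to be false. This is an assumption for illustration, not the paper's exact formula.

```python
import torch

def propml_loss(logits, candidates, eps=1e-8):
    """logits: (batch, n_labels); candidates: binary mask of candidate sets.
    An assumed probabilistic extension of BCE to the PML setup."""
    p = torch.sigmoid(logits)
    # P(at least one candidate is true) = 1 - prod_{j in S} (1 - p_j);
    # non-candidates contribute a factor of 1 to the product.
    none_true = torch.prod(1 - p * candidates, dim=-1)
    pos_term = -torch.log(1 - none_true + eps)
    # Standard BCE pushes non-candidate labels toward zero.
    neg_term = -((1 - candidates) * torch.log(1 - p + eps)).sum(-1)
    return (pos_term + neg_term).mean()

logits = torch.randn(8, 5, requires_grad=True)
cand = (torch.rand(8, 5) < 0.4).float()
cand[cand.sum(1) == 0, 0] = 1.0          # ensure non-empty candidate sets
propml_loss(logits, cand).backward()
```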
null
null
2403.07605
null
null
http://arxiv.org/pdf/2403.07605v2
2024-07-08T21:37:03Z
2024-03-12T12:44:34Z
Optimizing Negative Prompts for Enhanced Aesthetics and Fidelity in Text-To-Image Generation
In text-to-image generation, using negative prompts, which describe undesirable image characteristics, can significantly boost image quality. However, producing good negative prompts is manual and tedious. To address this, we propose NegOpt, a novel method for optimizing negative prompt generation toward enhanced image generation, using supervised fine-tuning and reinforcement learning. Our combined approach results in a substantial increase of 25% in Inception Score compared to other approaches and surpasses ground-truth negative prompts from the test set. Furthermore, with NegOpt we can preferentially optimize the metrics most important to us. Finally, we construct Negative Prompts DB (https://github.com/mikeogezi/negopt), a publicly available dataset of negative prompts.
[ "['Michael Ogezi' 'Ning Shi']" ]