categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2406.19556
null
null
http://arxiv.org/pdf/2406.19556v1
2024-06-27T22:16:53Z
2024-06-27T22:16:53Z
BOrg: A Brain Organoid-Based Mitosis Dataset for Automatic Analysis of Brain Diseases
Recent advances have enabled the study of human brain development using brain organoids derived from stem cells. Quantifying cellular processes like mitosis in these organoids offers insights into neurodevelopmental disorders, but the manual analysis is time-consuming, and existing datasets lack specific details for brain organoid studies. We introduce BOrg, a dataset designed to study mitotic events in the embryonic development of the brain using confocal microscopy images of brain organoids. BOrg utilizes an efficient annotation pipeline with sparse point annotations and techniques that minimize expert effort, overcoming limitations of standard deep learning approaches on sparse data. We adapt and benchmark state-of-the-art object detection and cell counting models on BOrg for detecting and analyzing mitotic cells across prophase, metaphase, anaphase, and telophase stages. Our results demonstrate these adapted models significantly improve mitosis analysis efficiency and accuracy for brain organoid research compared to existing methods. BOrg facilitates the development of automated tools to quantify statistics like mitosis rates, aiding mechanistic studies of neurodevelopmental processes and disorders. Data and code are available at https://github.com/awaisrauf/borg.
[ "['Muhammad Awais' 'Mehaboobathunnisa Sahul Hameed' 'Bidisha Bhattacharya'\n 'Orly Reiner' 'Rao Muhammad Anwer']" ]
null
null
2406.19560
null
null
http://arxiv.org/pdf/2406.19560v1
2024-06-27T22:19:19Z
2024-06-27T22:19:19Z
Cost-efficient Active Illumination Camera For Hyper-spectral Reconstruction
Hyper-spectral imaging has recently gained increasing attention for use in different applications, including agricultural investigation, ground tracking, remote sensing and many others. However, the high cost, large physical size and complicated operation process prevent hyperspectral cameras from being employed in many applications and research fields. In this paper, we introduce a cost-efficient, compact and easy-to-use active illumination camera that may benefit many applications. We developed a fully functional prototype of such a camera. With the hope of helping with agricultural research, we tested our camera for plant root imaging. In addition, a U-Net model for spectral reconstruction was trained using a reference hyperspectral camera's data as ground truth and our camera's data as input. We demonstrated our camera's ability to obtain additional information over a typical RGB camera. In addition, the ability to reconstruct hyperspectral data from multi-spectral input makes our device compatible with models and algorithms developed for hyperspectral applications with no modifications required.
[ "['Yuxuan Zhang' 'T. M. Sazzad' 'Yangyang Song' 'Spencer J. Chang'\n 'Ritesh Chowdhry' 'Tomas Mejia' 'Anna Hampton' 'Shelby Kucharski'\n 'Stefan Gerber' 'Barry Tillman' 'Marcio F. R. Resende'\n 'William M. Hammond' 'Chris H. Wilson' 'Alina Zare' 'Sanjeev J. Koppal']" ]
null
null
2406.19561
null
null
http://arxiv.org/pdf/2406.19561v1
2024-06-27T22:24:46Z
2024-06-27T22:24:46Z
Meta-Gradient Search Control: A Method for Improving the Efficiency of Dyna-style Planning
We study how a Reinforcement Learning (RL) system can remain sample-efficient when learning from an imperfect model of the environment. This is particularly challenging when the learning system is resource-constrained and in continual settings, where the environment dynamics change. To address these challenges, our paper introduces an online, meta-gradient algorithm that tunes a probability with which states are queried during Dyna-style planning. Our study compares the aggregate, empirical performance of this meta-gradient method to baselines that employ conventional sampling strategies. Results indicate that our method improves efficiency of the planning process, which, as a consequence, improves the sample-efficiency of the overall learning process. On the whole, we observe that our meta-learned solutions avoid several pathologies of conventional planning approaches, such as sampling inaccurate transitions and those that stall credit assignment. We believe these findings could prove useful, in future work, for designing model-based RL systems at scale.
[ "['Bradley Burega' 'John D. Martin' 'Luke Kapeluck' 'Michael Bowling']" ]
null
null
2406.19566
null
null
http://arxiv.org/pdf/2406.19566v1
2024-06-27T22:51:06Z
2024-06-27T22:51:06Z
Instance-Optimal Private Density Estimation in the Wasserstein Distance
Estimating the density of a distribution from samples is a fundamental problem in statistics. In many practical settings, the Wasserstein distance is an appropriate error metric for density estimation. For example, when estimating population densities in a geographic region, a small Wasserstein distance means that the estimate is able to capture roughly where the population mass is. In this work we study differentially private density estimation in the Wasserstein distance. We design and analyze instance-optimal algorithms for this problem that can adapt to easy instances. For distributions $P$ over $\mathbb{R}$, we consider a strong notion of instance-optimality: an algorithm that uniformly achieves the instance-optimal estimation rate is competitive with an algorithm that is told that the distribution is either $P$ or $Q_P$ for some distribution $Q_P$ whose probability density function (pdf) is within a factor of 2 of the pdf of $P$. For distributions over $\mathbb{R}^2$, we use a different notion of instance optimality. We say that an algorithm is instance-optimal if it is competitive with an algorithm that is given a constant-factor multiplicative approximation of the density of the distribution. We characterize the instance-optimal estimation rates in both these settings and show that they are uniformly achievable (up to polylogarithmic factors). Our approach for $\mathbb{R}^2$ extends to arbitrary metric spaces as it goes via hierarchically separated trees. As a special case our results lead to instance-optimal private learning in TV distance for discrete distributions.
[ "['Vitaly Feldman' 'Audra McMillan' 'Satchit Sivakumar' 'Kunal Talwar']" ]
null
null
2406.19573
null
null
http://arxiv.org/pdf/2406.19573v1
2024-06-27T23:25:57Z
2024-06-27T23:25:57Z
On Counterfactual Interventions in Vector Autoregressive Models
Counterfactual reasoning allows us to explore hypothetical scenarios in order to explain the impacts of our decisions. However, addressing such inquiries is impossible without establishing the appropriate mathematical framework. In this work, we introduce the problem of counterfactual reasoning in the context of vector autoregressive (VAR) processes. We also formulate the inference of a causal model as a joint regression task where for inference we use both data with and without interventions. After learning the model, we exploit the linearity of the VAR model to make exact predictions about the effects of counterfactual interventions. Furthermore, we quantify the total causal effects of past counterfactual interventions. The source code for this project is freely available at https://github.com/KurtButler/counterfactual_interventions.
[ "['Kurt Butler' 'Marija Iloska' 'Petar M. Djuric']" ]
null
null
2406.19574
null
null
http://arxiv.org/pdf/2406.19574v2
2024-07-04T01:58:27Z
2024-06-27T23:26:57Z
Deep Temporal Sequence Classification and Mathematical Modeling for Cell Tracking in Dense 3D Microscopy Videos of Bacterial Biofilms
Automatic cell tracking in dense environments is plagued by inaccurate correspondences and misidentification of parent-offspring relationships. In this paper, we introduce a novel cell tracking algorithm named DenseTrack, which integrates deep learning with mathematical model-based strategies to effectively establish correspondences between consecutive frames and detect cell division events in crowded scenarios. We formulate the cell tracking problem as a deep learning-based temporal sequence classification task followed by solving a constrained one-to-one matching optimization problem exploiting the classifier's confidence scores. Additionally, we present an eigendecomposition-based cell division detection strategy that leverages knowledge of cellular geometry. The performance of the proposed approach has been evaluated by tracking densely packed cells in 3D time-lapse image sequences of bacterial biofilm development. The experimental results on simulated as well as experimental fluorescence image sequences suggest that the proposed tracking method achieves superior performance in terms of both qualitative and quantitative evaluation measures compared to recent state-of-the-art cell tracking approaches.
[ "['Tanjin Taher Toma' 'Yibo Wang' 'Andreas Gahlmann' 'Scott T. Acton']" ]
null
null
2406.19578
null
null
http://arxiv.org/pdf/2406.19578v1
2024-06-27T23:43:36Z
2024-06-27T23:43:36Z
PathAlign: A vision-language model for whole slide images in histopathology
Microscopic interpretation of histopathology images underlies many important diagnostic and treatment decisions. While advances in vision-language modeling raise new opportunities for analysis of such images, the gigapixel-scale size of whole slide images (WSIs) introduces unique challenges. Additionally, pathology reports simultaneously highlight key findings from small regions while also aggregating interpretation across multiple slides, often making it difficult to create robust image-text pairs. As such, pathology reports remain a largely untapped source of supervision in computational pathology, with most efforts relying on region-of-interest annotations or self-supervision at the patch-level. In this work, we develop a vision-language model based on the BLIP-2 framework using WSIs paired with curated text from pathology reports. This enables applications utilizing a shared image-text embedding space, such as text or image retrieval for finding cases of interest, as well as integration of the WSI encoder with a frozen large language model (LLM) for WSI-based generative text capabilities such as report generation or AI-in-the-loop interactions. We utilize a de-identified dataset of over 350,000 WSIs and diagnostic text pairs, spanning a wide range of diagnoses, procedure types, and tissue types. We present pathologist evaluation of text generation and text retrieval using WSI embeddings, as well as results for WSI classification and workflow prioritization (slide-level triaging). Model-generated text for WSIs was rated by pathologists as accurate, without clinically significant error or omission, for 78% of WSIs on average. This work demonstrates exciting potential capabilities for language-aligned WSI embeddings.
[ "['Faruk Ahmed' 'Andrew Sellergren' 'Lin Yang' 'Shawn Xu' 'Boris Babenko'\n 'Abbi Ward' 'Niels Olson' 'Arash Mohtashamian' 'Yossi Matias'\n 'Greg S. Corrado' 'Quang Duong' 'Dale R. Webster' 'Shravya Shetty'\n 'Daniel Golden' 'Yun Liu' 'David F. Steiner' 'Ellery Wulczyn']" ]
null
null
2406.19579
null
null
http://arxiv.org/pdf/2406.19579v1
2024-06-27T23:57:01Z
2024-06-27T23:57:01Z
Private Zeroth-Order Nonsmooth Nonconvex Optimization
We introduce a new zeroth-order algorithm for private stochastic optimization on nonconvex and nonsmooth objectives. Given a dataset of size $M$, our algorithm ensures $(\alpha,\alpha\rho^2/2)$-Rényi differential privacy and finds a $(\delta,\epsilon)$-stationary point so long as $M=\tilde{\Omega}\left(\frac{d}{\delta\epsilon^3} + \frac{d^{3/2}}{\rho\delta\epsilon^2}\right)$. This matches the optimal complexity of its non-private zeroth-order analog. Notably, although the objective is not smooth, we have privacy ``for free'' whenever $\rho \ge \sqrt{d}\epsilon$.
[ "['Qinzi Zhang' 'Hoang Tran' 'Ashok Cutkosky']" ]
null
null
2406.19580
null
null
http://arxiv.org/pdf/2406.19580v1
2024-06-28T00:05:53Z
2024-06-28T00:05:53Z
FRED: Flexible REduction-Distribution Interconnect and Communication Implementation for Wafer-Scale Distributed Training of DNN Models
Distributed Deep Neural Network (DNN) training is a technique to reduce the training overhead by distributing the training tasks into multiple accelerators, according to a parallelization strategy. However, high-performance compute and interconnects are needed for maximum speed-up and linear scaling of the system. Wafer-scale systems are a promising technology that allows for tightly integrating high-end accelerators with high-speed wafer-scale interconnects, making it an attractive platform for distributed training. However, the wafer-scale interconnect should offer high performance and flexibility for various parallelization strategies to enable maximum optimizations for compute and memory usage. In this paper, we propose FRED, a wafer-scale interconnect that is tailored for the high-BW requirements of wafer-scale networks and can efficiently execute communication patterns of different parallelization strategies. Furthermore, FRED supports in-switch collective communication execution that reduces the network traffic by approximately 2X. Our results show that FRED can improve the average end-to-end training time of ResNet-152, Transformer-17B, GPT-3, and Transformer-1T by 1.76X, 1.87X, 1.34X, and 1.4X, respectively when compared to a baseline wafer-scale 2D-Mesh fabric.
[ "['Saeed Rashidi' 'William Won' 'Sudarshan Srinivasan' 'Puneet Gupta'\n 'Tushar Krishna']" ]
null
null
2406.19581
null
null
http://arxiv.org/pdf/2406.19581v1
2024-06-28T00:08:13Z
2024-06-28T00:08:13Z
HarmonICA: Neural non-stationarity correction and source separation for motor neuron interfaces
A major outstanding problem when interfacing with spinal motor neurons is how to accurately compensate for non-stationary effects in the signal during source separation routines, particularly when they cannot be estimated in advance. This forces current systems to instead use undifferentiated bulk signal, which limits the potential degrees of freedom for control. In this study we propose a potential solution, using an unsupervised learning algorithm to blindly correct for the effects of latent processes which drive the signal non-stationarities. We implement this methodology within the theoretical framework of a quasilinear version of independent component analysis (ICA). The proposed design, HarmonICA, sidesteps the identifiability problems of nonlinear ICA, allowing for equivalent predictability to linear ICA whilst retaining the ability to learn complex nonlinear relationships between non-stationary latents and their effects on the signal. We test HarmonICA on both invasive and non-invasive recordings both simulated and real, demonstrating an ability to blindly compensate for the non-stationary effects specific to each, and thus to significantly enhance the quality of a source separation routine.
[ "['Alexander Kenneth Clarke' 'Agnese Grison' 'Irene Mendez Guerra'\n 'Pranav Mamidanna' 'Shihan Ma' 'Silvia Muceli' 'Dario Farina']" ]
null
null
2406.19589
null
null
http://arxiv.org/pdf/2406.19589v1
2024-06-28T00:39:17Z
2024-06-28T00:39:17Z
Network Bending of Diffusion Models for Audio-Visual Generation
In this paper we present the first steps towards the creation of a tool which enables artists to create music visualizations using pre-trained, generative, machine learning models. First, we investigate the application of network bending, the process of applying transforms within the layers of a generative network, to image generation diffusion models by utilizing a range of point-wise, tensor-wise, and morphological operators. We identify a number of visual effects that result from various operators, including some that are not easily recreated with standard image editing tools. We find that this process allows for continuous, fine-grain control of image generation which can be helpful for creative applications. Next, we generate music-reactive videos using Stable Diffusion by passing audio features as parameters to network bending operators. Finally, we comment on certain transforms which radically shift the image and the possibilities of learning more about the latent space of Stable Diffusion based on these transforms.
[ "['Luke Dzwonczyk' 'Carmine Emanuele Cella' 'David Ban']" ]
null
null
2406.19596
null
null
http://arxiv.org/pdf/2406.19596v1
2024-06-28T01:37:46Z
2024-06-28T01:37:46Z
Optimizing Cyber Defense in Dynamic Active Directories through Reinforcement Learning
This paper addresses a significant gap in Autonomous Cyber Operations (ACO) literature: the absence of effective edge-blocking ACO strategies in dynamic, real-world networks. It specifically targets the cybersecurity vulnerabilities of organizational Active Directory (AD) systems. Unlike the existing literature on edge-blocking defenses which considers AD systems as static entities, our study counters this by recognizing their dynamic nature and developing advanced edge-blocking defenses through a Stackelberg game model between attacker and defender. We devise a Reinforcement Learning (RL)-based attack strategy and an RL-assisted Evolutionary Diversity Optimization-based defense strategy, where the attacker and defender improve each other's strategies via parallel gameplay. To address the computational challenges of training attacker-defender strategies on numerous dynamic AD graphs, we propose an RL Training Facilitator that prunes environments and neural networks to eliminate irrelevant elements, enabling efficient and scalable training for large graphs. We extensively train the attacker strategy, as a sophisticated attacker model is essential for a robust defense. Our empirical results successfully demonstrate that our proposed approach enhances the defender's proficiency in hardening dynamic AD graphs while ensuring scalability for large-scale AD.
[ "['Diksha Goel' 'Kristen Moore' 'Mingyu Guo' 'Derui Wang' 'Minjune Kim'\n 'Seyit Camtepe']" ]
null
null
2406.19602
null
null
http://arxiv.org/pdf/2406.19602v2
2024-07-01T02:10:16Z
2024-06-28T02:18:16Z
A Survey on Deep Clustering: From the Prior Perspective
Facilitated by the powerful feature extraction ability of neural networks, deep clustering has achieved great success in analyzing high-dimensional and complex real-world data. The performance of deep clustering methods is affected by various factors such as network structures and learning objectives. However, as pointed out in this survey, the essence of deep clustering lies in the incorporation and utilization of prior knowledge, which is largely ignored by existing works. From pioneering deep clustering methods based on data structure assumptions to recent contrastive clustering methods based on data augmentation invariances, the development of deep clustering intrinsically corresponds to the evolution of prior knowledge. In this survey, we provide a comprehensive review of deep clustering methods by categorizing them into six types of prior knowledge. We find that in general the prior innovation follows two trends, namely, i) from mining to constructing, and ii) from internal to external. Besides, we provide a benchmark on five widely-used datasets and analyze the performance of methods with diverse priors. By providing a novel prior knowledge perspective, we hope this survey could provide some novel insights and inspire future research in the deep clustering community.
[ "['Yiding Lu' 'Haobin Li' 'Yunfan Li' 'Yijie Lin' 'Xi Peng']" ]
null
null
2406.19614
null
null
http://arxiv.org/pdf/2406.19614v1
2024-06-28T02:41:33Z
2024-06-28T02:41:33Z
A Survey on Data Quality Dimensions and Tools for Machine Learning
Machine learning (ML) technologies have become substantial in practically all aspects of our society, and data quality (DQ) is critical for the performance, fairness, robustness, safety, and scalability of ML models. With the large and complex data in data-centric AI, traditional methods like exploratory data analysis (EDA) and cross-validation (CV) face challenges, highlighting the importance of mastering DQ tools. In this survey, we review 17 DQ evaluation and improvement tools in the last 5 years. By introducing the DQ dimensions, metrics, and main functions embedded in these tools, we compare their strengths and limitations and propose a roadmap for developing open-source DQ tools for ML. Based on the discussions on the challenges and emerging trends, we further highlight the potential applications of large language models (LLMs) and generative AI in DQ evaluation and improvement for ML. We believe this comprehensive survey can enhance understanding of DQ in ML and could drive progress in data-centric AI. A complete list of the literature investigated in this survey is available on GitHub at: https://github.com/haihua0913/awesome-dq4ml.
[ "['Yuhan Zhou' 'Fengjiao Tu' 'Kewei Sha' 'Junhua Ding' 'Haihua Chen']" ]
null
null
2406.19615
null
null
http://arxiv.org/pdf/2406.19615v1
2024-06-28T02:42:30Z
2024-06-28T02:42:30Z
VarteX: Enhancing Weather Forecast through Distributed Variable Representation
Weather forecasting is essential for various human activities. Recent data-driven models based on deep learning have outperformed numerical weather prediction in forecasting performance. However, challenges remain in efficiently handling multiple meteorological variables. This study proposes a new variable aggregation scheme and an efficient learning framework to address that challenge. Experiments show that VarteX outperforms the conventional model in forecast performance, requiring significantly fewer parameters and resources. The effectiveness of learning through multiple aggregations and regional split training is demonstrated, enabling more efficient and accurate deep learning-based weather forecasting.
[ "['Ayumu Ueyama' 'Kazuhiko Kawamoto' 'Hiroshi Kera']" ]
null
null
2406.19617
null
null
http://arxiv.org/pdf/2406.19617v1
2024-06-28T02:56:22Z
2024-06-28T02:56:22Z
Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity
Optimization of convex functions under stochastic zeroth-order feedback has been a major and challenging question in online learning. In this work, we consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm is only accessible to noisy evaluations of the objective function it queries. We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds. We propose an algorithm that features a combination of a bootstrapping stage and a mirror-descent stage. Our main technical innovation consists of a sharp characterization for the spherical-sampling gradient estimator under higher-order smoothness conditions, which allows the algorithm to optimally balance the bias-variance tradeoff, and a new iterative method for the bootstrapping stage, which maintains the performance for unbounded Hessian.
[ "['Qian Yu' 'Yining Wang' 'Baihe Huang' 'Qi Lei' 'Jason D. Lee']" ]
null
null
2406.19619
null
null
http://arxiv.org/pdf/2406.19619v1
2024-06-28T03:02:25Z
2024-06-28T03:02:25Z
ScoreFusion: fusing score-based generative models via Kullback-Leibler barycenters
We study the problem of fusing pre-trained (auxiliary) generative models to enhance the training of a target generative model. We propose using KL-divergence weighted barycenters as an optimal fusion mechanism, in which the barycenter weights are optimally trained to minimize a suitable loss for the target population. While computing the optimal KL-barycenter weights can be challenging, we demonstrate that this process can be efficiently executed using diffusion score training when the auxiliary generative models are also trained based on diffusion score methods. Moreover, we show that our fusion method has a dimension-free sample complexity in total variation distance provided that the auxiliary models are well fitted for their own task and the auxiliary tasks combined capture the target well. The main takeaway of our method is that if the auxiliary models are well-trained and can borrow features from each other that are present in the target, our fusion method significantly improves the training of generative models. We provide a concise computational implementation of the fusion algorithm, and validate its efficiency in the low-data regime with numerical experiments involving mixture models and image datasets.
[ "['Hao Liu' 'Junze' 'Ye' 'Jose Blanchet' 'Nian Si']" ]
null
null
2406.19621
null
null
http://arxiv.org/pdf/2406.19621v1
2024-06-28T03:07:53Z
2024-06-28T03:07:53Z
Machine-Learning-Driven Runtime Optimization of BLAS Level 3 on Modern Multi-Core Systems
BLAS Level 3 operations are essential for scientific computing, but finding the optimal number of threads for multi-threaded implementations on modern multi-core systems is challenging. We present an extension to the Architecture and Data-Structure Aware Linear Algebra (ADSALA) library that uses machine learning to optimize the runtime of all BLAS Level 3 operations. Our method predicts the best number of threads for each operation based on the matrix dimensions and the system architecture. We test our method on two HPC platforms with Intel and AMD processors, using MKL and BLIS as baseline BLAS implementations. We achieve speedups of 1.5 to 3.0 for all operations, compared to using the maximum number of threads. We also analyze the runtime patterns of different BLAS operations and explain the sources of speedup. Our work shows the effectiveness and generality of the ADSALA approach for optimizing BLAS routines on modern multi-core systems.
[ "['Yufan Xia' 'Giuseppe Maria Junior Barca']" ]
null
null
2406.19622
null
null
http://arxiv.org/pdf/2406.19622v1
2024-06-28T03:10:36Z
2024-06-28T03:10:36Z
Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness
The security and robustness of deep neural networks (DNNs) have become increasingly concerning. This paper aims to provide both a theoretical foundation and a practical solution to ensure the reliability of DNNs. We explore the concept of Lipschitz continuity to certify the robustness of DNNs against adversarial attacks, which aim to mislead the network by adding imperceptible perturbations to inputs. We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness. Unlike existing adversarially trained models, where robustness is enhanced by introducing additional examples from other datasets or generative models, our method is almost cost-free as it can be integrated with existing models without requiring re-training. Experimental results demonstrate the generalizability of our method, as it can be combined with various models and achieve enhancements in robustness. Furthermore, our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
[ "['Erh-Chung Chen' 'Pin-Yu Chen' 'I-Hsin Chung' 'Che-Rung Lee']" ]
null
null
2406.19631
null
null
http://arxiv.org/pdf/2406.19631v1
2024-06-28T03:39:45Z
2024-06-28T03:39:45Z
Personalized Interpretation on Federated Learning: A Virtual Concepts approach
Tackling non-IID data is an open challenge in federated learning research. Existing FL methods, including robust FL and personalized FL, are designed to improve model performance without consideration of interpreting non-IID data across clients. This paper aims to design a novel FL method that is robust to and can interpret non-IID data across clients. Specifically, we interpret each client's dataset as a mixture of conceptual vectors, each of which represents an interpretable concept to end-users. These conceptual vectors could be pre-defined or refined in a human-in-the-loop process or be learnt via the optimization procedure of the federated learning system. Beyond interpretability, this form of client-specific personalization can also enhance the robustness of the training process in an FL system. The effectiveness of the proposed method has been validated on benchmark datasets.
[ "['Peng Yan' 'Guodong Long' 'Jing Jiang' 'Michael Blumenstein']" ]
null
null
2406.19635
null
null
http://arxiv.org/pdf/2406.19635v1
2024-06-28T03:46:53Z
2024-06-28T03:46:53Z
Model Predictive Simulation Using Structured Graphical Models and Transformers
We propose an approach to simulating trajectories of multiple interacting agents (road users) based on transformers and probabilistic graphical models (PGMs), and apply it to the Waymo SimAgents challenge. The transformer baseline is based on the MTR model, which predicts multiple future trajectories conditioned on the past trajectories and static road layout features. We then improve upon these generated trajectories using a PGM, which contains factors which encode prior knowledge, such as a preference for smooth trajectories, and avoidance of collisions with static obstacles and other moving agents. We perform (approximate) MAP inference in this PGM using the Gauss-Newton method. Finally we sample $K=32$ trajectories for each of the $N \sim 100$ agents for the next $T=8\Delta$ time steps, where $\Delta=10$ is the sampling rate per second. Following the Model Predictive Control (MPC) paradigm, we only return the first element of our forecasted trajectories at each step, and then we replan, so that the simulation can constantly adapt to its changing environment. We therefore call our approach "Model Predictive Simulation" or MPS. We show that MPS improves upon the MTR baseline, especially in safety critical metrics such as collision rate. Furthermore, our approach is compatible with any underlying forecasting model, and does not require extra training, so we believe it is a valuable contribution to the community.
[ "['Xinghua Lou' 'Meet Dave' 'Shrinu Kushagra' 'Miguel Lazaro-Gredilla'\n 'Kevin Murphy']" ]
null
null
2406.19636
null
null
http://arxiv.org/pdf/2406.19636v1
2024-06-28T03:47:54Z
2024-06-28T03:47:54Z
Enforcing Equity in Neural Climate Emulators
Neural network emulators have become an invaluable tool for a wide variety of climate and weather prediction tasks. While showing incredibly promising results, these networks do not have an inherent ability to produce equitable predictions. That is, they are not guaranteed to provide a uniform quality of prediction along any particular class or group of people. This potential for inequitable predictions motivates the need for explicit representations of fairness in these neural networks. To that end, we draw on methods for enforcing analytical physical constraints in neural networks to bias networks towards more equitable predictions. We demonstrate the promise of this methodology using the task of climate model emulation. Specifically, we propose a custom loss function which punishes emulators with unequal quality of predictions across any prespecified regions or category, here defined using human development index (HDI). This loss function weighs a standard loss metric such as mean squared error against another metric which captures inequity along the equity category (HDI), allowing us to adjust the priority of each term before training. Importantly, the loss function does not specify a particular definition of equity to bias the neural network towards, opening the door for custom fairness metrics. Our results show that neural climate emulators trained with our loss function provide more equitable predictions and that the equity metric improves with greater weighting in the loss function. We empirically demonstrate that while there is a tradeoff between accuracy and equity when prioritizing the latter during training, an appropriate selection of the equity priority hyperparameter can minimize loss of performance.
[ "['William Yik' 'Sam J. Silva']" ]
null
null
2406.19642
null
null
http://arxiv.org/pdf/2406.19642v1
2024-06-28T04:14:35Z
2024-06-28T04:14:35Z
IDT: Dual-Task Adversarial Attacks for Privacy Protection
Natural language processing (NLP) models may leak private information in different ways, including membership inference, reconstruction or attribute inference attacks. Sensitive information may not be explicit in the text, but hidden in underlying writing characteristics. Methods to protect privacy can involve using representations inside models that are demonstrated not to detect sensitive attributes or -- for instance, in cases where users might not trust a model, the sort of scenario of interest here -- changing the raw text before models can have access to it. The goal is to rewrite text to prevent someone from inferring a sensitive attribute (e.g. the gender of the author, or their location by the writing style) whilst keeping the text useful for its original intention (e.g. the sentiment of a product review). The few works tackling this have focused on generative techniques. However, these often create extensively different texts from the original ones or face problems such as mode collapse. This paper explores a novel adaptation of adversarial attack techniques to manipulate a text to deceive a classifier w.r.t. one task (privacy) whilst keeping the predictions of another classifier trained for another task (utility) unchanged. We propose IDT, a method that analyses predictions made by auxiliary and interpretable models to identify which tokens are important to change for the privacy task, and which ones should be kept for the utility task. We evaluate different NLP datasets suitable for different tasks. Automatic and human evaluations show that IDT retains the utility of text, while also outperforming existing methods when deceiving a classifier w.r.t. the privacy task.
[ "['Pedro Faustini' 'Shakila Mahjabin Tonni' 'Annabelle McIver'\n 'Qiongkai Xu' 'Mark Dras']" ]
null
null
2406.19653
null
null
http://arxiv.org/pdf/2406.19653v1
2024-06-28T04:48:05Z
2024-06-28T04:48:05Z
ACES: Automatic Cohort Extraction System for Event-Stream Datasets
Reproducibility remains a significant challenge in machine learning (ML) for healthcare. In this field, datasets, model pipelines, and even task/cohort definitions are often private, leading to a significant barrier in sharing, iterating, and understanding ML results on electronic health record (EHR) datasets. In this paper, we address a significant part of this problem by introducing the Automatic Cohort Extraction System for Event-Stream Datasets (ACES). This tool is designed to simultaneously simplify the development of task/cohorts for ML in healthcare and enable the reproduction of these cohorts, both at an exact level for single datasets and at a conceptual level across datasets. To accomplish this, ACES provides (1) a highly intuitive and expressive configuration language for defining both dataset-specific concepts and dataset-agnostic inclusion/exclusion criteria, and (2) a pipeline to automatically extract patient records that meet these defined criteria from real-world data. ACES can be automatically applied to any dataset in either the Medical Event Data Standard (MEDS) or EventStreamGPT (ESGPT) formats, or to *any* dataset for which the necessary task-specific predicates can be extracted in an event-stream form. ACES has the potential to significantly lower the barrier to entry for defining ML tasks, redefine the way researchers interact with EHR datasets, and significantly improve the state of reproducibility for ML studies in this modality. ACES is available at https://github.com/justin13601/aces.
[ "['Justin Xu' 'Jack Gallifant' 'Alistair E. W. Johnson'\n 'Matthew B. A. McDermott']" ]
null
null
2406.19657
null
null
http://arxiv.org/pdf/2406.19657v2
2024-07-02T20:34:50Z
2024-06-28T04:56:53Z
LLMEasyQuant -- An Easy to Use Toolkit for LLM Quantization
Many quantization methods have appeared for LLM quantization, yet few are user-friendly and easy to deploy locally. Packages like TensorRT and Quanto have many underlying structures and self-invoking internal functions, which are not conducive to developers' personalized development and learning for deployment. Therefore, we develop LLMEasyQuant, a package aimed at easy quantization deployment that is user-friendly and suitable for beginners' learning.
[ "['Dong Liu' 'Meng Jiang' 'Kaiser Pister']" ]
null
null
2406.19662
null
null
http://arxiv.org/pdf/2406.19662v1
2024-06-28T05:13:43Z
2024-06-28T05:13:43Z
Finite basis Kolmogorov-Arnold networks: domain decomposition for data-driven and physics-informed problems
Kolmogorov-Arnold networks (KANs) have attracted attention recently as an alternative to multilayer perceptrons (MLPs) for scientific machine learning. However, KANs can be expensive to train, even for relatively small networks. Inspired by finite basis physics-informed neural networks (FBPINNs), in this work, we develop a domain decomposition method for KANs that allows for several small KANs to be trained in parallel to give accurate solutions for multiscale problems. We show that finite basis KANs (FBKANs) can provide accurate results with noisy data and for physics-informed training.
[ "['Amanda A. Howard' 'Bruno Jacob' 'Sarah H. Murphy' 'Alexander Heinlein'\n 'Panos Stinis']" ]
null
null
2406.19670
null
null
http://arxiv.org/abs/2406.19670v2
2024-07-08T08:28:34Z
2024-06-28T05:44:47Z
Function+Data Flow: A Framework to Specify Machine Learning Pipelines for Digital Twinning
The development of digital twins (DTs) for physical systems increasingly leverages artificial intelligence (AI), particularly for combining data from different sources or for creating computationally efficient, reduced-dimension models. Indeed, even in very different application domains, twinning employs common techniques such as model order reduction and modelization with hybrid data (that is, data sourced from both physics-based models and sensors). Despite this apparent generality, current development practices are ad-hoc, making the design of AI pipelines for digital twinning complex and time-consuming. Here we propose Function+Data Flow (FDF), a domain-specific language (DSL) to describe AI pipelines within DTs. FDF aims to facilitate the design and validation of digital twins. Specifically, FDF treats functions as first-class citizens, enabling effective manipulation of models learned with AI. We illustrate the benefits of FDF on two concrete use cases from different domains: predicting the plastic strain of a structure and modeling the electromagnetic behavior of a bearing.
[ "['Eduardo de Conto' 'Blaise Genest' 'Arvind Easwaran']" ]
null
null
2406.19674
null
null
http://arxiv.org/pdf/2406.19674v1
2024-06-28T06:22:23Z
2024-06-28T06:22:23Z
Less is More: Accurate Speech Recognition & Translation without Web-Scale Data
Recent advances in speech recognition and translation rely on hundreds of thousands of hours of Internet speech data. We argue that state-of-the-art accuracy can be reached without relying on web-scale data. Canary, a multilingual ASR and speech translation model, outperforms current state-of-the-art models (Whisper, OWSM, and Seamless-M4T) on English, French, Spanish, and German, while being trained on an order of magnitude less data than these models. Three key factors enable such a data-efficient model: (1) a FastConformer-based attention encoder-decoder architecture, (2) training on synthetic data generated with machine translation, and (3) advanced training techniques: data-balancing, dynamic data blending, dynamic bucketing and noise-robust fine-tuning. The model, weights, and training code will be open-sourced.
[ "['Krishna C. Puvvada' 'Piotr Żelasko' 'He Huang' 'Oleksii Hrinchuk'\n 'Nithin Rao Koluguri' 'Kunal Dhawan' 'Somshubra Majumdar'\n 'Elena Rastorgueva' 'Zhehuai Chen' 'Vitaly Lavrukhin' 'Jagadeesh Balam'\n 'Boris Ginsburg']" ]
null
null
2406.19707
null
null
http://arxiv.org/pdf/2406.19707v1
2024-06-28T07:41:26Z
2024-06-28T07:41:26Z
InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management
Transformer-based large language models (LLMs) demonstrate impressive performance across various natural language processing tasks. Serving LLM inference for generating long contents, however, poses a challenge due to the enormous memory footprint of the transient state, known as the key-value (KV) cache, which scales with the sequence length and batch size. In this paper, we present InfiniGen, a novel KV cache management framework tailored for long-text generation, which synergistically works with modern offloading-based inference systems. InfiniGen leverages the key insight that a few important tokens that are essential for computing the subsequent attention layer in the Transformer can be speculated by performing a minimal rehearsal with the inputs of the current layer and part of the query weight and key cache of the subsequent layer. This allows us to prefetch only the essential KV cache entries (without fetching them all), thereby mitigating the fetch overhead from the host memory in offloading-based LLM serving systems. Our evaluation on several representative LLMs shows that InfiniGen improves the overall performance of a modern offloading-based system by up to 3.00x compared to prior KV cache management methods while offering substantially better model accuracy.
[ "['Wonbeom Lee' 'Jungi Lee' 'Junghwan Seo' 'Jaewoong Sim']" ]
null
null
2406.19711
null
null
http://arxiv.org/pdf/2406.19711v1
2024-06-28T07:46:51Z
2024-06-28T07:46:51Z
CHASE: A Causal Heterogeneous Graph based Framework for Root Cause Analysis in Multimodal Microservice Systems
In recent years, the widespread adoption of distributed microservice architectures within the industry has significantly increased the demand for enhanced system availability and robustness. Due to the complex service invocation paths and dependencies at enterprise-level microservice systems, it is challenging to locate the anomalies promptly during service invocations, thus causing intractable issues for normal system operations and maintenance. In this paper, we propose a Causal Heterogeneous grAph baSed framEwork for root cause analysis, namely CHASE, for microservice systems with multimodal data, including traces, logs, and system monitoring metrics. Specifically, related information is encoded into representative embeddings and further modeled by a multimodal invocation graph. Following that, anomaly detection is performed on each instance node with attentive heterogeneous message passing from its adjacent metric and log nodes. Finally, CHASE learns from the constructed hypergraph with hyperedges representing the flow of causality and performs root cause localization. We evaluate the proposed framework on two public microservice datasets with distinct attributes and compare with the state-of-the-art methods. The results show that CHASE achieves average performance gains of up to 36.2% (A@1) and 29.4% (Percentage@1), respectively, over its best counterpart.
[ "['Ziming Zhao' 'Tiehua Zhang' 'Zhishu Shen' 'Hai Dong' 'Xingjun Ma'\n 'Xianhui Liu' 'Yun Yang']" ]
null
null
2406.19714
null
null
http://arxiv.org/pdf/2406.19714v1
2024-06-28T07:56:35Z
2024-06-28T07:56:35Z
State Matching and Multiple References in Adaptive Active Automata Learning
Active automata learning (AAL) is a method to infer state machines by interacting with black-box systems. Adaptive AAL aims to reduce the sample complexity of AAL by incorporating domain specific knowledge in the form of (similar) reference models. Such reference models appear naturally when learning multiple versions or variants of a software system. In this paper, we present state matching, which allows flexible use of the structure of these reference models by the learner. State matching is the main ingredient of adaptive L#, a novel framework for adaptive learning, built on top of L#. Our empirical evaluation shows that adaptive L# improves the state of the art by up to two orders of magnitude.
[ "['Loes Kruger' 'Sebastian Junges' 'Jurriaan Rot']" ]
null
null
2406.19726
null
null
http://arxiv.org/pdf/2406.19726v1
2024-06-28T08:16:54Z
2024-06-28T08:16:54Z
EPOCH: Jointly Estimating the 3D Pose of Cameras and Humans
Monocular Human Pose Estimation (HPE) aims at determining the 3D positions of human joints from a single 2D image captured by a camera. However, a single 2D point in the image may correspond to multiple points in 3D space. Typically, the uniqueness of the 2D-3D relationship is approximated using an orthographic or weak-perspective camera model. In this study, instead of relying on approximations, we advocate for utilizing the full perspective camera model. This involves estimating camera parameters and establishing a precise, unambiguous 2D-3D relationship. To do so, we introduce the EPOCH framework, comprising two main components: the pose lifter network (LiftNet) and the pose regressor network (RegNet). LiftNet utilizes the full perspective camera model to precisely estimate the 3D pose in an unsupervised manner. It takes a 2D pose and camera parameters as inputs and produces the corresponding 3D pose estimation. These inputs are obtained from RegNet, which starts from a single image and provides estimates for the 2D pose and camera parameters. RegNet utilizes only 2D pose data as weak supervision. Internally, RegNet predicts a 3D pose, which is then projected to 2D using the estimated camera parameters. This process enables RegNet to establish the unambiguous 2D-3D relationship. Our experiments show that modeling the lifting as an unsupervised task with a camera in-the-loop results in better generalization to unseen data. We obtain state-of-the-art results for the 3D HPE on the Human3.6M and MPI-INF-3DHP datasets. Our code is available at: [Github link upon acceptance, see supplementary materials].
[ "['Nicola Garau' 'Giulia Martinelli' 'Niccolò Bisagno' 'Denis Tomè'\n 'Carsten Stoll']" ]
null
null
2406.19736
null
null
http://arxiv.org/pdf/2406.19736v1
2024-06-28T08:25:27Z
2024-06-28T08:25:27Z
MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment
This paper introduces MM-Instruct, a large-scale dataset of diverse and high-quality visual instruction data designed to enhance the instruction-following capabilities of large multimodal models (LMMs). While existing visual instruction datasets often focus on question-answering, they struggle to generalize to broader application scenarios such as creative writing, summarization, or image analysis. To address these limitations, we propose a novel approach to constructing MM-Instruct that leverages the strong instruction-following capabilities of existing LLMs to generate novel visual instruction data from large-scale but conventional image captioning datasets. MM-Instruct first leverages ChatGPT to automatically generate diverse instructions from a small set of seed instructions through augmentation and summarization. It then matches these instructions with images and uses an open-sourced large language model (LLM) to generate coherent answers to the instruction-image pairs. The LLM is grounded by the detailed text descriptions of images in the whole answer generation process to guarantee the alignment of the instruction data. Moreover, we introduce a benchmark based on the generated instruction data to evaluate the instruction-following capabilities of existing LMMs. We demonstrate the effectiveness of MM-Instruct by training a LLaVA-1.5 model on the generated data, denoted as LLaVA-Instruct, which exhibits significant improvements in instruction-following capabilities compared to LLaVA-1.5 models. The MM-Instruct dataset, benchmark, and pre-trained models are available at https://github.com/jihaonew/MM-Instruct.
[ "['Jihao Liu' 'Xin Huang' 'Jinliang Zheng' 'Boxiao Liu' 'Jia Wang'\n 'Osamu Yoshie' 'Yu Liu' 'Hongsheng Li']" ]
null
null
2406.19738
null
null
http://arxiv.org/pdf/2406.19738v1
2024-06-28T08:26:47Z
2024-06-28T08:26:47Z
Classical Bandit Algorithms for Entanglement Detection in Parameterized Qubit States
Entanglement is a key resource for a wide range of tasks in quantum information and computing. Thus, verifying availability of this quantum resource is essential. Extensive research on entanglement detection has led to no-go theorems (Lu et al. [Phys. Rev. Lett., 116, 230501 (2016)]) that highlight the need for full state tomography (FST) in the absence of adaptive or joint measurements. Recent advancements, as proposed by Zhu, Teo, and Englert [Phys. Rev. A, 81, 052339, 2010], introduce a single-parameter family of entanglement witness measurements which are capable of conclusively detecting certain entangled states and only resort to FST when all witness measurements are inconclusive. We find a variety of realistic noisy two-qubit quantum states $\mathcal{F}$ that yield conclusive results under this witness family. We solve the problem of detecting entanglement among $K$ quantum states in $\mathcal{F}$, of which $m$ states are entangled, with $m$ potentially unknown. We recognize a structural connection of this problem to the Bad Arm Identification problem in stochastic Multi-Armed Bandits (MAB). In contrast to existing quantum bandit frameworks, we establish a new correspondence tailored for entanglement detection and term it the $(m,K)$-quantum Multi-Armed Bandit. We implement two well-known MAB policies for arbitrary states derived from $\mathcal{F}$, present theoretical guarantees on the measurement/sample complexity and demonstrate the practicality of the policies through numerical simulations. More broadly, this paper highlights the potential for employing classical machine learning techniques for quantum entanglement detection.
[ "['Bharati. K' 'Vikesh Siddhu' 'Krishna Jagannathan']" ]
null
null
2406.19753
null
null
http://arxiv.org/pdf/2406.19753v1
2024-06-28T08:53:33Z
2024-06-28T08:53:33Z
Backdoor Attack in Prompt-Based Continual Learning
Prompt-based approaches offer a cutting-edge solution to data privacy issues in continual learning, particularly in scenarios involving multiple data suppliers where long-term storage of private user data is prohibited. Despite delivering state-of-the-art performance, its impressive remembering capability can become a double-edged sword, raising security concerns as it might inadvertently retain poisoned knowledge injected during learning from private user data. Following this insight, in this paper, we expose continual learning to a potential threat: backdoor attack, which drives the model to follow a desired adversarial target whenever a specific trigger is present while still performing normally on clean samples. We highlight three critical challenges in executing backdoor attacks on incremental learners and propose corresponding solutions: (1) \emph{Transferability}: We employ a surrogate dataset and manipulate prompt selection to transfer backdoor knowledge to data from other suppliers; (2) \emph{Resiliency}: We simulate static and dynamic states of the victim to ensure the backdoor trigger remains robust during intense incremental learning processes; and (3) \emph{Authenticity}: We apply binary cross-entropy loss as an anti-cheating factor to prevent the backdoor trigger from devolving into adversarial noise. Extensive experiments across various benchmark datasets and continual learners validate our continual backdoor framework, achieving up to $100\%$ attack success rate, with further ablation studies confirming our contributions' effectiveness.
[ "['Trang Nguyen' 'Anh Tran' 'Nhat Ho']" ]
null
null
2406.19765
null
null
http://arxiv.org/pdf/2406.19765v2
2024-07-02T08:29:56Z
2024-06-28T09:10:23Z
Systematic Literature Review on Application of Learning-based Approaches in Continuous Integration
Context: Machine learning (ML) and deep learning (DL) analyze raw data to extract valuable insights in specific phases. The rise of continuous practices in software projects emphasizes automating Continuous Integration (CI) with these learning-based methods, while the growing adoption of such approaches underscores the need for systematizing knowledge. Objective: Our objective is to comprehensively review and analyze existing literature concerning learning-based methods within the CI domain. We endeavour to identify and analyse various techniques documented in the literature, emphasizing the fundamental attributes of training phases within learning-based solutions in the context of CI. Method: We conducted a Systematic Literature Review (SLR) involving 52 primary studies. Through statistical and thematic analyses, we explored the correlations between CI tasks and the training phases of learning-based methodologies across the selected studies, encompassing a spectrum from data engineering techniques to evaluation metrics. Results: This paper presents an analysis of the automation of CI tasks utilizing learning-based methods. We identify and analyze nine types of data sources, four steps in data preparation, four feature types, nine subsets of data features, five approaches for hyperparameter selection and tuning, and fifteen evaluation metrics. Furthermore, we discuss the latest techniques employed, existing gaps in CI task automation, and the characteristics of the utilized learning-based techniques. Conclusion: This study provides a comprehensive overview of learning-based methods in CI, offering valuable insights for researchers and practitioners developing CI task automation. It also highlights the need for further research to advance these methods in CI.
[ "['Ali Kazemi Arani' 'Triet Huynh Minh Le' 'Mansooreh Zahedi'\n 'M. Ali Babar']" ]
null
null
2406.19768
null
null
http://arxiv.org/pdf/2406.19768v2
2024-07-01T11:02:45Z
2024-06-28T09:17:51Z
Contextualized Hybrid Ensemble Q-learning: Learning Fast with Control Priors
Combining Reinforcement Learning (RL) with a prior controller can yield the best out of two worlds: RL can solve complex nonlinear problems, while the control prior ensures safer exploration and speeds up training. Prior work largely blends both components with a fixed weight, neglecting that the RL agent's performance varies with the training progress and across regions in the state space. Therefore, we advocate for an adaptive strategy that dynamically adjusts the weighting based on the RL agent's current capabilities. We propose a new adaptive hybrid RL algorithm, Contextualized Hybrid Ensemble Q-learning (CHEQ). CHEQ combines three key ingredients: (i) a time-invariant formulation of the adaptive hybrid RL problem treating the adaptive weight as a context variable, (ii) a weight adaption mechanism based on the parametric uncertainty of a critic ensemble, and (iii) ensemble-based acceleration for data-efficient RL. Evaluating CHEQ on a car racing task reveals substantially stronger data efficiency, exploration safety, and transferability to unknown scenarios than state-of-the-art adaptive hybrid RL methods.
[ "['Emma Cramer' 'Bernd Frauenknecht' 'Ramil Sabirov' 'Sebastian Trimpe']" ]
null
null
2406.19770
null
null
http://arxiv.org/pdf/2406.19770v1
2024-06-28T09:17:58Z
2024-06-28T09:17:58Z
Self-Supervised Spatial-Temporal Normality Learning for Time Series Anomaly Detection
Time Series Anomaly Detection (TSAD) finds widespread applications across various domains such as financial markets, industrial production, and healthcare. Its primary objective is to learn the normal patterns of time series data, thereby identifying deviations in test samples. Most existing TSAD methods focus on modeling data from the temporal dimension, while ignoring the semantic information in the spatial dimension. To address this issue, we introduce a novel approach, called Spatial-Temporal Normality learning (STEN). STEN is composed of a sequence Order prediction-based Temporal Normality learning (OTN) module that captures the temporal correlations within sequences, and a Distance prediction-based Spatial Normality learning (DSN) module that learns the relative spatial relations between sequences in a feature space. By synthesizing these two modules, STEN learns expressive spatial-temporal representations for the normal patterns hidden in the time series data. Extensive experiments on five popular TSAD benchmarks show that STEN substantially outperforms state-of-the-art competing methods. Our code is available at https://github.com/mala-lab/STEN.
[ "['Yutong Chen' 'Hongzuo Xu' 'Guansong Pang' 'Hezhe Qiao' 'Yuan Zhou'\n 'Mingsheng Shang']" ]
null
null
2406.19792
null
null
http://arxiv.org/pdf/2406.19792v1
2024-06-28T09:55:29Z
2024-06-28T09:55:29Z
Improving Performance Prediction of Electrolyte Formulations with Transformer-based Molecular Representation Model
Development of efficient and high-performing electrolytes is crucial for advancing energy storage technologies, particularly in batteries. Predicting the performance of battery electrolytes relies on complex interactions between the individual constituents. Consequently, a strategy that adeptly captures these relationships and forms a robust representation of the formulation is essential for integrating with machine learning models to predict properties accurately. In this paper, we introduce a novel approach leveraging a transformer-based molecular representation model to effectively and efficiently capture the representation of electrolyte formulations. The performance of the proposed approach is evaluated on two battery property prediction tasks and the results show superior performance compared to the state-of-the-art methods.
[ "['Indra Priyadarsini' 'Vidushi Sharma' 'Seiji Takeda' 'Akihiro Kishimoto'\n 'Lisa Hamada' 'Hajime Shinohara']" ]
null
null
2406.19800
null
null
http://arxiv.org/pdf/2406.19800v1
2024-06-28T10:13:50Z
2024-06-28T10:13:50Z
Modeling the Real World with High-Density Visual Particle Dynamics
We present High-Density Visual Particle Dynamics (HD-VPD), a learned world model that can emulate the physical dynamics of real scenes by processing massive latent point clouds containing 100K+ particles. To enable efficiency at this scale, we introduce a novel family of Point Cloud Transformers (PCTs) called Interlacers, leveraging intertwined linear-attention Performer layers and graph-based neighbour attention layers. We demonstrate the capabilities of HD-VPD by modeling the dynamics of high degree-of-freedom bi-manual robots with two RGB-D cameras. Compared to the previous graph neural network approach, our Interlacer dynamics model is twice as fast with the same prediction quality, and can achieve higher quality using 4x as many particles. We illustrate how HD-VPD can evaluate motion plan quality on robotic box-pushing and can-grasping tasks. See videos and particle dynamics rendered by HD-VPD at https://sites.google.com/view/hd-vpd.
[ "['William F. Whitney' 'Jacob Varley' 'Deepali Jain'\n 'Krzysztof Choromanski' 'Sumeet Singh' 'Vikas Sindhwani']" ]
null
null
2406.19801
null
null
http://arxiv.org/abs/2406.19801v1
2024-06-28T10:16:10Z
2024-06-28T10:16:10Z
MulTi-Wise Sampling: Trading Uniform T-Wise Feature Interaction Coverage for Smaller Samples
Ensuring the functional safety of highly configurable systems often requires testing representative subsets of all possible configurations to reduce testing effort and save resources. The ratio of covered t-wise feature interactions (i.e., T-Wise Feature Interaction Coverage) is a common criterion for determining whether a subset of configurations is representative and capable of finding faults. Existing t-wise sampling algorithms uniformly cover t-wise feature interactions for all features, resulting in lengthy execution times and large sample sizes, particularly when large t-wise feature interactions are considered (i.e., high values of t). In this paper, we introduce mulTiWise, a novel approach to t-wise feature interaction sampling that questions the necessity of uniform coverage across all t-wise feature interactions. Our approach prioritizes between subsets of critical and non-critical features, considering higher t-values for subsets of critical features when generating a t-wise feature interaction sample. We evaluate our approach using subject systems from real-world applications, including busybox, soletta, fiasco, and uclibc. Our results show that sacrificing uniform t-wise feature interaction coverage between all features reduces the time needed to generate a sample and the resulting sample size. Hence, mulTiWise Sampling offers an alternative to existing approaches if knowledge about feature criticality is available.
[ "['Tobias Pett' 'Sebastian Krieter' 'Thomas Thüm' 'Ina Schaefer']" ]
null
null
2406.19807
null
null
http://arxiv.org/pdf/2406.19807v1
2024-06-28T10:30:46Z
2024-06-28T10:30:46Z
Deceptive Diffusion: Generating Synthetic Adversarial Examples
We introduce the concept of deceptive diffusion -- training a generative AI model to produce adversarial images. Whereas a traditional adversarial attack algorithm aims to perturb an existing image to induce a misclassification, the deceptive diffusion model can create an arbitrary number of new, misclassified images that are not directly associated with training or test images. Deceptive diffusion offers the possibility of strengthening defence algorithms by providing adversarial training data at scale, including types of misclassification that are otherwise difficult to find. In our experiments, we also investigate the effect of training on a partially attacked data set. This highlights a new type of vulnerability for generative diffusion models: if an attacker is able to stealthily poison a portion of the training data, then the resulting diffusion model will generate a similar proportion of misleading outputs.
[ "['Lucas Beerens' 'Catherine F. Higham' 'Desmond J. Higham']" ]
null
null
2406.19825
null
null
http://arxiv.org/pdf/2406.19825v1
2024-06-28T11:01:02Z
2024-06-28T11:01:02Z
Reinforcement Learning for Efficient Design and Control Co-optimisation of Energy Systems
The ongoing energy transition drives the development of decentralised renewable energy sources, which are heterogeneous and weather-dependent, complicating their integration into energy systems. This study tackles this issue by introducing a novel reinforcement learning (RL) framework tailored for the co-optimisation of design and control in energy systems. Traditionally, the integration of renewable sources in the energy sector has relied on complex mathematical modelling and sequential processes. By leveraging RL's model-free capabilities, the framework eliminates the need for explicit system modelling. By optimising both control and design policies jointly, the framework enhances the integration of renewable sources and improves system efficiency. This contribution paves the way for advanced RL applications in energy management, leading to more efficient and effective use of renewable energy sources.
[ "['Marine Cauz' 'Adrien Bolland' 'Nicolas Wyrsch' 'Christophe Ballif']" ]
null
null
2406.19827
null
null
http://arxiv.org/pdf/2406.19827v1
2024-06-28T11:06:46Z
2024-06-28T11:06:46Z
Towards Stable and Storage-efficient Dataset Distillation: Matching Convexified Trajectory
The rapid evolution of deep learning and large language models has led to an exponential growth in the demand for training data, prompting the development of Dataset Distillation methods to address the challenges of managing large datasets. Among these, Matching Training Trajectories (MTT) has been a prominent approach, which replicates the training trajectory of an expert network on real data with a synthetic dataset. However, our investigation found that this method suffers from three significant limitations: 1. Instability of expert trajectory generated by Stochastic Gradient Descent (SGD); 2. Low convergence speed of the distillation process; 3. High storage consumption of the expert trajectory. To address these issues, we offer a new perspective on understanding the essence of Dataset Distillation and MTT through a simple transformation of the objective function, and introduce a novel method called Matching Convexified Trajectory (MCT), which aims to provide better guidance for the student trajectory. MCT leverages insights from the linearized dynamics of Neural Tangent Kernel methods to create a convex combination of expert trajectories, guiding the student network to converge rapidly and stably. This trajectory is not only easier to store, but also enables a continuous sampling strategy during distillation, ensuring thorough learning and fitting of the entire expert trajectory. Comprehensive experiments across three public datasets validate the superiority of MCT over traditional MTT methods.
[ "['Wenliang Zhong' 'Haoyu Tang' 'Qinghai Zheng' 'Mingzhu Xu' 'Yupeng Hu'\n 'Liqiang Nie']" ]
null
null
2406.19832
null
null
http://arxiv.org/abs/2406.19832v1
2024-06-28T11:11:16Z
2024-06-28T11:11:16Z
MuGSI: Distilling GNNs with Multi-Granularity Structural Information for Graph Classification
Recent works have introduced GNN-to-MLP knowledge distillation (KD) frameworks to combine both GNN's superior performance and MLP's fast inference speed. However, existing KD frameworks are primarily designed for node classification within single graphs, leaving their applicability to graph classification largely unexplored. Two main challenges arise when extending KD for node classification to graph classification: (1) The inherent sparsity of learning signals due to soft labels being generated at the graph level; (2) The limited expressiveness of student MLPs, especially in datasets with limited input feature spaces. To overcome these challenges, we introduce MuGSI, a novel KD framework that employs Multi-granularity Structural Information for graph classification. Specifically, we propose multi-granularity distillation loss in MuGSI to tackle the first challenge. This loss function is composed of three distinct components: graph-level distillation, subgraph-level distillation, and node-level distillation. Each component targets a specific granularity of the graph structure, ensuring a comprehensive transfer of structural knowledge from the teacher model to the student model. To tackle the second challenge, MuGSI proposes to incorporate a node feature augmentation component, thereby enhancing the expressiveness of the student MLPs and making them more capable learners. We perform extensive experiments across a variety of datasets and different teacher/student model architectures. The experiment results demonstrate the effectiveness, efficiency, and robustness of MuGSI. Codes are publicly available at: https://github.com/tianyao-aka/MuGSI.
[ "['Tianjun Yao' 'Jiaqi Sun' 'Defu Cao' 'Kun Zhang' 'Guangyi Chen']" ]
null
null
2406.19861
null
null
http://arxiv.org/pdf/2406.19861v1
2024-06-28T12:05:47Z
2024-06-28T12:05:47Z
Operator World Models for Reinforcement Learning
Policy Mirror Descent (PMD) is a powerful and theoretically sound methodology for sequential decision-making. However, it is not directly applicable to Reinforcement Learning (RL) due to the inaccessibility of explicit action-value functions. We address this challenge by introducing a novel approach based on learning a world model of the environment using conditional mean embeddings. We then leverage the operatorial formulation of RL to express the action-value function in terms of this quantity in closed form via matrix operations. Combining these estimators with PMD leads to POWR, a new RL algorithm for which we prove convergence rates to the global optimum. Preliminary experiments in finite and infinite state settings support the effectiveness of our method.
[ "['Pietro Novelli' 'Marco Pratticò' 'Massimiliano Pontil' 'Carlo Ciliberto']" ]
null
null
2406.19871
null
null
http://arxiv.org/pdf/2406.19871v1
2024-06-28T12:26:28Z
2024-06-28T12:26:28Z
Koopman based trajectory model and computation offloading for high mobility paradigm in ISAC enabled IoT system
User experience on mobile devices is constrained by limited battery capacity and processing power, while 6G technology advancements are rapidly driving mobile technical evolution. Mobile edge computing (MEC) offers a solution, offloading computationally intensive tasks to edge cloud servers, reducing battery drain compared to local processing. The upcoming integration of sensing and communication in mobile networks may improve trajectory prediction and reduce processing delays. This study proposes a greedy resource allocation optimization strategy for multi-user networks to minimize aggregate energy usage. Numerical results show a potential improvement of 33% per 1000 iterations. Addressing prediction model division and velocity accuracy issues is crucial for better results. A plan for further improvement and achieving objectives is outlined for the upcoming work phase.
[ "['Minh-Tuan Tran']" ]
null
null
2406.19881
null
null
http://arxiv.org/pdf/2406.19881v1
2024-06-28T12:44:01Z
2024-06-28T12:44:01Z
Attention Meets UAVs: A Comprehensive Evaluation of DDoS Detection in Low-Cost UAVs
This paper explores the critical issue of enhancing cybersecurity measures for low-cost, Wi-Fi-based Unmanned Aerial Vehicles (UAVs) against Distributed Denial of Service (DDoS) attacks. In the current work, we have explored three variants of DDoS attacks, namely Transmission Control Protocol (TCP), Internet Control Message Protocol (ICMP), and TCP + ICMP flooding attacks, and developed a detection mechanism that runs on the companion computer of the UAV system. As a part of the detection mechanism, we have evaluated various machine learning and deep learning algorithms, such as XGBoost, Isolation Forest, Long Short-Term Memory (LSTM), Bidirectional-LSTM (Bi-LSTM), LSTM with attention, Bi-LSTM with attention, and Time Series Transformer (TST) in terms of various classification metrics. Our evaluation reveals that algorithms with attention mechanisms outperform their counterparts in general, and TST stands out as the most efficient model with a run time of 0.1 seconds. TST has demonstrated an F1 score of 0.999, 0.997, and 0.943 for TCP, ICMP, and TCP + ICMP flooding attacks respectively. In this work, we present the necessary steps required to build an on-board DDoS detection mechanism. Further, we also present the ablation study to identify the best TST hyperparameters for DDoS detection, and we have also underscored the advantage of adapting learnable positional embeddings in TST for DDoS detection with an improvement in F1 score from 0.94 to 0.99.
[ "['Ashish Sharma' 'SVSLN Surya Suhas Vaddhiparthy' 'Sai Usha Goparaju'\n 'Deepak Gangadharan' 'Harikumar Kandath']" ]
null
null
2406.19897
null
null
http://arxiv.org/pdf/2406.19897v1
2024-06-28T13:05:17Z
2024-06-28T13:05:17Z
FI-CBL: A Probabilistic Method for Concept-Based Learning with Expert Rules
A method for solving the concept-based learning (CBL) problem is proposed. The main idea behind the method is to divide each concept-annotated image into patches, to transform the patches into embeddings by using an autoencoder, and to cluster the embeddings assuming that each cluster will mainly contain embeddings of patches with certain concepts. To find the concepts of a new image, the method implements frequentist inference by computing prior and posterior probabilities of concepts based on rates of patches from images with certain values of the concepts. Therefore, the proposed method is called the Frequentist Inference CBL (FI-CBL). FI-CBL allows us to incorporate expert rules in the form of logic functions into the inference procedure. The idea behind the incorporation is to update prior and conditional probabilities of concepts to satisfy the rules. The method is transparent because it has an explicit sequence of probabilistic calculations and a clear frequency interpretation. Numerical experiments show that FI-CBL outperforms the concept bottleneck model in cases where the amount of training data is small. The code of the proposed algorithms is publicly available.
[ "['Lev V. Utkin' 'Andrei V. Konstantinov' 'Stanislav R. Kirpichenko']" ]
null
null
2406.19900
null
null
http://arxiv.org/pdf/2406.19900v1
2024-06-28T13:10:13Z
2024-06-28T13:10:13Z
'Just One More Sensor is Enough' -- Iterative Water Leak Localization with Physical Simulation and a Small Number of Pressure Sensors
In this article, we propose an approach to leak localisation in a complex water delivery grid with the use of data from physical simulation (e.g. EPANET software). This task is usually achieved by a network of multiple water pressure sensors and analysis of the so-called sensitivity matrix of pressure differences between the network's simulated data and actual data of the network affected by the leak. However, most algorithms using this approach require a significant number of pressure sensors -- a condition that is not easy to fulfil in the case of many less equipped networks. Therefore, we answer the question of whether leak localisation is possible by utilising very few sensors but having the ability to relocate one of them. Our algorithm is based on physical simulations (EPANET software) and an iterative scheme for mobile sensor relocation. The experiments show that the proposed system can compensate for the low number of sensors by adjusting their positioning, giving a very good approximation of the leak's position both in simulated cases and in a real-life example taken from the BattLeDIM competition L-Town data.
[ "['Michał Cholewa' 'Michał Romaszewski' 'Przemysław Głomb'\n 'Katarzyna Kołodziej' 'Michał Gorawski' 'Jakub Koral' 'Wojciech Koral'\n 'Andrzej Madej' 'Kryspin Musioł']" ]
null
null
2406.19931
null
null
http://arxiv.org/pdf/2406.19931v1
2024-06-28T14:01:22Z
2024-06-28T14:01:22Z
Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-Rank Decomposition
To address data heterogeneity, the key strategy of Personalized Federated Learning (PFL) is to decouple general knowledge (shared among clients) and client-specific knowledge, as the latter can have a negative impact on collaboration if not removed. Existing PFL methods primarily adopt a parameter partitioning approach, where the parameters of a model are designated as one of two types: parameters shared with other clients to extract general knowledge and parameters retained locally to learn client-specific knowledge. However, as these two types of parameters are put together like a jigsaw puzzle into a single model during the training process, each parameter may simultaneously absorb both general and client-specific knowledge, thus struggling to separate the two types of knowledge effectively. In this paper, we introduce FedDecomp, a simple but effective PFL paradigm that employs parameter additive decomposition to address this issue. Instead of assigning each parameter of a model as either a shared or personalized one, FedDecomp decomposes each parameter into the sum of two parameters: a shared one and a personalized one, thus achieving a more thorough decoupling of shared and personalized knowledge compared to the parameter partitioning method. In addition, as we find that retaining local knowledge of specific clients requires much lower model capacity compared with general knowledge across all clients, we let the matrix containing personalized parameters be low rank during the training process. Moreover, a new alternating training strategy is proposed to further improve the performance. Experimental results across multiple datasets and varying degrees of data heterogeneity demonstrate that FedDecomp outperforms state-of-the-art methods by up to 4.9%.
[ "['Xinghao Wu' 'Xuefeng Liu' 'Jianwei Niu' 'Haolin Wang' 'Shaojie Tang'\n 'Guogang Zhu' 'Hao Su']" ]
null
null
2406.19948
null
null
http://arxiv.org/pdf/2406.19948v1
2024-06-28T14:30:14Z
2024-06-28T14:30:14Z
Kolmogorov-Smirnov GAN
We propose a novel deep generative model, the Kolmogorov-Smirnov Generative Adversarial Network (KSGAN). Unlike existing approaches, KSGAN formulates the learning process as a minimization of the Kolmogorov-Smirnov (KS) distance, generalized to handle multivariate distributions. This distance is calculated using the quantile function, which acts as the critic in the adversarial training process. We formally demonstrate that minimizing the KS distance leads to the trained approximate distribution aligning with the target distribution. We propose an efficient implementation and evaluate its effectiveness through experiments. The results show that KSGAN performs on par with existing adversarial methods, exhibiting stability during training, resistance to mode dropping and collapse, and tolerance to variations in hyperparameter settings. Additionally, we review the literature on the Generalized KS test and discuss the connections between KSGAN and existing adversarial generative models.
[ "['Maciej Falkiewicz' 'Naoya Takeishi' 'Alexandros Kalousis']" ]
null
null
2406.19958
null
null
http://arxiv.org/pdf/2406.19958v1
2024-06-28T14:45:29Z
2024-06-28T14:45:29Z
The Computational Curse of Big Data for Bayesian Additive Regression Trees: A Hitting Time Analysis
Bayesian Additive Regression Trees (BART) is a popular Bayesian non-parametric regression model that is commonly used in causal inference and beyond. Its strong predictive performance is supported by theoretical guarantees that its posterior distribution concentrates around the true regression function at optimal rates under various data generative settings and for appropriate prior choices. In this paper, we show that the BART sampler often converges slowly, confirming empirical observations by other researchers. Assuming discrete covariates, we show that, while the BART posterior concentrates on a set comprising all optimal tree structures (smallest bias and complexity), the Markov chain's hitting time for this set increases with $n$ (training sample size), under several common data generative settings. As $n$ increases, the approximate BART posterior thus becomes increasingly different from the exact posterior (for the same number of MCMC samples), contrasting with earlier concentration results on the exact posterior. This contrast is highlighted by our simulations showing worsening frequentist undercoverage for approximate posterior intervals and a growing ratio between the MSE of the approximate posterior and that obtainable by artificially improving convergence via averaging multiple sampler chains. Finally, based on our theoretical insights, possibilities are discussed to improve the BART sampler convergence performance.
[ "['Yan Shuo Tan' 'Omer Ronen' 'Theo Saarinen' 'Bin Yu']" ]
null
null
2406.19963
null
null
http://arxiv.org/pdf/2406.19963v2
2024-07-01T14:05:22Z
2024-06-28T14:51:01Z
Text2Robot: Evolutionary Robot Design from Text Descriptions
Robot design has traditionally been costly and labor-intensive. Despite advancements in automated processes, it remains challenging to navigate a vast design space while producing physically manufacturable robots. We introduce Text2Robot, a framework that converts user text specifications and performance preferences into physical quadrupedal robots. Within minutes, Text2Robot can use text-to-3D models to provide strong initializations of diverse morphologies. Within a day, our geometric processing algorithms and body-control co-optimization produce a walking robot by explicitly considering real-world electronics and manufacturability. Text2Robot enables rapid prototyping and opens new opportunities for robot design with generative models.
[ "['Ryan P. Ringel' 'Zachary S. Charlick' 'Jiaxun Liu' 'Boxi Xia'\n 'Boyuan Chen']" ]
null
null
2406.19973
null
null
http://arxiv.org/pdf/2406.19973v1
2024-06-28T15:01:23Z
2024-06-28T15:01:23Z
STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical
Large Vision-Language Models (LVLMs) have shown significant potential in assisting medical diagnosis by leveraging extensive biomedical datasets. However, the advancement of medical image understanding and reasoning critically depends on building high-quality visual instruction data, which is costly and labor-intensive to obtain, particularly in the medical domain. To mitigate this data-scarcity issue, we introduce Self-Training Large Language and Vision Assistant for Medical (STLLaVA-Med). The proposed method is designed to train a policy model (an LVLM) capable of auto-generating medical visual instruction data to improve data efficiency, guided through Direct Preference Optimization (DPO). Specifically, a more powerful and larger LVLM (e.g., GPT-4o) is involved as a biomedical expert to oversee the DPO fine-tuning process on the auto-generated data, encouraging the policy model to align efficiently with human preferences. We validate the efficacy and data efficiency of STLLaVA-Med across three major medical Visual Question Answering (VQA) benchmarks, demonstrating competitive zero-shot performance with the utilization of only 9% of the medical data.
[ "['Guohao Sun' 'Can Qin' 'Huazhu Fu' 'Linwei Wang' 'Zhiqiang Tao']" ]
null
null
2406.19976
null
null
http://arxiv.org/pdf/2406.19976v1
2024-06-28T15:03:08Z
2024-06-28T15:03:08Z
ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting
Bilevel optimization has shown its utility across various machine learning settings, yet most algorithms in practice require second-order information, making it challenging to scale them up. Only recently, a paradigm of first-order algorithms emerged, capable of effectively addressing bilevel optimization problems. Nevertheless, the practical efficiency of this paradigm remains unverified, particularly in the context of large language models (LLMs). This paper introduces the first scalable instantiation of this paradigm called ScaleBiO, focusing on bilevel optimization for large-scale LLM data reweighting. By combining with a recently proposed memory-efficient training technique called LISA, our novel algorithm allows the paradigm to scale to 34-billion-parameter LLMs on eight A40 GPUs, marking the first successful application of bilevel optimization under practical scenarios for large-sized LLMs. Empirically, extensive experiments on data reweighting verify the effectiveness of ScaleBiO for different-scaled models, including GPT-2, LLaMA-3-8B, GPT-NeoX-20B, and Yi-34B, where bilevel optimization succeeds in filtering irrelevant data samples and selecting informative samples. Theoretically, ScaleBiO ensures the optimality of the learned data weights, along with a convergence guarantee matching the conventional first-order bilevel optimization paradigm on smooth and strongly convex objectives.
[ "['Rui Pan' 'Jipeng Zhang' 'Xingyuan Pan' 'Renjie Pi' 'Xiaoyu Wang'\n 'Tong Zhang']" ]
null
null
2406.19980
null
null
http://arxiv.org/pdf/2406.19980v1
2024-06-28T15:06:22Z
2024-06-28T15:06:22Z
Comparative Analysis of LSTM Neural Networks and Traditional Machine Learning Models for Predicting Diabetes Patient Readmission
Diabetes mellitus is a chronic metabolic disorder that has emerged as one of the major health problems worldwide due to its high prevalence and serious complications, which are costly to manage. Effective management requires good glycemic control and regular follow-up in the clinic; however, non-adherence to scheduled follow-ups is very common. This study uses the Diabetes 130-US Hospitals dataset for analysis and prediction of patient readmission with various traditional machine learning models, such as XGBoost, LightGBM, CatBoost, Decision Tree, and Random Forest, and also uses an in-house LSTM neural network for comparison. The quality of the data was assured by preprocessing it, and the performance evaluation for all these models was based on accuracy, precision, recall, and F1-score. LightGBM turned out to be the best traditional model, while XGBoost was the runner-up. The LSTM model suffered from overfitting despite high training accuracy. A major strength of LSTM is capturing temporal dependencies in the patient data. Further, SHAP values were used, which improved model interpretability, whereby key factors, among them the number of lab procedures and discharge disposition, were identified as critical in the prediction of readmissions. This study demonstrates that model selection, validation, and interpretability are key steps in predictive healthcare modeling. This will help health providers design interventions for improved follow-up adherence and better management of diabetes.
[ "['Abolfazl Zarghani']" ]
null
null
2406.19983
null
null
http://arxiv.org/pdf/2406.19983v1
2024-06-28T15:15:01Z
2024-06-28T15:15:01Z
Machine Learning Predictors for Min-Entropy Estimation
This study investigates the application of machine learning predictors for min-entropy estimation in Random Number Generators (RNGs), a key component in cryptographic applications where accurate entropy assessment is essential for cybersecurity. Our research indicates that these predictors, and indeed any predictor that leverages sequence correlations, primarily estimate average min-entropy, a metric not extensively studied in this context. We explore the relationship between average min-entropy and the traditional min-entropy, focusing on their dependence on the number of target bits being predicted. Utilizing data from Generalized Binary Autoregressive Models, a subset of Markov processes, we demonstrate that machine learning models (including a hybrid of convolutional and recurrent Long Short-Term Memory layers and the transformer-based GPT-2 model) outperform traditional NIST SP 800-90B predictors in certain scenarios. Our findings underscore the importance of considering the number of target bits in min-entropy assessment for RNGs and highlight the potential of machine learning approaches in enhancing entropy estimation techniques for improved cryptographic security.
[ "['Javier Blanco-Romero' 'Vicente Lorenzo' 'Florina Almenares Mendoza'\n 'Daniel Díaz-Sánchez']" ]
null
null
2406.19995
null
null
http://arxiv.org/pdf/2406.19995v1
2024-06-28T15:27:57Z
2024-06-28T15:27:57Z
Single Parent Family: A Spectrum of Family Members from a Single Pre-Trained Foundation Model
This paper introduces a novel method of Progressive Low Rank Decomposition (PLRD) tailored for the compression of large language models. Our approach leverages a pre-trained model, which is then incrementally decomposed to smaller sizes using progressively lower ranks. This method allows for significant reductions in computational overhead and energy consumption, as subsequent models are derived from the original without the need for retraining from scratch. We detail the implementation of PLRD, which strategically decreases the tensor ranks, thus optimizing the trade-off between model performance and resource usage. The efficacy of PLRD is demonstrated through extensive experiments showing that models trained with the PLRD method on only 1B tokens maintain comparable performance with traditionally trained models while using 0.1% of the tokens. The versatility of PLRD is highlighted by its ability to generate multiple model sizes from a single foundational model, adapting fluidly to varying computational and memory budgets. Our findings suggest that PLRD could set a new standard for the efficient scaling of LLMs, making advanced AI more feasible on diverse platforms.
[ "['Habib Hajimolahoseini' 'Mohammad Hassanpour' 'Foozhan Ataiefard'\n 'Boxing Chen' 'Yang Liu']" ]
null
null
2406.19997
null
null
http://arxiv.org/pdf/2406.19997v1
2024-06-28T15:32:59Z
2024-06-28T15:32:59Z
Wavelets Are All You Need for Autoregressive Image Generation
In this paper, we take a new approach to autoregressive image generation that is based on two main ingredients. The first is wavelet image coding, which allows us to tokenize the visual details of an image from coarse to fine by ordering the information starting with the most significant bits of the most significant wavelet coefficients. The second is a variant of a language transformer whose architecture is re-designed and optimized for token sequences in this 'wavelet language'. The transformer learns the significant statistical correlations within a token sequence, which are the manifestations of well-known correlations between the wavelet subbands at various resolutions. We present experimental results, including conditioning of the generation process.
[ "['Wael Mattar' 'Idan Levy' 'Nir Sharon' 'Shai Dekel']" ]
null
null
2406.20006
null
null
http://arxiv.org/pdf/2406.20006v1
2024-06-28T15:46:08Z
2024-06-28T15:46:08Z
On the Trade-off between Flatness and Optimization in Distributed Learning
This paper proposes a theoretical framework to evaluate and compare the performance of gradient-descent algorithms for distributed learning in relation to their behavior around local minima in nonconvex environments. Previous works have noticed that convergence toward flat local minima tends to enhance the generalization ability of learning algorithms. This work discovers two interesting results. First, it shows that decentralized learning strategies are able to escape faster from local minimizers and favor convergence toward flatter minima relative to the centralized solution in the large-batch training regime. Second, and importantly, the ultimate classification accuracy is not solely dependent on the flatness of the local minimizer but also on how well a learning algorithm can approach that minimum. In other words, the classification accuracy is a function of both flatness and optimization performance. The paper examines the interplay between the two measures of flatness and optimization error closely. One important conclusion is that decentralized strategies of the diffusion type deliver enhanced classification accuracy because they strike a more favorable balance between flatness and optimization performance.
[ "['Ying Cao' 'Zhaoxian Wu' 'Kun Yuan' 'Ali H. Sayed']" ]
null
null
2406.20031
null
null
http://arxiv.org/pdf/2406.20031v1
2024-06-28T16:20:22Z
2024-06-28T16:20:22Z
Pairwise Difference Learning for Classification
Pairwise difference learning (PDL) has recently been introduced as a new meta-learning technique for regression. Instead of learning a mapping from instances to outcomes in the standard way, the key idea is to learn a function that takes two instances as input and predicts the difference between the respective outcomes. Given a function of this kind, predictions for a query instance are derived from every training example and then averaged. This paper extends PDL toward the task of classification and proposes a meta-learning technique for inducing a PDL classifier by solving a suitably defined (binary) classification problem on a paired version of the original training data. We analyze the performance of the PDL classifier in a large-scale empirical study and find that it outperforms state-of-the-art methods in terms of prediction performance. Last but not least, we provide an easy-to-use and publicly available implementation of PDL in a Python package.
[ "['Mohamed Karim Belaid' 'Maximilian Rabus' 'Eyke Hüllermeier']" ]
null
null
2406.20037
null
null
http://arxiv.org/pdf/2406.20037v2
2024-07-15T16:42:24Z
2024-06-28T16:34:22Z
Explore as a Storm, Exploit as a Raindrop: On the Benefit of Fine-Tuning Kernel Schedulers with Coordinate Descent
Machine-learning models consist of kernels, which are algorithms applying operations on tensors -- data indexed by a linear combination of natural numbers. Examples of kernels include convolutions, transpositions, and vectorial products. There are many ways to implement a kernel. These implementations form the kernel's optimization space. Kernel scheduling is the problem of finding the best implementation, given an objective function -- typically execution speed. Kernel optimizers such as Ansor, Halide, and AutoTVM solve this problem via search heuristics, which combine two phases: exploration and exploitation. The first step evaluates many different kernel optimization spaces. The latter tries to improve the best implementations by investigating a kernel within the same space. For example, Ansor combines kernel generation through sketches for exploration and leverages an evolutionary algorithm to exploit the best sketches. In this work, we demonstrate the potential to reduce Ansor's search time while enhancing kernel quality by incorporating Droplet Search, an AutoTVM algorithm, into Ansor's exploration phase. The approach involves limiting the number of samples explored by Ansor, selecting the best, and exploiting it with a coordinate descent algorithm. By applying this approach to the first 300 kernels that Ansor generates, we usually obtain better kernels in less time than if we let Ansor analyze 10,000 kernels. This result has been replicated in 20 well-known deep-learning models (AlexNet, ResNet, VGG, DenseNet, etc.) running on four architectures: an AMD Ryzen 7 (x86), an NVIDIA A100 tensor core, an NVIDIA RTX 3080 GPU, and an ARM A64FX. A patch with this combined approach was approved in Ansor in February 2024. As evidence of the generality of this search methodology, a similar patch, achieving equally good results, was submitted to TVM's MetaSchedule in June 2024.
[ "['Michael Canesche' 'Gaurav Verma' 'Fernando Magno Quintao Pereira']" ]
null
null
2406.20046
null
null
http://arxiv.org/pdf/2406.20046v1
2024-06-28T16:58:32Z
2024-06-28T16:58:32Z
Evaluation of autonomous systems under data distribution shifts
We posit that data can only be safe to use up to a certain threshold of the data distribution shift, after which control must be relinquished by the autonomous system and operation halted or handed to a human operator. With the use of a computer vision toy example, we demonstrate that network predictive accuracy is impacted by data distribution shifts and propose distance metrics between training and testing data to define safe operation limits within said shifts. We conclude that beyond an empirically obtained threshold of the data distribution shift, it is unreasonable to expect network predictive accuracy not to degrade.
[ "['Daniel Sikar' 'Artur Garcez']" ]
null
null
2406.20053
null
null
http://arxiv.org/pdf/2406.20053v1
2024-06-28T17:05:46Z
2024-06-28T17:05:46Z
Covert Malicious Finetuning: Challenges in Safeguarding LLM Adaptation
Black-box finetuning is an emerging interface for adapting state-of-the-art language models to user needs. However, such access may also let malicious actors undermine model safety. To demonstrate the challenge of defending finetuning interfaces, we introduce covert malicious finetuning, a method to compromise model safety via finetuning while evading detection. Our method constructs a malicious dataset where every individual datapoint appears innocuous, but finetuning on the dataset teaches the model to respond to encoded harmful requests with encoded harmful responses. Applied to GPT-4, our method produces a finetuned model that acts on harmful instructions 99% of the time and avoids detection by defense mechanisms such as dataset inspection, safety evaluations, and input/output classifiers. Our findings question whether black-box finetuning access can be secured against sophisticated adversaries.
[ "['Danny Halawi' 'Alexander Wei' 'Eric Wallace' 'Tony T. Wang'\n 'Nika Haghtalab' 'Jacob Steinhardt']" ]
null
null
2406.20055
null
null
http://arxiv.org/pdf/2406.20055v1
2024-06-28T17:07:11Z
2024-06-28T17:07:11Z
SpotlessSplats: Ignoring Distractors in 3D Gaussian Splatting
3D Gaussian Splatting (3DGS) is a promising technique for 3D reconstruction, offering efficient training and rendering speeds, making it suitable for real-time applications. However, current methods require highly controlled environments (no moving people or wind-blown elements, and consistent lighting) to meet the inter-view consistency assumption of 3DGS. This makes reconstruction of real-world captures problematic. We present SpotlessSplats, an approach that leverages pre-trained and general-purpose features coupled with robust optimization to effectively ignore transient distractors. Our method achieves state-of-the-art reconstruction quality both visually and quantitatively, on casual captures.
[ "['Sara Sabour' 'Lily Goli' 'George Kopanas' 'Mark Matthews' 'Dmitry Lagun'\n 'Leonidas Guibas' 'Alec Jacobson' 'David J. Fleet' 'Andrea Tagliasacchi']" ]
null
null
2406.20062
null
null
http://arxiv.org/pdf/2406.20062v1
2024-06-28T17:20:13Z
2024-06-28T17:20:13Z
Cost-aware Bayesian optimization via the Pandora's Box Gittins index
Bayesian optimization is a technique for efficiently optimizing unknown functions in a black-box manner. To handle practical settings where gathering data requires use of finite resources, it is desirable to explicitly incorporate function evaluation costs into Bayesian optimization policies. To understand how to do so, we develop a previously-unexplored connection between cost-aware Bayesian optimization and the Pandora's Box problem, a decision problem from economics. The Pandora's Box problem admits a Bayesian-optimal solution based on an expression called the Gittins index, which can be reinterpreted as an acquisition function. We study the use of this acquisition function for cost-aware Bayesian optimization, and demonstrate empirically that it performs well, particularly in medium-high dimensions. We further show that this performance carries over to classical Bayesian optimization without explicit evaluation costs. Our work constitutes a first step towards integrating techniques from Gittins index theory into Bayesian optimization.
[ "['Qian Xie' 'Raul Astudillo' 'Peter Frazier' 'Ziv Scully'\n 'Alexander Terenin']" ]
null
null
2406.20081
null
null
http://arxiv.org/pdf/2406.20081v1
2024-06-28T17:47:32Z
2024-06-28T17:47:32Z
Segment Anything without Supervision
The Segment Anything Model (SAM) requires labor-intensive data labeling. We present Unsupervised SAM (UnSAM) for promptable and automatic whole-image segmentation that does not require human annotations. UnSAM utilizes a divide-and-conquer strategy to "discover" the hierarchical structure of visual scenes. We first leverage top-down clustering methods to partition an unlabeled image into instance/semantic level segments. For all pixels within a segment, a bottom-up clustering method is employed to iteratively merge them into larger groups, thereby forming a hierarchical structure. These unsupervised multi-granular masks are then utilized to supervise model training. Evaluated across seven popular datasets, UnSAM achieves competitive results with the supervised counterpart SAM, and surpasses the previous state-of-the-art in unsupervised segmentation by 11% in terms of AR. Moreover, we show that supervised SAM can also benefit from our self-supervised labels. By integrating our unsupervised pseudo masks into SA-1B's ground-truth masks and training UnSAM with only 1% of SA-1B, a lightly semi-supervised UnSAM can often segment entities overlooked by supervised SAM, exceeding SAM's AR by over 6.7% and AP by 3.9% on SA-1B.
[ "['XuDong Wang' 'Jingfeng Yang' 'Trevor Darrell']" ]
null
null
2406.20086
null
null
http://arxiv.org/pdf/2406.20086v1
2024-06-28T17:54:47Z
2024-06-28T17:54:47Z
Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs
LLMs process text as sequences of tokens that roughly correspond to words, where less common words are represented by multiple tokens. However, individual tokens are often semantically unrelated to the meanings of the words/concepts they comprise. For example, Llama-2-7b's tokenizer splits the word "northeastern" into the tokens ['_n', 'ort', 'he', 'astern'], none of which correspond to semantically meaningful units like "north" or "east." Similarly, the overall meanings of named entities like "Neil Young" and multi-word expressions like "break a leg" cannot be directly inferred from their constituent tokens. Mechanistically, how do LLMs convert such arbitrary groups of tokens into useful higher-level representations? In this work, we find that last token representations of named entities and multi-token words exhibit a pronounced "erasure" effect, where information about previous and current tokens is rapidly forgotten in early layers. Using this observation, we propose a method to "read out" the implicit vocabulary of an autoregressive LLM by examining differences in token representations across layers, and present results of this method for Llama-2-7b and Llama-3-8B. To our knowledge, this is the first attempt to probe the implicit vocabulary of an LLM.
[ "['Sheridan Feucht' 'David Atkinson' 'Byron Wallace' 'David Bau']" ]
null
null
2406.20087
null
null
http://arxiv.org/pdf/2406.20087v1
2024-06-28T17:55:24Z
2024-06-28T17:55:24Z
ProgressGym: Alignment with a Millennium of Moral Progress
Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale. We introduce progress alignment as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots. To empower research in progress alignment, we introduce ProgressGym, an experimental framework allowing the learning of moral progress mechanics from history, in order to facilitate future progress in real-world moral decisions. Leveraging 9 centuries of historical text and 18 historical LLMs, ProgressGym enables codification of real-world progress alignment challenges into concrete benchmarks. Specifically, we introduce three core challenges: tracking evolving values (PG-Follow), preemptively anticipating moral progress (PG-Predict), and regulating the feedback loop between human and AI value shifts (PG-Coevolve). Alignment methods without a temporal dimension are inapplicable to these tasks. In response, we present lifelong and extrapolative algorithms as baseline methods of progress alignment, and build an open leaderboard soliciting novel algorithms and challenges. The framework and the leaderboard are available at https://github.com/PKU-Alignment/ProgressGym and https://huggingface.co/spaces/PKU-Alignment/ProgressGym-LeaderBoard respectively.
[ "['Tianyi Qiu' 'Yang Zhang' 'Xuchuan Huang' 'Jasmine Xinze Li' 'Jiaming Ji'\n 'Yaodong Yang']" ]
null
null
2406.20094
null
null
http://arxiv.org/pdf/2406.20094v1
2024-06-28T17:59:01Z
2024-06-28T17:59:01Z
Scaling Synthetic Data Creation with 1,000,000,000 Personas
We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce Persona Hub -- a collection of 1 billion diverse personas automatically curated from web data. These 1 billion personas (~13% of the world's total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing Persona Hub's use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (i.e., user prompts), knowledge-rich texts, game NPCs and tools (functions) at scale, we demonstrate persona-driven data synthesis is versatile, scalable, flexible, and easy to use, potentially driving a paradigm shift in synthetic data creation and applications in practice, which may have a profound impact on LLM research and development.
[ "['Xin Chan' 'Xiaoyang Wang' 'Dian Yu' 'Haitao Mi' 'Dong Yu']" ]
null
null
2406.20095
null
null
http://arxiv.org/pdf/2406.20095v1
2024-06-28T17:59:12Z
2024-06-28T17:59:12Z
LLaRA: Supercharging Robot Learning Data for Vision-Language Policy
Large Language Models (LLMs) equipped with extensive world knowledge and strong reasoning skills can tackle diverse tasks across domains, often by posing them as conversation-style instruction-response pairs. In this paper, we propose LLaRA: Large Language and Robotics Assistant, a framework which formulates robot action policy as conversations, and provides improved responses when trained with auxiliary data that complements policy learning. LLMs with visual inputs, i.e., Vision Language Models (VLMs), have the capacity to process state information as visual-textual prompts and generate optimal policy decisions in text. To train such action policy VLMs, we first introduce an automated pipeline to generate diverse high-quality robotics instruction data from existing behavior cloning data. A VLM finetuned with the resulting collection of datasets based on a conversation-style formulation tailored for robotics tasks, can generate meaningful robot action policy decisions. Our experiments across multiple simulated and real-world environments demonstrate the state-of-the-art performance of the proposed LLaRA framework. The code, datasets, and pretrained models are available at https://github.com/LostXine/LLaRA.
[ "['Xiang Li' 'Cristina Mata' 'Jongwoo Park' 'Kumara Kahatapitiya'\n 'Yoo Sung Jang' 'Jinghuan Shang' 'Kanchana Ranasinghe' 'Ryan Burgert'\n 'Mu Cai' 'Yong Jae Lee' 'Michael S. Ryoo']" ]
null
null
2407.00002
null
null
http://arxiv.org/pdf/2407.00002v2
2024-07-09T08:28:57Z
2024-04-09T14:08:06Z
Kermut: Composite kernel regression for protein variant effects
Reliable prediction of protein variant effects is crucial for both protein optimization and for advancing biological understanding. For practical use in protein engineering, it is important that we can also provide reliable uncertainty estimates for our predictions, and while prediction accuracy has seen much progress in recent years, uncertainty metrics are rarely reported. We here provide a Gaussian process regression model, Kermut, with a novel composite kernel for modelling mutation similarity, which obtains state-of-the-art performance for protein variant effect prediction while also offering estimates of uncertainty through its posterior. An analysis of the quality of the uncertainty estimates demonstrates that our model provides meaningful levels of overall calibration, but that instance-specific uncertainty calibration remains more challenging. We hope that this will encourage future work in this promising direction.
[ "['Peter Mørch Groth' 'Mads Herbert Kerrn' 'Lars Olsen' 'Jesper Salomon'\n 'Wouter Boomsma']" ]
null
null
2407.00004
null
null
http://arxiv.org/pdf/2407.00004v1
2024-04-16T12:57:06Z
2024-04-16T12:57:06Z
Multi-objective generative AI for designing novel brain-targeting small molecules
The strict selectivity of the blood-brain barrier (BBB) represents one of the most formidable challenges to successful central nervous system (CNS) drug delivery. Computational methods to generate BBB permeable drugs in silico may be valuable tools in the CNS drug design pipeline. However, in real-world applications, BBB penetration alone is insufficient; rather, after transiting the BBB, molecules must bind to a specific target or receptor in the brain and must also be safe and non-toxic. To discover small molecules that concurrently satisfy these constraints, we use multi-objective generative AI to synthesize drug-like BBB-permeable small molecules. Specifically, we computationally synthesize molecules with predicted binding affinity against dopamine receptor D2, the primary target for many clinically effective antipsychotic drugs. After training several graph neural network-based property predictors, we adapt SyntheMol (Swanson et al., 2024), a recently developed Monte Carlo Tree Search-based algorithm for antibiotic design, to perform a multi-objective guided traversal over an easily synthesizable molecular space. We design a library of 26,581 novel and diverse small molecules containing hits with high predicted BBB permeability and favorable predicted safety and toxicity profiles, and that could readily be synthesized for experimental validation in the wet lab. We also validate top scoring molecules with molecular docking simulation against the D2 receptor and demonstrate predicted binding affinity on par with risperidone, a clinically prescribed D2-targeting antipsychotic. In the future, the SyntheMol-based computational approach described here may enable the discovery of novel neurotherapeutics for currently intractable disorders of the CNS.
[ "['Ayush Noori' 'Iñaki Arango' 'William E. Byrd' 'Nada Amin']" ]
null
null
2407.00007
null
null
http://arxiv.org/pdf/2407.00007v1
2024-04-23T13:06:09Z
2024-04-23T13:06:09Z
Graph Neural Networks and Reinforcement Learning for Proactive Application Image Placement
The shift from Cloud Computing to a Cloud-Edge continuum presents new opportunities and challenges for data-intensive and interactive applications. Edge computing has garnered a lot of attention from both industry and academia in recent years, emerging as a key enabler for meeting the increasingly strict demands of Next Generation applications. In Edge computing, the computations are placed closer to the end-users, to facilitate low-latency and high-bandwidth applications and services. However, the distributed, dynamic, and heterogeneous nature of Edge computing presents a significant challenge for service placement. A critical aspect of Edge computing involves managing the placement of applications within the network system to minimize each application's runtime, considering the resources available on system devices and the capabilities of the system's network. The placement of application images must be proactively planned to minimize image transfer time and meet the strict demands of the applications. In this regard, this paper proposes an approach for proactive image placement that combines Graph Neural Networks and actor-critic Reinforcement Learning, which is evaluated empirically and compared against various solutions. The findings indicate that although the proposed approach may result in longer execution times in certain scenarios, it consistently achieves superior outcomes in terms of application placement.
[ "['Antonios Makris' 'Theodoros Theodoropoulos' 'Evangelos Psomakelis'\n 'Emanuele Carlini' 'Matteo Mordacchini' 'Patrizio Dazzi'\n 'Konstantinos Tserpes']" ]
null
null
2407.00011
null
null
http://arxiv.org/pdf/2407.00011v1
2024-04-28T14:05:13Z
2024-04-28T14:05:13Z
Enhancing Computational Efficiency in Multiscale Systems Using Deep Learning of Coordinates and Flow Maps
Complex systems often show macroscopic coherent behavior due to the interactions of microscopic agents like molecules, cells, or individuals in a population with their environment. However, simulating such systems poses several computational challenges, as the underlying dynamics vary and span wide spatiotemporal scales of interest. To capture the fast-evolving features, finer time steps are required, while the simulation time must be long enough to capture the slow-scale behavior, making the analyses computationally unmanageable. This paper showcases how deep learning techniques can be used to develop a precise time-stepping approach for multiscale systems using the joint discovery of coordinates and flow maps. While the former allows us to represent the multiscale dynamics on a representative basis, the latter enables the iterative time-stepping estimation of the reduced variables. The resulting framework achieves state-of-the-art predictive accuracy while incurring lower computational costs. We demonstrate this ability of the proposed scheme on the large-scale FitzHugh-Nagumo neuron model and the 1D Kuramoto-Sivashinsky equation in the chaotic regime.
[ "['Asif Hamid' 'Danish Rafiq' 'Shahkar Ahmad Nahvi' 'Mohammad Abid Bazaz']" ]
null
null
2407.00020
null
null
http://arxiv.org/pdf/2407.00020v1
2024-05-06T08:59:16Z
2024-05-06T08:59:16Z
Visual Language Model based Cross-modal Semantic Communication Systems
Semantic Communication (SC) has emerged as a novel communication paradigm in recent years, successfully transcending the Shannon physical capacity limits through innovative semantic transmission concepts. Nevertheless, extant Image Semantic Communication (ISC) systems face several challenges in dynamic environments, including low semantic density, catastrophic forgetting, and uncertain Signal-to-Noise Ratio (SNR). To address these challenges, we propose a novel Vision-Language Model-based Cross-modal Semantic Communication (VLM-CSC) system. The VLM-CSC comprises three novel components: (1) Cross-modal Knowledge Base (CKB) is used to extract high-density textual semantics from the semantically sparse image at the transmitter and reconstruct the original image based on textual semantics at the receiver. The transmission of high-density semantics contributes to alleviating bandwidth pressure. (2) Memory-assisted Encoder and Decoder (MED) employ a hybrid long/short-term memory mechanism, enabling the semantic encoder and decoder to overcome catastrophic forgetting in dynamic environments when there is a drift in the distribution of semantic features. (3) Noise Attention Module (NAM) employs attention mechanisms to adaptively adjust the semantic coding and the channel coding based on SNR, ensuring the robustness of the CSC system. The experimental simulations validate the effectiveness, adaptability, and robustness of the CSC system.
[ "['Feibo Jiang' 'Chuanguo Tang' 'Li Dong' 'Kezhi Wang' 'Kun Yang'\n 'Cunhua Pan']" ]
null
null
2407.00023
null
null
http://arxiv.org/pdf/2407.00023v1
2024-05-08T06:30:58Z
2024-05-08T06:30:58Z
Preble: Efficient Distributed Prompt Scheduling for LLM Serving
Prompts to large language models (LLMs) have evolved beyond simple user questions. For LLMs to solve complex problems, today's practices include domain-specific instructions, illustrations of tool usage, and long context, such as textbook chapters, in prompts. As such, many parts of prompts are repetitive across requests, and their attention computation results can be reused. However, today's LLM serving systems treat every request in isolation, missing the opportunity for computation reuse. This paper proposes Preble, the first distributed LLM serving platform that targets and optimizes prompt sharing. We performed a study on five popular LLM workloads. Based on our study results, we designed a distributed scheduling system that co-optimizes computation reuse and load balancing. Our evaluation of Preble on 2 to 8 GPUs with real workloads and request arrival patterns on two open-source LLM models shows that Preble outperforms the state of the art by 1.5x to 14.5x on average latency and by 2x to 10x on p99 latency.
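The core scheduling intuition can be sketched as follows; this is not Preble's implementation, and the data structures, scoring weights, and names are assumptions.

```python
# Sketch: route each request to the GPU that maximizes shared-prefix reuse
# while penalizing queue pressure (computation reuse vs. load balancing).
from dataclasses import dataclass, field

@dataclass
class GPU:
    gpu_id: int
    cached_prefixes: set = field(default_factory=set)
    queued_tokens: int = 0

def shared_prefix_len(prompt_tokens, prefixes):
    best = 0
    for p in prefixes:
        n = 0
        for a, b in zip(prompt_tokens, p):
            if a != b:
                break
            n += 1
        best = max(best, n)
    return best

def schedule(prompt_tokens, gpus, alpha=1.0, beta=0.01):
    # Higher score = more reusable prefix computation, less queued work.
    def score(g):
        reuse = shared_prefix_len(prompt_tokens, g.cached_prefixes)
        return alpha * reuse - beta * g.queued_tokens
    target = max(gpus, key=score)
    target.queued_tokens += len(prompt_tokens)
    target.cached_prefixes.add(tuple(prompt_tokens))
    return target.gpu_id

gpus = [GPU(0), GPU(1)]
print(schedule([1, 2, 3, 4], gpus))  # first request lands on GPU 0
```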
[ "['Vikranth Srivatsa' 'Zijian He' 'Reyna Abhyankar' 'Dongming Li'\n 'Yiying Zhang']" ]
null
null
2407.00028
null
null
http://arxiv.org/pdf/2407.00028v1
2024-05-14T23:43:34Z
2024-05-14T23:43:34Z
Harnessing XGBoost for Robust Biomarker Selection of Obsessive-Compulsive Disorder (OCD) from Adolescent Brain Cognitive Development (ABCD) data
This study evaluates the performance of various supervised machine learning models in analyzing highly correlated neural signaling data from the Adolescent Brain Cognitive Development (ABCD) Study, with a focus on predicting obsessive-compulsive disorder scales. We simulated a dataset to mimic the correlation structures commonly found in imaging data and evaluated logistic regression, elastic net, random forests, and XGBoost on their ability to handle multicollinearity and accurately identify predictive features. Our study aims to guide the selection of appropriate machine learning methods for processing neuroimaging data, highlighting models that best capture underlying signals under high feature correlation and prioritize clinically relevant features associated with Obsessive-Compulsive Disorder (OCD).
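A sketch of the kind of pipeline the study compares, assuming simulated correlated features and ranking candidate biomarkers by XGBoost's built-in importance scores; the data-generating setup is illustrative.

```python
# Sketch: fit XGBoost on correlated features, rank features by importance.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n, p = 500, 50
latent = rng.normal(size=(n, 5))
# Features share latent factors, mimicking multicollinear imaging data.
X = latent @ rng.normal(size=(5, p)) + 0.5 * rng.normal(size=(n, p))
y = (latent[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X, y)
ranking = np.argsort(model.feature_importances_)[::-1]
print("Top candidate biomarker features:", ranking[:10])
```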
[ "['Xinyu Shen' 'Qimin Zhang' 'Huili Zheng' 'Weiwei Qi']" ]
null
null
2407.00040
null
null
http://arxiv.org/pdf/2407.00040v1
2024-05-29T06:12:05Z
2024-05-29T06:12:05Z
A Machine Learning Approach for Identifying Anatomical Biomarkers of Early Mild Cognitive Impairment
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that primarily affects the aging population by impairing cognitive and motor functions. Early detection of AD through accessible methodologies like magnetic resonance imaging (MRI) is vital for developing effective interventions to halt or slow the disease's progression. This study aims to perform a comprehensive analysis of machine learning techniques for selecting MRI-based biomarkers and classifying individuals into healthy controls (HC) and unstable controls (uHC) who later show mild cognitive impairment within five years. The research utilizes MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies 3 (OASIS-3), focusing on both HC and uHC participants. The study addresses the challenges of imbalanced data by testing classification methods on balanced and unbalanced datasets, and harmonizes data using polynomial regression to mitigate nuisance variables like age, gender, and intracranial volume. Results indicate that Gaussian Naive Bayes and RusBoost classifiers show optimal performance, achieving accuracies of up to 76.46% and 72.48% respectively on the ADNI dataset. For the OASIS-3 dataset, Kernel Naive Bayes and RusBoost yield accuracies ranging from 64.66% to 75.71%, improving further in age-matched datasets. Brain regions like the entorhinal cortex, hippocampus, lateral ventricle, and lateral orbitofrontal cortex are identified as significantly impacted during early cognitive decline. Despite limitations such as small sample sizes, the study's harmonization approach enhances the robustness of biomarker selection, suggesting the potential of this semi-automatic machine learning pipeline for early AD detection using MRI.
[ "['Alwani Liyana Ahmad' 'Jose Sanchez-Bornot' 'Roberto C. Sotero'\n 'Damien Coyle' 'Zamzuri Idris' 'Ibrahima Faye']" ]
null
null
2407.00047
null
null
http://arxiv.org/pdf/2407.00047v1
2024-06-05T21:17:34Z
2024-06-05T21:17:34Z
One Queue Is All You Need: Resolving Head-of-Line Blocking in Large Language Model Serving
Large language models (LLMs) have become an increasingly important workload for cloud providers catering to both enterprise and consumer applications. LLM inference requests from these applications have end-to-end latency SLOs that must be adhered to in production settings. However, existing LLM serving systems focus on optimization objectives such as request serving throughput or request execution latency rather than the end-to-end latency SLOs. Achieving end-to-end SLOs for latency-sensitive requests is challenging due to head-of-line (HOL) blocking in the request queue, which results from bursty arrival rates and insufficient resources. To address the above challenge, we propose QLM, a multi-model queue management framework for LLM serving. QLM uses stochastic programming to orchestrate the actions of multiple LLM Serving Operations (LSOs) to reduce HOL blocking and maximize SLO attainment. Specifically, QLM uses the following LSOs: model swapping, request eviction, GPU-CPU state swapping, load balancing, and warm model start. Evaluation on heterogeneous GPU devices and models with a real-world LLM serving dataset shows that QLM improves SLO attainment by 40-90% and throughput by 20-400% while maintaining or improving device utilization compared to other state-of-the-art LLM serving systems.
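One simple way to picture HOL-blocking relief, which is not QLM's stochastic-programming formulation, is a queue ordered by SLO slack rather than arrival time; the slack definition here is an illustrative assumption.

```python
# Sketch: pop the most SLO-critical request first so a long, slack-rich
# request arriving early no longer blocks urgent ones behind it.
import heapq
import time

class SLOQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tiebreaker keeps ordering stable

    def push(self, request_id, slo_deadline, est_runtime):
        slack = slo_deadline - time.time() - est_runtime
        heapq.heappush(self._heap, (slack, self._counter, request_id))
        self._counter += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = SLOQueue()
q.push("long-batch-job", slo_deadline=time.time() + 60, est_runtime=5)
q.push("chat-turn", slo_deadline=time.time() + 2, est_runtime=1)
print(q.pop())  # "chat-turn" runs first despite arriving later
```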
[ "['Archit Patke' 'Dhemath Reddy' 'Saurabh Jha' 'Haoran Qiu'\n 'Christian Pinto' 'Shengkun Cui' 'Chandra Narayanaswami'\n 'Zbigniew Kalbarczyk' 'Ravishankar Iyer']" ]
null
null
2407.00050
null
null
http://arxiv.org/pdf/2407.00050v1
2024-06-11T09:24:51Z
2024-06-11T09:24:51Z
FoldToken2: Learning compact, invariant and generative protein structure language
The equivariant nature of 3D coordinates has posed long-term challenges in protein structure representation learning, alignment, and generation. Can we create a compact and invariant language that faithfully represents protein structures? Towards this goal, we propose FoldToken2 to transfer equivariant structures into discrete tokens while maintaining the recoverability of the original structures. From FoldToken1 to FoldToken2, we improve three key components: (1) an invariant structure encoder, (2) a vector-quantized compressor, and (3) an equivariant structure decoder. We evaluate FoldToken2 on the protein structure reconstruction task and show that it outperforms the previous FoldToken1 by 20% in TMScore and 81% in RMSD. FoldToken2 is likely the first method that works well on quantizing both single-chain and multi-chain protein structures. We believe that FoldToken2 will inspire further improvement in protein structure representation learning, structure alignment, and structure generation tasks.
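A sketch of the vector-quantized compression step at the heart of such a structure language; the codebook size and embedding width are illustrative assumptions, not the paper's settings.

```python
# Sketch: snap invariant per-residue embeddings to the nearest codebook
# entry, turning a structure into a discrete token sequence.
import torch

codebook = torch.randn(1024, 64)  # 1024 structure tokens, dim 64

def quantize(z):
    # z: (L, 64) invariant residue embeddings from the structure encoder
    d = torch.cdist(z, codebook)   # distances to all code vectors
    tokens = d.argmin(dim=1)       # discrete token ids, one per residue
    z_q = codebook[tokens]         # quantized embeddings for the decoder
    # Straight-through estimator keeps the encoder differentiable in training.
    z_q = z + (z_q - z).detach()
    return tokens, z_q

tokens, z_q = quantize(torch.randn(120, 64))  # a 120-residue chain
```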
[ "['Zhangyang Gao' 'Cheng Tan' 'Stan Z. Li']" ]
null
null
2407.00063
null
null
http://arxiv.org/pdf/2407.00063v2
2024-07-02T11:17:45Z
2024-06-17T07:07:42Z
An Interpretable Alternative to Neural Representation Learning for Rating Prediction -- Transparent Latent Class Modeling of User Reviews
Nowadays, neural network (NN) and deep learning (DL) techniques are widely adopted in many applications, including recommender systems. Given the sparse and stochastic nature of collaborative filtering (CF) data, recent works have critically analyzed the actual improvement of neural-based approaches over simpler and often transparent algorithms for recommendation. Previous results showed that NN and DL models can be outperformed by traditional algorithms in many tasks. Moreover, given the largely black-box nature of neural-based methods, interpretable results are not naturally obtained. Contributing to this debate, we first present a transparent probabilistic model that topologically organizes user and product latent classes based on the review information. In contrast to popular neural techniques for representation learning, we readily obtain a statistical, visualization-friendly tool that can be easily inspected to understand user and product characteristics from a textual-based perspective. Then, given the limitations of common embedding techniques, we investigate the possibility of using the estimated interpretable quantities as model input for a rating prediction task. To contribute to the recent debates, we evaluate our results in terms of both interpretability and predictive performance in comparison with popular text-based neural approaches. The results demonstrate that the proposed latent class representations can yield competitive predictive performance compared to popular but difficult-to-interpret approaches.
[ "['Giuseppe Serra' 'Peter Tino' 'Zhao Xu' 'Xin Yao']" ]
null
null
2407.00066
null
null
http://arxiv.org/pdf/2407.00066v1
2024-06-17T15:21:35Z
2024-06-17T15:21:35Z
Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead
Fine-tuning large language models (LLMs) with low-rank adapters (LoRAs) has become common practice, often yielding numerous copies of the same LLM differing only in their LoRA updates. This paradigm presents challenges for systems that serve real-time responses to queries that each involve a different LoRA. Prior works optimize the design of such systems but still require continuous loading and offloading of LoRAs, as it is infeasible to store thousands of LoRAs in GPU memory. To mitigate this issue, we investigate the efficacy of compression when serving LoRA adapters. We consider compressing adapters individually via SVD and propose a method for joint compression of LoRAs into a shared basis paired with LoRA-specific scaling matrices. Our experiments with up to 500 LoRAs demonstrate that compressed LoRAs preserve performance while offering major throughput gains in realistic serving scenarios with over a thousand LoRAs, maintaining 75% of the throughput of serving a single LoRA.
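A simplified sketch of the joint-compression idea, stacking LoRA updates and extracting a shared basis via truncated SVD; the shapes and the reconstruction scheme are assumptions, not the paper's exact method.

```python
# Sketch: compress many LoRA updates into one shared basis plus
# per-adapter coefficient matrices.
import numpy as np

def joint_compress(loras, rank):
    # loras: list of (B_i, A_i) with B_i (d, r), A_i (r, k); dW_i = B_i @ A_i
    deltas = [B @ A for B, A in loras]
    stacked = np.concatenate(deltas, axis=1)  # (d, k * num_adapters)
    U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
    U_shared = U[:, :rank]                    # shared basis (d, rank)
    coeffs = [U_shared.T @ dW for dW in deltas]  # per-adapter (rank, k)
    return U_shared, coeffs

def reconstruct(U_shared, coeff):
    return U_shared @ coeff                   # approximate dW_i at serve time

d, r, k = 64, 8, 32
loras = [(np.random.randn(d, r), np.random.randn(r, k)) for _ in range(4)]
U, coeffs = joint_compress(loras, rank=16)
approx = reconstruct(U, coeffs[0])  # serve adapter 0 from the shared basis
```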
[ "['Rickard Brüel-Gabrielsson' 'Jiacheng Zhu' 'Onkar Bhardwaj'\n 'Leshem Choshen' 'Kristjan Greenewald' 'Mikhail Yurochkin'\n 'Justin Solomon']" ]
null
null
2407.00067
null
null
http://arxiv.org/abs/2407.00067v1
2024-06-17T16:02:45Z
2024-06-17T16:02:45Z
Perceptron Collaborative Filtering
While multivariate logistic regression classifiers are a great way of implementing collaborative filtering, a method of making automatic predictions about the interests of a user by collecting preferences or taste information from many other users, similar results can also be achieved using neural networks. A recommender system is a subclass of information filtering system that provides suggestions for the items most pertinent to a particular user. A perceptron, or neural network, is a machine learning model designed for fitting complex datasets using backpropagation and gradient descent. When coupled with advanced optimization techniques, the model may prove to be a great substitute for classical logistic classifiers. The optimizations include feature scaling, mean normalization, regularization, hyperparameter tuning, and using stochastic/mini-batch gradient descent instead of regular gradient descent. In this use case, we use the perceptron in the recommender system to fit the parameters, i.e., the data from a multitude of users, and use it to predict the preferences/interests of a particular user.
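A minimal NumPy sketch of the described setup, assuming a small two-layer perceptron trained with batch gradient descent, L2 regularization, and mean normalization on toy ratings; all sizes are illustrative.

```python
# Sketch: perceptron-based collaborative filtering on a toy rating matrix.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, hidden = 100, 40, 16
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # toy ratings

X = (R - R.mean(axis=0)) / (R.std(axis=0) + 1e-8)  # mean normalization/scaling
W1 = rng.normal(scale=0.1, size=(n_items, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, n_items))

lr, lam = 0.01, 1e-4  # learning rate, L2 regularization strength
for _ in range(200):  # batch gradient descent with backpropagation
    h = np.tanh(X @ W1)
    pred = h @ W2                          # predicted ratings for all users
    err = pred - R
    gW2 = h.T @ err / n_users + lam * W2
    gh = err @ W2.T * (1 - h ** 2)         # backprop through tanh
    gW1 = X.T @ gh / n_users + lam * W1
    W1 -= lr * gW1
    W2 -= lr * gW2

user0_preferences = (np.tanh(X[0] @ W1) @ W2)  # predicted interest scores
```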
[ "['Arya Chakraborty']" ]
null
null
2407.00071
null
null
http://arxiv.org/pdf/2407.00071v1
2024-06-19T16:47:44Z
2024-06-19T16:47:44Z
Combinatorial Reasoning: Selecting Reasons in Generative AI Pipelines via Combinatorial Optimization
Recent Large Language Models (LLMs) have demonstrated impressive capabilities at tasks that require human intelligence and are a significant step towards human-like artificial intelligence (AI). Yet the performance of LLMs on reasoning tasks has been subpar, and their reasoning capability is a matter of significant debate. While it has been shown that the choice of prompting technique can alter an LLM's performance on a multitude of tasks, including reasoning, the best-performing techniques require human-crafted prompts and knowledge of the task at hand. We introduce a framework for what we call Combinatorial Reasoning (CR), a fully automated prompting method in which reasons are sampled from an LLM pipeline and mapped into a Quadratic Unconstrained Binary Optimization (QUBO) problem. The framework investigates whether QUBO solutions can be profitably used to select a useful subset of the reasons to construct a Chain-of-Thought style prompt. We explore the acceleration of CR with specialized solvers. We also investigate the performance of simpler zero-shot strategies such as linear majority rule or random selection of reasons. Our preliminary study indicates that coupling a combinatorial solver to generative AI pipelines is an interesting avenue for AI reasoning and elucidates design principles for future CR methods.
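The selection step can be sketched as a QUBO whose diagonal rewards keeping a reason and whose off-diagonal terms penalize redundancy; the utility and redundancy scores below are illustrative, and brute-force enumeration stands in for a specialized solver.

```python
# Sketch: select a subset of sampled reasons by minimizing a QUBO energy.
import itertools
import numpy as np

def select_reasons(utility, redundancy, penalty=1.0):
    # utility: (n,) per-reason score; redundancy: (n, n) pairwise overlap
    n = len(utility)
    Q = penalty * redundancy.copy()
    np.fill_diagonal(Q, -utility)  # diagonal rewards including a reason
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):  # brute force for clarity
        x = np.array(bits)
        e = x @ Q @ x  # QUBO energy x^T Q x
        if e < best_e:
            best_x, best_e = x, e
    return best_x  # 1 = include reason in the Chain-of-Thought prompt

utility = np.array([0.9, 0.7, 0.6, 0.2])
redundancy = np.array([[0, .8, 0, 0], [.8, 0, 0, 0],
                       [0, 0, 0, 0], [0, 0, 0, 0]], float)
print(select_reasons(utility, redundancy))  # drops one of the redundant pair
```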
[ "['Mert Esencan' 'Tarun Advaith Kumar' 'Ata Akbari Asanjan' 'P. Aaron Lott'\n 'Masoud Mohseni' 'Can Unlu' 'Davide Venturelli' 'Alan Ho']" ]
null
null
2407.00075
null
null
http://arxiv.org/pdf/2407.00075v1
2024-06-21T19:18:16Z
2024-06-21T19:18:16Z
Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference
We study how to subvert language models so that they deviate from the rules they are given. We model rule-following as inference in propositional Horn logic, a mathematical system in which rules have the form "if $P$ and $Q$, then $R$" for some propositions $P$, $Q$, and $R$. We prove that although transformers can faithfully abide by such rules, maliciously crafted prompts can nevertheless mislead even theoretically constructed models. Empirically, we find that attacks on our theoretical models mirror popular attacks on large language models. Our work suggests that studying smaller theoretical models can help understand the behavior of large language models in rule-based settings like logical reasoning and jailbreak attacks.
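For reference, propositional Horn inference of the kind modeled here can be implemented with a few lines of forward chaining; the example rules are illustrative.

```python
# Sketch: forward chaining over Horn rules "if P and Q then R".
def horn_inference(facts, rules, max_steps=100):
    # facts: set of true propositions; rules: list of (premises, conclusion)
    known = set(facts)
    for _ in range(max_steps):
        new = {concl for prems, concl in rules
               if set(prems) <= known and concl not in known}
        if not new:
            break  # fixed point reached: nothing more derivable
        known |= new
    return known

rules = [({"P", "Q"}, "R"), ({"R"}, "S")]
print(horn_inference({"P", "Q"}, rules))  # {'P', 'Q', 'R', 'S'}
```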
[ "['Anton Xue' 'Avishree Khare' 'Rajeev Alur' 'Surbhi Goel' 'Eric Wong']" ]
null
null
2407.00077
null
null
http://arxiv.org/pdf/2407.00077v2
2024-07-02T02:06:17Z
2024-06-22T15:32:53Z
Differentially Private Graph Diffusion with Applications in Personalized PageRanks
Graph diffusion, which iteratively propagates real-valued substances across a graph, is used in numerous graph- and network-involved applications. However, releasing diffusion vectors may reveal sensitive linking information in the data, such as transaction information in financial network data. Protecting the privacy of graph data is nevertheless challenging due to its interconnected nature. This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees by using noisy diffusion iterates. The algorithm injects Laplace noise per diffusion iteration and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes. Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI), which, to the best of our knowledge, is the first effort that analyzes PABI with Laplace noise and provides relevant applications. We also introduce a novel Infinity-Wasserstein distance tracking method, which tightens the analysis of privacy leakage and makes PABI more applicable in practice. We evaluate this framework by applying it to Personalized PageRank computation for ranking tasks. Experiments on real-world network data demonstrate the superiority of our method under stringent privacy conditions.
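A sketch of the noisy-iterate idea, assuming a personalized-PageRank-style propagation with per-iteration Laplace noise and a simple degree threshold; the parameter values and thresholding rule are illustrative assumptions, not the paper's calibrated mechanism.

```python
# Sketch: diffusion with Laplace noise per iteration and degree thresholding.
import numpy as np

def dp_diffusion(adj, seed, alpha=0.15, iters=10, noise_scale=0.01, min_degree=2):
    deg = adj.sum(axis=1)
    mask = deg >= min_degree                 # suppress high-sensitivity nodes
    P = adj / np.maximum(deg, 1)[:, None]    # row-stochastic transition matrix
    x = seed.astype(float)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        x = alpha * seed + (1 - alpha) * (P.T @ x)          # PPR-style step
        x += rng.laplace(scale=noise_scale, size=x.shape)    # per-iteration noise
        x = np.where(mask, x, 0.0)           # degree-based thresholding
    return x

adj = (np.random.rand(20, 20) < 0.2).astype(float)
seed = np.zeros(20); seed[0] = 1.0
print(dp_diffusion(adj, seed)[:5])  # noisy diffusion vector (first entries)
```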
[ "['Rongzhe Wei' 'Eli Chien' 'Pan Li']" ]
null
null
2407.00080
null
null
http://arxiv.org/abs/2407.00080v1
2024-06-24T08:18:36Z
2024-06-24T08:18:36Z
Decentralized Task Offloading and Load-Balancing for Mobile Edge Computing in Dense Networks
We study the problem of decentralized task offloading and load-balancing in a dense network with numerous devices and a set of edge servers. Solving this problem optimally is complicated due to the unknown network information and random task sizes. The shared network resources also influence the users' decisions and resource distribution. Our solution combines the mean field multi-agent multi-armed bandit (MAB) game with a load-balancing technique that adjusts the servers' rewards to achieve a target population profile despite the distributed user decision-making. Numerical results demonstrate the efficacy of our approach and the convergence to the target load distribution.
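A minimal sketch of the bandit view, assuming each device runs UCB over servers and servers shape rewards toward a target load; the shaping rule is an illustrative assumption, not the paper's mean-field formulation.

```python
# Sketch: UCB offloading per device, plus server-side reward shaping.
import numpy as np

class UCBOffloader:
    def __init__(self, n_servers, c=2.0):
        self.n = np.zeros(n_servers)     # pulls per server
        self.mean = np.zeros(n_servers)  # running mean reward
        self.c = c
        self.t = 0

    def choose(self):
        self.t += 1
        if (self.n == 0).any():
            return int(np.argmin(self.n))  # try each server once first
        ucb = self.mean + self.c * np.sqrt(np.log(self.t) / self.n)
        return int(np.argmax(ucb))

    def update(self, server, reward):
        self.n[server] += 1
        self.mean[server] += (reward - self.mean[server]) / self.n[server]

def shaped_reward(base, load, target):
    # Servers penalize overload so the user population drifts to the target profile.
    return base - max(0.0, load - target)
```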
[ "['Mariam Yahya' 'Alexander Conzelmann' 'Setareh Maghsudi']" ]
null
null
2407.00081
null
null
http://arxiv.org/pdf/2407.00081v1
2024-06-24T09:04:09Z
2024-06-24T09:04:09Z
Semantic Revolution from Communications to Orchestration for 6G: Challenges, Enablers, and Research Directions
In the context of emerging 6G services, the realization of everything-to-everything interactions involving a myriad of physical and digital entities presents a crucial challenge. This challenge is exacerbated by resource scarcity in communication infrastructures, necessitating innovative solutions for effective service implementation. Exploring the potential of Semantic Communications (SemCom) to enhance point-to-point physical layer efficiency shows great promise in addressing this challenge. However, achieving efficient SemCom requires overcoming the significant hurdle of knowledge sharing between semantic decoders and encoders, particularly in the dynamic and non-stationary environment with stringent end-to-end quality requirements. To bridge this gap in existing literature, this paper introduces the Knowledge Base Management And Orchestration (KB-MANO) framework. Rooted in the concepts of Computing-Network Convergence (CNC) and lifelong learning, KB-MANO is crafted for the allocation of network and computing resources dedicated to updating and redistributing KBs across the system. The primary objective is to minimize the impact of knowledge management activities on actual service provisioning. A proof-of-concept is proposed to showcase the integration of KB-MANO with resource allocation in radio access networks. Finally, the paper offers insights into future research directions, emphasizing the transformative potential of semantic-oriented communication systems in the realm of 6G technology.
[ "['Masoud Shokrnezhad' 'Hamidreza Mazandarani' 'Tarik Taleb'\n 'Jaeseung Song' 'Richard Li']" ]
null
null
2407.00082
null
null
http://arxiv.org/abs/2407.00082v1
2024-06-24T14:38:04Z
2024-06-24T14:38:04Z
Adapting Job Recommendations to User Preference Drift with Behavioral-Semantic Fusion Learning
Job recommender systems are crucial for aligning job opportunities with job-seekers in online job-seeking. However, users tend to continually adjust their job preferences to secure employment opportunities, which limits the performance of job recommendations. The inherent frequency of preference drift makes it challenging to promptly and precisely capture user preferences. To address this issue, we propose a novel session-based framework, BISTRO, to timely model user preference through fusion learning of semantic and behavioral information. Specifically, BISTRO is composed of three stages: 1) coarse-grained semantic clustering, 2) fine-grained job preference extraction, and 3) personalized top-$k$ job recommendation. Initially, BISTRO segments the user interaction sequence into sessions and leverages session-based semantic clustering to achieve broad identification of person-job matching. Subsequently, we design a hypergraph wavelet learning method to capture nuanced job preference drift. To mitigate the effect of noise in interactions caused by frequent preference drift, we propose an adaptive wavelet filtering technique to remove noisy interactions. Finally, a recurrent neural network is utilized to analyze session-based interactions to infer personalized preferences. Extensive experiments on three real-world offline recruitment datasets demonstrate the strong performance of our framework. Notably, BISTRO also excels in online experiments, affirming its effectiveness in live recruitment settings. This dual success underscores the robustness and adaptability of BISTRO. The source code is available at https://github.com/Applied-Machine-Learning-Lab/BISTRO.
[ "['Xiao Han' 'Chen Zhu' 'Xiao Hu' 'Chuan Qin' 'Xiangyu Zhao' 'Hengshu Zhu']" ]
null
null
2407.00085
null
null
http://arxiv.org/pdf/2407.00085v1
2024-06-24T17:43:49Z
2024-06-24T17:43:49Z
Compressing Search with Language Models
Millions of people turn to Google Search each day for information on things as diverse as new cars or flu symptoms. The terms that they enter contain valuable information on their daily intent and activities, but the information in these search terms has been difficult to fully leverage. User-defined categorical filters have been the most common way to shrink the dimensionality of search data to a tractable size for analysis and modeling. In this paper we present a new approach to reducing the dimensionality of search data while retaining much of the information in the individual terms without user-defined rules. Our contributions are two-fold: 1) we introduce SLaM Compression, a way to quantify search terms using pre-trained language models and create a representation of search data that has low dimensionality, is memory efficient, and effectively acts as a summary of search, and 2) we present CoSMo, a Constrained Search Model for estimating real world events using only search data. We demonstrate the efficacy of our contributions by estimating with high accuracy U.S. automobile sales and U.S. flu rates using only Google Search data.
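A sketch of the compression idea, assuming a sentence-transformers encoder as the pre-trained language model; the model name and the mean-pooled summary are assumptions, not the paper's specification of SLaM Compression.

```python
# Sketch: embed search terms with a pre-trained encoder and summarize a
# period's searches as the mean vector, a low-dimensional, memory-efficient
# representation suitable for downstream event estimation.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
terms = ["new suv lease deals", "flu symptoms in adults", "suv crash ratings"]
emb = encoder.encode(terms)        # (n_terms, 384) embeddings
daily_summary = emb.mean(axis=0)   # one compact vector summarizing search
```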
[ "['Thomas Mulc' 'Jennifer L. Steele']" ]
null
null
2407.00087
null
null
http://arxiv.org/pdf/2407.00087v1
2024-06-25T07:20:11Z
2024-06-25T07:20:11Z
ARES: Alternating Reinforcement Learning and Supervised Fine-Tuning for Enhanced Multi-Modal Chain-of-Thought Reasoning Through Diverse AI Feedback
Large Multimodal Models (LMMs) excel at comprehending human instructions and demonstrate remarkable results across a broad spectrum of tasks. Reinforcement Learning from Human Feedback (RLHF) and AI Feedback (RLAIF) further refine LMMs by aligning them with specific preferences. These methods primarily use ranking-based feedback for entire generations. With advanced AI models (Teacher), such as GPT-4 and Claude 3 Opus, we can request various types of detailed feedback that are expensive for humans to provide. We propose a two-stage algorithm, ARES, that Alternates REinforcement Learning (RL) and Supervised Fine-Tuning (SFT). First, we request the Teacher to score how much each sentence contributes to solving the problem in a Chain-of-Thought (CoT). This sentence-level feedback allows us to consider individual valuable segments, providing more granular rewards for the RL procedure. Second, we ask the Teacher to correct wrong reasoning after the RL stage. The RL procedure requires massive effort for hyperparameter tuning and often generates errors like repetitive words and incomplete sentences. With the correction feedback, we stabilize the RL fine-tuned model through SFT. We conduct experiments on the multimodal datasets ScienceQA and A-OKVQA to demonstrate the effectiveness of our proposal. The rationale reasoning produced by ARES achieves a win rate of around 70% against baseline models, as judged by GPT-4o. Additionally, we observe that the improved rationale reasoning leads to a 2.5% increase in inference answer accuracy on average for the multimodal datasets.
[ "['Ju-Seung Byun' 'Jiyun Chun' 'Jihyung Kil' 'Andrew Perrault']" ]
null
null
2407.00091
null
null
http://arxiv.org/pdf/2407.00091v1
2024-06-25T20:49:19Z
2024-06-25T20:49:19Z
Learning to Rank for Maps at Airbnb
As a two-sided marketplace, Airbnb brings together hosts who own listings for rent with prospective guests from around the globe. Results from a guest's search for listings are displayed primarily through two interfaces: (1) as a list of rectangular cards that contain on them the listing image, price, rating, and other details, referred to as list-results (2) as oval pins on a map showing the listing price, called map-results. Both these interfaces, since their inception, have used the same ranking algorithm that orders listings by their booking probabilities and selects the top listings for display. But some of the basic assumptions underlying ranking, built for a world where search results are presented as lists, simply break down for maps. This paper describes how we rebuilt ranking for maps by revising the mathematical foundations of how users interact with search results. Our iterative and experiment-driven approach led us through a path full of twists and turns, ending in a unified theory for the two interfaces. Our journey shows how assumptions taken for granted when designing machine learning algorithms may not apply equally across all user interfaces, and how they can be adapted. The net impact was one of the largest improvements in user experience for Airbnb which we discuss as a series of experimental validations.
[ "['Malay Haldar' 'Hongwei Zhang' 'Kedar Bellare' 'Sherry Chen'\n 'Soumyadip Banerjee' 'Xiaotang Wang' 'Mustafa Abdool' 'Huiji Gao'\n 'Pavan Tapadia' 'Liwei He' 'Sanjeev Katariya']" ]
null
null
2407.00099
null
null
http://arxiv.org/pdf/2407.00099v1
2024-06-27T04:29:21Z
2024-06-27T04:29:21Z
Optimal Transport for Latent Integration with An Application to Heterogeneous Neuronal Activity Data
Detecting dynamic patterns of task-specific responses shared across heterogeneous datasets is an essential and challenging problem in many scientific applications in medical science and neuroscience. In our motivating example of rodent electrophysiological data, identifying the dynamical patterns in neuronal activity associated with ongoing cognitive demands and behavior is key to uncovering the neural mechanisms of memory. One of the greatest challenges in investigating a cross-subject biological process is that the systematic heterogeneity across individuals could significantly undermine the power of existing machine learning methods to identify the underlying biological dynamics. In addition, many technically challenging neurobiological experiments are conducted on only a handful of subjects where rich longitudinal data are available for each subject. The low sample sizes of such experiments could further reduce the power to detect common dynamic patterns among subjects. In this paper, we propose a novel heterogeneous data integration framework based on optimal transport to extract shared patterns in complex biological processes. The key advantages of the proposed method are that it can increase discriminating power in identifying common patterns by reducing heterogeneity unrelated to the signal by aligning the extracted latent spatiotemporal information across subjects. Our approach is effective even with a small number of subjects, and does not require auxiliary matching information for the alignment. In particular, our method can align longitudinal data across heterogeneous subjects in a common latent space to capture the dynamics of shared patterns while utilizing temporal dependency within subjects.
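The basic alignment primitive, entropic optimal transport solved with Sinkhorn iterations, can be sketched as follows; this is not the paper's full integration framework, and the latent trajectories below are illustrative.

```python
# Sketch: Sinkhorn iterations compute a soft coupling between two subjects'
# latent trajectories, aligning them without auxiliary matching information.
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, iters=200):
    # a, b: marginal weights; cost: (n, m) pairwise cost between latent points
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan coupling subjects

n, m = 30, 40
za = np.random.randn(n, 2)  # subject A latent trajectory
zb = np.random.randn(m, 2)  # subject B latent trajectory
cost = ((za[:, None, :] - zb[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(np.ones(n) / n, np.ones(m) / m, cost)
```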
[ "['Yubai Yuan' 'Babak Shahbaba' 'Norbert Fortin' 'Keiland Cooper'\n 'Qing Nie' 'Annie Qu']" ]
null
null
2407.00100
null
null
http://arxiv.org/pdf/2407.00100v1
2024-06-27T05:25:46Z
2024-06-27T05:25:46Z
Enhancing In-Context Learning via Implicit Demonstration Augmentation
The emergence of in-context learning (ICL) enables large pre-trained language models (PLMs) to make predictions for unseen inputs without updating parameters. Despite its potential, ICL's effectiveness heavily relies on the quality, quantity, and permutation of demonstrations, commonly leading to suboptimal and unstable performance. In this paper, we tackle this challenge for the first time from the perspective of demonstration augmentation. Specifically, we start with enriching representations of demonstrations by leveraging their deep feature distribution. We then theoretically reveal that when the number of augmented copies approaches infinity, the augmentation is approximately equal to a novel logit calibration mechanism integrated with specific statistical properties. This insight results in a simple yet highly efficient method that significantly improves the average and worst-case accuracy across diverse PLMs and tasks. Moreover, our method effectively reduces performance variance among varying demonstrations, permutations, and templates, and displays the capability to address imbalanced class distributions.
[ "['Xiaoling Zhou' 'Wei Ye' 'Yidong Wang' 'Chaoya Jiang' 'Zhemg Lee'\n 'Rui Xie' 'Shikun Zhang']" ]
null
null
2407.00101
null
null
http://arxiv.org/pdf/2407.00101v1
2024-06-27T06:28:30Z
2024-06-27T06:28:30Z
Hybrid Approach to Parallel Stochastic Gradient Descent
Stochastic Gradient Descent is used to train models on large datasets in order to reduce training time. On top of that, data parallelism is widely used as a method to efficiently train neural networks using multiple worker nodes in parallel. Synchronous and asynchronous approaches to data parallelism are used by most systems to train the model in parallel; however, both have drawbacks. We propose a third approach to data parallelism that is a hybrid between the synchronous and asynchronous approaches, using both to train the neural network. When the threshold function is selected appropriately to gradually shift all parameter aggregation from asynchronous to synchronous, we show that in a given time period our hybrid approach outperforms both the asynchronous and synchronous approaches.
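A sketch of the threshold schedule, assuming a sigmoid function that gradually moves workers from asynchronous to synchronous aggregation; the sigmoid shape and worker assignment rule are illustrative assumptions, not the paper's chosen function.

```python
# Sketch: a threshold schedule shifting aggregation from async to sync.
import math

def sync_fraction(epoch, total_epochs, midpoint=0.5, steepness=10.0):
    """Fraction of workers aggregating synchronously at this epoch."""
    t = epoch / total_epochs
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def aggregation_mode(worker_id, num_workers, epoch, total_epochs):
    # The first k workers (k growing over time) join the synchronous group.
    k = round(sync_fraction(epoch, total_epochs) * num_workers)
    return "synchronous" if worker_id < k else "asynchronous"

for epoch in [0, 25, 50, 75, 100]:
    modes = [aggregation_mode(w, 8, epoch, 100) for w in range(8)]
    print(epoch, modes.count("synchronous"), "of 8 workers synchronous")
```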
[ "['Aakash Sudhirbhai Vora' 'Dhrumil Chetankumar Joshi'\n 'Aksh Kantibhai Patel']" ]
null
null
2407.00102
null
null
http://arxiv.org/pdf/2407.00102v1
2024-06-27T07:20:36Z
2024-06-27T07:20:36Z
Curriculum Learning with Quality-Driven Data Selection
The impressive multimodal capabilities demonstrated by OpenAI's GPT-4 have generated significant interest in the development of Multimodal Large Language Models (MLLMs). Visual instruction tuning of MLLMs with machine-generated instruction-following data has been shown to enhance zero-shot capabilities across various tasks. However, there has been limited exploration into controlling the quality of the instruction data. Current methodologies for data selection in MLLMs often rely on single, unreliable scores or use downstream tasks for selection, which is time-consuming and can lead to potential overfitting on the chosen evaluation datasets. To mitigate these limitations, we propose a novel data selection methodology that utilizes image-text correlation and model perplexity to evaluate and select data of varying quality. This approach leverages the distinct distribution of these two attributes, mapping data quality into a two-dimensional space that allows for the selection of data based on their location within this distribution. By utilizing this space, we can analyze the impact of task type settings, used as prompts, on data quality. Additionally, this space can be used to construct multi-stage subsets of varying quality to facilitate curriculum learning. Our research includes comprehensive experiments conducted on various datasets. The results emphasize substantial enhancements in five commonly assessed capabilities compared to using the complete dataset. Our codes, data, and models are publicly available at: https://anonymous.4open.science/r/EHIT-31B4
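A sketch of the two-dimensional selection step, assuming precomputed image-text correlation and perplexity scores per sample; the normalization, equal weighting, and quantile tiers are illustrative assumptions, not the paper's exact mapping.

```python
# Sketch: place each sample in a 2D quality space (correlation, inverted
# perplexity) and carve it into tiers usable as curriculum stages.
import numpy as np

def quality_tiers(correlation, perplexity, n_tiers=3):
    # Normalize both axes to [0, 1]; low perplexity is good, so invert it.
    c = (correlation - correlation.min()) / np.ptp(correlation)
    p = 1.0 - (perplexity - perplexity.min()) / np.ptp(perplexity)
    quality = 0.5 * (c + p)  # position along the quality direction
    edges = np.quantile(quality, np.linspace(0, 1, n_tiers + 1)[1:-1])
    return np.digitize(quality, edges)  # tier 0 (low) .. n_tiers-1 (high)

corr = np.random.rand(1000)
ppl = np.random.rand(1000) * 20
tiers = quality_tiers(corr, ppl)
# Curriculum: schedule training over tiers (e.g., low-to-high quality).
```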
[ "['Biao Wu' 'Fang Meng' 'Ling Chen']" ]
null
null
2407.00104
null
null
http://arxiv.org/pdf/2407.00104v1
2024-06-27T07:33:34Z
2024-06-27T07:33:34Z
AI-Driven Skin Cancer Diagnosis: Grad-CAM and Expert Annotations for Enhanced Interpretability
An AI tool has been developed to provide interpretable support for the diagnosis of basal cell carcinoma (BCC) via teledermatology, thus speeding up referrals and optimizing resource utilization. Interpretability is provided in two ways: first, the main BCC dermoscopic patterns are located in the image to justify the BCC/non-BCC classification; second, building on the widely used visual XAI method Grad-CAM, a clinically inspired visual explanation is developed in which the features relevant for diagnosis are localized. Since there is no established ground truth for BCC dermoscopic features, a standard reference is inferred from the diagnoses of four dermatologists using an Expectation Maximization (EM) based algorithm. The results demonstrate significant improvements in classification accuracy and interpretability, positioning this approach as a valuable tool for early BCC detection and referral to dermatologists. The BCC/non-BCC classification achieved an accuracy rate of 90%. For the clinically inspired XAI results, the detection of BCC patterns useful to clinicians reaches 99% accuracy. As for the clinically inspired visual XAI results, the mean Grad-CAM normalized value within the manually segmented clinical features is 0.57, while outside this region it is 0.16, indicating that the model's attention concentrates on, though does not perfectly delimit, the regions of the BCC patterns. These results demonstrate the ability of the AI tool to provide a useful explanation.
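For reference, the standard Grad-CAM computation that the visual explanation builds on can be sketched as follows, using a torchvision ResNet as a stand-in for the BCC classifier (an assumption, not the paper's model).

```python
# Sketch: Grad-CAM heatmap from the last conv block of a CNN classifier.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4  # last convolutional block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)  # placeholder for a dermoscopic image
score = model(x)[0].max()        # score of the predicted class
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)  # channel importance weights
cam = torch.relu((w * feats["a"]).sum(dim=1))  # weighted feature maps
cam = cam / cam.max()                          # normalized heatmap in [0, 1]
# Comparing the mean of cam inside vs. outside expert-segmented regions
# mirrors the 0.57 vs. 0.16 analysis reported in the abstract.
```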
[ "['Iván Matas' 'Carmen Serrano' 'Francisca Silva' 'Amalia Serrano'\n 'Tomás Toledo-Pastrana' 'Begoña Acha']" ]
null
null
2407.00105
null
null
http://arxiv.org/pdf/2407.00105v1
2024-06-27T08:50:25Z
2024-06-27T08:50:25Z
Multiple Kronecker RLS fusion-based link propagation for drug-side effect prediction
Drug side effect prediction has become an essential area of research in the field of pharmacology. As the use of medications continues to rise, so does the importance of understanding and mitigating the potential risks associated with them. At present, researchers have turned to data-driven methods to predict drug side effects. Drug side effect prediction is a link prediction problem, and the related data can be described from various perspectives. To process these kinds of data, a multi-view method, called Multiple Kronecker RLS fusion-based link propagation (MKronRLSF-LP), is proposed. MKronRLSF-LP extends Kron-RLS by finding consensus partitions and multiple graph Laplacian constraints in the multi-view setting. Both of these multi-view extensions contribute to higher-quality results. Extensive experiments have been conducted on drug side effect datasets, and our empirical results provide evidence that our approach is effective and robust.
[ "['Yuqing Qian' 'Ziyu Zheng' 'Prayag Tiwari' 'Yijie Ding' 'Quan Zou']" ]