categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2405.05192
null
null
http://arxiv.org/pdf/2405.05192v1
2024-05-08T16:30:45Z
2024-05-08T16:30:45Z
Full error analysis of the random deep splitting method for nonlinear parabolic PDEs and PIDEs with infinite activity
In this paper, we present a randomized extension of the deep splitting algorithm introduced in [Beck, Becker, Cheridito, Jentzen, and Neufeld (2021)] using random neural networks suitable to approximately solve both high-dimensional nonlinear parabolic PDEs and PIDEs with jumps having (possibly) infinite activity. We provide a full error analysis of our so-called random deep splitting method. In particular, we prove that our random deep splitting method converges to the (unique viscosity) solution of the nonlinear PDE or PIDE under consideration. Moreover, we empirically analyze our random deep splitting method by considering several numerical examples including both nonlinear PDEs and nonlinear PIDEs relevant in the context of pricing of financial derivatives under default risk. In particular, we empirically demonstrate in all examples that our random deep splitting method can approximately solve nonlinear PDEs and PIDEs in 10'000 dimensions within seconds.
[ "['Ariel Neufeld' 'Philipp Schmocker' 'Sizhou Wu']" ]
null
null
2405.05196
null
null
http://arxiv.org/pdf/2405.05196v1
2024-05-08T16:35:06Z
2024-05-08T16:35:06Z
SINBAD: Saliency-informed detection of breakage caused by ad blocking
Privacy-enhancing blocking tools based on filter-list rules tend to break legitimate functionality. Filter-list maintainers could benefit from automated breakage detection tools that allow them to proactively fix problematic rules before deploying them to millions of users. We introduce SINBAD, an automated breakage detector that improves the accuracy over the state of the art by 20%, and is the first to detect dynamic breakage and breakage caused by style-oriented filter rules. The success of SINBAD is rooted in three innovations: (1) the use of user-reported breakage issues in forums that enable the creation of a high-quality dataset for training in which only breakage that users perceive as an issue is included; (2) the use of 'web saliency' to automatically identify user-relevant regions of a website on which to prioritize automated interactions aimed at triggering breakage; and (3) the analysis of webpages via subtrees which enables fine-grained identification of problematic filter rules.
[ "['Saiid El Hajj Chehade' 'Sandra Siby' 'Carmela Troncoso']" ]
null
null
2405.05202
null
null
http://arxiv.org/pdf/2405.05202v2
2024-05-22T20:36:51Z
2024-05-08T16:39:59Z
Discretely Beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization
For constrained, not necessarily monotone submodular maximization, all known approximation algorithms with ratio greater than $1/e$ require continuous ideas, such as queries to the multilinear extension of a submodular function and its gradient, which are typically expensive to simulate with the original set function. For combinatorial algorithms, the best known approximation ratios for both size and matroid constraints are obtained by a simple randomized greedy algorithm of Buchbinder et al. [9]: $1/e \approx 0.367$ for the size constraint and $0.281$ for the matroid constraint in $\mathcal{O}(kn)$ queries, where $k$ is the rank of the matroid. In this work, we develop the first combinatorial algorithms to break the $1/e$ barrier: we obtain an approximation ratio of $0.385$ in $\mathcal{O}(kn)$ queries to the submodular set function for the size constraint, and $0.305$ for a general matroid constraint. These are achieved by guiding the randomized greedy algorithm with a fast local search algorithm. Further, we develop deterministic versions of these algorithms, maintaining the same ratio and asymptotic time complexity. Finally, we develop a deterministic, nearly linear time algorithm with ratio $0.377$.
[ "['Yixin Chen' 'Ankur Nath' 'Chunli Peng' 'Alan Kuhnle']" ]
null
null
2405.05205
null
null
http://arxiv.org/pdf/2405.05205v1
2024-05-08T16:43:25Z
2024-05-08T16:43:25Z
Hybrid Quantum Graph Neural Network for Molecular Property Prediction
To accelerate the process of materials design, materials science has increasingly used data-driven techniques to extract information from collected data. Specifically, machine learning (ML) algorithms have demonstrated the ability to predict various properties of materials with a level of accuracy similar to explicit calculation of quantum mechanical theories, but with significantly reduced run time and computational resources. Within ML, graph neural networks have emerged as an important class of algorithms, since they can accurately predict a wide range of important physical, chemical, and electronic properties, owing to their higher learning ability based on the graph representation of material and molecular descriptors through the aggregation of information embedded within the graph. In parallel with the development of state-of-the-art classical machine learning applications, the fusion of quantum computing and machine learning has created a new paradigm in which classical machine learning models can be augmented with quantum layers that are able to encode high-dimensional data more efficiently. Leveraging the structure of existing algorithms, we developed a novel gradient-free hybrid quantum-classical convolutional graph neural network (HyQCGNN) to predict formation energies of perovskite materials. The performance of our hybrid statistical model is competitive with the results obtained purely from a classical convolutional graph neural network, and other classical machine learning algorithms, such as XGBoost. Consequently, our study suggests a new pathway to explore how quantum feature encoding and parametric quantum circuits can yield substantial improvements in complex ML algorithms like graph neural networks.
[ "['Michael Vitz' 'Hamed Mohammadbagherpoor' 'Samarth Sandeep'\n 'Andrew Vlasic' 'Richard Padbury' 'Anh Pham']" ]
null
null
2405.05206
null
null
http://arxiv.org/pdf/2405.05206v1
2024-05-08T16:43:50Z
2024-05-08T16:43:50Z
Anomaly Detection in Certificate Transparency Logs
We propose an anomaly detection technique for X.509 certificates utilizing Isolation Forest. This method can be beneficial when compliance testing with X.509 linters proves unsatisfactory, and we seek to identify anomalies beyond standards compliance. The technique is validated on a sample of certificates from Certificate Transparency logs.
[ "['Richard Ostertág' 'Martin Stanek']" ]
null
null
2405.05219
null
null
http://arxiv.org/pdf/2405.05219v1
2024-05-08T17:11:38Z
2024-05-08T17:11:38Z
Conv-Basis: A New Paradigm for Efficient Attention Inference and Gradient Computation in Transformers
Large Language Models (LLMs) have profoundly changed the world. Their self-attention mechanism is the key to the success of transformers in LLMs. However, the quadratic computational cost $O(n^2)$ in the length $n$ of the input sequence is the notorious obstacle for further improvement and scalability in longer contexts. In this work, we leverage the convolution-like structure of attention matrices to develop an efficient approximation method for attention computation using convolution matrices. We propose a $\mathsf{conv}$ basis system, "similar" to the rank basis, and show that any lower triangular (attention) matrix can always be decomposed as a sum of $k$ structured convolution matrices in this basis system. We then design an algorithm to quickly decompose the attention matrix into $k$ convolution matrices. Thanks to Fast Fourier Transforms (FFT), attention \textit{inference} can be computed in $O(knd \log n)$ time, where $d$ is the hidden dimension. In practice, we have $d \ll n$, e.g., $d = 3{,}072$ and $n = 1{,}000{,}000$ for Gemma. Thus, when $kd = n^{o(1)}$, our algorithm achieves almost linear time, i.e., $n^{1+o(1)}$. Furthermore, the attention \textit{training forward} and \textit{backward gradient} computations can be performed in $n^{1+o(1)}$ time as well. Our approach avoids explicitly computing the $n \times n$ attention matrix, which may largely alleviate the quadratic computational complexity. Furthermore, our algorithm works on any input matrices. This work provides a new paradigm for accelerating attention computation in transformers to enable their application to longer contexts.
[ "['Jiuxiang Gu' 'Yingyu Liang' 'Heshan Liu' 'Zhenmei Shi' 'Zhao Song'\n 'Junze Yin']" ]
null
null
2405.05231
null
null
http://arxiv.org/pdf/2405.05231v1
2024-05-08T17:27:11Z
2024-05-08T17:27:11Z
DiskGNN: Bridging I/O Efficiency and Model Accuracy for Out-of-Core GNN Training
Graph neural networks (GNNs) are machine learning models specialized for graph data and widely used in many applications. To train GNNs on large graphs that exceed CPU memory, several systems store data on disk and conduct out-of-core processing. However, these systems suffer from either read amplification when reading node features that are usually smaller than a disk page or degraded model accuracy by treating the graph as disconnected partitions. To close this gap, we build a system called DiskGNN, which achieves high I/O efficiency and thus fast training without hurting model accuracy. The key technique used by DiskGNN is offline sampling, which helps decouple graph sampling from model computation. In particular, by conducting graph sampling beforehand, DiskGNN acquires the node features that will be accessed by model computation, and such information is utilized to pack the target node features contiguously on disk to avoid read amplification. Besides, DiskGNN also adopts designs including a four-level feature store to fully utilize the memory hierarchy to cache node features and reduce disk access, batched packing to accelerate the feature packing process, and pipelined training to overlap disk access with other operations. We compare DiskGNN with Ginex and MariusGNN, which are state-of-the-art systems for out-of-core GNN training. The results show that DiskGNN can speed up the baselines by over 8x while matching their best model accuracy.
[ "['Renjie Liu' 'Yichuan Wang' 'Xiao Yan' 'Zhenkun Cai' 'Minjie Wang'\n 'Haitian Jiang' 'Bo Tang' 'Jinyang Li']" ]
null
null
2405.05235
null
null
http://arxiv.org/pdf/2405.05235v1
2024-05-08T17:28:07Z
2024-05-08T17:28:07Z
RACH Traffic Prediction in Massive Machine Type Communications
Traffic pattern prediction has emerged as a promising approach for efficiently managing and mitigating the impacts of event-driven bursty traffic in massive machine-type communication (mMTC) networks. However, achieving accurate predictions of bursty traffic remains a non-trivial task due to the inherent randomness of events, and these challenges intensify within live network environments. Consequently, there is a compelling imperative to design a lightweight and agile framework capable of assimilating continuously collected data from the network and accurately forecasting bursty traffic in mMTC networks. This paper addresses these challenges by presenting a machine learning-based framework tailored for forecasting bursty traffic in multi-channel slotted ALOHA networks. The proposed machine learning network comprises long short-term memory (LSTM) and a DenseNet with feed-forward neural network (FFNN) layers, where the residual connections enhance the ability of the machine learning network to capture complicated patterns. Furthermore, we develop a new low-complexity online prediction algorithm that updates the states of the LSTM network by leveraging frequently collected data from the mMTC network. Simulation results and complexity analysis demonstrate the superiority of our proposed algorithm in terms of both accuracy and complexity, making it well-suited for time-critical live scenarios. We evaluate the performance of the proposed framework in a network with a single base station and thousands of devices organized into groups with distinct traffic-generating characteristics. Comprehensive evaluations and simulations indicate that our proposed machine learning approach achieves a remarkable $52\%$ higher accuracy in long-term predictions compared to traditional methods, without imposing additional processing load on the system.
[ "['Hossein Mehri' 'Hao Chen' 'Hani Mehrpouyan']" ]
null
null
2405.05236
null
null
http://arxiv.org/pdf/2405.05236v3
2024-05-14T17:53:13Z
2024-05-08T17:30:50Z
Stability and Performance Analysis of Discrete-Time ReLU Recurrent Neural Networks
This paper presents sufficient conditions for the stability and $\ell_2$-gain performance of recurrent neural networks (RNNs) with ReLU activation functions. These conditions are derived by combining Lyapunov/dissipativity theory with Quadratic Constraints (QCs) satisfied by repeated ReLUs. We write a general class of QCs for repeated ReLUs using known properties of the scalar ReLU. Our stability and performance condition uses these QCs along with a "lifted" representation for the ReLU RNN. We show that the positive homogeneity property satisfied by a scalar ReLU does not expand the class of QCs for the repeated ReLU. We present examples to demonstrate the stability/performance condition and study the effect of the lifting horizon.
[ "['Sahel Vahedi Noori' 'Bin Hu' 'Geir Dullerud' 'Peter Seiler']" ]
null
null
2405.05239
null
null
http://arxiv.org/pdf/2405.05239v1
2024-05-08T17:36:14Z
2024-05-08T17:36:14Z
Cellular Traffic Prediction Using Online Prediction Algorithms
The advent of 5G technology promises a paradigm shift in the realm of telecommunications, offering unprecedented speeds and connectivity. However, the efficient management of traffic in 5G networks remains a critical challenge. This is due to the dynamic and heterogeneous nature of network traffic, varying user behaviors, extended network size, and diverse applications, all of which demand highly accurate and adaptable prediction models to optimize network resource allocation and management. This paper investigates the efficacy of live prediction algorithms for forecasting cellular network traffic in real-time scenarios. We apply two live prediction algorithms on machine learning models, one of which is the recently proposed Fast LiveStream Prediction (FLSP) algorithm. We examine the performance of these algorithms under two distinct data gathering methodologies: synchronous, where all network cells report statistics simultaneously, and asynchronous, where reporting occurs across consecutive time slots. Our study delves into the impact of these gathering scenarios on the predictive performance of traffic models. Our study reveals that the FLSP algorithm can halve the required bandwidth for asynchronous data reporting compared to conventional online prediction algorithms, while simultaneously enhancing prediction accuracy and reducing processing load. Additionally, we conduct a thorough analysis of algorithmic complexity and memory requirements across various machine learning models. Through empirical evaluation, we provide insights into the trade-offs inherent in different prediction strategies, offering valuable guidance for network optimization and resource allocation in dynamic environments.
[ "['Hossein Mehri' 'Hao Chen' 'Hani Mehrpouyan']" ]
null
null
2405.05240
null
null
http://arxiv.org/pdf/2405.05240v1
2024-05-08T17:36:29Z
2024-05-08T17:36:29Z
An LSTM-Based Chord Generation System Using Chroma Histogram Representations
This paper proposes a system for generating chords for monophonic symbolic melodies using an LSTM-based model trained on chroma histogram representations of chords. Chroma representations promise harmonically richer generation than chord label-based approaches, whilst maintaining a small number of dimensions in the dataset. The system is shown to be suitable for limited real-time use. While it does not meet the state of the art for coherent long-term generation, it does show diatonic generation with cadential chord relationships. The need for further study into chroma histograms as an extracted feature in chord generation tasks is highlighted.
[ "['Jack Hardwick']" ]
null
null
2405.05241
null
null
http://arxiv.org/pdf/2405.05241v2
2024-07-11T16:24:52Z
2024-05-08T17:37:57Z
BenthicNet: A global compilation of seafloor images for deep learning applications
Advances in underwater imaging enable the collection of extensive seafloor image datasets that are necessary for monitoring important benthic ecosystems. The ability to collect seafloor imagery has outpaced our capacity to analyze it, hindering expedient mobilization of this crucial environmental information. Recent machine learning approaches provide opportunities to increase the efficiency with which seafloor image datasets are analyzed, yet large and consistent datasets necessary to support development of such approaches are scarce. Here we present BenthicNet: a global compilation of seafloor imagery designed to support the training and evaluation of large-scale image recognition models. An initial set of over 11.4 million images was collected and curated to represent a diversity of seafloor environments using a representative subset of 1.3 million images. These are accompanied by 2.6 million annotations translated to the CATAMI scheme, which span 190,000 of the images. A large deep learning model was trained on this compilation and preliminary results suggest it has utility for automating large and small-scale image analysis tasks. The compilation and model are made openly available for use by the scientific community at https://doi.org/10.20383/103.0614.
[ "['Scott C. Lowe' 'Benjamin Misiuk' 'Isaac Xu' 'Shakhboz Abdulazizov'\n 'Amit R. Baroi' 'Alex C. Bastos' 'Merlin Best' 'Vicki Ferrini'\n 'Ariell Friedman' 'Deborah Hart' 'Ove Hoegh-Guldberg'\n 'Daniel Ierodiaconou' 'Julia Mackin-McLaughlin' 'Kathryn Markey'\n 'Pedro S. Menandro' 'Jacquomo Monk' 'Shreya Nemani' \"John O'Brien\"\n 'Elizabeth Oh' 'Luba Y. Reshitnyk' 'Katleen Robert' 'Chris M. Roelfsema'\n 'Jessica A. Sameoto' 'Alexandre C. G. Schimel' 'Jordan A. Thomson'\n 'Brittany R. Wilson' 'Melisa C. Wong' 'Craig J. Brown'\n 'Thomas Trappenberg']" ]
null
null
2405.05243
null
null
http://arxiv.org/pdf/2405.05243v1
2024-05-08T17:40:03Z
2024-05-08T17:40:03Z
Deep learning-based variational autoencoder for classification of quantum and classical states of light
Advancements in optical quantum technologies have been enabled by the generation, manipulation, and characterization of light, with identification based on its photon statistics. However, characterizing light and its sources through single photon measurements often requires efficient detectors and longer measurement times to obtain high-quality photon statistics. Here we introduce a deep learning-based variational autoencoder (VAE) method for classifying single photon added coherent states (SPACS), single photon added thermal states (SPATS), and mixed states between coherent/SPACS and thermal/SPATS of light. Our semisupervised learning-based VAE efficiently maps the photon statistics features of light to a lower dimension, enabling quasi-instantaneous classification with low average photon counts. The proposed VAE method is robust and maintains classification accuracy in the presence of losses inherent in an experiment, such as finite collection efficiency, non-unity quantum efficiency, finite number of detectors, etc. Additionally, leveraging the transfer learning capabilities of VAE enables successful classification of data of any quality using a single trained model. We envision that such a deep learning methodology will enable better classification of quantum light and light sources even in the presence of poor detection quality.
[ "['Mahesh Bhupati' 'Abhishek Mall' 'Anshuman Kumar' 'Pankaj K. Jha']" ]
null
null
2405.05252
null
null
http://arxiv.org/pdf/2405.05252v1
2024-05-08T17:56:47Z
2024-05-08T17:56:47Z
Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models
Diffusion Models (DMs) have exhibited superior performance in generating high-quality and diverse images. However, this exceptional performance comes at the cost of expensive architectural design, particularly due to the attention module heavily used in leading models. Existing works mainly adopt a retraining process to enhance DM efficiency. This is computationally expensive and not very scalable. To this end, we introduce the Attention-driven Training-free Efficient Diffusion Model (AT-EDM) framework that leverages attention maps to perform run-time pruning of redundant tokens, without the need for any retraining. Specifically, for single-denoising-step pruning, we develop a novel ranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify redundant tokens, and a similarity-based recovery method to restore tokens for the convolution operation. In addition, we propose a Denoising-Steps-Aware Pruning (DSAP) approach to adjust the pruning budget across different denoising timesteps for better generation quality. Extensive evaluations show that AT-EDM performs favorably against prior art in terms of efficiency (e.g., 38.8% FLOPs saving and up to 1.53x speed-up over Stable Diffusion XL) while maintaining nearly the same FID and CLIP scores as the full model. Project webpage: https://atedm.github.io.
[ "['Hongjie Wang' 'Difan Liu' 'Yan Kang' 'Yijun Li' 'Zhe Lin' 'Niraj K. Jha'\n 'Yuchen Liu']" ]
null
null
2405.05255
null
null
http://arxiv.org/pdf/2405.05255v1
2024-05-08T17:59:03Z
2024-05-08T17:59:03Z
Diffusion-HMC: Parameter Inference with Diffusion Model driven Hamiltonian Monte Carlo
Diffusion generative models have excelled at diverse image generation and reconstruction tasks across fields. A less explored avenue is their application to discriminative tasks involving regression or classification problems. The cornerstone of modern cosmology is the ability to generate predictions for observed astrophysical fields from theory and constrain physical models from observations using these predictions. This work uses a single diffusion generative model to address these interlinked objectives -- as a surrogate model or emulator for cold dark matter density fields conditional on input cosmological parameters, and as a parameter inference model that solves the inverse problem of constraining the cosmological parameters of an input field. The model is able to emulate fields with summary statistics consistent with those of the simulated target distribution. We then leverage the approximate likelihood of the diffusion generative model to derive tight constraints on cosmology by using the Hamiltonian Monte Carlo method to sample the posterior on cosmological parameters for a given test image. Finally, we demonstrate that this parameter inference approach is more robust to the addition of noise than baseline parameter inference networks.
[ "['Nayantara Mudur' 'Carolina Cuesta-Lazaro' 'Douglas P. Finkbeiner']" ]
null
null
2405.05256
null
null
http://arxiv.org/pdf/2405.05256v1
2024-05-08T17:59:11Z
2024-05-08T17:59:11Z
THRONE: An Object-based Hallucination Benchmark for the Free-form Generations of Large Vision-Language Models
Mitigating hallucinations in large vision-language models (LVLMs) remains an open problem. Recent benchmarks do not address hallucinations in open-ended free-form responses, which we term "Type I hallucinations". Instead, they focus on hallucinations responding to very specific question formats -- typically a multiple-choice response regarding a particular object or attribute -- which we term "Type II hallucinations". Additionally, such benchmarks often require external API calls to models which are subject to change. In practice, we observe that a reduction in Type II hallucinations does not lead to a reduction in Type I hallucinations but rather that the two forms of hallucinations are often anti-correlated. To address this, we propose THRONE, a novel object-based automatic framework for quantitatively evaluating Type I hallucinations in LVLM free-form outputs. We use public language models (LMs) to identify hallucinations in LVLM responses and compute informative metrics. By evaluating a large selection of recent LVLMs using public datasets, we show that an improvement in existing metrics does not lead to a reduction in Type I hallucinations, and that established benchmarks for measuring Type I hallucinations are incomplete. Finally, we provide a simple and effective data augmentation method to reduce Type I and Type II hallucinations as a strong baseline.
[ "['Prannay Kaul' 'Zhizhong Li' 'Hao Yang' 'Yonatan Dukler'\n 'Ashwin Swaminathan' 'C. J. Taylor' 'Stefano Soatto']" ]
null
null
2405.05258
null
null
http://arxiv.org/pdf/2405.05258v1
2024-05-08T17:59:53Z
2024-05-08T17:59:53Z
Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving
Efficient data utilization is crucial for advancing 3D scene understanding in autonomous driving, where reliance on heavily human-annotated LiDAR point clouds challenges fully supervised methods. Addressing this, our study extends into semi-supervised learning for LiDAR semantic segmentation, leveraging the intrinsic spatial priors of driving scenes and multi-sensor complements to augment the efficacy of unlabeled datasets. We introduce LaserMix++, an evolved framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to further assist data-efficient learning. Our framework is tailored to enhance 3D scene consistency regularization by incorporating multi-modality, including 1) multi-modal LaserMix operation for fine-grained cross-sensor interactions; 2) camera-to-LiDAR feature distillation that enhances LiDAR feature learning; and 3) language-driven knowledge guidance generating auxiliary supervisions using open-vocabulary models. The versatility of LaserMix++ enables applications across LiDAR representations, establishing it as a universally applicable solution. Our framework is rigorously validated through theoretical analysis and extensive experiments on popular driving perception datasets. Results demonstrate that LaserMix++ markedly outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations and significantly improving the supervised-only baselines. This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
[ "['Lingdong Kong' 'Xiang Xu' 'Jiawei Ren' 'Wenwei Zhang' 'Liang Pan'\n 'Kai Chen' 'Wei Tsang Ooi' 'Ziwei Liu']" ]
null
null
2405.05272
null
null
http://arxiv.org/pdf/2405.05272v1
2024-04-29T05:25:32Z
2024-04-29T05:25:32Z
Learning bridge numbers of knots
This paper employs various computational techniques to determine the bridge numbers of both classical and virtual knots. For classical knots, there is no ambiguity about what the bridge number means. For virtual knots, there are multiple natural definitions of bridge number, and we demonstrate that these definitions can differ by an arbitrarily large amount. We then acquired two datasets, one for classical and one for virtual knots, each comprising over one million labeled data points. With the data, we conduct experiments to evaluate the effectiveness of common machine learning models in classifying knots based on their bridge numbers.
[ "['Hanh Vo' 'Puttipong Pongtanapaisan' 'Thieu Nguyen']" ]
null
null
2405.05277
null
null
http://arxiv.org/abs/2405.05277v1
2024-05-03T23:58:27Z
2024-05-03T23:58:27Z
An Attention-Based Deep Generative Model for Anomaly Detection in Industrial Control Systems
Anomaly detection is critical for the secure and reliable operation of industrial control systems. As our reliance on such complex cyber-physical systems grows, it becomes paramount to have automated methods for detecting anomalies, preventing attacks, and responding intelligently. This paper presents a novel deep generative model to meet this need. The proposed model follows a variational autoencoder architecture with a convolutional encoder and decoder to extract features from both spatial and temporal dimensions. Additionally, we incorporate an attention mechanism that directs focus towards specific regions, enhancing the representation of relevant features and improving anomaly detection accuracy. We also employ a dynamic threshold approach leveraging the reconstruction probability and make our source code publicly available to promote reproducibility and facilitate further research. Comprehensive experimental analysis is conducted on data from all six stages of the Secure Water Treatment (SWaT) testbed, and the experimental results demonstrate the superior performance of our approach compared to several state-of-the-art baseline techniques.
[ "['Mayra Macas' 'Chunming Wu' 'Walter Fuertes']" ]
null
null
2405.05282
null
null
http://arxiv.org/pdf/2405.05282v2
2024-06-07T18:41:17Z
2024-05-07T12:34:18Z
The Detection of KIC 1718360, A Rotating Variable with a Possible Companion, Using Machine Learning
This paper presents the detection of a periodic dimming event in the lightcurve of the G1.5IV-V type star KIC 1718360. This is based on visible-light observations conducted by both the TESS and Kepler space telescopes. Analysis of the data seems to point toward a high rotation rate in the star, with a rotational period of 2.938 days. The high variability seen within the star's lightcurve points toward classification as a rotating variable. The initial observation was made in Kepler Quarter 16 data using the One-Class SVM machine learning method. Subsequent observations by the TESS space telescope corroborated these findings. It appears that KIC 1718360 is a nearby rotating variable that appears in few, if any, major catalogs as such. A secondary, additional periodic dip is also present, indicating a possible exoplanetary companion.
[ "['Jakob Roche']" ]
null
null
2405.05286
null
null
http://arxiv.org/pdf/2405.05286v1
2024-05-07T22:54:17Z
2024-05-07T22:54:17Z
Tiny Deep Ensemble: Uncertainty Estimation in Edge AI Accelerators via Ensembling Normalization Layers with Shared Weights
The applications of artificial intelligence (AI) are rapidly evolving, and they are also commonly used in safety-critical domains, such as autonomous driving and medical diagnosis, where functional safety is paramount. In AI-driven systems, uncertainty estimation allows the user to avoid overconfident predictions and achieve functional safety. Therefore, the robustness and reliability of model predictions can be improved. However, conventional uncertainty estimation methods, such as the deep ensemble method, impose high computation and, accordingly, hardware (latency and energy) overhead because they require the storage and processing of multiple models. Alternatively, Monte Carlo dropout (MC-dropout) methods, although having low memory overhead, necessitate numerous ($\sim 100$) forward passes, leading to high computational overhead and latency. Thus, these approaches are not suitable for battery-powered edge devices with limited computing and memory resources. In this paper, we propose the Tiny-Deep Ensemble approach, a low-cost approach for uncertainty estimation on edge devices. In our approach, only normalization layers are ensembled $M$ times, with all ensemble members sharing common weights and biases, leading to a significant decrease in storage requirements and latency. Moreover, our approach requires only one forward pass in a hardware architecture that allows batch processing for inference and uncertainty estimation. Furthermore, it has approximately the same memory overhead as a single model. Therefore, latency and memory overhead are reduced by a factor of up to $\sim M\times$. Nevertheless, our method does not compromise accuracy, with an increase in inference accuracy of up to $\sim 1\%$ and a reduction in RMSE of $17.17\%$ on various benchmark datasets, tasks, and state-of-the-art architectures.
[ "['Soyed Tuhin Ahmed' 'Michael Hefenbrock' 'Mehdi B. Tahoori']" ]
null
null
2405.05288
null
null
http://arxiv.org/pdf/2405.05288v3
2024-05-22T05:54:17Z
2024-05-08T03:40:36Z
Learning Social Graph for Inactive User Recommendation
Social relations have been widely incorporated into recommender systems to alleviate the data sparsity problem. However, raw social relations do not always benefit recommendation due to their inferior quality and insufficient quantity, especially for inactive users, whose interacted items are limited. In this paper, we propose a novel social recommendation method called LSIR (\textbf{L}earning \textbf{S}ocial Graph for \textbf{I}nactive User \textbf{R}ecommendation) that learns an optimal social graph structure for social recommendation, especially for inactive users. LSIR recursively aggregates user and item embeddings to collaboratively encode item and user features. Then, graph structure learning (GSL) is employed to refine the raw user-user social graph, by removing noisy edges and adding new edges based on the enhanced embeddings. Meanwhile, mimic learning is implemented to guide active users in mimicking inactive users during model training, which improves the construction of new edges for inactive users. Extensive experiments on real-world datasets demonstrate that LSIR achieves significant improvements of up to 129.58% on NDCG in inactive user recommendation. Our code is available at \url{https://github.com/liun-online/LSIR}.
[ "['Nian Liu' 'Shen Fan' 'Ting Bai' 'Peng Wang' 'Mingwei Sun' 'Yanhu Mo'\n 'Xiaoxiao Xu' 'Hong Liu' 'Chuan Shi']" ]
null
null
2405.05293
null
null
http://arxiv.org/pdf/2405.05293v1
2024-05-08T09:38:38Z
2024-05-08T09:38:38Z
A Review on Fragment-based De Novo 2D Molecule Generation
In the field of computational molecule generation, an essential task in the discovery of new chemical compounds, fragment-based deep generative models are a leading approach, consistently achieving state-of-the-art results in molecular design benchmarks as of 2023. We present a detailed comparative assessment of their architectures, highlighting their unique approaches to molecular fragmentation and generative modeling. This review also includes comparisons of output quality, generation speed, and the current limitations of specific models. We also highlight promising avenues for future research that could bridge fragment-based models to real-world applications.
[ "['Sergei Voloboev']" ]
null
null
2405.05294
null
null
http://arxiv.org/pdf/2405.05294v1
2024-05-08T10:03:50Z
2024-05-08T10:03:50Z
Harmonizing Program Induction with Rate-Distortion Theory
Many aspects of human learning have been proposed as a process of constructing mental programs: from acquiring symbolic number representations to intuitive theories about the world. In parallel, there is a long tradition of using information processing to model human cognition through Rate Distortion Theory (RDT). Yet, it is still poorly understood how to apply RDT when mental representations take the form of programs. In this work, we adapt RDT by proposing a three-way trade-off among rate (description length), distortion (error), and computational costs (search budget). We use simulations on a melody task to study the implications of this trade-off, and show that constructing a shared program library across tasks provides global benefits. However, this comes at the cost of sensitivity to curricula, which is also characteristic of human learners. Finally, we use methods from partial information decomposition to generate training curricula that induce more effective libraries and better generalization.
[ "['Hanqi Zhou' 'David G. Nagy' 'Charley M. Wu']" ]
null
null
2405.05295
null
null
http://arxiv.org/pdf/2405.05295v1
2024-05-08T11:03:22Z
2024-05-08T11:03:22Z
Relevant Irrelevance: Generating Alterfactual Explanations for Image Classifiers
In this paper, we demonstrate the feasibility of alterfactual explanations for black box image classifiers. Traditional explanation mechanisms from the field of Counterfactual Thinking are a widely used paradigm for Explainable Artificial Intelligence (XAI), as they follow a natural way of reasoning that humans are familiar with. However, most common approaches from this field are based on communicating information about features or characteristics that are especially important for an AI's decision. Yet, to fully understand a decision, not only is knowledge about relevant features needed; an awareness of irrelevant information also contributes substantially to a user's mental model of an AI system. To this end, a novel approach for explaining AI systems called alterfactual explanations was recently proposed on a conceptual level. It is based on showing an alternative reality where irrelevant features of an AI's input are altered. By doing so, the user directly sees which input data characteristics can change arbitrarily without influencing the AI's decision. In this paper, we show for the first time that it is possible to apply this idea to black box models based on neural networks. To this end, we present a GAN-based approach to generate these alterfactual explanations for binary image classifiers. Further, we present a user study that gives interesting insights into how alterfactual explanations can complement counterfactual explanations.
[ "['Silvan Mertes' 'Tobias Huber' 'Christina Karle' 'Katharina Weitz'\n 'Ruben Schlagowski' 'Cristina Conati' 'Elisabeth André']" ]
null
null
2405.05334
null
null
http://arxiv.org/pdf/2405.05334v1
2024-05-08T18:09:16Z
2024-05-08T18:09:16Z
Multiplicative Dynamic Mode Decomposition
Koopman operators are infinite-dimensional operators that linearize nonlinear dynamical systems, facilitating the study of their spectral properties and enabling the prediction of the time evolution of observable quantities. Recent methods have aimed to approximate Koopman operators while preserving key structures. However, approximating Koopman operators typically requires a dictionary of observables to capture the system's behavior in a finite-dimensional subspace. The selection of these functions is often heuristic, may result in the loss of spectral information, and can severely complicate structure preservation. This paper introduces Multiplicative Dynamic Mode Decomposition (MultDMD), which enforces the multiplicative structure inherent in the Koopman operator within its finite-dimensional approximation. Leveraging this multiplicative property, we guide the selection of observables and define a constrained optimization problem for the matrix approximation, which can be efficiently solved. MultDMD presents a structured approach to finite-dimensional approximations and can more accurately reflect the spectral properties of the Koopman operator. We elaborate on the theoretical framework of MultDMD, detailing its formulation, optimization strategy, and convergence properties. The efficacy of MultDMD is demonstrated through several examples, including the nonlinear pendulum, the Lorenz system, and fluid dynamics data, where we demonstrate its remarkable robustness to noise.
[ "['Nicolas Boullé' 'Matthew J. Colbrook']" ]
null
null
2405.05343
null
null
http://arxiv.org/pdf/2405.05343v1
2024-05-08T18:16:37Z
2024-05-08T18:16:37Z
Distributed Least Squares in Small Space via Sketching and Bias Reduction
Matrix sketching is a powerful tool for reducing the size of large data matrices. Yet there are fundamental limitations to this size reduction when we want to recover an accurate estimator for a task such as least squares regression. We show that these limitations can be circumvented in the distributed setting by designing sketching methods that minimize the bias of the estimator, rather than its error. In particular, we give a sparse sketching method running in optimal space and current matrix multiplication time, which recovers a nearly-unbiased least squares estimator using two passes over the data. This leads to new communication-efficient distributed averaging algorithms for least squares and related tasks, which directly improve on several prior approaches. Our key novelty is a new bias analysis for sketched least squares, giving a sharp characterization of its dependence on the sketch sparsity. The techniques include new higher-moment restricted Bai-Silverstein inequalities, which are of independent interest to the non-asymptotic analysis of deterministic equivalents for random matrices that arise from sketching.
[ "['Sachin Garg' 'Kevin Tan' 'Michał Dereziński']" ]
null
null
2405.05348
null
null
http://arxiv.org/pdf/2405.05348v1
2024-05-08T18:27:20Z
2024-05-08T18:27:20Z
The Effect of Model Size on LLM Post-hoc Explainability via LIME
Large language models (LLMs) are becoming bigger to boost performance. However, little is known about how explainability is affected by this trend. This work explores LIME explanations for DeBERTaV3 models of four different sizes on natural language inference (NLI) and zero-shot classification (ZSC) tasks. We evaluate the explanations based on their faithfulness to the models' internal decision processes and their plausibility, i.e. their agreement with human explanations. The key finding is that increased model size does not correlate with plausibility despite improved model performance, suggesting a misalignment between the LIME explanations and the models' internal processes as model size increases. Our results further suggest limitations regarding faithfulness metrics in NLI contexts.
[ "['Henning Heyen' 'Amy Widdicombe' 'Noah Y. Siegel' 'Maria Perez-Ortiz'\n 'Philip Treleaven']" ]
null
null
2405.05349
null
null
http://arxiv.org/pdf/2405.05349v1
2024-05-08T18:27:37Z
2024-05-08T18:27:37Z
Offline Model-Based Optimization via Policy-Guided Gradient Search
Offline optimization is an emerging problem in many experimental engineering domains including protein, drug or aircraft design, where online experimentation to collect evaluation data is too expensive or dangerous. To avoid that, one has to optimize an unknown function given only its offline evaluation at a fixed set of inputs. A naive solution to this problem is to learn a surrogate model of the unknown function and optimize this surrogate instead. However, such a naive optimizer is prone to erroneous overestimation of the surrogate (possibly due to over-fitting on a biased sample of function evaluation) on inputs outside the offline dataset. Prior approaches addressing this challenge have primarily focused on learning robust surrogate models. However, their search strategies are derived from the surrogate model rather than the actual offline data. To fill this important gap, we introduce a new learning-to-search perspective for offline optimization by reformulating it as an offline reinforcement learning problem. Our proposed policy-guided gradient search approach explicitly learns the best policy for a given surrogate model created from the offline data. Our empirical results on multiple benchmarks demonstrate that the learned optimization policy can be combined with existing offline surrogates to significantly improve the optimization performance.
[ "['Yassine Chemingui' 'Aryan Deshwal' 'Trong Nghia Hoang'\n 'Janardhan Rao Doppa']" ]
null
null
2405.05369
null
null
http://arxiv.org/pdf/2405.05369v1
2024-05-08T18:52:47Z
2024-05-08T18:52:47Z
Model Reconstruction Using Counterfactual Explanations: Mitigating the Decision Boundary Shift
Counterfactual explanations find ways of achieving a favorable model outcome with minimum input perturbation. However, counterfactual explanations can also be exploited to steal the model by strategically training a surrogate model to give similar predictions as the original (target) model. In this work, we investigate model extraction by specifically leveraging the fact that the counterfactual explanations also lie quite close to the decision boundary. We propose a novel strategy for model extraction that we call Counterfactual Clamping Attack (CCA) which trains a surrogate model using a unique loss function that treats counterfactuals differently than ordinary instances. Our approach also alleviates the related problem of decision boundary shift that arises in existing model extraction attacks which treat counterfactuals as ordinary instances. We also derive novel mathematical relationships between the error in model approximation and the number of queries using polytope theory. Experimental results demonstrate that our strategy provides improved fidelity between the target and surrogate model predictions on several real world datasets.
[ "['Pasan Dissanayake' 'Sanghamitra Dutta']" ]
null
null
2405.05378
null
null
http://arxiv.org/pdf/2405.05378v1
2024-05-08T19:08:45Z
2024-05-08T19:08:45Z
"They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations
Large language models (LLMs) have emerged as an integral part of modern societies, powering user-facing applications such as personal assistants and enterprise applications like recruitment tools. Despite their utility, research indicates that LLMs perpetuate systemic biases. Yet, prior works on LLM harms predominantly focus on Western concepts like race and gender, often overlooking cultural concepts from other parts of the world. Additionally, these studies typically investigate "harm" as a singular dimension, ignoring the various and subtle forms in which harms manifest. To address this gap, we introduce the Covert Harms and Social Threats (CHAST), a set of seven metrics grounded in social science literature. We utilize evaluation models aligned with human assessments to examine the presence of covert harms in LLM-generated conversations, particularly in the context of recruitment. Our experiments reveal that seven out of the eight LLMs included in this study generated conversations riddled with CHAST, characterized by malign views expressed in seemingly neutral language unlikely to be detected by existing methods. Notably, these LLMs manifested more extreme views and opinions when dealing with non-Western concepts like caste, compared to Western ones such as race.
[ "['Preetam Prabhu Srikar Dammu' 'Hayoung Jung' 'Anjali Singh'\n 'Monojit Choudhury' 'Tanushree Mitra']" ]
null
null
2405.05386
null
null
http://arxiv.org/pdf/2405.05386v1
2024-05-08T19:31:06Z
2024-05-08T19:31:06Z
Interpretability Needs a New Paradigm
Interpretability is the study of explaining models in understandable terms to humans. At present, interpretability is divided into two paradigms: the intrinsic paradigm, which believes that only models designed to be explained can be explained, and the post-hoc paradigm, which believes that black-box models can be explained. At the core of this debate is how each paradigm ensures its explanations are faithful, i.e., true to the model's behavior. This is important, as false but convincing explanations lead to unsupported confidence in artificial intelligence (AI), which can be dangerous. This paper's position is that we should think about new paradigms while staying vigilant regarding faithfulness. First, by examining the history of paradigms in science, we see that paradigms are constantly evolving. Then, by examining the current paradigms, we can understand their underlying beliefs, the value they bring, and their limitations. Finally, this paper presents three emerging paradigms for interpretability. The first paradigm designs models such that faithfulness can be easily measured. Another optimizes models such that explanations become faithful. The last paradigm proposes to develop models that produce both a prediction and an explanation.
[ "['Andreas Madsen' 'Himabindu Lakkaraju' 'Siva Reddy' 'Sarath Chandar']" ]
null
null
2405.05398
null
null
http://arxiv.org/pdf/2405.05398v1
2024-05-08T20:03:12Z
2024-05-08T20:03:12Z
ASPIRE: Iterative Amortized Posterior Inference for Bayesian Inverse Problems
Due to their uncertainty quantification, Bayesian solutions to inverse problems are the framework of choice in applications that are risk averse. These benefits come at the cost of computations that are, in general, intractable. New advances in machine learning and variational inference (VI) have lowered the computational barrier by learning from examples. Two VI paradigms have emerged that represent different tradeoffs: amortized and non-amortized. Amortized VI can produce fast results, but because it generalizes over many observed datasets it produces suboptimal inference results. Non-amortized VI is slower at inference but finds better posterior approximations since it is specialized towards a single observed dataset. Current amortized VI techniques run into a sub-optimality wall that cannot be overcome without more expressive neural networks or extra training data. We present a solution that enables iterative improvement of amortized posteriors using the same network architectures and training data. The benefits of our method require extra computations, but these remain frugal since they are based on physics-hybrid methods and summary statistics. Importantly, these computations remain mostly offline, so our method maintains cheap and reusable online evaluation while bridging the approximation gap between these two paradigms. We denote our proposed method ASPIRE - Amortized posteriors with Summaries that are Physics-based and Iteratively REfined. We first validate our method on a stylized problem with a known posterior, then demonstrate its practical use on a high-dimensional and nonlinear transcranial medical imaging problem with ultrasound. Compared with the baseline and previous methods from the literature, our method stands out as a computationally efficient and high-fidelity approach to posterior inference.
[ "['Rafael Orozco' 'Ali Siahkoohi' 'Mathias Louboutin' 'Felix J. Herrmann']" ]
null
null
2405.05409
null
null
http://arxiv.org/pdf/2405.05409v2
2024-05-24T07:00:31Z
2024-05-08T20:23:24Z
Initialization is Critical to Whether Transformers Fit Composite Functions by Inference or Memorizing
Transformers have shown impressive capabilities across various tasks, but their performance on compositional problems remains a topic of debate. In this work, we investigate the mechanisms of how transformers behave on unseen compositional tasks. We discover that the parameter initialization scale plays a critical role in determining whether the model learns inferential solutions, which capture the underlying compositional primitives, or symmetric solutions, which simply memorize mappings without understanding the compositional structure. By analyzing the information flow and vector representations within the model, we reveal the distinct mechanisms underlying these solution types. We further find that inferential solutions exhibit low complexity bias, which we hypothesize is a key factor enabling them to learn individual mappings for single anchors. Building upon the understanding of these mechanisms, we can predict the learning behavior of models with different initialization scales when faced with data of varying complexity. Our findings provide valuable insights into the role of initialization scale in shaping the type of solution learned by transformers and their ability to learn and generalize compositional tasks.
[ "['Zhongwang Zhang' 'Pengxiao Lin' 'Zhiwei Wang' 'Yaoyu Zhang'\n 'Zhi-Qin John Xu']" ]
null
null
2405.05424
null
null
http://arxiv.org/pdf/2405.05424v1
2024-05-08T20:49:34Z
2024-05-08T20:49:34Z
Latent Variable Double Gaussian Process Model for Decoding Complex Neural Data
Non-parametric models, such as Gaussian Processes (GP), show promising results in the analysis of complex data. Their applications in neuroscience data have recently gained traction. In this research, we introduce a novel neural decoder model built upon GP models. The core idea is that two GPs generate neural data and their associated labels using a set of low-dimensional latent variables. Under this modeling assumption, the latent variables represent the underlying manifold or essential features present in the neural data. Once the GPs are trained, the latent variables can be inferred from neural data to decode the labels with high accuracy. We demonstrate an application of this decoder model on a verbal memory experiment dataset and show that the decoder's accuracy in predicting stimuli significantly surpasses that of state-of-the-art decoder models. The strong performance of this model highlights the importance of utilizing non-parametric models in the analysis of neuroscience data.
[ "['Navid Ziaei' 'Joshua J. Stim' 'Melanie D. Goodman-Keiser'\n 'Scott Sponheim' 'Alik S. Widge' 'Sasoun Krikorian' 'Ali Yousefi']" ]
null
null
2405.05428
null
null
http://arxiv.org/pdf/2405.05428v1
2024-05-08T21:18:02Z
2024-05-08T21:18:02Z
Adversary-Guided Motion Retargeting for Skeleton Anonymization
Skeleton-based motion visualization is a rising field in computer vision, especially in the case of virtual reality (VR). With further advancements in human-pose estimation and skeleton-extracting sensors, more and more applications that utilize skeleton data have come about. These skeletons may appear to be anonymous, but they contain embedded personally identifiable information (PII). In this paper we present a new anonymization technique that is based on motion retargeting, utilizing adversary classifiers to further remove PII embedded in the skeleton. Motion retargeting is effective in anonymization as it transfers the movement of the user onto a dummy skeleton. In doing so, any PII linked to the skeleton will be based on the dummy skeleton instead of the user we are protecting. We propose a Privacy-centric Deep Motion Retargeting model (PMR) which aims to further clear the retargeted skeleton of PII through adversarial learning. In our experiments, PMR achieves motion retargeting utility performance on par with state-of-the-art models while also reducing the performance of privacy attacks.
[ "['Thomas Carr' 'Depeng Xu' 'Aidong Lu']" ]
null
null
2405.05429
null
null
http://arxiv.org/pdf/2405.05429v3
2024-07-10T08:47:05Z
2024-05-08T21:19:18Z
How Inverse Conditional Flows Can Serve as a Substitute for Distributional Regression
Neural network representations of simple models, such as linear regression, are being studied increasingly to better understand the underlying principles of deep learning algorithms. However, neural representations of distributional regression models, such as the Cox model, have received little attention so far. We close this gap by proposing a framework for distributional regression using inverse flow transformations (DRIFT), which includes neural representations of the aforementioned models. We empirically demonstrate that the neural representations of models in DRIFT can serve as a substitute for their classical statistical counterparts in several applications involving continuous, ordered, time-series, and survival outcomes. We confirm that models in DRIFT empirically match the performance of several statistical methods in terms of estimation of partial effects, prediction, and aleatoric uncertainty quantification. DRIFT covers both interpretable statistical models and flexible neural networks opening up new avenues in both statistical modeling and deep learning.
[ "['Lucas Kook' 'Chris Kolb' 'Philipp Schiele' 'Daniel Dold'\n 'Marcel Arpogaus' 'Cornelius Fritz' 'Philipp F. Baumann' 'Philipp Kopper'\n 'Tobias Pielok' 'Emilio Dorigatti' 'David Rügamer']" ]
null
null
2405.05430
null
null
http://arxiv.org/pdf/2405.05430v1
2024-05-08T21:23:01Z
2024-05-08T21:23:01Z
Towards Invariant Time Series Forecasting in Smart Cities
In the transformative landscape of smart cities, the integration of cutting-edge web technologies into time series forecasting presents a pivotal opportunity to enhance urban planning, sustainability, and economic growth. The advancement of deep neural networks has significantly improved forecasting performance. However, a notable challenge lies in the ability of these models to generalize well to out-of-distribution (OOD) time series data. The inherent spatial heterogeneity and domain shifts across urban environments create hurdles that prevent models from adapting and performing effectively in new urban environments. To tackle this problem, we propose a solution to derive invariant representations for more robust predictions under different urban environments, instead of relying on spurious correlations across urban environments, for better generalizability. Through extensive experiments on both synthetic and real-world data, we demonstrate that our proposed method outperforms traditional time series forecasting models when tackling domain shifts in changing urban environments. The effectiveness and robustness of our method can be extended to diverse fields including climate modeling, urban planning, and smart city resource management.
[ "['Ziyi Zhang' 'Shaogang Ren' 'Xiaoning Qian' 'Nick Duffield']" ]
null
null
2405.05431
null
null
http://arxiv.org/pdf/2405.05431v2
2024-06-12T18:54:02Z
2024-05-08T21:24:49Z
Searching for Programmatic Policies in Semantic Spaces
Syntax-guided synthesis is commonly used to generate programs encoding policies. In this approach, the set of programs that can be written in a domain-specific language defines the search space, and an algorithm searches within this space for programs that encode strong policies. In this paper, we propose an alternative method for synthesizing programmatic policies, where we search within an approximation of the language's semantic space. We hypothesize that searching in semantic spaces is more sample-efficient than searching in syntax-based spaces. Our rationale is that the search is more efficient if the algorithm evaluates different agent behaviors as it searches through the space, a feature often missing in syntax-based spaces, because small changes in the syntax of a program often do not result in different agent behaviors. We define semantic spaces by learning a library of programs that present different agent behaviors. Then, we approximate the semantic space by defining a neighborhood function for local search algorithms, where we replace parts of the current candidate program with programs from the library. We evaluated our hypothesis in a real-time strategy game called MicroRTS. Empirical results support our hypothesis that searching in semantic spaces can be more sample-efficient than searching in syntax-based spaces.
[ "['Rubens O. Moraes' 'Levi H. S. Lelis']" ]
null
null
2405.05439
null
null
http://arxiv.org/pdf/2405.05439v1
2024-05-08T22:00:35Z
2024-05-08T22:00:35Z
How Generalizable Is My Behavior Cloning Policy? A Statistical Approach to Trustworthy Performance Evaluation
With the rise of stochastic generative models in robot policy learning, end-to-end visuomotor policies are increasingly successful at solving complex tasks by learning from human demonstrations. Nevertheless, since real-world evaluation costs afford users only a small number of policy rollouts, it remains a challenge to accurately gauge the performance of such policies. This is exacerbated by distribution shifts causing unpredictable changes in performance during deployment. To rigorously evaluate behavior cloning policies, we present a framework that provides a tight lower-bound on robot performance in an arbitrary environment, using a minimal number of experimental policy rollouts. Notably, by applying the standard stochastic ordering to robot performance distributions, we provide a worst-case bound on the entire distribution of performance (via bounds on the cumulative distribution function) for a given task. We build upon established statistical results to ensure that the bounds hold with a user-specified confidence level and tightness, and are constructed from as few policy rollouts as possible. In experiments we evaluate policies for visuomotor manipulation in both simulation and hardware. Specifically, we (i) empirically validate the guarantees of the bounds in simulated manipulation settings, (ii) find the degree to which a learned policy deployed on hardware generalizes to new real-world environments, and (iii) rigorously compare two policies tested in out-of-distribution settings. Our experimental data, code, and implementation of confidence bounds are open-source.
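One established statistical result in this spirit is the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality, which yields a distribution-free confidence band on a CDF from a small number of samples. The sketch below applies it to rollout scores to obtain a worst-case band; it illustrates the kind of bound involved and is not the authors' exact construction.

```python
import numpy as np

def dkw_cdf_band(scores, delta=0.05):
    """Distribution-free band on the performance CDF from n rollouts.

    By the DKW inequality, sup_x |F_hat(x) - F(x)| <= eps with probability
    at least 1 - delta, where eps = sqrt(ln(2/delta) / (2n)). The upper
    envelope F_hat + eps is the pessimistic (worst-case) CDF, since a
    larger CDF puts more mass on low returns.
    """
    scores = np.sort(np.asarray(scores, dtype=float))
    n = len(scores)
    eps = np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    f_hat = np.arange(1, n + 1) / n
    return scores, np.minimum(f_hat + eps, 1.0), np.maximum(f_hat - eps, 0.0)

# Example: a band from only 20 rollout returns, at 95% confidence.
xs, upper, lower = dkw_cdf_band(np.random.default_rng(0).uniform(0.4, 0.9, 20))
```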
[ "['Joseph A. Vincent' 'Haruki Nishimura' 'Masha Itkina' 'Paarth Shah'\n 'Mac Schwager' 'Thomas Kollar']" ]
null
null
2405.05445
null
null
http://arxiv.org/pdf/2405.05445v1
2024-05-08T22:28:57Z
2024-05-08T22:28:57Z
Large Language Model Enhanced Machine Learning Estimators for Classification
Pre-trained large language models (LLMs) have emerged as a powerful tool for simulating various scenarios and generating output given specific instructions and multimodal input. In this work, we analyze the use of LLMs to enhance classical supervised machine learning methods for classification problems. We propose several approaches to integrate an LLM into a classical machine learning estimator to further enhance prediction performance. We examine the performance of the proposed approaches on both standard supervised binary classification tasks and a transfer learning task in which the test data exhibit distribution shifts relative to the training data. Numerical experiments using four publicly available datasets suggest that using LLMs to enhance classical machine learning estimators can provide significant improvements in prediction performance.
[ "['Yuhang Wu' 'Yingfei Wang' 'Chu Wang' 'Zeyu Zheng']" ]
null
null
2405.05446
null
null
http://arxiv.org/pdf/2405.05446v1
2024-05-08T22:40:52Z
2024-05-08T22:40:52Z
GDGS: Gradient Domain Gaussian Splatting for Sparse Representation of Radiance Fields
3D Gaussian splatting methods are becoming popular. However, they work directly on the signal, leading to a dense representation of it. Even with techniques such as pruning or distillation, the results remain dense. In this paper, we propose to model the gradient of the original signal. The gradients are much sparser than the original signal and therefore require far fewer Gaussian splats, leading to more efficient storage and higher computational performance during both training and rendering. Thanks to this sparsity, only a small number of pixels are needed during view synthesis, yielding much higher computational performance ($100sim 1000times$ faster). The 2D image can then be recovered from the gradients by solving a Poisson equation with linear computational complexity. Several experiments confirm the sparseness of the gradients and the computational performance of the proposed method. The method can be applied to various applications, such as human body modeling and indoor environment modeling.
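The gradient-domain recovery step rests on a classical operation: given an image's gradient field (periodic boundaries assumed here), the image is recovered, up to its mean, by one FFT-based Poisson solve. A minimal sketch, not the paper's implementation:

```python
import numpy as np

def poisson_reconstruct(gx, gy):
    """Recover an image (up to a constant) from its gradient field by
    solving the Poisson equation lap(u) = div(g) via FFT (periodic BCs)."""
    h, w = gx.shape
    # Divergence of the gradient field via backward differences.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    # Eigenvalues of the periodic 1-D Laplacian stencil [1, -2, 1].
    fx = np.fft.fftfreq(w)[None, :]
    fy = np.fft.fftfreq(h)[:, None]
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0              # avoid 0/0; the mean is unconstrained
    u_hat = np.fft.fft2(div) / denom
    u_hat[0, 0] = 0.0              # pin the free constant (zero mean)
    return np.real(np.fft.ifft2(u_hat))

# Example: forward gradients of an image, then reconstruction.
img = np.random.rand(64, 64)
gx = np.roll(img, -1, axis=1) - img
gy = np.roll(img, -1, axis=0) - img
rec = poisson_reconstruct(gx, gy)  # matches img up to its mean
```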
[ "['Yuanhao Gong']" ]
null
null
2405.05449
null
null
http://arxiv.org/pdf/2405.05449v1
2024-05-08T22:54:04Z
2024-05-08T22:54:04Z
Markowitz Meets Bellman: Knowledge-distilled Reinforcement Learning for Portfolio Management
Investment portfolios, central to finance, balance potential returns and risks. This paper introduces a hybrid approach combining Markowitz's portfolio theory with reinforcement learning, utilizing knowledge distillation for training agents. In particular, our proposed method, called KDD (Knowledge Distillation DDPG), consists of two training stages: a supervised stage and a reinforcement learning stage. The trained agents optimize portfolio assembly. A comparative analysis against standard financial models and AI frameworks, using metrics like returns, the Sharpe ratio, and nine evaluation indices, reveals our model's superiority. It notably achieves the highest yield and a Sharpe ratio of 2.03, ensuring top profitability with the lowest risk in comparable return scenarios.
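For reference, the supervised stage presupposes a Markowitz-style teacher signal. Below is a minimal sketch of the classical closed-form mean-variance weights such a teacher could emit; it is an unconstrained long/short toy under our own assumptions, not the paper's exact setup.

```python
import numpy as np

def markowitz_weights(returns, risk_aversion=1.0):
    """Classical mean-variance portfolio: w proportional to Sigma^{-1} mu,
    normalised to sum to one. This toy ignores the long-only and box
    constraints a production pipeline would add."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    raw = np.linalg.solve(sigma + 1e-6 * np.eye(len(mu)), mu) / risk_aversion
    return raw / raw.sum()

# Example: teacher weights on 5 assets from 250 days of returns, which a
# distillation stage could regress a policy network onto.
rets = np.random.default_rng(1).normal(5e-4, 0.01, size=(250, 5))
w = markowitz_weights(rets)
```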
[ "['Gang Hu' 'Ming Gu']" ]
null
null
2405.05455
null
null
http://arxiv.org/pdf/2405.05455v1
2024-05-08T23:09:43Z
2024-05-08T23:09:43Z
Automated Program Repair: Emerging trends pose and expose problems for benchmarks
Machine learning (ML) now pervades the field of Automated Program Repair (APR). Algorithms deploy neural machine translation and large language models (LLMs) to generate software patches, among other tasks. But there are important differences between these applications of ML and earlier work. Evaluations and comparisons must take care to ensure that results are valid and likely to generalize. A challenge is that the most popular APR evaluation benchmarks were not designed with ML techniques in mind. This is especially true for LLMs, whose large and often poorly-disclosed training datasets may include problems on which they are evaluated.
[ "['Joseph Renzullo' 'Pemma Reiter' 'Westley Weimer' 'Stephanie Forrest']" ]
null
null
2405.05461
null
null
http://arxiv.org/pdf/2405.05461v1
2024-05-08T23:37:25Z
2024-05-08T23:37:25Z
Taking a Moment for Distributional Robustness
A rich line of recent work has studied distributionally robust learning approaches that seek to learn a hypothesis that performs well, in the worst-case, on many different distributions over a population. We argue that although the most common approaches seek to minimize the worst-case loss over distributions, a more reasonable goal is to minimize the worst-case distance to the true conditional expectation of labels given each covariate. Focusing on the minmax loss objective can dramatically fail to output a solution minimizing the distance to the true conditional expectation when certain distributions contain high levels of label noise. We introduce a new min-max objective based on what is known as the adversarial moment violation and show that minimizing this objective is equivalent to minimizing the worst-case $ell_2$-distance to the true conditional expectation if we take the adversary's strategy space to be sufficiently rich. Previous work has suggested minimizing the maximum regret over the worst-case distribution as a way to circumvent issues arising from differential noise levels. We show that in the case of square loss, minimizing the worst-case regret is also equivalent to minimizing the worst-case $ell_2$-distance to the true conditional expectation. Although their objective and our objective both minimize the worst-case distance to the true conditional expectation, we show that our approach provides large empirical savings in computational cost in terms of the number of groups, while providing the same noise-oblivious worst-distribution guarantee as the minimax regret approach, thus making positive progress on an open question posed by Agarwal and Zhang (2022).
[ "['Jabari Hastings' 'Christopher Jung' 'Charlotte Peale'\n 'Vasilis Syrgkanis']" ]
null
null
2405.05462
null
null
http://arxiv.org/pdf/2405.05462v1
2024-05-08T23:38:02Z
2024-05-08T23:38:02Z
Cross-Modality Translation with Generative Adversarial Networks to Unveil Alzheimer's Disease Biomarkers
Generative approaches for cross-modality transformation have recently gained significant attention in neuroimaging. While most previous work has focused on case-control data, the application of generative models to disorder-specific datasets and their ability to preserve diagnostic patterns remain relatively unexplored. Hence, in this study, we investigated the use of a generative adversarial network (GAN) in the context of Alzheimer's disease (AD) to generate functional network connectivity (FNC) and T1-weighted structural magnetic resonance imaging data from each other. We employed a cycle-GAN to synthesize data in an unpaired data transition and enhanced the transition by integrating weak supervision in cases where paired data were available. Our findings revealed that our model could offer remarkable capability, achieving a structural similarity index measure (SSIM) of $0.89 pm 0.003$ for T1s and a correlation of $0.71 pm 0.004$ for FNCs. Moreover, our qualitative analysis revealed similar patterns between generated and actual data when comparing AD to cognitively normal (CN) individuals. In particular, we observed significantly increased functional connectivity in cerebellar-sensory motor and cerebellar-visual networks and reduced connectivity in cerebellar-subcortical, auditory-sensory motor, sensory motor-visual, and cerebellar-cognitive control networks. Additionally, the T1 images generated by our model showed a similar pattern of atrophy in the hippocampal and other temporal regions of Alzheimer's patients.
[ "['Reihaneh Hassanzadeh' 'Anees Abrol' 'Hamid Reza Hassanzadeh'\n 'Vince D. Calhoun']" ]
null
null
2405.05465
null
null
http://arxiv.org/pdf/2405.05465v2
2024-05-21T05:17:29Z
2024-05-08T23:42:13Z
Vidur: A Large-Scale Simulation Framework For LLM Inference
Optimizing the deployment of large language models (LLMs) is expensive today since it requires experimentally running an application workload against an LLM implementation while exploring the large configuration space formed by system knobs such as parallelization strategies, batching techniques, and scheduling policies. To address this challenge, we present Vidur - a large-scale, high-fidelity, easily-extensible simulation framework for LLM inference performance. Vidur models the performance of LLM operators using a combination of experimental profiling and predictive modeling, and evaluates the end-to-end inference performance for different workloads by estimating several metrics of interest such as latency and throughput. We validate the fidelity of Vidur on several LLMs and show that it estimates inference latency with less than 9% error across the range. Further, we present Vidur-Search, a configuration search tool that helps optimize LLM deployment. Vidur-Search uses Vidur to automatically identify the most cost-effective deployment configuration that meets application performance constraints. For example, Vidur-Search finds the best deployment configuration for LLaMA2-70B in one hour on a CPU machine, in contrast to a deployment-based exploration which would require 42K GPU hours - costing ~218K dollars. Source code for Vidur is available at https://github.com/microsoft/vidur.
[ "['Amey Agrawal' 'Nitin Kedia' 'Jayashree Mohan' 'Ashish Panwar'\n 'Nipun Kwatra' 'Bhargav Gulavani' 'Ramachandran Ramjee' 'Alexey Tumanov']" ]
null
null
2405.05467
null
null
http://arxiv.org/pdf/2405.05467v1
2024-05-08T23:50:54Z
2024-05-08T23:50:54Z
AFEN: Respiratory Disease Classification using Ensemble Learning
We present AFEN (Audio Feature Ensemble Learning), a model that leverages Convolutional Neural Networks (CNN) and XGBoost in an ensemble learning fashion to perform state-of-the-art audio classification for a range of respiratory diseases. We use a meticulously selected mix of audio features which provide the salient attributes of the data and allow for accurate classification. The extracted features are then used as input to two separate classifiers: 1) a multi-feature CNN classifier and 2) an XGBoost classifier. The outputs of the two models are then fused with the use of soft voting. Thus, by exploiting ensemble learning, we achieve increased robustness and accuracy. We evaluate the performance of the model on a database of 920 respiratory sounds, which undergoes data augmentation techniques to increase the diversity of the data and generalizability of the model. We empirically verify that AFEN sets a new state-of-the-art using Precision and Recall as metrics, while decreasing training time by 60%.
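Soft voting here means averaging the two classifiers' class-probability outputs before taking the argmax. A minimal sketch with hypothetical probabilities and weights:

```python
import numpy as np

def soft_vote(p_cnn, p_xgb, w_cnn=0.5, w_xgb=0.5):
    """Fuse two classifiers' class-probability outputs by a weighted
    average (soft voting), then pick the highest-probability class."""
    fused = w_cnn * p_cnn + w_xgb * p_xgb
    return fused, fused.argmax(axis=1)

# Example: probabilities from the two branches for 3 samples, 4 classes.
p1 = np.array([[0.7, 0.1, 0.1, 0.1],
               [0.2, 0.5, 0.2, 0.1],
               [0.25, 0.25, 0.25, 0.25]])
p2 = np.array([[0.6, 0.2, 0.1, 0.1],
               [0.1, 0.7, 0.1, 0.1],
               [0.4, 0.2, 0.2, 0.2]])
probs, preds = soft_vote(p1, p2)  # preds -> [0, 1, 0]
```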
[ "['Rahul Nadkarni' 'Emmanouil Nikolakakis' 'Razvan Marinescu']" ]
null
null
2405.05468
null
null
http://arxiv.org/pdf/2405.05468v1
2024-05-08T23:52:37Z
2024-05-08T23:52:37Z
Model-Free Robust $φ$-Divergence Reinforcement Learning Using Both Offline and Online Data
The robust $phi$-regularized Markov Decision Process (RRMDP) framework focuses on designing control policies that are robust against parameter uncertainties due to mismatches between the simulator (nominal) model and real-world settings. This work makes two important contributions. First, we propose a model-free algorithm called Robust $phi$-regularized fitted Q-iteration (RPQ) for learning an $epsilon$-optimal robust policy that uses only the historical data collected by rolling out a behavior policy (with robust exploratory requirement) on the nominal model. To the best of our knowledge, we provide the first unified analysis for a class of $phi$-divergences achieving robust optimal policies in high-dimensional systems with general function approximation. Second, we introduce the hybrid robust $phi$-regularized reinforcement learning framework to learn an optimal robust policy using both historical data and online sampling. Towards this framework, we propose a model-free algorithm called Hybrid robust Total-variation-regularized Q-iteration (HyTQ: pronounced height-Q). To the best of our knowledge, we provide the first improved out-of-data-distribution assumption in large-scale problems with general function approximation under the hybrid robust $phi$-regularized reinforcement learning framework. Finally, we provide theoretical guarantees on the performance of the learned policies of our algorithms on systems with arbitrary large state space.
[ "['Kishan Panaganti' 'Adam Wierman' 'Eric Mazumdar']" ]
null
null
2405.05480
null
null
http://arxiv.org/pdf/2405.05480v2
2024-06-28T00:05:14Z
2024-05-09T00:37:56Z
FloorSet -- a VLSI Floorplanning Dataset with Design Constraints of Real-World SoCs
Floorplanning for systems-on-a-chip (SoCs) and its sub-systems is a crucial and non-trivial step of the physical design flow. It represents a difficult combinatorial optimization problem. A typical large scale SoC with 120 partitions generates a search-space of nearly 10E250. As novel machine learning (ML) approaches emerge to tackle such problems, there is a growing need for a modern benchmark that comprises a large training dataset and performance metrics that better reflect real-world constraints and objectives compared to existing benchmarks. To address this need, we present FloorSet -- two comprehensive datasets of synthetic fixed-outline floorplan layouts that reflect the distribution of real SoCs. Each dataset has 1M training samples and 100 test samples where each sample is a synthetic floor-plan. FloorSet-Prime comprises fully-abutted rectilinear partitions and near-optimal wire-length. A simplified dataset that reflects early design phases, FloorSet-Lite comprises rectangular partitions, with under 5 percent white-space and near-optimal wire-length. Both datasets define hard constraints seen in modern design flows such as shape constraints, edge-affinity, grouping constraints, and pre-placement constraints. FloorSet is intended to spur fundamental research on large-scale constrained optimization problems. Crucially, FloorSet alleviates the core issue of reproducibility in modern ML driven solutions to such problems. FloorSet is available as an open-source repository for the research community.
[ "['Uday Mallappa' 'Hesham Mostafa' 'Mikhail Galkin' 'Mariano Phielipp'\n 'Somdeb Majumdar']" ]
null
null
2405.05485
null
null
http://arxiv.org/pdf/2405.05485v1
2024-05-09T01:04:34Z
2024-05-09T01:04:34Z
Variance Control for Black Box Variational Inference Using The James-Stein Estimator
Black Box Variational Inference is a promising framework in a succession of recent efforts to make Variational Inference more ``black box". However, in its basic version it either fails to converge due to instability or requires fine-tuning of the update steps prior to execution, which hinders it from being completely general purpose. We propose a method for regulating its parameter updates by reframing stochastic gradient ascent as a multivariate estimation problem. We examine the properties of the James-Stein estimator as a replacement for the arithmetic mean of Monte Carlo estimates of the gradient of the evidence lower bound. The proposed method provides relatively weaker variance reduction than Rao-Blackwellization, but offers a tradeoff of being simpler and requiring no fine-tuning on the part of the analyst. Experiments on benchmark datasets also demonstrate performance at par with or better than the Rao-Blackwellized approach in terms of model fit and time to convergence.
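The core substitution is easy to state: given n Monte Carlo samples of a d-dimensional ELBO gradient, replace their arithmetic mean with a James-Stein shrinkage of it. Below is a standard positive-part James-Stein estimator shrinking toward zero (valid for d > 2); the paper's exact variant may differ.

```python
import numpy as np

def james_stein_mean(samples):
    """Positive-part James-Stein shrinkage of the arithmetic mean of
    Monte Carlo gradient samples, shrinking toward the origin."""
    n, d = samples.shape  # requires d > 2 for the classical result
    xbar = samples.mean(axis=0)
    # Variance of the mean, pooled across coordinates.
    s2 = samples.var(axis=0, ddof=1).mean() / n
    shrink = max(0.0, 1.0 - (d - 2) * s2 / np.dot(xbar, xbar))
    return shrink * xbar

# Example: 10 noisy estimates of a 50-dimensional gradient; the result is
# a drop-in replacement for samples.mean(axis=0) in the SGA update.
g = np.random.default_rng(2).normal(0.1, 1.0, size=(10, 50))
g_js = james_stein_mean(g)
```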
[ "['Dominic B. Dayta']" ]
null
null
2405.05492
null
null
http://arxiv.org/pdf/2405.05492v1
2024-05-09T01:38:38Z
2024-05-09T01:38:38Z
A logifold structure on measure space
In this paper, we develop a local-to-global and measure-theoretic approach to understanding datasets. The idea is to take network models with restricted domains as local charts of datasets. We develop the mathematical foundations for these structures, and show in experiments how they can be used to find fuzzy domains and to improve accuracy in data classification problems.
[ "['Inkee Jung' 'Siu-Cheong Lau']" ]
null
null
2405.05499
null
null
http://arxiv.org/pdf/2405.05499v2
2024-05-14T07:52:01Z
2024-05-09T02:11:01Z
Multi-Scale Dilated Convolution Network for Long-Term Time Series Forecasting
Accurate forecasting of long-term time series has important applications for decision making and planning. However, it remains challenging to capture long-term dependencies in time series data. To better extract long-term dependencies, we propose the Multi-Scale Dilated Convolution Network (MSDCN), a method that utilizes a shallow dilated convolution architecture to capture the period and trend characteristics of long time series. We design different convolution blocks with exponentially growing dilations and varying kernel sizes to sample time series data at different scales. Furthermore, we utilize a traditional autoregressive model to capture the linear relationships within the data. To validate the effectiveness of the proposed approach, we conduct experiments on eight challenging long-term time series forecasting benchmark datasets. The experimental results show that our approach outperforms the prior state-of-the-art approaches and shows significant inference speed improvements compared to several strong baseline methods.
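A minimal sketch of the kind of block described: stacked 1-D convolutions whose dilations grow exponentially, so a shallow network sees a long history. Channel and layer counts below are hypothetical, not the released MSDCN architecture.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Stack of 1-D convolutions with exponentially growing dilations;
    each layer doubles the dilation, so the receptive field grows fast
    while the network stays shallow."""
    def __init__(self, channels=32, kernel_size=3, n_layers=4):
        super().__init__()
        layers = []
        for i in range(n_layers):
            d = 2 ** i  # dilation 1, 2, 4, 8, ...
            layers += [nn.Conv1d(channels, channels, kernel_size,
                                 dilation=d,
                                 padding=d * (kernel_size - 1) // 2),
                       nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):            # x: (batch, channels, time)
        return self.net(x)           # length-preserving

# Example: a 720-step lookback window.
y = DilatedBlock()(torch.randn(8, 32, 720))  # -> (8, 32, 720)
```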
[ "['Feifei Li' 'Suhan Guo' 'Feng Han' 'Jian Zhao' 'Furao Shen']" ]
null
null
2405.05502
null
null
http://arxiv.org/pdf/2405.05502v1
2024-05-09T02:16:50Z
2024-05-09T02:16:50Z
Towards Accurate and Robust Architectures via Neural Architecture Search
To defend deep neural networks from adversarial attacks, adversarial training has been drawing increasing attention for its effectiveness. However, the accuracy and robustness resulting from adversarial training are limited by the architecture, because adversarial training improves accuracy and robustness by adjusting the weight connections affiliated to the architecture. In this work, we propose ARNAS to search for accurate and robust architectures for adversarial training. First, we design an accurate and robust search space, in which the placement of the cells and the proportional relationship of the filter numbers are carefully determined. With this design, the architectures can obtain both accuracy and robustness by deploying accurate and robust structures to their sensitive positions, respectively. Then we propose a differentiable multi-objective search strategy, performing gradient descent towards directions that are beneficial for both natural loss and adversarial loss, so that accuracy and robustness can be guaranteed at the same time. We conduct comprehensive experiments in terms of white-box attacks, black-box attacks, and transferability. Experimental results show that the searched architecture has the strongest robustness with competitive accuracy, and breaks the traditional idea that NAS-based architectures cannot transfer well to complex tasks in robustness scenarios. By analyzing outstanding architectures searched, we also conclude that accurate and robust neural architectures tend to deploy different structures near the input and output, which has great practical significance for both hand-crafting and automatic design of accurate and robust architectures.
[ "['Yuwei Ou' 'Yuqi Feng' 'Yanan Sun']" ]
null
null
2405.05512
null
null
http://arxiv.org/pdf/2405.05512v3
2024-07-02T14:14:41Z
2024-05-09T02:41:42Z
Characteristic Learning for Provable One Step Generation
We propose the characteristic generator, a novel one-step generative model that combines the efficiency of sampling in Generative Adversarial Networks (GANs) with the stable performance of flow-based models. Our model is driven by characteristics, along which the probability density transport can be described by ordinary differential equations (ODEs). Specifically, we estimate the velocity field through nonparametric regression and use the Euler method to solve the probability flow ODE, generating a series of discrete approximations to the characteristics. We then use a deep neural network to fit these characteristics, ensuring a one-step mapping that effectively pushes the prior distribution towards the target distribution. On the theoretical side, we analyze the errors in velocity matching, Euler discretization, and characteristic fitting to establish a non-asymptotic convergence rate for the characteristic generator in 2-Wasserstein distance. To the best of our knowledge, this is the first thorough analysis for simulation-free one-step generative models. Additionally, our analysis refines the error analysis of flow-based generative models in prior works. We apply our method on both synthetic and real datasets, and the results demonstrate that the characteristic generator achieves high generation quality with just a single evaluation of the neural network.
[ "['Zhao Ding' 'Chenguang Duan' 'Yuling Jiao' 'Ruoxuan Li'\n 'Jerry Zhijian Yang' 'Pingwen Zhang']" ]
null
null
2405.05520
null
null
http://arxiv.org/pdf/2405.05520v1
2024-05-09T03:19:19Z
2024-05-09T03:19:19Z
Continuous max-flow augmentation of self-supervised few-shot learning on SPECT left ventricles
Single-Photon Emission Computed Tomography (SPECT) left ventricular assessment protocols are important for detecting ischemia in high-risk patients. To quantitatively measure myocardial function, clinicians depend on commercially available solutions to segment and reorient the left ventricle (LV) for evaluation. Because these solutions are based on large databases of normal cases, their segmentation performance, together with their high price, can hinder the availability of reliable and precise localization of the LV delineation. To overcome the aforementioned shortcomings, this paper aims to give a recipe for diagnostic centers as well as for clinics to automatically segment the myocardium based on small and low-quality labels on reconstructed SPECT, complete field-of-view (FOV) volumes. A combination of Continuous Max-Flow (CMF) with prior shape information is developed to augment the 3D U-Net self-supervised learning (SSL) approach on various geometries of SPECT apparatus. Experimental results on the acquired dataset have shown a 5-10% increase in quantitative metrics over previous State-of-the-Art (SOTA) solutions, suggesting a plausible way to tackle the few-shot SSL problem on high-noise SPECT cardiac datasets.
[ "['Ádám István Szűcs' 'Béla Kári' 'Oszkár Pártos']" ]
null
null
2405.05521
null
null
http://arxiv.org/pdf/2405.05521v1
2024-05-09T03:19:20Z
2024-05-09T03:19:20Z
Machine Learning for Scalable and Optimal Load Shedding Under Power System Contingency
Prompt and effective corrective actions in response to unexpected contingencies are crucial for improving power system resilience and preventing cascading blackouts. The optimal load shedding (OLS) accounting for network limits has the potential to address the diverse system-wide impacts of contingency scenarios as compared to traditional local schemes. However, due to the fast cascading propagation of initial contingencies, real-time OLS solutions are challenging to attain in large systems with high computation and communication needs. In this paper, we propose a decentralized design that leverages offline training of a neural network (NN) model for individual load centers to autonomously construct the OLS solutions from locally available measurements. Our learning-for-OLS approach can greatly reduce the computation and communication needs during online emergency responses, thus preventing the cascading propagation of contingencies for enhanced power grid resilience. Numerical studies on both the IEEE 118-bus system and a synthetic Texas 2000-bus system have demonstrated the efficiency and effectiveness of our scalable OLS learning design for timely power system emergency operations.
[ "['Yuqi Zhou' 'Hao Zhu']" ]
null
null
2405.05525
null
null
http://arxiv.org/pdf/2405.05525v1
2024-05-09T03:28:16Z
2024-05-09T03:28:16Z
Ditto: Quantization-aware Secure Inference of Transformers upon MPC
Due to the rising privacy concerns on sensitive client data and trained models like Transformers, secure multi-party computation (MPC) techniques are employed to enable secure inference despite attendant overhead. Existing works attempt to reduce the overhead using more MPC-friendly non-linear function approximations. However, the integration of quantization widely used in plaintext inference into the MPC domain remains unclear. To bridge this gap, we propose the framework named Ditto to enable more efficient quantization-aware secure Transformer inference. Concretely, we first incorporate an MPC-friendly quantization into Transformer inference and employ a quantization-aware distillation procedure to maintain the model utility. Then, we propose novel MPC primitives to support the type conversions that are essential in quantization and implement the quantization-aware MPC execution of secure quantized inference. This approach significantly decreases both computation and communication overhead, leading to improvements in overall efficiency. We conduct extensive experiments on Bert and GPT2 models to evaluate the performance of Ditto. The results demonstrate that Ditto is about $3.14sim 4.40times$ faster than MPCFormer (ICLR 2023) and $1.44sim 2.35times$ faster than the state-of-the-art work PUMA with negligible utility degradation.
[ "['Haoqi Wu' 'Wenjing Fang' 'Yancheng Zheng' 'Junming Ma' 'Jin Tan'\n 'Yinggui Wang' 'Lei Wang']" ]
null
null
2405.05545
null
null
http://arxiv.org/pdf/2405.05545v1
2024-05-09T05:08:30Z
2024-05-09T05:08:30Z
Deep Hierarchical Graph Alignment Kernels
Typical R-convolution graph kernels invoke the kernel functions that decompose graphs into non-isomorphic substructures and compare them. However, overlooking implicit similarities and topological position information between those substructures limits their performances. In this paper, we introduce Deep Hierarchical Graph Alignment Kernels (DHGAK) to resolve this problem. Specifically, the relational substructures are hierarchically aligned to cluster distributions in their deep embedding space. The substructures belonging to the same cluster are assigned the same feature map in the Reproducing Kernel Hilbert Space (RKHS), where graph feature maps are derived by kernel mean embedding. Theoretical analysis guarantees that DHGAK is positive semi-definite and has linear separability in the RKHS. Comparison with state-of-the-art graph kernels on various benchmark datasets demonstrates the effectiveness and efficiency of DHGAK. The code is available at Github (https://github.com/EWesternRa/DHGAK).
[ "['Shuhao Tang' 'Hao Tian' 'Xiaofeng Cao' 'Wei Ye']" ]
null
null
2405.05564
null
null
http://arxiv.org/pdf/2405.05564v1
2024-05-09T05:51:33Z
2024-05-09T05:51:33Z
Joint Edge Optimization Deep Unfolding Network for Accelerated MRI Reconstruction
Magnetic Resonance Imaging (MRI) is a widely used imaging technique; however, it has the limitation of long scanning times. Though previous model-based and learning-based MRI reconstruction methods have shown promising performance, most of them have not fully utilized the edge prior of MR images, and there is still much room for improvement. In this paper, we build a joint edge optimization model that not only incorporates individual regularizers specific to both the MR image and the edges, but also enforces a co-regularizer to effectively establish a stronger correlation between them. Specifically, the edge information is defined through a non-edge probability map to guide the image reconstruction during the optimization process. Meanwhile, the regularizers pertaining to images and edges are incorporated into a deep unfolding network to automatically learn their respective inherent a-priori information. Numerical experiments, consisting of multi-coil and single-coil MRI data with different sampling schemes at a variety of sampling factors, demonstrate that the proposed method outperforms other compared methods.
[ "['Yue Cai' 'Yu Luo' 'Jie Ling' 'Shun Yao']" ]
null
null
2405.05587
null
null
http://arxiv.org/pdf/2405.05587v1
2024-05-09T07:23:37Z
2024-05-09T07:23:37Z
Navigate Beyond Shortcuts: Debiased Learning through the Lens of Neural Collapse
Recent studies have noted an intriguing phenomenon termed Neural Collapse, that is, when the neural networks establish the right correlation between feature spaces and the training targets, their last-layer features, together with the classifier weights, will collapse into a stable and symmetric structure. In this paper, we extend the investigation of Neural Collapse to the biased datasets with imbalanced attributes. We observe that models will easily fall into the pitfall of shortcut learning and form a biased, non-collapsed feature space at the early period of training, which is hard to reverse and limits the generalization capability. To tackle the root cause of biased classification, we follow the recent inspiration of prime training, and propose an avoid-shortcut learning framework without additional training complexity. With well-designed shortcut primes based on Neural Collapse structure, the models are encouraged to skip the pursuit of simple shortcuts and naturally capture the intrinsic correlations. Experimental results demonstrate that our method induces better convergence properties during training, and achieves state-of-the-art generalization performance on both synthetic and real-world biased datasets.
[ "['Yining Wang' 'Junjie Sun' 'Chenyue Wang' 'Mi Zhang' 'Min Yang']" ]
null
null
2405.05588
null
null
http://arxiv.org/pdf/2405.05588v1
2024-05-09T07:24:28Z
2024-05-09T07:24:28Z
Model Inversion Robustness: Can Transfer Learning Help?
Model Inversion (MI) attacks aim to reconstruct private training data by abusing access to machine learning models. Contemporary MI attacks have achieved impressive attack performance, posing serious threats to privacy. Meanwhile, all existing MI defense methods rely on regularization that is in direct conflict with the training objective, resulting in noticeable degradation in model utility. In this work, we take a different perspective, and propose a novel and simple Transfer Learning-based Defense against Model Inversion (TL-DMI) to render MI-robust models. Particularly, by leveraging TL, we limit the number of layers encoding sensitive information from private training dataset, thereby degrading the performance of MI attack. We conduct an analysis using Fisher Information to justify our method. Our defense is remarkably simple to implement. Without bells and whistles, we show in extensive experiments that TL-DMI achieves state-of-the-art (SOTA) MI robustness. Our code, pre-trained models, demo and inverted data are available at: https://hosytuyen.github.io/projects/TL-DMI
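The defense idea reduces, in its simplest reading, to fine-tuning only the last layers of a publicly pre-trained backbone on the private data, so earlier layers never encode the private dataset. An illustrative sketch in that spirit; the backbone and the choice of unfrozen layers are our assumptions, not the authors' released configuration.

```python
import torchvision.models as models

def tl_style_model():
    """Illustrative transfer-learning defense: take a backbone pre-trained
    on public data and make only the final residual stage and classifier
    head trainable, limiting how many layers can encode private data."""
    model = models.resnet18(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False            # freeze everything...
    for module in (model.layer4, model.fc):
        for p in module.parameters():      # ...except the last block + head
            p.requires_grad = True
    return model

# Example: inspect how few parameters remain trainable.
model = tl_style_model()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
```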
[ "['Sy-Tuyen Ho' 'Koh Jun Hao' 'Keshigeyan Chandrasegaran' 'Ngoc-Bao Nguyen'\n 'Ngai-Man Cheung']" ]
null
null
2405.05590
null
null
http://arxiv.org/pdf/2405.05590v1
2024-05-09T07:25:38Z
2024-05-09T07:25:38Z
TroLLoc: Logic Locking and Layout Hardening for IC Security Closure against Hardware Trojans
Due to cost benefits, supply chains of integrated circuits (ICs) are largely outsourced nowadays. However, passing ICs through various third-party providers gives rise to many security threats, like piracy of IC intellectual property or insertion of hardware Trojans, i.e., malicious circuit modifications. In this work, we proactively and systematically protect the physical layouts of ICs against post-design insertion of Trojans. Toward that end, we propose TroLLoc, a novel scheme for IC security closure that employs, for the first time, logic locking and layout hardening in unison. TroLLoc is fully integrated into a commercial-grade design flow, and TroLLoc is shown to be effective, efficient, and robust. Our work provides in-depth layout and security analysis considering the challenging benchmarks of the ISPD'22/23 contests for security closure. We show that TroLLoc successfully renders layouts resilient, with reasonable overheads, against (i) general prospects for Trojan insertion as in the ISPD'22 contest, (ii) actual Trojan insertion as in the ISPD'23 contest, and (iii) potential second-order attacks where adversaries would first (i.e., before Trojan insertion) try to bypass the locking defense, e.g., using advanced machine learning attacks. Finally, we release all our artifacts for independent verification [2].
[ "['Fangzhou Wang' 'Qijing Wang' 'Lilas Alrahis' 'Bangqi Fu' 'Shui Jiang'\n 'Xiaopeng Zhang' 'Ozgur Sinanoglu' 'Tsung-Yi Ho' 'Evangeline F. Y. Young'\n 'Johann Knechtel']" ]
null
null
2405.05596
null
null
http://arxiv.org/pdf/2405.05596v1
2024-05-09T07:36:08Z
2024-05-09T07:36:08Z
Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content
Most modern recommendation algorithms are data-driven: they generate personalized recommendations by observing users' past behaviors. A common assumption in recommendation is that how a user interacts with a piece of content (e.g., whether they choose to "like" it) is a reflection of the content, but not of the algorithm that generated it. Although this assumption is convenient, it fails to capture user strategization: that users may attempt to shape their future recommendations by adapting their behavior to the recommendation algorithm. In this work, we test for user strategization by conducting a lab experiment and survey. To capture strategization, we adopt a model in which strategic users select their engagement behavior based not only on the content, but also on how their behavior affects downstream recommendations. Using a custom music player that we built, we study how users respond to different information about their recommendation algorithm as well as to different incentives about how their actions affect downstream outcomes. We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes." For example, participants who are told that the algorithm mainly pays attention to "likes" and "dislikes" use those functions 1.9x more than participants told that the algorithm mainly pays attention to dwell time. A close analysis of participant behavior (e.g., in response to our incentive conditions) rules out experimenter demand as the main driver of these trends. Further, in our post-experiment survey, nearly half of participants self-report strategizing "in the wild," with some stating that they ignore content they actually like to avoid over-recommendation of that content in the future. Together, our findings suggest that user strategization is common and that platforms cannot ignore the effect of their algorithms on user behavior.
[ "['Sarah H. Cen' 'Andrew Ilyas' 'Jennifer Allen' 'Hannah Li'\n 'Aleksander Madry']" ]
null
null
2405.05606
null
null
http://arxiv.org/abs/2405.05606v2
2024-05-13T07:41:46Z
2024-05-09T07:55:52Z
Optimizing E-commerce Search: Toward a Generalizable and Rank-Consistent Pre-Ranking Model
In large e-commerce platforms, search systems are typically composed of a series of modules, including recall, pre-ranking, and ranking phases. The pre-ranking phase, serving as a lightweight module, is crucial for filtering out the bulk of products in advance for the downstream ranking module. Industrial efforts on optimizing the pre-ranking model have predominantly focused on enhancing ranking consistency, model structure, and generalization towards long-tail items. Beyond these optimizations, meeting the system performance requirements presents a significant challenge. Contrasting with existing industry works, we propose a novel method: a Generalizable and RAnk-ConsistEnt Pre-Ranking Model (GRACE), which achieves: 1) Ranking consistency by introducing multiple binary classification tasks that predict whether a product is within the top-k results as estimated by the ranking model, which facilitates the addition of learning objectives on common point-wise ranking models; 2) Generalizability through contrastive learning of representation for all products by pre-training on a subset of ranking product embeddings; 3) Ease of implementation in feature construction and online deployment. Our extensive experiments demonstrate significant improvements in both offline metrics and online A/B test: a 0.75% increase in AUC and a 1.28% increase in CVR.
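The ranking-consistency objective needs, for each cutoff k, a binary target indicating whether the downstream ranking model would place a product in its top-k for the query. A minimal sketch of that label construction (the cutoffs are hypothetical):

```python
import numpy as np

def topk_consistency_labels(ranking_scores, ks=(10, 50, 100)):
    """For each cutoff k, build the binary target 'is this product in the
    ranking model's top-k', giving one classification task per k that a
    point-wise pre-ranking model can be trained on."""
    order = np.argsort(-ranking_scores)        # descending by score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(ranking_scores))
    return {k: (ranks < k).astype(int) for k in ks}

# Example: 500 candidate products scored by the ranking model.
labels = topk_consistency_labels(np.random.rand(500))
```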
[ "['Enqiang Xu' 'Yiming Qiu' 'Junyang Bai' 'Ping Zhang' 'Dadong Miao'\n 'Songlin Wang' 'Guoyu Tang' 'Lin Liu' 'Mingming Li']" ]
null
null
2405.05610
null
null
http://arxiv.org/pdf/2405.05610v1
2024-05-09T08:15:21Z
2024-05-09T08:15:21Z
Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM
Large language models (LLMs) have achieved remarkable performance in various natural language processing tasks, especially in dialogue systems. However, LLMs may also pose security and ethical threats, especially in multi-round conversations, where large models are more easily guided by contextual content into harmful or biased responses. In this paper, we present a novel method to attack LLMs in multi-turn dialogues, called CoA (Chain of Attack). CoA is a semantic-driven contextual multi-turn attack method that adaptively adjusts the attack policy through contextual feedback and semantic relevance over multiple turns of dialogue with a large model, resulting in the model producing unreasonable or harmful content. We evaluate CoA on different LLMs and datasets, and show that it can effectively expose the vulnerabilities of LLMs and outperform existing attack methods. Our work provides a new perspective and tool for attacking and defending LLMs, and contributes to the security and ethical assessment of dialogue systems.
[ "['Xikang Yang' 'Xuehai Tang' 'Songlin Hu' 'Jizhong Han']" ]
null
null
2405.05611
null
null
http://arxiv.org/pdf/2405.05611v1
2024-05-09T08:15:31Z
2024-05-09T08:15:31Z
Privacy-Preserving Edge Federated Learning for Intelligent Mobile-Health Systems
Machine Learning (ML) algorithms are generally designed for scenarios in which all data is stored in one data center, where the training is performed. However, in many applications, e.g., in the healthcare domain, the training data is distributed among several entities, e.g., different hospitals or patients' mobile devices/sensors. At the same time, transferring the data to a central location for learning is certainly not an option, due to privacy concerns and legal issues, and in certain cases, because of the communication and computation overheads. Federated Learning (FL) is the state-of-the-art collaborative ML approach for training an ML model across multiple parties holding local data samples, without sharing them. However, enabling learning from distributed data over such edge Internet of Things (IoT) systems (e.g., mobile-health and wearable technologies, involving sensitive personal/medical data) in a privacy-preserving fashion presents a major challenge mainly due to their stringent resource constraints, i.e., limited computing capacity, communication bandwidth, memory storage, and battery lifetime. In this paper, we propose a privacy-preserving edge FL framework for resource-constrained mobile-health and wearable technologies over the IoT infrastructure. We evaluate our proposed framework extensively and provide the implementation of our technique on Amazon's AWS cloud platform based on the seizure detection application in epilepsy monitoring using wearable technologies.
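At the center of any FL framework sits the server-side aggregation step; the canonical choice is FedAvg, a sample-size-weighted average of client parameters. A minimal sketch of that generic baseline (not necessarily the paper's exact aggregation rule):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client model parameters by a sample-size-weighted
    average (FedAvg). Only parameters leave each device; the raw sensor
    or medical data never does."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for a, w in zip(agg, weights):
            a += (n / total) * w
    return agg

# Example: three wearables with different amounts of local data.
clients = [[np.random.rand(4, 4), np.random.rand(4)] for _ in range(3)]
global_model = fedavg(clients, client_sizes=[120, 80, 200])
```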
[ "['Amin Aminifar' 'Matin Shokri' 'Amir Aminifar']" ]
null
null
2405.05615
null
null
http://arxiv.org/pdf/2405.05615v1
2024-05-09T08:23:20Z
2024-05-09T08:23:20Z
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning
Current solutions for efficiently constructing large vision-language (VL) models follow a two-step paradigm: projecting the output of pre-trained vision encoders to the input space of pre-trained language models as visual prompts; and then transferring the models to downstream VL tasks via end-to-end parameter-efficient fine-tuning (PEFT). However, this paradigm still exhibits inefficiency since it significantly increases the input length of the language models. In this paper, in contrast to integrating visual prompts into inputs, we regard visual prompts as additional knowledge that facilitates language models in addressing tasks associated with visual information. Motivated by the finding that Feed-Forward Network (FFN) of language models acts as "key-value memory", we introduce a novel approach termed memory-space visual prompting (MemVP), wherein visual prompts are concatenated with the weights of FFN for visual knowledge injection. Experimental results across various VL tasks and language models reveal that MemVP significantly reduces the training time and inference latency of the finetuned VL models and surpasses the performance of previous PEFT methods. Code: https://github.com/JieShibo/MemVP
[ "['Shibo Jie' 'Yehui Tang' 'Ning Ding' 'Zhi-Hong Deng' 'Kai Han'\n 'Yunhe Wang']" ]
null
null
2405.05618
null
null
http://arxiv.org/pdf/2405.05618v1
2024-05-09T08:32:55Z
2024-05-09T08:32:55Z
An Automatic Prompt Generation System for Tabular Data Tasks
Efficient processing of tabular data is important in various industries, especially when working with datasets containing a large number of columns. Large language models (LLMs) have demonstrated their ability on several tasks through carefully crafted prompts. However, creating effective prompts for tabular datasets is challenging due to the structured nature of the data and the need to manage numerous columns. This paper presents an innovative auto-prompt generation system suitable for multiple LLMs, with minimal training. It proposes two novel methods: 1) a reinforcement learning-based algorithm for identifying and sequencing task-relevant columns; and 2) a cell-level, similarity-based approach for enhancing few-shot example selection. Our approach has been extensively tested across 66 datasets, demonstrating improved performance in three downstream tasks: data imputation, error detection, and entity matching, using two distinct LLMs: Google flan-t5-xxl and Mixtral 8x7B.
[ "['Ashlesha Akella' 'Abhijit Manatkar' 'Brij Chavda' 'Hima Patel']" ]
null
null
2405.05619
null
null
http://arxiv.org/pdf/2405.05619v3
2024-05-16T05:31:45Z
2024-05-09T08:35:21Z
Rectified Gaussian kernel multi-view k-means clustering
In this paper, we present two new variants of multi-view k-means (MVKM) algorithms to address multi-view data. The general idea is to define the distance between the $h$-th view data points $x_i^h$ and the $h$-th view cluster centers $a_k^h$ in a manner different from the usual centroid-based approach. Unlike other methods, our proposed methods learn the multi-view data by calculating the similarity using the Euclidean norm in the space of the Gaussian kernel, yielding multi-view k-means with exponent distance (MVKM-ED). By simultaneously aligning the stabilizer parameter $p$ and kernel coefficients $beta^h$, the compression of the Gaussian-kernel based weighted distance in Euclidean norm reduces the sensitivity of MVKM-ED. We refer to this extension as the Gaussian-kernel multi-view k-means (GKMVKM) clustering algorithm. Numerical evaluation on five real-world multi-view datasets demonstrates the robustness and efficiency of our proposed MVKM-ED and GKMVKM approaches.
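The exponent distance can be read as a distance in the Gaussian-kernel feature space, d(x, a) = 1 - exp(-beta ||x - a||^2), up to scaling. A minimal single-view sketch of the assignment step; the full MVKM-ED objective also carries view weights and the stabilizer p, which are omitted here.

```python
import numpy as np

def kernel_distance(x, a, beta):
    """Exponent distance used in place of the squared Euclidean norm:
    1 - exp(-beta * ||x - a||^2), i.e. a (scaled) squared distance
    between the points' images in the Gaussian-kernel feature space."""
    return 1.0 - np.exp(-beta * np.sum((x - a) ** 2, axis=-1))

# Example: assign 100 points of one view to the nearest of 3 centers.
x = np.random.default_rng(3).normal(size=(100, 5))
centers = x[:3]
dists = np.stack([kernel_distance(x, c, beta=0.5) for c in centers], axis=1)
assign = dists.argmin(axis=1)
```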
[ "['Kristina P. Sinaga']" ]
null
null
2405.05630
null
null
http://arxiv.org/pdf/2405.05630v1
2024-05-09T09:08:09Z
2024-05-09T09:08:09Z
Policy Gradient with Active Importance Sampling
Importance sampling (IS) represents a fundamental technique for a large surge of off-policy reinforcement learning approaches. Policy gradient (PG) methods, in particular, significantly benefit from IS, enabling the effective reuse of previously collected samples, thus increasing sample efficiency. Classically, however, IS is employed in RL as a passive tool for re-weighting historical samples. By contrast, the statistical community employs IS as an active tool, combined with carefully chosen behavioral distributions, that allows the reduction of the estimate variance even below that of the sample mean. In this paper, we focus on this second setting by addressing the behavioral policy optimization (BPO) problem. We look for the best behavioral policy from which to collect samples to reduce the policy gradient variance as much as possible. We provide an iterative algorithm that alternates between the cross-entropy estimation of the minimum-variance behavioral policy and the actual policy optimization, leveraging defensive IS. We theoretically analyze such an algorithm, showing that it enjoys a convergence rate of order $O(epsilon^{-4})$ to a stationary point, but depending on a more favorable variance term than standard PG methods. We then provide a practical version that is numerically validated, showing the advantages in the policy gradient estimation variance and in the learning speed.
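Defensive IS caps the importance weights by sampling from a mixture of the target and behavioral policies. A minimal self-normalised sketch, assuming the logged actions were indeed drawn from the mixture q; the probabilities below are synthetic placeholders, and this is not the paper's full BPO algorithm.

```python
import numpy as np

def defensive_is_estimate(values, pi_probs, beh_probs, alpha=0.2):
    """Self-normalised IS with a defensive mixture: samples come from
    q = alpha * pi + (1 - alpha) * behavioral, so the weights pi / q are
    capped at 1 / alpha, which bounds the estimator's variance."""
    q = alpha * pi_probs + (1 - alpha) * beh_probs
    w = pi_probs / q
    return np.sum(w * values) / np.sum(w)

# Example: 100 logged returns with per-sample action probabilities under
# the target (pi) and behavioral policies.
rng = np.random.default_rng(5)
pi_p = rng.uniform(0.01, 1.0, 100)
beh_p = rng.uniform(0.01, 1.0, 100)
est = defensive_is_estimate(rng.normal(1.0, 0.5, 100), pi_p, beh_p)
```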
[ "['Matteo Papini' 'Giorgio Manganini' 'Alberto Maria Metelli'\n 'Marcello Restelli']" ]
null
null
2405.05638
null
null
http://arxiv.org/pdf/2405.05638v1
2024-05-09T09:27:18Z
2024-05-09T09:27:18Z
An Efficient Finite Difference Approximation via a Double Sample-Recycling Approach
Estimating stochastic gradients is pivotal in fields like service systems within operations research. The classical method for this estimation is the finite difference approximation, which entails generating samples at perturbed inputs. Nonetheless, practical challenges persist in determining the perturbation and obtaining an optimal finite difference estimator in the sense of possessing the smallest mean squared error (MSE). To tackle this problem, we propose a double sample-recycling approach in this paper. First, pilot samples are recycled to estimate the optimal perturbation. Second, these pilot samples are recycled again and new samples are generated at the estimated perturbation, leading to an efficient finite difference estimator. We analyze its bias, variance, and MSE. Our analyses demonstrate a reduction in asymptotic variance, and in some cases, a decrease in asymptotic bias, compared to the optimal finite difference estimator. Therefore, our proposed estimator consistently coincides with, or even outperforms, the optimal finite difference estimator. In numerical experiments, we apply the estimator in several examples, and the numerical results demonstrate its robustness as well as agreement with the theory presented, especially in the case of small sample sizes.
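A rough sketch of the pilot idea for a one-dimensional central difference: use pilot samples to estimate the noise variance and the third derivative, then pick the perturbation h that balances the squared bias, roughly (f'''(x) h^2 / 6)^2, against the variance, roughly sigma^2 / (2 n h^2). This is only our illustrative reading; it omits the paper's actual recycling of the pilot samples into the final estimator.

```python
import numpy as np

def central_fd(f, x, h, n, rng):
    """Central finite difference with n noisy evaluations per side."""
    up = np.mean([f(x + h, rng) for _ in range(n)])
    lo = np.mean([f(x - h, rng) for _ in range(n)])
    return (up - lo) / (2 * h)

def pilot_perturbation(f, x, h0, n_pilot, rng):
    """Crude pilot stage: estimate sigma^2 and f'''(x), then minimise the
    approximate MSE c1^2 h^4 + sigma^2 / (2 n h^2) with c1 = |f'''| / 6,
    giving h* = (sigma^2 / (4 n c1^2))^(1/6)."""
    ys = np.array([f(x, rng) for _ in range(n_pilot)])
    sigma2 = ys.var(ddof=1)
    m = lambda z: np.mean([f(z, rng) for _ in range(n_pilot)])
    # Third derivative via the standard wide 4-point stencil.
    d3 = (m(x + 2*h0) - 2*m(x + h0) + 2*m(x - h0) - m(x - 2*h0)) / (2 * h0**3)
    c1 = max(abs(d3) / 6.0, 1e-8)
    return (sigma2 / (4 * n_pilot * c1**2)) ** (1.0 / 6.0)

# Example: noisy cubic objective; true derivative at x = 1 is 5.
f = lambda x, rng: x**3 + x**2 + 0.1 * rng.normal()
rng = np.random.default_rng(4)
h = pilot_perturbation(f, 1.0, h0=0.5, n_pilot=50, rng=rng)
grad = central_fd(f, 1.0, h, n=50, rng=rng)
```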
[ "['Guo Liang' 'Guangwu Liu' 'Kun Zhang']" ]
null
null
2405.05646
null
null
http://arxiv.org/pdf/2405.05646v2
2024-05-28T07:03:49Z
2024-05-09T09:40:56Z
Outlier-robust Kalman Filtering through Generalised Bayes
We derive a novel, provably robust, and closed-form Bayesian update rule for online filtering in state-space models in the presence of outliers and misspecified measurement models. Our method combines generalised Bayesian inference with filtering methods such as the extended and ensemble Kalman filter. We use the former to show robustness and the latter to ensure computational efficiency in the case of nonlinear models. Our method matches or outperforms other robust filtering methods (such as those based on variational Bayes) at a much lower computational cost. We show this empirically on a range of filtering problems with outlier measurements, such as object tracking, state estimation in high-dimensional chaotic systems, and online learning of neural networks.
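To convey the flavour of a robustified Kalman update, here is an illustrative variant that down-weights a measurement as a function of its standardised residual, so a gross outlier barely moves the state. The specific weight function and its use are our assumptions, not the paper's closed-form generalised-Bayes rule.

```python
import numpy as np

def robust_kalman_update(m, P, y, H, R, c=2.0):
    """Kalman measurement update where the observation noise is inflated
    by an inverse-multiquadric weight of the standardised residual:
    w -> 1 for inliers, w -> 0 for outliers."""
    S = H @ P @ H.T + R
    resid = y - H @ m
    maha2 = float(resid @ np.linalg.solve(S, resid))
    w = (1.0 + maha2 / c**2) ** -0.5
    R_eff = R / max(w, 1e-8)               # suspect measurement -> huge noise
    S_eff = H @ P @ H.T + R_eff
    K = P @ H.T @ np.linalg.inv(S_eff)
    m_new = m + K @ resid
    P_new = (np.eye(len(m)) - K @ H) @ P
    return m_new, P_new

# Example: 2-D state observed directly, hit by one gross outlier.
m, P = np.zeros(2), np.eye(2)
H, R = np.eye(2), 0.1 * np.eye(2)
m, P = robust_kalman_update(m, P, y=np.array([50.0, -40.0]), H=H, R=R)
```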
[ "['Gerardo Duran-Martin' 'Matias Altamirano' 'Alexander Y. Shestopaloff'\n 'Leandro Sánchez-Betancourt' 'Jeremias Knoblauch' 'Matt Jones'\n 'François-Xavier Briol' 'Kevin Murphy']" ]
null
null
2405.05665
null
null
http://arxiv.org/pdf/2405.05665v1
2024-05-09T10:37:33Z
2024-05-09T10:37:33Z
SubGDiff: A Subgraph Diffusion Model to Improve Molecular Representation Learning
Molecular representation learning has shown great success in advancing AI-based drug discovery. The core of many recent works is based on the fact that the 3D geometric structure of molecules provides essential information about their physical and chemical characteristics. Recently, denoising diffusion probabilistic models have achieved impressive performance in 3D molecular representation learning. However, most existing molecular diffusion models treat each atom as an independent entity, overlooking the dependency among atoms within the molecular substructures. This paper introduces a novel approach that enhances molecular representation learning by incorporating substructural information within the diffusion process. We propose a novel diffusion model termed SubGDiff for involving the molecular subgraph information in diffusion. Specifically, SubGDiff adopts three vital techniques: i) subgraph prediction, ii) expectation state, and iii) k-step same subgraph diffusion, to enhance the perception of molecular substructure in the denoising network. Experimentally, extensive downstream tasks demonstrate the superior performance of our approach. The code is available at https://github.com/youjibiying/SubGDiff.
[ "['Jiying Zhang' 'Zijing Liu' 'Yu Wang' 'Yu Li']" ]
null
null
2405.05673
null
null
http://arxiv.org/pdf/2405.05673v1
2024-05-09T10:58:40Z
2024-05-09T10:58:40Z
Imprecise Multi-Armed Bandits
We introduce a novel multi-armed bandit framework, where each arm is associated with a fixed unknown credal set over the space of outcomes (which can be richer than just the reward). The arm-to-credal-set correspondence comes from a known class of hypotheses. We then define a notion of regret corresponding to the lower prevision defined by these credal sets. Equivalently, the setting can be regarded as a two-player zero-sum game, where, on each round, the agent chooses an arm and the adversary chooses the distribution over outcomes from a set of options associated with this arm. The regret is defined with respect to the value of the game. For certain natural hypothesis classes, loosely analogous to stochastic linear bandits (which are a special case of the resulting setting), we propose an algorithm and prove a corresponding upper bound on regret. We also prove lower bounds on regret for particular special cases.
[ "['Vanessa Kosoy']" ]
null
null
2405.05695
null
null
http://arxiv.org/pdf/2405.05695v1
2024-05-09T11:50:19Z
2024-05-09T11:50:19Z
Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost
We aim at exploiting additional auxiliary labels from an independent (auxiliary) task to boost the primary task performance which we focus on, while preserving a single task inference cost of the primary task. While most existing auxiliary learning methods are optimization-based relying on loss weights/gradients manipulation, our method is architecture-based with a flexible asymmetric structure for the primary and auxiliary tasks, which produces different networks for training and inference. Specifically, starting from two single task networks/branches (each representing a task), we propose a novel method with evolving networks where only primary-to-auxiliary links exist as the cross-task connections after convergence. These connections can be removed during the primary task inference, resulting in a single-task inference cost. We achieve this by formulating a Neural Architecture Search (NAS) problem, where we initialize bi-directional connections in the search space and guide the NAS optimization converging to an architecture with only the single-side primary-to-auxiliary connections. Moreover, our method can be incorporated with optimization-based auxiliary learning approaches. Extensive experiments with six tasks on NYU v2, CityScapes, and Taskonomy datasets using VGG, ResNet, and ViT backbones validate the promising performance. The codes are available at https://github.com/ethanygao/Aux-NAS.
[ "['Yuan Gao' 'Weizhong Zhang' 'Wenhan Luo' 'Lin Ma' 'Jin-Gang Yu'\n 'Gui-Song Xia' 'Jiayi Ma']" ]
null
null
2405.05714
null
null
http://arxiv.org/pdf/2405.05714v2
2024-07-02T07:06:15Z
2024-05-08T12:13:40Z
Estimating Noisy Class Posterior with Part-level Labels for Noisy Label Learning
In noisy label learning, estimating noisy class posteriors plays a fundamental role for developing consistent classifiers, as it forms the basis for estimating clean class posteriors and the transition matrix. Existing methods typically learn noisy class posteriors by training a classification model with noisy labels. However, when labels are incorrect, these models may be misled to overemphasize the feature parts that do not reflect the instance characteristics, resulting in significant errors in estimating noisy class posteriors. To address this issue, this paper proposes to augment the supervised information with part-level labels, encouraging the model to focus on and integrate richer information from various parts. Specifically, our method first partitions features into distinct parts by cropping instances, yielding part-level labels associated with these various parts. Subsequently, we introduce a novel single-to-multiple transition matrix to model the relationship between the noisy and part-level labels, which incorporates part-level labels into a classifier-consistent framework. Utilizing this framework with part-level labels, we can learn the noisy class posteriors more precisely by guiding the model to integrate information from various parts, ultimately improving the classification performance. Our method is theoretically sound, while experiments show that it is empirically effective in synthetic and real-world noisy benchmarks.
[ "['Rui Zhao' 'Bin Shi' 'Jianfei Ruan' 'Tianze Pan' 'Bo Dong']" ]
null
null
2405.05722
null
null
http://arxiv.org/pdf/2405.05722v3
2024-06-18T08:08:17Z
2024-05-09T12:34:45Z
A Framework of SO(3)-equivariant Non-linear Representation Learning and its Application to Electronic-Structure Hamiltonian Prediction
We present both a theoretical and a methodological framework that addresses a critical challenge in applying deep learning to physical systems: the reconciliation of non-linear expressiveness with SO(3)-equivariance in predictions of SO(3)-equivariant quantities. Inspired by covariant theory in physics, we address this problem by exploring the mathematical relationships between SO(3)-invariant and SO(3)-equivariant quantities and their representations. We first construct theoretical SO(3)-invariant quantities derived from the SO(3)-equivariant regression targets, and use these invariant quantities as supervisory labels to guide the learning of high-quality SO(3)-invariant features. Given that SO(3)-invariance is preserved under non-linear operations, the encoding process for invariant features can extensively utilize non-linear mappings, thereby fully capturing the non-linear patterns inherent in physical systems. Building on this foundation, we propose a gradient-based mechanism to induce SO(3)-equivariant encodings of various degrees from the learned SO(3)-invariant features. This mechanism can incorporate non-linear expressive capabilities into SO(3)-equivariant representations, while theoretically preserving their equivariant properties, as we prove. We apply our theory and method to electronic-structure Hamiltonian prediction tasks. Experimental results on eight benchmark databases covering multiple types of elements and challenging scenarios show dramatic breakthroughs over the state-of-the-art prediction accuracy, with improvements of up to 40% in predicting Hamiltonians and up to 76% in predicting downstream physical quantities such as occupied orbital energy. Our approach goes beyond handling physical systems and offers a promising general solution to the critical dilemma between equivariance and non-linear expressiveness for the deep learning paradigm.
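A toy sketch of the gradient mechanism (not the paper's architecture): gradients of an SO(3)-invariant scalar with respect to positions are automatically SO(3)-equivariant degree-1 vectors, which can be checked numerically. The invariant function and all names below are illustrative.

```python
# A toy sketch: the gradient of an SO(3)-invariant scalar w.r.t. positions is
# an SO(3)-equivariant (degree-1) vector field. The invariant is illustrative.
import torch

def invariant_energy(pos):
    # Nonlinear function of squared pairwise distances -> SO(3)-invariant scalar.
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    return torch.tanh(d2).sum()

pos = torch.randn(5, 3, requires_grad=True)
(vec,) = torch.autograd.grad(invariant_energy(pos), pos)   # equivariant vectors

# Numerical check: rotating the inputs rotates the gradient field accordingly.
q, _ = torch.linalg.qr(torch.randn(3, 3))
if torch.det(q) < 0:
    q = -q                                                 # proper rotation
pos_r = (pos.detach() @ q.T).requires_grad_(True)
(vec_r,) = torch.autograd.grad(invariant_energy(pos_r), pos_r)
print(torch.allclose(vec_r, vec @ q.T, atol=1e-5))
```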
[ "['Shi Yin' 'Xinyang Pan' 'Fengyan Wang' 'Feng Wu' 'Lixin He']" ]
null
null
2405.05733
null
null
http://arxiv.org/pdf/2405.05733v1
2024-05-09T12:50:16Z
2024-05-09T12:50:16Z
Batched Stochastic Bandit for Nondegenerate Functions
This paper studies batched bandit learning problems for nondegenerate functions. We introduce an algorithm that solves the batched bandit problem for nondegenerate functions near-optimally. More specifically, we introduce an algorithm, called Geometric Narrowing (GN), whose regret bound is of order $\widetilde{\mathcal{O}}(A_{+}^d \sqrt{T})$. In addition, GN only needs $\mathcal{O}(\log\log T)$ batches to achieve this regret. We also provide lower bound analysis for this problem. More specifically, we prove that over some (compact) doubling metric space of doubling dimension $d$: 1. For any policy $\pi$, there exists a problem instance on which $\pi$ admits a regret of order $\Omega(A_-^d \sqrt{T})$; 2. No policy can achieve a regret of order $A_-^d \sqrt{T}$ over all problem instances, using less than $\Omega(\log\log T)$ rounds of communications. Our lower bound analysis shows that the GN algorithm achieves near optimal regret with a minimal number of batches.
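For intuition only, here is a much simplified batched elimination loop over finitely many arms; Geometric Narrowing itself operates over doubling metric spaces with adaptively narrowed regions, and all names and constants below are hypothetical.

```python
# An illustrative batched elimination loop for finitely many arms (a crude
# stand-in for Geometric Narrowing; constants and names are hypothetical).
import numpy as np

def batched_elimination(means, T, n_batches, rng=np.random.default_rng(0)):
    K = len(means)
    active = list(range(K))
    pulls, sums = np.zeros(K), np.zeros(K)
    per_batch = T // n_batches
    for _ in range(n_batches):
        for _ in range(per_batch):
            a = rng.choice(active)                 # uniform over active arms
            sums[a] += rng.normal(means[a], 1.0)   # noisy reward
            pulls[a] += 1
        mu = sums[active] / np.maximum(pulls[active], 1)
        rad = np.sqrt(2 * np.log(T) / np.maximum(pulls[active], 1))
        keep = mu + rad >= (mu - rad).max()        # confidence-based elimination
        active = [a for a, k in zip(active, keep) if k]
    return active

print(batched_elimination(np.array([0.1, 0.5, 0.9]), T=3000, n_batches=3))
```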
[ "['Yu Liu' 'Yunlu Shu' 'Tianyu Wang']" ]
null
null
2405.05736
null
null
http://arxiv.org/pdf/2405.05736v1
2024-05-09T12:52:22Z
2024-05-09T12:52:22Z
Optimal Baseline Corrections for Off-Policy Contextual Bandits
The off-policy learning paradigm allows for recommender systems and general ranking applications to be framed as decision-making problems, where we aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric. With unbiasedness comes potentially high variance, and prevalent methods exist to reduce estimation variance. These methods typically make use of control variates, either additive (i.e., baseline corrections or doubly robust methods) or multiplicative (i.e., self-normalisation). Our work unifies these approaches by proposing a single framework built on their equivalence in learning scenarios. The foundation of our framework is the derivation of an equivalent baseline correction for all of the existing control variates. Consequently, our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it. This optimal estimator brings significantly improved performance in both evaluation and learning, and minimizes data requirements. Empirical observations corroborate our theoretical findings.
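The sketch below illustrates a baseline-corrected IPS estimator. The specific baseline shown is the classical second-moment-weighted choice used for REINFORCE-style estimators and merely stands in for the paper's variance-optimal closed form, which is not reproduced here; all names are illustrative.

```python
# A sketch of baseline-corrected IPS off-policy evaluation. The baseline is an
# illustrative classical choice, not the paper's variance-optimal correction.
import numpy as np

def baseline_corrected_ips(rewards, p_target, p_logging, beta=None):
    w = p_target / p_logging                           # importance weights
    if beta is None:
        # Second-moment-weighted baseline; estimating it from the same data
        # introduces a small finite-sample bias.
        beta = np.sum(w**2 * rewards) / np.sum(w**2)
    # For any constant beta: E[w (r - beta)] + beta = E_target[r] (unbiased).
    return np.mean(w * (rewards - beta)) + beta

rng = np.random.default_rng(0)
r = rng.uniform(size=1000)
p_log = np.full(1000, 0.5)
p_tgt = rng.uniform(0.1, 0.9, size=1000)
print(baseline_corrected_ips(r, p_tgt, p_log))
```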
[ "['Shashank Gupta' 'Olivier Jeunen' 'Harrie Oosterhuis' 'Maarten de Rijke']" ]
null
null
2405.05748
null
null
http://arxiv.org/pdf/2405.05748v1
2024-05-09T13:13:34Z
2024-05-09T13:13:34Z
Learning to Slice Wi-Fi Networks: A State-Augmented Primal-Dual Approach
Network slicing is a key feature in 5G/NG cellular networks that creates customized slices for different service types with various quality-of-service (QoS) requirements, which can achieve service differentiation and guarantee service-level agreement (SLA) for each service type. In Wi-Fi networks, there is limited prior work on slicing, and a potential solution is based on a multi-tenant architecture on a single access point (AP) that dedicates different channels to different slices. In this paper, we define a flexible, constrained learning framework to enable slicing in Wi-Fi networks subject to QoS requirements. We specifically propose an unsupervised learning-based network slicing method that leverages a state-augmented primal-dual algorithm, where a neural network policy is trained offline to optimize a Lagrangian function and the dual variable dynamics are updated online in the execution phase. We show that state augmentation is crucial for generating slicing decisions that meet the ergodic QoS requirements.
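A minimal sketch of the execution phase, assuming a policy trained offline that takes the dual variables as part of its input; the QoS model, policy, and step size below are hypothetical placeholders.

```python
# A minimal sketch of the online dual-variable dynamics in a state-augmented
# primal-dual scheme. Environment and policy are hypothetical placeholders.
import numpy as np

def measure_qos(action):
    # Hypothetical environment response: QoS grows with allocated bandwidth.
    return 1.0 - np.exp(-3.0 * action)

def toy_policy(lam):
    # Hypothetical policy: allocate bandwidth via a softmax over the duals.
    e = np.exp(lam - lam.max())
    return e / e.sum()

def execute(policy, qos_min, steps=100, eta=0.1):
    lam = np.zeros_like(qos_min)               # one dual variable per slice
    for _ in range(steps):
        action = policy(lam)                   # duals augment the policy input
        qos = measure_qos(action)
        lam = np.maximum(lam + eta * (qos_min - qos), 0.0)  # projected dual ascent
    return lam

print(execute(toy_policy, qos_min=np.array([0.5, 0.7, 0.6])))
```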
[ "['Yiğit Berkay Uslu' 'Roya Doostnejad' 'Alejandro Ribeiro'\n 'Navid NaderiAlizadeh']" ]
null
null
2405.05751
null
null
http://arxiv.org/pdf/2405.05751v1
2024-05-09T13:15:40Z
2024-05-09T13:15:40Z
A Multi-Level Superoptimizer for Tensor Programs
We introduce Mirage, the first multi-level superoptimizer for tensor programs. A key idea in Mirage is $\mu$Graphs, a uniform representation of tensor programs at the kernel, thread block, and thread levels of the GPU compute hierarchy. $\mu$Graphs enable Mirage to discover novel optimizations that combine algebraic transformations, schedule transformations, and generation of new custom kernels. To navigate the large search space, Mirage introduces a pruning technique based on abstraction that significantly reduces the search space and provides a certain optimality guarantee. To ensure that the optimized $\mu$Graph is equivalent to the input program, Mirage introduces a probabilistic equivalence verification procedure with strong theoretical guarantees. Our evaluation shows that Mirage outperforms existing approaches by up to 3.5$\times$ even for DNNs that are widely used and heavily optimized. Mirage is publicly available at https://github.com/mirage-project/mirage.
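To convey the flavor of probabilistic equivalence verification, the toy sketch below compares two candidate tensor programs on random inputs over a prime field; Mirage's actual procedure and its theoretical guarantees are more involved, and all names here are hypothetical.

```python
# A toy sketch of random-testing equivalence over a prime field (illustrative
# only; Mirage's verification procedure and guarantees are more involved).
import numpy as np

P = 1_000_003  # a prime small enough to avoid int64 overflow in matmul

def prog_a(x, w1, w2):
    return (x @ w1 % P) @ w2 % P       # two chained matmuls, reduced mod P

def prog_b(x, w1, w2):
    return x @ (w1 @ w2 % P) % P       # reassociated version of the same program

def probably_equivalent(f, g, shapes, trials=16, rng=np.random.default_rng(0)):
    for _ in range(trials):
        args = [rng.integers(0, P, size=s, dtype=np.int64) for s in shapes]
        if not np.array_equal(f(*args), g(*args)):
            return False               # a concrete counterexample was found
    return True                        # agreed on all random trials (equal w.h.p.)

print(probably_equivalent(prog_a, prog_b, [(4, 5), (5, 6), (6, 3)]))
```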
[ "['Mengdi Wu' 'Xinhao Cheng' 'Oded Padon' 'Zhihao Jia']" ]
null
null
2405.05755
null
null
http://arxiv.org/pdf/2405.05755v2
2024-05-13T08:17:13Z
2024-05-09T13:21:03Z
CSA-Net: Channel-wise Spatially Autocorrelated Attention Networks
In recent years, convolutional neural networks (CNNs) with channel-wise feature refining mechanisms have brought noticeable benefits to modelling channel dependencies. However, current attention paradigms fail to infer an optimal channel descriptor capable of simultaneously exploiting statistical and spatial relationships among feature maps. In this paper, to overcome this shortcoming, we present a novel channel-wise spatially autocorrelated (CSA) attention mechanism. Inspired by geographical analysis, the proposed CSA exploits the spatial relationships between channels of feature maps to produce an effective channel descriptor. To the best of our knowledge, this is the first time that the concept of geographical spatial analysis is utilized in deep CNNs. The proposed CSA adds negligible learnable parameters and light computational overhead to the deep model, making it a powerful yet efficient attention module of choice. We validate the effectiveness of the proposed CSA networks (CSA-Nets) through extensive experiments and analysis on the ImageNet and MS COCO benchmark datasets for image classification, object detection, and instance segmentation. The experimental results demonstrate that CSA-Nets are able to consistently achieve competitive performance and superior generalization compared to several state-of-the-art attention-based CNNs over different benchmark tasks and datasets.
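As one way such a descriptor could look, the sketch below scores each channel with a Moran's-I-style spatial autocorrelation statistic and gates channels with a sigmoid; this illustrates the general idea only, not the paper's exact formulation.

```python
# A sketch of a spatial-autocorrelation channel descriptor (Moran's-I-style,
# 4-neighbour weights) used for channel gating. Illustrative, not the paper's.
import numpy as np

def morans_i(fmap):
    """Approximate Moran's I of one HxW map: mean neighbour covariance / variance."""
    z = fmap - fmap.mean()
    num = (z[1:, :] * z[:-1, :]).sum() + (z[:, 1:] * z[:, :-1]).sum()
    n_pairs = z[1:, :].size + z[:, 1:].size
    return (num / n_pairs) / ((z ** 2).mean() + 1e-8)

def csa_like_attention(x):
    """x: (C, H, W) -> channel-reweighted features."""
    scores = np.array([morans_i(c) for c in x])
    gates = 1.0 / (1.0 + np.exp(-scores))       # sigmoid gate per channel
    return x * gates[:, None, None]

x = np.random.default_rng(0).normal(size=(8, 16, 16))
print(csa_like_attention(x).shape)
```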
[ "['Nick Nikzad' 'Yongsheng Gao' 'Jun Zhou']" ]
null
null
2405.05766
null
null
http://arxiv.org/pdf/2405.05766v1
2024-05-09T13:42:54Z
2024-05-09T13:42:54Z
To Trust or Not to Trust: Towards a novel approach to measure trust for XAI systems
The increasing reliance on Deep Learning models, combined with their inherent lack of transparency, has spurred the development of a novel field of study known as eXplainable AI (XAI) methods. These methods seek to enhance the trust of end-users in automated systems by providing insights into the rationale behind their decisions. This paper presents a novel approach for measuring user trust in XAI systems, allowing their refinement. Our proposed metric combines both performance metrics and trust indicators from an objective perspective. To validate this novel methodology, we conducted a case study in a realistic medical scenario: the use of an XAI system for the detection of pneumonia from X-ray images.
[ "['Miquel Miró-Nicolau' 'Gabriel Moyà-Alcover' 'Antoni Jaume-i-Capó'\n 'Manuel González-Hidalgo' 'Maria Gemma Sempere Campello'\n 'Juan Antonio Palmer Sancho']" ]
null
null
2405.05780
null
null
http://arxiv.org/pdf/2405.05780v1
2024-05-09T13:57:28Z
2024-05-09T13:57:28Z
Neural Network Learning of Black-Scholes Equation for Option Pricing
One of the most discussed problems in the financial world is stock option pricing. The Black-Scholes Equation is a parabolic partial differential equation which provides an option pricing model. The present work proposes an approach based on Neural Networks to solve the Black-Scholes Equation. Real-world data from the stock options market were used as the initial boundary condition to solve the Black-Scholes Equation. In particular, time series of call option prices of the Brazilian companies Petrobras and Vale were employed. The results indicate that the network can learn to solve the Black-Scholes Equation for a specific real-world stock options time series. The experimental results showed that neural network option pricing based on the Black-Scholes Equation solution can achieve option price forecasts more accurate than the traditional Black-Scholes analytical solution. These results make it possible to use this methodology for short-term call option price forecasts in options markets.
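A sketch of one standard way to train a network against the Black-Scholes equation: penalize the PDE residual $V_t + \frac{1}{2}\sigma^2 S^2 V_{SS} + r S V_S - rV = 0$ at sampled points via automatic differentiation. The architecture and hyperparameters are illustrative, and the boundary/terminal-condition terms fitted to market data are omitted.

```python
# A sketch of a PDE-residual (PINN-style) loss for the Black-Scholes equation.
# Architecture, rate, and volatility are illustrative; market-data boundary
# terms are omitted.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
r, sigma = 0.05, 0.2  # illustrative risk-free rate and volatility

def pde_residual(S, t):
    V = net(torch.stack([S, t], dim=-1)).squeeze(-1)
    V_S, V_t = torch.autograd.grad(V.sum(), (S, t), create_graph=True)
    V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
    return V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V

S = (100 * torch.rand(256)).requires_grad_(True)   # sampled underlying prices
t = torch.rand(256).requires_grad_(True)           # sampled times
loss = pde_residual(S, t).pow(2).mean()  # add boundary/terminal terms in practice
loss.backward()
```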
[ "['Daniel de Souza Santos' 'Tiago Alessandro Espinola Ferreira']" ]
null
null
2405.05784
null
null
http://arxiv.org/pdf/2405.05784v1
2024-05-09T14:03:52Z
2024-05-09T14:03:52Z
Link Stealing Attacks Against Inductive Graph Neural Networks
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data. Typically, GNNs can be implemented in two settings, including the transductive setting and the inductive setting. In the transductive setting, the trained model can only predict the labels of nodes that were observed at the training time. In the inductive setting, the trained model can be generalized to new nodes/graphs. Due to its flexibility, the inductive setting is the most popular GNN setting at the moment. Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks. However, a comprehensive privacy analysis of inductive GNN models is still missing. This paper fills the gap by conducting a systematic privacy analysis of inductive GNNs through the lens of link stealing attacks, one of the most popular attacks that are specifically designed for GNNs. We propose two types of link stealing attacks, i.e., posterior-only attacks and combined attacks. We define threat models of the posterior-only attacks with respect to node topology and the combined attacks by considering combinations of posteriors, node attributes, and graph features. Extensive evaluation on six real-world datasets demonstrates that inductive GNNs leak rich information that enables link stealing attacks with advantageous properties. Even attacks with no knowledge about graph structures can be effective. We also show that our attacks are robust to different node similarities and different graph features. As a counterpart, we investigate two possible defenses and discover they are ineffective against our attacks, which calls for more effective defenses.
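A minimal sketch of the posterior-only intuition: node pairs with very similar predicted posteriors are guessed to be linked. The paper's attacks instead train classifiers over richer feature combinations; the threshold and distance below are illustrative.

```python
# A sketch of a posterior-only link-stealing heuristic (illustrative; the
# paper trains attack classifiers over richer feature combinations).
import numpy as np

def posterior_only_attack(posteriors, pairs, threshold=0.1):
    """posteriors: (N, C) softmax outputs; pairs: list of (i, j) node pairs."""
    guesses = []
    for i, j in pairs:
        dist = np.abs(posteriors[i] - posteriors[j]).sum()   # L1 distance
        guesses.append(dist < threshold)                     # similar => linked
    return np.array(guesses)

post = np.array([[0.9, 0.1], [0.88, 0.12], [0.2, 0.8]])
print(posterior_only_attack(post, [(0, 1), (0, 2)]))
```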
[ "['Yixin Wu' 'Xinlei He' 'Pascal Berrang' 'Mathias Humbert'\n 'Michael Backes' 'Neil Zhenqiang Gong' 'Yang Zhang']" ]
null
null
2405.05786
null
null
http://arxiv.org/pdf/2405.05786v1
2024-05-09T14:09:36Z
2024-05-09T14:09:36Z
FusionTransNet for Smart Urban Mobility: Spatiotemporal Traffic Forecasting Through Multimodal Network Integration
This study develops FusionTransNet, a framework designed for Origin-Destination (OD) flow predictions within smart and multimodal urban transportation systems. Urban transportation complexity arises from the spatiotemporal interactions among various traffic modes. Motivated by analyzing multimodal data from Shenzhen, a framework that can dissect complicated spatiotemporal interactions between these modes, from the microscopic local level to the macroscopic city-wide perspective, is essential. The framework contains three core components: the Intra-modal Learning Module, the Inter-modal Learning Module, and the Prediction Decoder. The Intra-modal Learning Module is designed to analyze spatial dependencies within individual transportation modes, facilitating a granular understanding of single-mode spatiotemporal dynamics. The Inter-modal Learning Module extends this analysis, integrating data across different modes to uncover cross-modal interdependencies, by breaking down the interactions at both local and global scales. Finally, the Prediction Decoder synthesizes insights from the preceding modules to generate accurate OD flow predictions, translating complex multimodal interactions into forecasts. Empirical evaluations conducted in metropolitan contexts, including Shenzhen and New York, demonstrate FusionTransNet's superior predictive accuracy compared to existing state-of-the-art methods. The implications of this study extend beyond urban transportation, as the method for transferring information across different spatiotemporal graphs at both local and global scales can be instrumental in other spatial systems, such as supply chain logistics and epidemic spreading.
[ "['Binwu Wang' 'Yan Leng' 'Guang Wang' 'Yang Wang']" ]
null
null
2405.05792
null
null
http://arxiv.org/pdf/2405.05792v1
2024-05-09T14:17:26Z
2024-05-09T14:17:26Z
RoboHop: Segment-based Topological Map Representation for Open-World Visual Navigation
Mapping is crucial for spatial reasoning, planning and robot navigation. Existing approaches range from metric, which require precise geometry-based optimization, to purely topological, where image-as-node based graphs lack explicit object-level reasoning and interconnectivity. In this paper, we propose a novel topological representation of an environment based on "image segments", which are semantically meaningful and open-vocabulary queryable, conferring several advantages over previous works based on pixel-level features. Unlike 3D scene graphs, we create a purely topological graph with segments as nodes, where edges are formed by a) associating segment-level descriptors between pairs of consecutive images and b) connecting neighboring segments within an image using their pixel centroids. This unveils a "continuous sense of a place", defined by inter-image persistence of segments along with their intra-image neighbours. It further enables us to represent and update segment-level descriptors through neighborhood aggregation using graph convolution layers, which improves robot localization based on segment-level retrieval. Using real-world data, we show how our proposed map representation can be used to i) generate navigation plans in the form of "hops over segments" and ii) search for target objects using natural language queries describing spatial relations of objects. Furthermore, we quantitatively analyze data association at the segment level, which underpins inter-image connectivity during mapping and segment-level localization when revisiting the same place. Finally, we show preliminary trials on segment-level `hopping' based zero-shot real-world navigation. Project page with supplementary details: oravus.github.io/RoboHop/
[ "['Sourav Garg' 'Krishan Rana' 'Mehdi Hosseinzadeh' 'Lachlan Mares'\n 'Niko Sünderhauf' 'Feras Dayoub' 'Ian Reid']" ]
null
null
2405.05795
null
null
http://arxiv.org/pdf/2405.05795v1
2024-05-09T14:25:25Z
2024-05-09T14:25:25Z
Enhancing Suicide Risk Detection on Social Media through Semi-Supervised Deep Label Smoothing
Suicide is a prominent issue in society. Unfortunately, many people at risk for suicide do not receive the support required. Barriers to people receiving support include social stigma and lack of access to mental health care. With the popularity of social media, people have turned to online forums, such as Reddit, to express their feelings and seek support. This provides the opportunity to support people with the aid of artificial intelligence. Social media posts can be classified, using text classification, to help connect people with professional help. However, these systems fail to account for the inherent uncertainty in classifying mental health conditions. Unlike other areas of healthcare, mental health conditions have no objective measurements of disease, often relying on expert opinion. Thus, when formulating deep learning problems involving mental health, using hard, binary labels does not accurately represent the true nature of the data. In these settings, where human experts may disagree, fuzzy or soft labels may be more appropriate. The current work introduces a novel label smoothing method which we use to capture any uncertainty within the data. We test our approach on a five-label multi-class classification problem. We show that our semi-supervised deep label smoothing method improves classification accuracy above the existing state of the art. Where existing research reports an accuracy of 43% on the Reddit C-SSRS dataset, our empirical experiments show that our novel label smoothing method improves upon this benchmark to 52%. These improvements in model performance have the potential to better support those experiencing mental distress. Future work should explore the use of probabilistic methods in both natural language processing and quantifying contributions of both epistemic and aleatoric uncertainty in noisy datasets.
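For reference, the sketch below shows soft-label construction: uniform smoothing is the textbook baseline, and passing a data-driven prior (for instance, aggregated model predictions on unlabelled data, as a semi-supervised variant might) replaces the uniform term. Names are illustrative.

```python
# A sketch of soft-label construction: uniform label smoothing as a baseline;
# a data-driven `prior` could come from predictions on unlabelled data.
import numpy as np

def smooth_labels(y, n_classes, eps=0.1, prior=None):
    one_hot = np.eye(n_classes)[y]
    if prior is None:
        prior = np.full(n_classes, 1.0 / n_classes)   # uniform smoothing
    return (1.0 - eps) * one_hot + eps * prior        # soft target distribution

print(smooth_labels(np.array([0, 2, 4]), n_classes=5, eps=0.2))
```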
[ "['Matthew Squires' 'Xiaohui Tao' 'Soman Elangovan' 'U Rajendra Acharya'\n 'Raj Gururajan' 'Haoran Xie' 'Xujuan Zhou']" ]
null
null
2405.05809
null
null
http://arxiv.org/pdf/2405.05809v1
2024-05-09T14:48:17Z
2024-05-09T14:48:17Z
Aequitas Flow: Streamlining Fair ML Experimentation
Aequitas Flow is an open-source framework for end-to-end Fair Machine Learning (ML) experimentation in Python. This package fills integration gaps left by other Fair ML packages, offering complete and accessible experimentation. It provides a pipeline for fairness-aware model training, hyperparameter optimization, and evaluation, enabling rapid and simple experiments and result analysis. Aimed at ML practitioners and researchers, the framework offers implementations of methods, datasets, metrics, and standard interfaces for these components to improve extensibility. By facilitating the development of fair ML practices, Aequitas Flow seeks to enhance the adoption of these concepts in AI technologies.
[ "['Sérgio Jesus' 'Pedro Saleiro' 'Inês Oliveira e Silva' 'Beatriz M. Jorge'\n 'Rita P. Ribeiro' 'João Gama' 'Pedro Bizarro' 'Rayid Ghani']" ]
null
null
2405.05818
null
null
http://arxiv.org/pdf/2405.05818v1
2024-05-09T14:56:49Z
2024-05-09T14:56:49Z
Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems
While effective in practice, iterative methods for solving large systems of linear equations can be significantly affected by problem-dependent condition number quantities. This makes characterizing their time complexity challenging, particularly when we wish to make comparisons between deterministic and stochastic methods, that may or may not rely on preconditioning and/or fast matrix multiplication. In this work, we consider a fine-grained notion of complexity for iterative linear solvers which we call the spectral tail condition number, $\kappa_\ell$, defined as the ratio between the $\ell$th largest and the smallest singular value of the matrix representing the system. Concretely, we prove the following main algorithmic result: Given an $n \times n$ matrix $A$ and a vector $b$, we can find $\tilde{x}$ such that $\|A\tilde{x}-b\| \leq \epsilon\|b\|$ in time $\tilde{O}(\kappa_\ell \cdot n^2 \log 1/\epsilon)$ for any $\ell = O(n^{\frac{1}{\omega-1}}) = O(n^{0.729})$, where $\omega \approx 2.372$ is the current fast matrix multiplication exponent. This guarantee is achieved by Sketch-and-Project with Nesterov's acceleration. Some of the implications of our result, and of the use of $\kappa_\ell$, include direct improvement over a fine-grained analysis of the Conjugate Gradient method, suggesting a stronger separation between deterministic and stochastic iterative solvers; and relating the complexity of iterative solvers to the ongoing algorithmic advances in fast matrix multiplication, since the bound on $\ell$ improves with $\omega$. Our main technical contributions are new sharp characterizations for the first and second moments of the random projection matrix that commonly arises in sketching algorithms, building on a combination of techniques from combinatorial sampling via determinantal point processes and Gaussian universality results from random matrix theory.
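A sketch of the basic (unaccelerated) Sketch-and-Project iteration for $Ax=b$: each step projects the iterate onto the solution set of a random sketch of the system. The paper's guarantee additionally relies on Nesterov's acceleration, omitted here; sketch size, iteration count, and the test matrix are illustrative.

```python
# A sketch of unaccelerated Sketch-and-Project for a consistent system Ax = b:
# x <- x + (SA)^T ((SA)(SA)^T)^+ (Sb - SAx). Parameters are illustrative.
import numpy as np

def sketch_and_project(A, b, sketch_size=20, iters=500, rng=np.random.default_rng(0)):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        S = rng.normal(size=(sketch_size, A.shape[0]))   # Gaussian sketch
        SA, Sr = S @ A, S @ (b - A @ x)
        x += SA.T @ np.linalg.lstsq(SA @ SA.T, Sr, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))
A = Q * rng.uniform(1.0, 3.0, size=100)     # well-conditioned test matrix
b = A @ np.ones(100)                        # consistent right-hand side
x = sketch_and_project(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # relative residual
```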
[ "['Michał Dereziński' 'Daniel LeJeune' 'Deanna Needell' 'Elizaveta Rebrova']" ]
null
null
2405.05836
null
null
http://arxiv.org/pdf/2405.05836v1
2024-05-09T15:15:34Z
2024-05-09T15:15:34Z
Informed Decision-Making through Advancements in Open Set Recognition and Unknown Sample Detection
Machine learning-based techniques open up many opportunities and improvements to derive deeper and more practical insights from data that can help businesses make informed decisions. However, the majority of these techniques focus on the conventional closed-set scenario, in which the label spaces for the training and test sets are identical. Open set recognition (OSR) aims to bring classification tasks closer to reality, focusing on classifying the known classes as well as handling unknown classes effectively. In such an open-set problem, the gathered samples in the training set cannot encompass all the classes, and the system needs to identify unknown samples at test time. On the other hand, building an accurate and comprehensive model in a real dynamic environment presents a number of obstacles, because it is prohibitively expensive to train for every possible example of unknown items, and the model may fail when tested in testbeds. This study provides an algorithm exploring a new representation of feature space to improve classification in OSR tasks. The efficacy and efficiency of business processes and decision-making can be improved by integrating OSR, which offers more precise and insightful predictions of outcomes. We demonstrate the performance of the proposed method on three established datasets. The results indicate that the proposed model outperforms the baseline methods in accuracy and F1-score.
[ "['Atefeh Mahdavi' 'Marco Carvalho']" ]
null
null
2405.05847
null
null
http://arxiv.org/pdf/2405.05847v2
2024-06-06T15:22:31Z
2024-05-09T15:34:15Z
Learned feature representations are biased by complexity, learning order, position, and more
Representation learning, and interpreting learned representations, are key areas of focus in machine learning and neuroscience. Both fields generally use representations as a means to understand or improve a system's computations. In this work, however, we explore surprising dissociations between representation and computation that may pose challenges for such efforts. We create datasets in which we attempt to match the computational role that different features play, while manipulating other properties of the features or the data. We train various deep learning architectures to compute these multiple abstract features about their inputs. We find that their learned feature representations are systematically biased towards representing some features more strongly than others, depending upon extraneous properties such as feature complexity, the order in which features are learned, and the distribution of features over the inputs. For example, features that are simpler to compute or learned first tend to be represented more strongly and densely than features that are more complex or learned later, even if all features are learned equally well. We also explore how these biases are affected by architectures, optimizers, and training regimes (e.g., in transformers, features decoded earlier in the output sequence also tend to be represented more strongly). Our results help to characterize the inductive biases of gradient-based representation learning. These results also highlight a key challenge for interpretability $-$ or for comparing the representations of models and brains $-$ disentangling extraneous biases from the computationally important aspects of a system's internal representations.
[ "['Andrew Kyle Lampinen' 'Stephanie C. Y. Chan' 'Katherine Hermann']" ]
null
null
2405.05852
null
null
http://arxiv.org/pdf/2405.05852v1
2024-05-09T15:39:54Z
2024-05-09T15:39:54Z
Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control
Embodied AI agents require a fine-grained understanding of the physical world mediated through visual and language inputs. Such capabilities are difficult to learn solely from task-specific data. This has led to the emergence of pre-trained vision-language models as a tool for transferring representations learned from internet-scale data to downstream tasks and new domains. However, commonly used contrastively trained representations such as in CLIP have been shown to fail at enabling embodied agents to gain a sufficiently fine-grained scene understanding -- a capability vital for control. To address this shortcoming, we consider representations from pre-trained text-to-image diffusion models, which are explicitly optimized to generate images from text prompts and as such, contain text-conditioned representations that reflect highly fine-grained visuo-spatial information. Using pre-trained text-to-image diffusion models, we construct Stable Control Representations which allow learning downstream control policies that generalize to complex, open-ended environments. We show that policies learned using Stable Control Representations are competitive with state-of-the-art representation learning approaches across a broad range of simulated control settings, encompassing challenging manipulation and navigation tasks. Most notably, we show that Stable Control Representations enable learning policies that exhibit state-of-the-art performance on OVMM, a difficult open-vocabulary navigation benchmark.
[ "['Gunshi Gupta' 'Karmesh Yadav' 'Yarin Gal' 'Dhruv Batra' 'Zsolt Kira'\n 'Cong Lu' 'Tim G. J. Rudner']" ]
null
null
2405.05855
null
null
http://arxiv.org/pdf/2405.05855v1
2024-05-09T15:44:11Z
2024-05-09T15:44:11Z
Compressed Bayesian Federated Learning for Reliable Passive Radio Sensing in Industrial IoT
Bayesian Federated Learning (FL) has been recently introduced to provide well-calibrated Machine Learning (ML) models quantifying the uncertainty of their predictions. Despite their advantages compared to frequentist FL setups, Bayesian FL tools implemented over decentralized networks are subject to high communication costs due to the iterated exchange of local posterior distributions among cooperating devices. Therefore, this paper proposes a communication-efficient decentralized Bayesian FL policy to reduce the communication overhead without sacrificing final learning accuracy and calibration. The proposed method integrates compression policies and allows devices to perform multiple optimization steps before sending the local posterior distributions. We integrate the developed tool in an Industrial Internet of Things (IIoT) use case where collaborating nodes equipped with autonomous radar sensors are tasked to reliably localize human operators in a workplace shared with robots. Numerical results show that the developed approach obtains highly accurate yet well-calibrated ML models compatible with the ones provided by conventional (uncompressed) Bayesian FL tools while substantially decreasing the communication overhead (i.e., up to 99%). Furthermore, the proposed approach is advantageous when compared with state-of-the-art compressed frequentist FL setups in terms of calibration, especially when the statistical distribution of the testing dataset changes.
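A toy sketch of the two ingredients mentioned above, with hypothetical names: several local optimization steps per round, followed by top-k sparsification of the transmitted update (in the paper this compression is applied to local posterior distributions in a Bayesian FL setting).

```python
# A toy sketch: multiple local steps per round, then a top-k-sparsified update
# is transmitted. Names and the local objective are hypothetical.
import numpy as np

def top_k_sparsify(delta, k):
    out = np.zeros_like(delta)
    idx = np.argsort(np.abs(delta))[-k:]        # keep k largest-magnitude entries
    out[idx] = delta[idx]
    return out

def local_round(theta_global, grad_fn, steps=5, lr=0.1, k=10):
    theta = theta_global.copy()
    for _ in range(steps):                      # several local steps per round
        theta -= lr * grad_fn(theta)
    return top_k_sparsify(theta - theta_global, k)   # compressed update

rng = np.random.default_rng(0)
target = rng.normal(size=100)                   # mode of a toy local posterior
update = local_round(np.zeros(100), lambda th: th - target)
print(np.count_nonzero(update))                 # only k entries are transmitted
```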
[ "['Luca Barbieri' 'Stefano Savazzi' 'Monica Nicoli']" ]
null
null
2405.05860
null
null
http://arxiv.org/pdf/2405.05860v1
2024-05-09T15:48:07Z
2024-05-09T15:48:07Z
The Perspectivist Paradigm Shift: Assumptions and Challenges of Capturing Human Labels
Longstanding data labeling practices in machine learning involve collecting and aggregating labels from multiple annotators. But what should we do when annotators disagree? Though annotator disagreement has long been seen as a problem to minimize, new perspectivist approaches challenge this assumption by treating disagreement as a valuable source of information. In this position paper, we examine practices and assumptions surrounding the causes of disagreement--some challenged by perspectivist approaches, and some that remain to be addressed--as well as practical and normative challenges for work operating under these assumptions. We conclude with recommendations for the data labeling pipeline and avenues for future research engaging with subjectivity and disagreement.
[ "['Eve Fleisig' 'Su Lin Blodgett' 'Dan Klein' 'Zeerak Talat']" ]
null
null
2405.05861
null
null
http://arxiv.org/pdf/2405.05861v1
2024-05-09T15:48:39Z
2024-05-09T15:48:39Z
ExACT: An End-to-End Autonomous Excavator System Using Action Chunking With Transformers
Excavators are crucial for diverse tasks such as construction and mining, while autonomous excavator systems enhance safety and efficiency, address labor shortages, and improve human working conditions. Different from the existing modularized approaches, this paper introduces ExACT, an end-to-end autonomous excavator system that processes raw LiDAR, camera data, and joint positions to control excavator valves directly. Utilizing the Action Chunking with Transformers (ACT) architecture, ExACT employs imitation learning to take observations from multi-modal sensors as inputs and generate actionable sequences. In our experiment, we build a simulator based on the captured real-world data to model the relations between excavator valve states and joint velocities. With a few human-operated demonstration data trajectories, ExACT demonstrates the capability of completing different excavation tasks, including reaching, digging and dumping through imitation learning in validations with the simulator. To the best of our knowledge, ExACT represents the first instance towards building an end-to-end autonomous excavator system via imitation learning methods with a minimal set of human demonstrations. The video about this work can be accessed at https://youtu.be/NmzR_Rf-aEk.
[ "['Liangliang Chen' 'Shiyu Jin' 'Haoyu Wang' 'Liangjun Zhang']" ]
null
null
2405.05865
null
null
http://arxiv.org/pdf/2405.05865v1
2024-05-09T15:53:43Z
2024-05-09T15:53:43Z
Faster Linear Systems and Matrix Norm Approximation via Multi-level Sketched Preconditioning
We present a new class of preconditioned iterative methods for solving linear systems of the form $Ax = b$. Our methods are based on constructing a low-rank Nyström approximation to $A$ using sparse random sketching. This approximation is used to construct a preconditioner, which itself is inverted quickly using additional levels of random sketching and preconditioning. We prove that the convergence of our methods depends on a natural average condition number of $A$, which improves as the rank of the Nyström approximation increases. Concretely, this allows us to obtain faster runtimes for a number of fundamental linear algebraic problems: 1. We show how to solve any $n \times n$ linear system that is well-conditioned except for $k$ outlying large singular values in $\tilde{O}(n^{2.065} + k^\omega)$ time, improving on a recent result of [Dereziński, Yang, STOC 2024] for all $k \gtrsim n^{0.78}$. 2. We give the first $\tilde{O}(n^2 + d_\lambda^\omega)$ time algorithm for solving a regularized linear system $(A + \lambda I)x = b$, where $A$ is positive semidefinite with effective dimension $d_\lambda$. This problem arises in applications like Gaussian process regression. 3. We give faster algorithms for approximating Schatten $p$-norms and other matrix norms. For example, for the Schatten 1 (nuclear) norm, we give an algorithm that runs in $\tilde{O}(n^{2.11})$ time, improving on an $\tilde{O}(n^{2.18})$ method of [Musco et al., ITCS 2018]. Interestingly, previous state-of-the-art algorithms for most of the problems above relied on stochastic iterative methods, like stochastic coordinate and gradient descent. Our work takes a completely different approach, instead leveraging tools from matrix sketching.
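A sketch of the single-level randomized Nyström preconditioner this line of work builds on: rank, shift, and the Gaussian test matrix are illustrative, and the paper's method additionally inverts the preconditioner itself with further levels of sketching, which is omitted here.

```python
# A sketch of a single-level randomized Nystrom preconditioner for the
# regularized PSD system (A + mu*I) x = b. Parameters are illustrative.
import numpy as np

def nystrom_preconditioner(A, mu, k, rng=np.random.default_rng(0)):
    n = A.shape[0]
    omega = rng.normal(size=(n, k))              # Gaussian test matrix
    Y = A @ omega
    C = omega.T @ Y
    C = (C + C.T) / 2 + 1e-8 * np.eye(k)         # symmetrize, add jitter
    Z = np.linalg.solve(np.linalg.cholesky(C), Y.T).T   # Nystrom approx = Z Z^T
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    lam = s**2                                   # Nystrom eigenvalues, descending

    def apply_inv(v):                            # v -> P^{-1} v
        w = U.T @ v
        return U @ (((lam[-1] + mu) / (lam + mu)) * w) + (v - U @ w)

    return apply_inv

# Toy PSD matrix with a few large outlying eigenvalues.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(200, 200)))
A = Q @ np.diag(np.r_[1e4 * np.ones(5), np.ones(195)]) @ Q.T
Pinv = nystrom_preconditioner(A, mu=1.0, k=20)
print(Pinv(rng.normal(size=200))[:3])   # would be applied inside PCG
```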
[ "['Michał Dereziński' 'Christopher Musco' 'Jiaming Yang']" ]
null
null
2405.05876
null
null
http://arxiv.org/pdf/2405.05876v1
2024-05-09T16:04:14Z
2024-05-09T16:04:14Z
Composable Part-Based Manipulation
In this paper, we propose composable part-based manipulation (CPM), a novel approach that leverages object-part decomposition and part-part correspondences to improve learning and generalization of robotic manipulation skills. By considering the functional correspondences between object parts, we conceptualize functional actions, such as pouring and constrained placing, as combinations of different correspondence constraints. CPM comprises a collection of composable diffusion models, where each model captures a different inter-object correspondence. These diffusion models can generate parameters for manipulation skills based on the specific object parts. Leveraging part-based correspondences coupled with the task decomposition into distinct constraints enables strong generalization to novel objects and object categories. We validate our approach in both simulated and real-world scenarios, demonstrating its effectiveness in achieving robust and generalized manipulation capabilities.
[ "['Weiyu Liu' 'Jiayuan Mao' 'Joy Hsu' 'Tucker Hermans' 'Animesh Garg'\n 'Jiajun Wu']" ]
null
null
2405.05886
null
null
http://arxiv.org/abs/2405.05886v2
2024-05-17T12:16:35Z
2024-05-09T16:22:24Z
Exploiting Autoencoder's Weakness to Generate Pseudo Anomalies
Due to the rare occurrence of anomalous events, a typical approach to anomaly detection is to train an autoencoder (AE) with normal data only so that it learns the patterns or representations of the normal training data. At test time, the trained AE is expected to well reconstruct normal but to poorly reconstruct anomalous data. However, contrary to the expectation, anomalous data is often well reconstructed as well. In order to further separate the reconstruction quality between normal and anomalous data, we propose creating pseudo anomalies from learned adaptive noise by exploiting the aforementioned weakness of AE, i.e., reconstructing anomalies too well. The generated noise is added to the normal data to create pseudo anomalies. Extensive experiments on Ped2, Avenue, ShanghaiTech, CIFAR-10, and KDDCUP datasets demonstrate the effectiveness and generic applicability of our approach in improving the discriminative capability of AEs for anomaly detection.
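A minimal sketch of the pseudo-anomaly training signal with fixed-scale noise (in the paper the noise is learned and adaptive): the autoencoder is trained to map both the normal input and its pseudo-anomalous version back to the clean input, widening the reconstruction gap for real anomalies.

```python
# A minimal sketch of pseudo-anomaly training: fixed-scale noise stands in
# for the paper's learned adaptive noise; architecture is illustrative.
import torch

ae = torch.nn.Sequential(torch.nn.Linear(32, 8), torch.nn.ReLU(),
                         torch.nn.Linear(8, 32))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

x = torch.randn(64, 32)                       # a batch of normal data
pseudo = x + 0.5 * torch.randn_like(x)        # pseudo anomaly = normal + noise

loss_normal = (ae(x) - x).pow(2).mean()       # reconstruct normal data well
loss_pseudo = (ae(pseudo) - x).pow(2).mean()  # map pseudo anomalies back to normal
(loss_normal + loss_pseudo).backward()
opt.step()
```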
[ "['Marcella Astrid' 'Muhammad Zaigham Zaheer' 'Djamila Aouada'\n 'Seung-Ik Lee']" ]