categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
null | null |
2403.10423
| null | null |
http://arxiv.org/pdf/2403.10423v1
|
2024-03-15T15:58:20Z
|
2024-03-15T15:58:20Z
|
Quantization Avoids Saddle Points in Distributed Optimization
|
Distributed nonconvex optimization underpins key functionalities of numerous distributed systems, ranging from power systems, smart buildings, cooperative robots, and vehicle networks to sensor networks. Recently, it has also emerged as a promising solution to handle the enormous growth in data and model sizes in deep learning. A fundamental problem in distributed nonconvex optimization is avoiding convergence to saddle points, which significantly degrade optimization accuracy. We discover that the process of quantization, which is necessary for all digital communications, can be exploited to enable saddle-point avoidance. More specifically, we propose a stochastic quantization scheme and prove that it can effectively escape saddle points and ensure convergence to a second-order stationary point in distributed nonconvex optimization. With an easily adjustable quantization granularity, the approach allows a user to control the number of bits sent per iteration and, hence, to aggressively reduce the communication overhead. Numerical experimental results using distributed optimization and learning problems on benchmark datasets confirm the effectiveness of the approach.
|
[
"['Yanan Bo' 'Yongqiang Wang']"
] |
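The stochastic quantization idea in the abstract above can be illustrated with a minimal sketch: an unbiased dithered rounding operator whose granularity controls how many bits each message needs. This is only a plausible reading of such a scheme; the step size `delta` and the helper name are illustrative, not the paper's notation.

```python
import numpy as np

def stochastic_quantize(x, delta=0.1, rng=None):
    """Unbiased stochastic quantization onto a grid with spacing `delta`.

    Each entry is rounded down or up to a neighbouring grid point with
    probabilities chosen so that E[q(x)] = x; the injected randomness is
    the property a saddle-point-escaping scheme can exploit.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    lower = np.floor(x / delta) * delta        # grid point just below x
    p_up = (x - lower) / delta                 # probability of rounding up
    return lower + delta * (rng.random(x.shape) < p_up)

# Example: quantize a local gradient before transmitting it to neighbours.
print(stochastic_quantize(np.array([0.237, -1.412, 0.005]), delta=0.05))
```

A coarser `delta` sends fewer distinct levels (fewer bits) per iteration at the cost of more quantization noise, which is the communication/accuracy trade-off the abstract refers to.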
null | null |
2403.10424
| null | null |
http://arxiv.org/pdf/2403.10424v2
|
2024-03-29T13:48:44Z
|
2024-03-15T15:58:37Z
|
Structured Evaluation of Synthetic Tabular Data
|
Tabular data is common yet typically incomplete, small in volume, and access-restricted due to privacy concerns. Synthetic data generation offers potential solutions. Many metrics exist for evaluating the quality of synthetic tabular data; however, we lack an objective, coherent interpretation of the many metrics. To address this issue, we propose an evaluation framework with a single, mathematical objective that posits that the synthetic data should be drawn from the same distribution as the observed data. Through various structural decompositions of the objective, this framework allows us to reason, for the first time, about the completeness of any set of metrics, and it unifies existing metrics, including those that stem from fidelity considerations, downstream applications, and model-based approaches. Moreover, the framework motivates model-free baselines and a new spectrum of metrics. We evaluate structurally informed synthesizers and synthesizers powered by deep learning. Using our structured framework, we show that synthetic data generators that explicitly represent tabular structure outperform other methods, especially on smaller datasets.
|
[
"['Scott Cheng-Hsin Yang' 'Baxter Eaves' 'Michael Schmidt' 'Ken Swanson'\n 'Patrick Shafto']"
] |
null | null |
2403.10444
| null | null |
http://arxiv.org/pdf/2403.10444v1
|
2024-03-15T16:28:22Z
|
2024-03-15T16:28:22Z
|
Optimal Block-Level Draft Verification for Accelerating Speculative
Decoding
|
Speculative decoding has been shown to be an effective method for lossless acceleration of large language models (LLMs) during inference. In each iteration, the algorithm first uses a smaller model to draft a block of tokens. The tokens are then verified by the large model in parallel and only a subset of tokens will be kept to guarantee that the final output follows the distribution of the large model. In all prior speculative decoding works, the draft verification is performed token-by-token independently. In this work, we propose a better draft verification algorithm that provides additional wall-clock speedup without incurring additional computation cost and draft tokens. We first formulate the draft verification step as a block-level optimal transport problem. The block-level formulation allows us to consider a wider range of draft verification algorithms and obtain a higher number of accepted tokens in expectation in one draft block. We propose a verification algorithm that achieves the optimal accepted length for the block-level transport problem. We empirically evaluate our proposed block-level verification algorithm in a wide range of tasks and datasets, and observe consistent improvements in wall-clock speedup when compared to the token-level verification algorithm. To the best of our knowledge, our work is the first to establish improvement over speculative decoding through a better draft verification algorithm.
|
[
"['Ziteng Sun' 'Jae Hun Ro' 'Ahmad Beirami' 'Ananda Theertha Suresh']"
] |
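For context, the per-token acceptance rule that prior speculative decoding methods use (and which the paper above replaces with a block-level optimal-transport formulation) can be sketched as follows; the distributions and the helper name are illustrative.

```python
import numpy as np

def accept_draft_token(p_large, q_draft, token, rng):
    """Standard token-level verification: keep the draft token with
    probability min(1, p/q); on rejection, resample from the residual
    distribution max(p - q, 0) renormalized. The paper's contribution is a
    block-level rule that accepts more tokens in expectation."""
    ratio = p_large[token] / max(q_draft[token], 1e-12)
    if rng.random() < min(1.0, ratio):
        return token, True
    residual = np.clip(p_large - q_draft, 0.0, None)
    residual /= residual.sum()
    return rng.choice(len(p_large), p=residual), False

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])   # large-model next-token distribution (dummy)
q = np.array([0.2, 0.6, 0.2])   # draft-model next-token distribution (dummy)
draft_token = rng.choice(3, p=q)
print(accept_draft_token(p, q, draft_token, rng))
```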
null | null |
2403.10446
| null | null |
http://arxiv.org/pdf/2403.10446v1
|
2024-03-15T16:30:14Z
|
2024-03-15T16:30:14Z
|
Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A
Case Study on Domain-Specific Queries in Private Knowledge-Bases
|
We propose an end-to-end system design that utilizes Retrieval Augmented Generation (RAG) to improve the factual accuracy of Large Language Models (LLMs) for domain-specific and time-sensitive queries related to private knowledge-bases. Our system integrates the RAG pipeline with upstream dataset processing and downstream performance evaluation. Addressing the challenge of LLM hallucinations, we finetune models with a curated dataset which originates from CMU's extensive resources and is annotated with the teacher model. Our experiments demonstrate the system's effectiveness in generating more accurate answers to domain-specific and time-sensitive inquiries. The results also reveal the limitations of fine-tuning LLMs with small-scale and skewed datasets. This research highlights the potential of RAG systems in augmenting LLMs with external datasets for improved performance in knowledge-intensive tasks. Our code and models are available on GitHub.
|
[
"['Jiarui Li' 'Ye Yuan' 'Zehua Zhang']"
] |
null | null |
2403.10459
| null | null |
http://arxiv.org/pdf/2403.10459v1
|
2024-03-15T16:51:24Z
|
2024-03-15T16:51:24Z
|
Understanding the Double Descent Phenomenon in Deep Learning
|
Combining empirical risk minimization with capacity control is a classical strategy in machine learning when trying to control the generalization gap and avoid overfitting, as the model class capacity gets larger. Yet, in modern deep learning practice, very large over-parameterized models (e.g. neural networks) are optimized to fit perfectly the training data and still obtain great generalization performance. Past the interpolation point, increasing model complexity seems to actually lower the test error. In this tutorial, we explain the concept of double descent and its mechanisms. The first section sets the classical statistical learning framework and introduces the double descent phenomenon. By looking at a number of examples, section 2 introduces inductive biases that appear to have a key role in double descent by selecting, among the multiple interpolating solutions, a smooth empirical risk minimizer. Finally, section 3 explores the double descent with two linear models, and gives other points of view from recent related works.
|
[
"['Marc Lafon' 'Alexandre Thomas']"
] |
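The double descent curve described in the abstract above can be reproduced numerically with a toy experiment: fit minimum-norm least squares on random ReLU features and sweep the number of features past the interpolation threshold. The data and feature map below are arbitrary assumptions; the test error typically peaks near `n_feat ≈ n_train` and falls again beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d_input = 40, 500, 10
X_tr = rng.normal(size=(n_train, d_input))
X_te = rng.normal(size=(n_test, d_input))
w_true = rng.normal(size=d_input)
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)
y_te = X_te @ w_true + 0.5 * rng.normal(size=n_test)

def relu_features(X, W):
    return np.maximum(X @ W, 0.0)              # random ReLU feature map

for n_feat in [5, 20, 40, 80, 200, 800]:       # sweep model capacity
    W = rng.normal(size=(d_input, n_feat)) / np.sqrt(d_input)
    beta = np.linalg.pinv(relu_features(X_tr, W)) @ y_tr   # min-norm interpolator
    test_mse = np.mean((relu_features(X_te, W) @ beta - y_te) ** 2)
    print(f"features={n_feat:4d}  test MSE={test_mse:.3f}")
```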
null | null |
2403.10461
| null | null |
http://arxiv.org/pdf/2403.10461v2
|
2024-05-29T14:23:35Z
|
2024-03-15T16:52:25Z
|
Introducing Adaptive Continuous Adversarial Training (ACAT) to Enhance
ML Robustness
|
Adversarial training enhances the robustness of Machine Learning (ML) models against adversarial attacks. However, obtaining labeled training and adversarial training data in network/cybersecurity domains is challenging and costly. Therefore, this letter introduces Adaptive Continuous Adversarial Training (ACAT), a method that integrates adversarial training samples into the model during continuous learning sessions using real-world detected adversarial data. Experimental results with a SPAM detection dataset demonstrate that ACAT reduces the time required for adversarial sample detection compared to traditional processes. Moreover, the accuracy of the under-attack ML-based SPAM filter increased from 69% to over 88% after just three retraining sessions.
|
[
"['Mohamed elShehaby' 'Aditya Kotha' 'Ashraf Matrawy']"
] |
null | null |
2403.10476
| null | null |
http://arxiv.org/pdf/2403.10476v1
|
2024-03-15T17:07:39Z
|
2024-03-15T17:07:39Z
|
Approximate Nullspace Augmented Finetuning for Robust Vision
Transformers
|
Enhancing the robustness of deep learning models, particularly in the realm of vision transformers (ViTs), is crucial for their real-world deployment. In this work, we provide a finetuning approach to enhance the robustness of vision transformers inspired by the concept of nullspace from linear algebra. Our investigation centers on whether a vision transformer can exhibit resilience to input variations akin to the nullspace property in linear mappings, implying that perturbations sampled from this nullspace do not influence the model's output when added to the input. Firstly, we show that for many pretrained ViTs, a non-trivial nullspace exists due to the presence of the patch embedding layer. Secondly, as nullspace is a concept associated with linear algebra, we demonstrate that it is possible to synthesize approximate nullspace elements for the non-linear blocks of ViTs by employing an optimisation strategy. Finally, we propose a fine-tuning strategy for ViTs wherein we augment the training data with synthesized approximate nullspace noise. After finetuning, we find that the model demonstrates robustness to adversarial and natural image perturbations alike.
|
[
"['Haoyang Liu' 'Aditya Singh' 'Yijiang Li' 'Haohan Wang']"
] |
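A minimal PyTorch sketch of the second step described above, synthesizing an approximate nullspace element for a non-linear network by optimization, could look like the following. The model, loss weighting, and hyperparameters are placeholder assumptions, not the paper's settings.

```python
import torch

def synthesize_nullspace_noise(model, x, steps=200, lr=1e-2, reg=1e-3):
    """Optimize a perturbation v so that model(x + v) stays close to model(x),
    i.e. v approximates an element of the model's (non-linear) nullspace.
    A small negative norm penalty discourages the trivial solution v = 0."""
    model.eval()
    with torch.no_grad():
        ref = model(x)
    v = torch.randn_like(x, requires_grad=True)
    opt = torch.optim.Adam([v], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (model(x + v) - ref).pow(2).mean() - reg * v.norm()
        loss.backward()
        opt.step()
    return v.detach()   # add to training images as augmentation noise
```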
null | null |
2403.10488
| null | null |
http://arxiv.org/pdf/2403.10488v3
|
2024-04-20T16:24:44Z
|
2024-03-15T17:23:38Z
|
Joint Multimodal Transformer for Emotion Recognition in the Wild
|
Multimodal emotion recognition (MMER) systems typically outperform unimodal systems by leveraging the inter- and intra-modal relationships between, e.g., visual, textual, physiological, and auditory modalities. This paper proposes an MMER method that relies on a joint multimodal transformer (JMT) for fusion with key-based cross-attention. This framework can exploit the complementary nature of diverse modalities to improve predictive accuracy. Separate backbones capture intra-modal spatiotemporal dependencies within each modality over video sequences. Subsequently, our JMT fusion architecture integrates the individual modality embeddings, allowing the model to effectively capture inter- and intra-modal relationships. Extensive experiments on two challenging expression recognition tasks -- (1) dimensional emotion recognition on the Affwild2 dataset (with face and voice) and (2) pain estimation on the Biovid dataset (with face and biosensors) -- indicate that our JMT fusion can provide a cost-effective solution for MMER. Empirical results show that MMER systems with our proposed fusion allow us to outperform relevant baseline and state-of-the-art methods.
|
[
"['Paul Waligora' 'Haseeb Aslam' 'Osama Zeeshan' 'Soufiane Belharbi'\n 'Alessandro Lameiras Koerich' 'Marco Pedersoli' 'Simon Bacon'\n 'Eric Granger']"
] |
null | null |
2403.10497
| null | null |
http://arxiv.org/pdf/2403.10497v1
|
2024-03-15T17:32:02Z
|
2024-03-15T17:32:02Z
|
Data-Driven Distributionally Robust Safety Verification Using Barrier
Certificates and Conditional Mean Embeddings
|
Algorithmic verification of realistic systems to satisfy safety and other temporal requirements has suffered from poor scalability of the employed formal approaches. To design systems with rigorous guarantees, many approaches still rely on exact models of the underlying systems. Since this assumption can rarely be met in practice, models have to be inferred from measurement data or are bypassed completely. Whilst former usually requires the model structure to be known a-priori and immense amounts of data to be available, latter gives rise to a plethora of restrictive mathematical assumptions about the unknown dynamics. In a pursuit of developing scalable formal verification algorithms without shifting the problem to unrealistic assumptions, we employ the concept of barrier certificates, which can guarantee safety of the system, and learn the certificate directly from a compact set of system trajectories. We use conditional mean embeddings to embed data from the system into a reproducing kernel Hilbert space (RKHS) and construct an RKHS ambiguity set that can be inflated to robustify the result w.r.t. a set of plausible transition kernels. We show how to solve the resulting program efficiently using sum-of-squares optimization and a Gaussian process envelope. Our approach lifts the need for restrictive assumptions on the system dynamics and uncertainty, and suggests an improvement in the sample complexity of verifying the safety of a system on a tested case study compared to a state-of-the-art approach.
|
[
"['Oliver Schön' 'Zhengang Zhong' 'Sadegh Soudjani']"
] |
null | null |
2403.10499
| null | null |
http://arxiv.org/pdf/2403.10499v1
|
2024-03-15T17:33:49Z
|
2024-03-15T17:33:49Z
|
Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A
Pilot Study
|
Pre-training image representations from the raw text about images enables zero-shot vision transfer to downstream tasks. Through pre-training on millions of samples collected from the internet, multimodal foundation models, such as CLIP, produce state-of-the-art zero-shot results that often reach competitiveness with fully supervised methods without the need for task-specific training. Besides the encouraging performance on classification accuracy, it is reported that these models close the robustness gap by matching the performance of supervised models trained on ImageNet under natural distribution shift. Because robustness is critical to real-world applications, especially safety-critical ones, in this paper, we present a comprehensive evaluation based on a large-scale robustness benchmark covering 7 natural, 3 synthetic distribution shifts, and 11 adversarial attacks. We use CLIP as a pilot study. We show that CLIP leads to a significant robustness drop compared to supervised ImageNet models on our benchmark, especially under synthetic distribution shift and adversarial attacks. Furthermore, data overlap analysis suggests that the observed robustness under natural distribution shifts could be attributed, at least in part, to data overlap. In summary, our evaluation shows a comprehensive evaluation of robustness is necessary; and there is a significant need to improve the robustness of zero-shot multimodal models.
|
[
"['Chenguang Wang' 'Ruoxi Jia' 'Xin Liu' 'Dawn Song']"
] |
null | null |
2403.10506
| null | null |
http://arxiv.org/pdf/2403.10506v2
|
2024-06-18T18:11:07Z
|
2024-03-15T17:45:44Z
|
HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion
and Manipulation
|
Humanoid robots hold great promise in assisting humans in diverse environments and tasks, due to their flexibility and adaptability leveraging human-like morphology. However, research in humanoid robots is often bottlenecked by the costly and fragile hardware setups. To accelerate algorithmic research in humanoid robots, we present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands and a variety of challenging whole-body manipulation and locomotion tasks. Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies, such as walking or reaching. With HumanoidBench, we provide the robotics community with a platform to identify the challenges arising when solving diverse tasks with humanoid robots, facilitating prompt verification of algorithms and ideas. The open-source code is available at https://humanoid-bench.github.io.
|
[
"['Carmelo Sferrazza' 'Dun-Ming Huang' 'Xingyu Lin' 'Youngwoon Lee'\n 'Pieter Abbeel']"
] |
null | null |
2403.10516
| null | null |
http://arxiv.org/pdf/2403.10516v2
|
2024-04-01T20:57:45Z
|
2024-03-15T17:57:06Z
|
FeatUp: A Model-Agnostic Framework for Features at Any Resolution
|
Deep features are a cornerstone of computer vision research, capturing image semantics and enabling the community to solve downstream tasks even in the zero- or few-shot regime. However, these features often lack the spatial resolution to directly perform dense prediction tasks like segmentation and depth prediction because models aggressively pool information over large areas. In this work, we introduce FeatUp, a task- and model-agnostic framework to restore lost spatial information in deep features. We introduce two variants of FeatUp: one that guides features with high-resolution signal in a single forward pass, and one that fits an implicit model to a single image to reconstruct features at any resolution. Both approaches use a multi-view consistency loss with deep analogies to NeRFs. Our features retain their original semantics and can be swapped into existing applications to yield resolution and performance gains even without re-training. We show that FeatUp significantly outperforms other feature upsampling and image super-resolution approaches in class activation map generation, transfer learning for segmentation and depth prediction, and end-to-end training for semantic segmentation.
|
[
"['Stephanie Fu' 'Mark Hamilton' 'Laura Brandt' 'Axel Feldman'\n 'Zhoutong Zhang' 'William T. Freeman']"
] |
null | null |
2403.10520
| null | null |
http://arxiv.org/pdf/2403.10520v1
|
2024-03-15T17:59:44Z
|
2024-03-15T17:59:44Z
|
Strong and Controllable Blind Image Decomposition
|
Blind image decomposition aims to decompose all components present in an image, typically used to restore a multi-degraded input image. While fully recovering the clean image is appealing, in some scenarios, users might want to retain certain degradations, such as watermarks, for copyright protection. To address this need, we add controllability to the blind image decomposition process, allowing users to specify which types of degradation to remove or retain. We design an architecture named controllable blind image decomposition network. Inserted in the middle of a U-Net structure, our method first decomposes the input feature maps and then recombines them according to user instructions. Advantageously, this functionality is implemented at minimal computational cost: decomposition and recombination are all parameter-free. Experimentally, our system excels in blind image decomposition tasks and can output partially or fully restored images that reflect user intentions well. Furthermore, we evaluate and configure different options for the network structure and loss functions. This, combined with the proposed decomposition-and-recombination method, yields an efficient and competitive system for blind image decomposition, compared with current state-of-the-art methods.
|
[
"['Zeyu Zhang' 'Junlin Han' 'Chenhui Gou' 'Hongdong Li' 'Liang Zheng']"
] |
null | null |
2403.10538
| null | null |
http://arxiv.org/pdf/2403.10538v1
|
2024-03-03T10:31:46Z
|
2024-03-03T10:31:46Z
|
MATADOR: Automated System-on-Chip Tsetlin Machine Design Generation for
Edge Applications
|
System-on-Chip Field-Programmable Gate Arrays (SoC-FPGAs) offer significant throughput gains for machine learning (ML) edge inference applications via the design of co-processor accelerator systems. However, the design effort for training and translating ML models into SoC-FPGA solutions can be substantial and requires specialist knowledge of the trade-offs between model performance, power consumption, latency, and resource utilization. Contrary to other ML algorithms, the Tsetlin Machine (TM) performs classification by forming logic propositions between boolean actions from the Tsetlin Automata (the learning elements) and boolean input features. A trained TM model usually exhibits high sparsity and considerable overlap of these logic propositions both within and among the classes. The model can thus be translated to an RTL-level design using a minuscule number of AND and NOT gates. This paper presents MATADOR, an automated boolean-to-silicon tool with a GUI interface capable of implementing optimized accelerator designs of TM models on SoC-FPGAs for inference at the edge. It offers automation of the full development pipeline: model training, system-level design generation, design verification, and deployment. It makes use of the logic sharing that ensues from propositional overlap and creates a compact design by effectively utilizing the TM model's sparsity. MATADOR accelerator designs are shown to be up to 13.4x faster, up to 7x more resource-frugal, and up to 2x more power-efficient when compared to the state-of-the-art Quantized and Binary Deep Neural Network implementations.
|
[
"['Tousif Rahman' 'Gang Mao' 'Sidharth Maheshwari' 'Rishad Shafik'\n 'Alex Yakovlev']"
] |
null | null |
2403.10543
| null | null |
http://arxiv.org/pdf/2403.10543v2
|
2024-06-11T12:35:27Z
|
2024-03-11T08:48:54Z
|
Mitigating Oversmoothing Through Reverse Process of GNNs for
Heterophilic Graphs
|
Graph Neural Network (GNN) resembles the diffusion process, leading to the over-smoothing of learned representations when stacking many layers. Hence, the reverse process of message passing can produce the distinguishable node representations by inverting the forward message propagation. The distinguishable representations can help us to better classify neighboring nodes with different labels, such as in heterophilic graphs. In this work, we apply the design principle of the reverse process to the three variants of the GNNs. Through the experiments on heterophilic graph data, where adjacent nodes need to have different representations for successful classification, we show that the reverse process significantly improves the prediction performance in many cases. Additional analysis reveals that the reverse mechanism can mitigate the over-smoothing over hundreds of layers. Our code is available at https://github.com/ml-postech/reverse-gnn.
|
[
"['MoonJeong Park' 'Jaeseung Heo' 'Dongwoo Kim']"
] |
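A tiny sketch of the forward/reverse idea above, using a generic smoothing propagation rule rather than the exact GNN variants studied: the reverse step inverts the forward mixing by solving the corresponding linear system, which de-smooths the node representations.

```python
import numpy as np

def forward_step(X, A_hat, alpha=0.3):
    """One diffusion-like step: mix each node with its normalized neighbourhood."""
    return (1 - alpha) * X + alpha * A_hat @ X

def reverse_step(X_smooth, A_hat, alpha=0.3):
    """Invert the smoothing step by solving ((1-alpha)I + alpha*A_hat) X = X_smooth."""
    n = A_hat.shape[0]
    return np.linalg.solve((1 - alpha) * np.eye(n) + alpha * A_hat, X_smooth)

# 3-node path graph with symmetric normalization (illustrative).
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
deg = A.sum(1)
A_hat = A / np.sqrt(np.outer(deg, deg))
X = np.array([[1.0], [0.0], [-1.0]])
assert np.allclose(reverse_step(forward_step(X, A_hat), A_hat), X)
```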
null | null |
2403.10547
| null | null |
http://arxiv.org/pdf/2403.10547v1
|
2024-03-12T01:27:44Z
|
2024-03-12T01:27:44Z
|
Robust Second-Order Nonconvex Optimization and Its Application to Low
Rank Matrix Sensing
|
Finding an approximate second-order stationary point (SOSP) is a well-studied and fundamental problem in stochastic nonconvex optimization with many applications in machine learning. However, this problem is poorly understood in the presence of outliers, limiting the use of existing nonconvex algorithms in adversarial settings. In this paper, we study the problem of finding SOSPs in the strong contamination model, where a constant fraction of datapoints are arbitrarily corrupted. We introduce a general framework for efficiently finding an approximate SOSP with \emph{dimension-independent} accuracy guarantees, using $\widetilde{O}(D^2/\epsilon)$ samples where $D$ is the ambient dimension and $\epsilon$ is the fraction of corrupted datapoints. As a concrete application of our framework, we apply it to the problem of low rank matrix sensing, developing efficient and provably robust algorithms that can tolerate corruptions in both the sensing matrices and the measurements. In addition, we establish a Statistical Query lower bound providing evidence that the quadratic dependence on $D$ in the sample complexity is necessary for computationally efficient algorithms.
|
[
"['Shuyao Li' 'Yu Cheng' 'Ilias Diakonikolas' 'Jelena Diakonikolas'\n 'Rong Ge' 'Stephen J. Wright']"
] |
null | null |
2403.10549
| null | null |
http://arxiv.org/pdf/2403.10549v1
|
2024-03-12T19:54:35Z
|
2024-03-12T19:54:35Z
|
On-Device Domain Learning for Keyword Spotting on Low-Power Extreme Edge
Embedded Systems
|
Keyword spotting accuracy degrades when neural networks are exposed to noisy environments. On-site adaptation to previously unseen noise is crucial to recovering accuracy loss, and on-device learning is required to ensure that the adaptation process happens entirely on the edge device. In this work, we propose a fully on-device domain adaptation system achieving up to 14% accuracy gains over already-robust keyword spotting models. We enable on-device learning with less than 10 kB of memory, using only 100 labeled utterances to recover 5% accuracy after adapting to the complex speech noise. We demonstrate that domain adaptation can be achieved on ultra-low-power microcontrollers with as little as 806 mJ in only 14 s on always-on, battery-operated devices.
|
[
"['Cristian Cioflan' 'Lukas Cavigelli' 'Manuele Rusci' 'Miguel de Prado'\n 'Luca Benini']"
] |
null | null |
2403.10550
| null | null |
http://arxiv.org/pdf/2403.10550v1
|
2024-03-13T02:10:32Z
|
2024-03-13T02:10:32Z
|
Semi-Supervised Learning for Anomaly Traffic Detection via Bidirectional
Normalizing Flows
|
With the rapid development of the Internet, various types of anomaly traffic are threatening network security. We consider the problem of anomaly network traffic detection and propose a three-stage anomaly detection framework using only normal traffic. Our framework can generate pseudo anomaly samples without prior knowledge of anomalies to achieve the detection of anomaly data. Firstly, we employ a reconstruction method to learn the deep representation of normal samples. Secondly, these representations are normalized to a standard normal distribution using a bidirectional flow module. To simulate anomaly samples, we add noise to the normalized representations, which are then passed through the generation direction of the bidirectional flow module. Finally, a simple classifier is trained to differentiate the normal samples and pseudo anomaly samples in the latent space. During inference, our framework requires only two modules to detect anomalous samples, leading to a considerable reduction in model size. According to the experiments, our method achieves state-of-the-art results on common benchmark datasets for anomaly network traffic detection. The code is available at https://github.com/ZxuanDang/ATD-via-Flows.git
|
[
"['Zhangxuan Dang' 'Yu Zheng' 'Xinglin Lin' 'Chunlei Peng' 'Qiuyu Chen'\n 'Xinbo Gao']"
] |
null | null |
2403.10552
| null | null |
http://arxiv.org/pdf/2403.10552v1
|
2024-03-13T03:20:47Z
|
2024-03-13T03:20:47Z
|
Training Self-localization Models for Unseen Unfamiliar Places via
Teacher-to-Student Data-Free Knowledge Transfer
|
A typical assumption in state-of-the-art self-localization models is that an annotated training dataset is available in the target workspace. However, this does not always hold when a robot travels in a general open-world. This study introduces a novel training scheme for open-world distributed robot systems. In our scheme, a robot ("student") can ask the other robots it meets at unfamiliar places ("teachers") for guidance. Specifically, a pseudo-training dataset is reconstructed from the teacher model and thereafter used for continual learning of the student model. Unlike typical knowledge transfer schemes, our scheme introduces only minimal assumptions on the teacher model, such that it can handle various types of open-set teachers, including uncooperative, untrainable (e.g., image retrieval engines), and blackbox teachers (i.e., data privacy). Rather than relying on the availability of private data of teachers as in existing methods, we propose to exploit an assumption that holds universally in self-localization tasks: "The teacher model is a self-localization system" and to reuse the self-localization system of a teacher as a sole accessible communication channel. We particularly focus on designing an excellent student/questioner whose interactions with teachers can yield effective question-and-answer sequences that can be used as pseudo-training datasets for the student self-localization model. When applied to a generic recursive knowledge distillation scenario, our approach exhibited stable and consistent performance improvement.
|
[
"['Kenta Tsukahara' 'Kanji Tanaka' 'Daiki Iwata']"
] |
null | null |
2403.10553
| null | null |
http://arxiv.org/pdf/2403.10553v1
|
2024-03-13T03:43:39Z
|
2024-03-13T03:43:39Z
|
Learning to Watermark LLM-generated Text via Reinforcement Learning
|
We study how to watermark LLM outputs, i.e. embedding algorithmically detectable signals into LLM-generated text to track misuse. Unlike the current mainstream methods that work with a fixed LLM, we expand the watermark design space by including the LLM tuning stage in the watermark pipeline. While prior works focus on token-level watermark that embeds signals into the output, we design a model-level watermark that embeds signals into the LLM weights, and such signals can be detected by a paired detector. We propose a co-training framework based on reinforcement learning that iteratively (1) trains a detector to detect the generated watermarked text and (2) tunes the LLM to generate text easily detectable by the detector while keeping its normal utility. We empirically show that our watermarks are more accurate, robust, and adaptable (to new attacks). It also allows watermarked model open-sourcing. In addition, if used together with alignment, the extra overhead introduced is low - only training an extra reward model (i.e. our detector). We hope our work can bring more effort into studying a broader watermark design that is not limited to working with a fixed LLM. We open-source the code: https://github.com/xiaojunxu/learning-to-watermark-llm .
|
[
"['Xiaojun Xu' 'Yuanshun Yao' 'Yang Liu']"
] |
null | null |
2403.10555
| null | null |
http://arxiv.org/pdf/2403.10555v1
|
2024-03-13T06:41:37Z
|
2024-03-13T06:41:37Z
|
KARINA: An Efficient Deep Learning Model for Global Weather Forecast
|
Deep learning-based, data-driven models are gaining prevalence in climate research, particularly for global weather prediction. However, training on global weather data at high resolution requires massive computational resources. Therefore, we present a new model named KARINA to overcome the substantial computational demands typical of this field. This model achieves forecasting accuracy comparable to higher-resolution counterparts with significantly fewer computational resources, requiring only 4 NVIDIA A100 GPUs and less than 12 hours of training. KARINA combines ConvNext, SENet, and Geocyclic Padding to enhance weather forecasting at a 2.5° resolution, which could filter out high-frequency noise. Geocyclic Padding preserves pixels at the lateral boundary of the input image, thereby maintaining atmospheric flow continuity on the spherical Earth. SENet dynamically improves feature responses, advancing atmospheric process modeling, particularly in the vertical column process, which is represented as numerous channels. In this vein, KARINA sets new benchmarks in weather forecasting accuracy, surpassing existing models like the ECMWF S2S reforecasts at a lead time of up to 7 days. Remarkably, KARINA achieved competitive performance even when compared to recently developed models (Pangu-Weather, GraphCast, ClimaX, and FourCastNet) trained with high-resolution data having 100 times more pixels. Conclusively, KARINA significantly advances global weather forecasting by efficiently modeling Earth's atmosphere with improved accuracy and resource efficiency.
|
[
"['Minjong Cheon' 'Yo-Hwan Choi' 'Seon-Yu Kang' 'Yumi Choi' 'Jeong-Gil Lee'\n 'Daehyun Kang']"
] |
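One plausible reading of the Geocyclic Padding described above, as a hedged sketch on a latitude-longitude grid: wrap circularly in longitude, and take the pole-side padding rows from the nearest latitude rows rolled by 180 degrees, so that crossing a pole lands on the opposite meridian. KARINA's exact construction may differ.

```python
import numpy as np

def geocyclic_pad(field, pad=1):
    """Pad a (lat, lon) field so a convolution respects spherical topology."""
    n_lat, n_lon = field.shape
    # Longitude is periodic: wrap circularly left/right.
    out = np.concatenate([field[:, -pad:], field, field[:, :pad]], axis=1)
    # Poles: copy the adjacent rows, rolled by half the globe (180 degrees).
    top = np.roll(out[:pad][::-1], n_lon // 2, axis=1)
    bottom = np.roll(out[-pad:][::-1], n_lon // 2, axis=1)
    return np.concatenate([top, out, bottom], axis=0)

field = np.arange(72 * 144, dtype=float).reshape(72, 144)  # 2.5-degree grid
print(geocyclic_pad(field).shape)                           # (74, 146)
```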
null | null |
2403.10557
| null | null |
http://arxiv.org/pdf/2403.10557v1
|
2024-03-13T18:57:30Z
|
2024-03-13T18:57:30Z
|
Second-Order Information Matters: Revisiting Machine Unlearning for
Large Language Models
|
With the rapid development of Large Language Models (LLMs), we have witnessed intense competition among the major LLM products like ChatGPT, LLaMa, and Gemini. However, various issues (e.g. privacy leakage and copyright violation) of the training corpus still remain underexplored. For example, the Times sued OpenAI and Microsoft for infringing on its copyrights by using millions of its articles for training. From the perspective of LLM practitioners, handling such unintended privacy violations can be challenging. Previous work addressed the ``unlearning" problem of LLMs using gradient information, while they mostly introduced significant overheads like data preprocessing or lacked robustness. In this paper, contrasting with the methods based on first-order information, we revisit the unlearning problem via the perspective of second-order information (Hessian). Our unlearning algorithms, which are inspired by classic Newton update, are not only data-agnostic/model-agnostic but also proven to be robust in terms of utility preservation or privacy guarantee. Through a comprehensive evaluation with four NLP datasets as well as a case study on real-world datasets, our methods consistently show superiority over the first-order methods.
|
[
"['Kang Gu' 'Md Rafi Ur Rashid' 'Najrin Sultana' 'Shagufta Mehnaz']"
] |
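In the spirit of the Newton-style updates mentioned in the abstract above, here is a hedged sketch of second-order unlearning for a much simpler model, L2-regularized logistic regression: the influence of the forgotten points is removed by adding the inverse Hessian times their gradient contribution. This is an illustrative analogue, not the paper's algorithm for LLMs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_unlearn(theta, X_forget, y_forget, X_remain, lam=1e-2):
    """One Newton step that approximately removes (X_forget, y_forget) from a
    trained parameter vector `theta` of an L2-regularized logistic regression."""
    grad_forget = X_forget.T @ (sigmoid(X_forget @ theta) - y_forget)
    p = sigmoid(X_remain @ theta)
    W = p * (1 - p)                                   # per-sample Hessian weights
    H = X_remain.T @ (X_remain * W[:, None]) + lam * np.eye(len(theta))
    # At the original optimum the full gradient is ~0, so the gradient on the
    # remaining data is -grad_forget; the Newton step therefore adds H^{-1} g.
    return theta + np.linalg.solve(H, grad_forget)
```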
null | null |
2403.10558
| null | null |
http://arxiv.org/pdf/2403.10558v2
|
2024-04-23T09:59:42Z
|
2024-03-14T02:17:57Z
|
Adaptive Hybrid Masking Strategy for Privacy-Preserving Face Recognition
Against Model Inversion Attack
|
The utilization of personal sensitive data in training face recognition (FR) models poses significant privacy concerns, as adversaries can employ model inversion attacks (MIA) to infer the original training data. Existing defense methods, such as data augmentation and differential privacy, have been employed to mitigate this issue. However, these methods often fail to strike an optimal balance between privacy and accuracy. To address this limitation, this paper introduces an adaptive hybrid masking algorithm against MIA. Specifically, face images are masked in the frequency domain using an adaptive MixUp strategy. Unlike the traditional MixUp algorithm, which is predominantly used for data augmentation, our modified approach incorporates frequency domain mixing. Previous studies have shown that increasing the number of images mixed in MixUp can enhance privacy preservation but at the expense of reduced face recognition accuracy. To overcome this trade-off, we develop an enhanced adaptive MixUp strategy based on reinforcement learning, which enables us to mix a larger number of images while maintaining satisfactory recognition accuracy. To optimize privacy protection, we propose maximizing the reward function (i.e., the loss function of the FR system) during the training of the strategy network, while the loss function of the FR network is minimized in the phase of training the FR network itself. The strategy network and the face recognition network can be viewed as antagonistic entities in the training process, ultimately reaching a more balanced trade-off. Experimental results demonstrate that our proposed hybrid masking scheme outperforms existing defense algorithms in terms of privacy preservation and recognition accuracy against MIA.
|
[
"['Yinggui Wang' 'Yuanqing Huang' 'Jianshu Li' 'Le Yang' 'Kai Song'\n 'Lei Wang']"
] |
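The frequency-domain MixUp described above can be sketched as follows: mix the amplitude spectra of several face crops with convex weights while keeping one image's phase, then invert the FFT. The choice to mix amplitudes only, and the random weighting, are assumptions for illustration; the paper's adaptive (RL-tuned) mixing rule is more elaborate.

```python
import numpy as np

def frequency_mixup(images, weights=None, rng=None):
    """Mix images in the frequency domain: convex combination of amplitude
    spectra, phase taken from the first image, inverse FFT back to pixels."""
    rng = np.random.default_rng() if rng is None else rng
    images = np.asarray(images, dtype=float)               # (k, H, W)
    if weights is None:
        weights = rng.dirichlet(np.ones(len(images)))      # convex weights
    spectra = np.fft.fft2(images, axes=(-2, -1))
    amplitude = np.tensordot(weights, np.abs(spectra), axes=(0, 0))
    mixed = amplitude * np.exp(1j * np.angle(spectra[0]))  # keep phase of image 0
    return np.real(np.fft.ifft2(mixed))

faces = np.random.rand(3, 112, 112)   # three dummy face crops
print(frequency_mixup(faces).shape)   # (112, 112)
```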
null | null |
2403.10559
| null | null |
http://arxiv.org/pdf/2403.10559v1
|
2024-03-14T06:51:26Z
|
2024-03-14T06:51:26Z
|
Generative Models and Connected and Automated Vehicles: A Survey in
Exploring the Intersection of Transportation and AI
|
This report investigates the history and impact of Generative Models and Connected and Automated Vehicles (CAVs), two groundbreaking forces pushing progress in technology and transportation. By focusing on the application of generative models within the context of CAVs, the study aims to unravel how this integration could enhance predictive modeling, simulation accuracy, and decision-making processes in autonomous vehicles. This thesis discusses the benefits and challenges of integrating generative models and CAV technology in transportation. It aims to highlight the progress made, the remaining obstacles, and the potential for advancements in safety and innovation.
|
[
"['Dong Shu' 'Zhouyao Zhu']"
] |
null | null |
2403.10561
| null | null |
http://arxiv.org/pdf/2403.10561v1
|
2024-03-14T08:46:07Z
|
2024-03-14T08:46:07Z
|
A collection of the accepted papers for the Human-Centric Representation
Learning workshop at AAAI 2024
|
This non-archival index is not complete, as some accepted papers chose to opt-out of inclusion. The list of all accepted papers is available on the workshop website.
|
[
"['Dimitris Spathis' 'Aaqib Saeed' 'Ali Etemad' 'Sana Tonekaboni'\n 'Stefanos Laskaridis' 'Shohreh Deldari' 'Chi Ian Tang' 'Patrick Schwab'\n 'Shyam Tailor']"
] |
null | null |
2403.10562
| null | null |
http://arxiv.org/pdf/2403.10562v1
|
2024-03-14T10:59:54Z
|
2024-03-14T10:59:54Z
|
Counter-Samples: A Stateless Strategy to Neutralize Black Box
Adversarial Attacks
|
Our paper presents a novel defence against black box attacks, where attackers use the victim model as an oracle to craft their adversarial examples. Unlike traditional preprocessing defences that rely on sanitizing input samples, our stateless strategy counters the attack process itself. For every query we evaluate a counter-sample instead, where the counter-sample is the original sample optimized against the attacker's objective. By countering every black box query with a targeted white box optimization, our strategy effectively introduces an asymmetry to the game to the defender's advantage. This defence not only effectively misleads the attacker's search for an adversarial example, it also preserves the model's accuracy on legitimate inputs and is generic to multiple types of attacks. We demonstrate that our approach is remarkably effective against state-of-the-art black box attacks and outperforms existing defences for both the CIFAR-10 and ImageNet datasets. Additionally, we also show that the proposed defence is robust against strong adversaries as well.
|
[
"['Roey Bokobza' 'Yisroel Mirsky']"
] |
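A hedged sketch of the counter-sample idea above: instead of answering the raw query, the defender answers with the prediction for the query optimized a few steps towards the model's own initial decision, i.e. against the attacker's objective. The step count and step size are placeholders, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def answer_with_counter_sample(model, x, steps=5, step_size=0.01):
    """Return logits for a counter-sample of the query x: x is pushed towards
    the model's current decision, moving it away from the decision boundary
    and off the attacker's black-box search trajectory."""
    model.eval()
    with torch.no_grad():
        y_hat = model(x).argmax(dim=-1)          # the model's current decision
    x_cs = x.clone()
    for _ in range(steps):
        x_cs = x_cs.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_cs), y_hat)
        grad, = torch.autograd.grad(loss, x_cs)
        x_cs = (x_cs - step_size * grad.sign()).clamp(0.0, 1.0)
    with torch.no_grad():
        return model(x_cs)                        # response served to the querier
```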
null | null |
2403.10566
| null | null |
http://arxiv.org/pdf/2403.10566v1
|
2024-03-14T16:51:51Z
|
2024-03-14T16:51:51Z
|
Cooling-Guide Diffusion Model for Battery Cell Arrangement
|
Our study introduces a Generative AI method that employs a cooling-guided diffusion model to optimize the layout of battery cells, a crucial step for enhancing the cooling performance and efficiency of battery thermal management systems. Traditional design processes, which rely heavily on iterative optimization and extensive guesswork, are notoriously slow and inefficient, often leading to suboptimal solutions. In contrast, our innovative method uses a parametric denoising diffusion probabilistic model (DDPM) with classifier and cooling guidance to generate optimized cell layouts with enhanced cooling paths, significantly lowering the maximum temperature of the cells. By incorporating position-based classifier guidance, we ensure the feasibility of generated layouts. Meanwhile, cooling guidance directly optimizes cooling-efficiency, making our approach uniquely effective. When compared to two advanced models, the Tabular Denoising Diffusion Probabilistic Model (TabDDPM) and the Conditional Tabular GAN (CTGAN), our cooling-guided diffusion model notably outperforms both. It is five times more effective than TabDDPM and sixty-six times better than CTGAN across key metrics such as feasibility, diversity, and cooling efficiency. This research marks a significant leap forward in the field, aiming to optimize battery cell layouts for superior cooling efficiency, thus setting the stage for the development of more effective and dependable battery thermal management systems.
|
[
"['Nicholas Sung' 'Liu Zheng' 'Pingfeng Wang' 'Faez Ahmed']"
] |
null | null |
2403.10567
| null | null |
http://arxiv.org/pdf/2403.10567v1
|
2024-03-14T17:45:56Z
|
2024-03-14T17:45:56Z
|
Uncertainty estimation in spatial interpolation of satellite
precipitation with ensemble learning
|
Predictions in the form of probability distributions are crucial for decision-making. Quantile regression enables this within spatial interpolation settings for merging remote sensing and gauge precipitation data. However, ensemble learning of quantile regression algorithms remains unexplored in this context. Here, we address this gap by introducing nine quantile-based ensemble learners and applying them to large precipitation datasets. We employed a novel feature engineering strategy, reducing predictors to distance-weighted satellite precipitation at relevant locations, combined with location elevation. Our ensemble learners include six stacking and three simple methods (mean, median, best combiner), combining six individual algorithms: quantile regression (QR), quantile regression forests (QRF), generalized random forests (GRF), gradient boosting machines (GBM), light gradient boosting machines (LightGBM), and quantile regression neural networks (QRNN). These algorithms serve as both base learners and combiners within different stacking methods. We evaluated performance against QR using quantile scoring functions in a large dataset comprising 15 years of monthly gauge-measured and satellite precipitation in contiguous US (CONUS). Stacking with QR and QRNN yielded the best results across quantile levels of interest (0.025, 0.050, 0.075, 0.100, 0.200, 0.300, 0.400, 0.500, 0.600, 0.700, 0.800, 0.900, 0.925, 0.950, 0.975), surpassing the reference method by 3.91% to 8.95%. This demonstrates the potential of stacking to improve probabilistic predictions in spatial interpolation and beyond.
|
[
"['Georgia Papacharalampous' 'Hristos Tyralis' 'Nikolaos Doulamis'\n 'Anastasios Doulamis']"
] |
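The quantile scoring function used for evaluation in the study above, together with the simplest of the combiners (mean and median of base-learner predictions), can be written in a few lines; the base predictions below are dummy numbers, and the learned stacking combiners are not shown.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) score at level `tau`; lower is better."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Dummy predictions of three base quantile learners at one quantile level.
base_preds = np.array([
    [1.9, 3.1, 4.0],   # QR
    [2.1, 2.9, 4.2],   # QRF
    [2.0, 3.0, 3.9],   # GBM
])
y = np.array([2.0, 3.2, 4.1])
print("mean combiner  :", pinball_loss(y, base_preds.mean(axis=0), tau=0.5))
print("median combiner:", pinball_loss(y, np.median(base_preds, axis=0), tau=0.5))
```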
null | null |
2403.10568
| null | null |
http://arxiv.org/pdf/2403.10568v1
|
2024-03-14T17:47:10Z
|
2024-03-14T17:47:10Z
|
MoPE: Parameter-Efficient and Scalable Multimodal Fusion via Mixture of
Prompt Experts
|
Prompt-tuning has demonstrated parameter-efficiency in fusing unimodal foundation models for multimodal tasks. However, its limited adaptivity and expressiveness lead to suboptimal performance when compared with other tuning methods. In this paper, we address this issue by disentangling the vanilla prompts to adaptively capture dataset-level and instance-level features. Building upon this disentanglement, we introduce the mixture of prompt experts (MoPE) technique to enhance expressiveness. MoPE leverages multimodal pairing priors to route the most effective prompt on a per-instance basis. Compared to vanilla prompting, our MoPE-based conditional prompting exhibits greater expressiveness for multimodal fusion, scaling better with the training data and the overall number of trainable parameters. We also study a regularization term for expert routing, leading to emergent expert specialization, where different experts focus on different concepts, enabling interpretable soft prompting. Extensive experiments across three multimodal datasets demonstrate that our method achieves state-of-the-art results, matching or even surpassing the performance of fine-tuning, while requiring only 0.8% of the trainable parameters. Code will be released: https://github.com/songrise/MoPE.
|
[
"['Ruixiang Jiang' 'Lingbo Liu' 'Changwen Chen']"
] |
null | null |
2403.10569
| null | null |
http://arxiv.org/pdf/2403.10569v1
|
2024-03-14T19:40:58Z
|
2024-03-14T19:40:58Z
|
Achieving Pareto Optimality using Efficient Parameter Reduction for DNNs
in Resource-Constrained Edge Environment
|
This paper proposes an optimization of an existing Deep Neural Network (DNN) that improves its hardware utilization and facilitates on-device training for resource-constrained edge environments. We implement efficient parameter reduction strategies on Xception that shrink the model size without sacrificing accuracy, thus decreasing memory utilization during training. We evaluate our model in two experiments: Caltech-101 image classification and PCB defect detection and compare its performance against the original Xception and lightweight models, EfficientNetV2B1 and MobileNetV2. The results of the Caltech-101 image classification show that our model has a better test accuracy (76.21%) than Xception (75.89%), uses less memory on average (847.9MB) than Xception (874.6MB), and has faster training and inference times. The lightweight models overfit with EfficientNetV2B1 having a 30.52% test accuracy and MobileNetV2 having a 58.11% test accuracy. Both lightweight models have better memory usage than our model and Xception. On the PCB defect detection, our model has the best test accuracy (90.30%), compared to Xception (88.10%), EfficientNetV2B1 (55.25%), and MobileNetV2 (50.50%). MobileNetV2 has the least average memory usage (849.4MB), followed by our model (865.8MB), then EfficientNetV2B1 (874.8MB), and Xception has the highest (893.6MB). We further experiment with pre-trained weights and observe that memory usage decreases thereby showing the benefits of transfer learning. A Pareto analysis of the models' performance shows that our optimized model architecture satisfies accuracy and low memory utilization objectives.
|
[
"['Atah Nuh Mih' 'Alireza Rahimi' 'Asfia Kawnine' 'Francis Palma'\n 'Monica Wachowicz' 'Rickey Dubay' 'Hung Cao']"
] |
null | null |
2403.10571
| null | null |
http://arxiv.org/pdf/2403.10571v1
|
2024-03-14T20:32:31Z
|
2024-03-14T20:32:31Z
|
JaxDecompiler: Redefining Gradient-Informed Software Design
|
Among numerical libraries capable of computing gradient descent optimization, JAX stands out by offering more features, accelerated by an intermediate representation known as Jaxpr language. However, editing the Jaxpr code is not directly possible. This article introduces JaxDecompiler, a tool that transforms any JAX function into an editable Python code, especially useful for editing the JAX function generated by the gradient function. JaxDecompiler simplifies the processes of reverse engineering, understanding, customizing, and interoperability of software developed by JAX. We highlight its capabilities, emphasize its practical applications especially in deep learning and more generally gradient-informed software, and demonstrate that the decompiled code speed performance is similar to the original.
|
[
"['Pierrick Pochelu']"
] |
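For context, the Jaxpr intermediate representation that JaxDecompiler targets can be inspected (read-only) with standard JAX; the decompiler's own API is not shown here, and the toy loss below is an assumption for illustration.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    """Tiny least-squares loss whose gradient function we inspect."""
    return jnp.mean((jnp.dot(x, w) - y) ** 2)

grad_fn = jax.grad(loss)                       # gradient with respect to w

w = jnp.ones(3)
x = jnp.arange(6.0).reshape(2, 3)
y = jnp.array([1.0, 2.0])

# The Jaxpr of the gradient function: the (non-editable) program that a tool
# like JaxDecompiler would translate back into editable Python.
print(jax.make_jaxpr(grad_fn)(w, x, y))
print(grad_fn(w, x, y))
```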
null | null |
2403.10572
| null | null |
http://arxiv.org/pdf/2403.10572v1
|
2024-03-15T02:25:45Z
|
2024-03-15T02:25:45Z
|
Discovering Invariant Neighborhood Patterns for Heterophilic Graphs
|
This paper studies the problem of distribution shifts on non-homophilous graphs. Most existing graph neural network methods rely on the homophily assumption that nodes from the same class are more likely to be linked. However, such assumptions of homophily do not always hold in real-world graphs, which leads to more complex distribution shifts unaccounted for in previous methods. The distribution shifts of neighborhood patterns are much more diverse on non-homophilous graphs. We propose a novel Invariant Neighborhood Pattern Learning (INPL) framework to alleviate the distribution shift problem on non-homophilous graphs. Specifically, we propose the Adaptive Neighborhood Propagation (ANP) module to capture adaptive neighborhood information, which could alleviate the neighborhood pattern distribution shift problem on non-homophilous graphs. We propose an Invariant Non-Homophilous Graph Learning (INHGL) module to constrain the ANP and learn invariant graph representations on non-homophilous graphs. Extensive experimental results on real-world non-homophilous graphs show that INPL could achieve state-of-the-art performance for learning on large non-homophilous graphs.
|
[
"['Ruihao Zhang' 'Zhengyu Chen' 'Teng Xiao' 'Yueyang Wang' 'Kun Kuang']"
] |
null | null |
2403.10573
| null | null |
http://arxiv.org/pdf/2403.10573v2
|
2024-07-07T13:36:22Z
|
2024-03-15T02:35:36Z
|
Medical Unlearnable Examples: Securing Medical Data from Unauthorized
Training via Sparsity-Aware Local Masking
|
The rapid expansion of AI in healthcare has led to a surge in medical data generation and storage, boosting medical AI development. However, fears of unauthorized use, like training commercial AI models, hinder researchers from sharing their valuable datasets. To encourage data sharing, one promising solution is to introduce imperceptible noise into the data. This method aims to safeguard the data against unauthorized training by inducing degradation in the generalization ability of the trained model. However, they are not effective and efficient when applied to medical data, mainly due to the ignorance of the sparse nature of medical images. To address this problem, we propose the Sparsity-Aware Local Masking (SALM) method, a novel approach that selectively perturbs significant pixel regions rather than the entire image as previously. This simple yet effective approach, by focusing on local areas, significantly narrows down the search space for disturbances and fully leverages the characteristics of sparsity. Our extensive experiments across various datasets and model architectures demonstrate that SALM effectively prevents unauthorized training of different models and outperforms previous SoTA data protection methods.
|
[
"['Weixiang Sun' 'Yixin Liu' 'Zhiling Yan' 'Kaidi Xu' 'Lichao Sun']"
] |
null | null |
2403.10576
| null | null |
http://arxiv.org/pdf/2403.10576v2
|
2024-04-02T08:46:42Z
|
2024-03-15T05:35:02Z
|
Ignore Me But Don't Replace Me: Utilizing Non-Linguistic Elements for
Pretraining on the Cybersecurity Domain
|
Cybersecurity information is often technically complex and relayed through unstructured text, making automation of cyber threat intelligence highly challenging. For such text domains that involve high levels of expertise, pretraining on in-domain corpora has been a popular method for language models to obtain domain expertise. However, cybersecurity texts often contain non-linguistic elements (such as URLs and hash values) that could be unsuitable for established pretraining methodologies. Previous work in other domains has removed or filtered such text as noise, but the effectiveness of these methods has not been investigated, especially in the cybersecurity domain. We propose different pretraining methodologies and evaluate their effectiveness through downstream tasks and probing tasks. Our proposed strategy (selective MLM and jointly training NLE token classification) outperforms the commonly taken approach of replacing non-linguistic elements (NLEs). We use our domain-customized methodology to train CyBERTuned, a cybersecurity domain language model that outperforms other cybersecurity PLMs on most tasks.
|
[
"['Eugene Jang' 'Jian Cui' 'Dayeon Yim' 'Youngjin Jin' 'Jin-Woo Chung'\n 'Seungwon Shin' 'Yongjae Lee']"
] |
null | null |
2403.10578
| null | null |
http://arxiv.org/pdf/2403.10578v1
|
2024-03-15T09:30:29Z
|
2024-03-15T09:30:29Z
|
Generative Modelling of Stochastic Rotating Shallow Water Noise
|
In recent work, the authors have developed a generic methodology for calibrating the noise in fluid dynamics stochastic partial differential equations where the stochasticity was introduced to parametrize subgrid-scale processes. The stochastic parameterization of sub-grid scale processes is required in the estimation of uncertainty in weather and climate predictions, to represent systematic model errors arising from subgrid-scale fluctuations. The previous methodology used a principal component analysis (PCA) technique based on the ansatz that the increments of the stochastic parametrization are normally distributed. In this paper, the PCA technique is replaced by a generative model technique. This enables us to avoid imposing additional constraints on the increments. The methodology is tested on a stochastic rotating shallow water model with the elevation variable of the model used as input data. The numerical simulations show that the noise is indeed non-Gaussian. The generative modelling technology gives good RMSE, CRPS score and forecast rank histogram results.
|
[
"['Dan Crisan' 'Oana Lang' 'Alexander Lobbe']"
] |
null | null |
2403.10581
| null | null |
http://arxiv.org/pdf/2403.10581v2
|
2024-03-22T16:00:24Z
|
2024-03-15T13:25:09Z
|
Large Language Model-informed ECG Dual Attention Network for Heart
Failure Risk Prediction
|
Heart failure (HF) poses a significant public health challenge, with a rising global mortality rate. Early detection and prevention of HF could significantly reduce its impact. We introduce a novel methodology for predicting HF risk using 12-lead electrocardiograms (ECGs). We present a novel, lightweight dual-attention ECG network designed to capture complex ECG features essential for early HF risk prediction, despite the notable imbalance between low- and high-risk groups. This network incorporates a cross-lead attention module and twelve lead-specific temporal attention modules, focusing on cross-lead interactions and each lead's local dynamics. To further alleviate model overfitting, we leverage a large language model (LLM) with a public ECG-Report dataset for pretraining on an ECG-report alignment task. The network is then fine-tuned for HF risk prediction using two specific cohorts from the UK Biobank study, focusing on patients with hypertension (UKB-HYP) and those who have had a myocardial infarction (UKB-MI). The results reveal that LLM-informed pre-training substantially enhances HF risk prediction in these cohorts. The dual-attention design improves not only interpretability but also predictive accuracy, outperforming existing competitive methods with C-index scores of 0.6349 for UKB-HYP and 0.5805 for UKB-MI. This demonstrates our method's potential in advancing HF risk assessment with complex clinical ECG data.
|
[
"['Chen Chen' 'Lei Li' 'Marcel Beetz' 'Abhirup Banerjee' 'Ramneek Gupta'\n 'Vicente Grau']"
] |
null | null |
2403.10582
| null | null |
http://arxiv.org/pdf/2403.10582v1
|
2024-03-15T15:20:21Z
|
2024-03-15T15:20:21Z
|
How Suboptimal is Training rPPG Models with Videos and Targets from
Different Body Sites?
|
Remote camera measurement of the blood volume pulse via photoplethysmography (rPPG) is a compelling technology for scalable, low-cost, and accessible assessment of cardiovascular information. Neural networks currently provide the state-of-the-art for this task and supervised training or fine-tuning is an important step in creating these models. However, most current models are trained on facial videos using contact PPG measurements from the fingertip as targets/ labels. One of the reasons for this is that few public datasets to date have incorporated contact PPG measurements from the face. Yet there is copious evidence that the PPG signals at different sites on the body have very different morphological features. Is training a facial video rPPG model using contact measurements from another site on the body suboptimal? Using a recently released unique dataset with synchronized contact PPG and video measurements from both the hand and face, we can provide precise and quantitative answers to this question. We obtain up to 40 % lower mean squared errors between the waveforms of the predicted and the ground truth PPG signals using state-of-the-art neural models when using PPG signals from the forehead compared to using PPG signals from the fingertip. We also show qualitatively that the neural models learn to predict the morphology of the ground truth PPG signal better when trained on the forehead PPG signals. However, while models trained from the forehead PPG produce a more faithful waveform, models trained from a finger PPG do still learn the dominant frequency (i.e., the heart rate) well.
|
[
"['Björn Braun' 'Daniel McDuff' 'Christian Holz']"
] |
null | null |
2403.10585
| null | null |
http://arxiv.org/pdf/2403.10585v1
|
2024-03-15T16:38:47Z
|
2024-03-15T16:38:47Z
|
Solving General Noisy Inverse Problem via Posterior Sampling: A Policy
Gradient Viewpoint
|
Solving image inverse problems (e.g., super-resolution and inpainting) requires generating a high-fidelity image that matches the given input (the low-resolution image or the masked image). By using the input image as guidance, we can leverage a pretrained diffusion generative model to solve a wide range of image inverse tasks without task-specific model fine-tuning. To precisely estimate the guidance score function of the input image, we propose Diffusion Policy Gradient (DPG), a tractable computation method that views the intermediate noisy images as policies and the target image as the states selected by the policy. Experiments show that our method is robust to both Gaussian and Poisson noise degradation on multiple linear and non-linear inverse tasks, resulting in higher image restoration quality on the FFHQ, ImageNet, and LSUN datasets.
|
[
"['Haoyue Tang' 'Tian Xie' 'Aosong Feng' 'Hanyu Wang' 'Chenyang Zhang'\n 'Yang Bai']"
] |
null | null |
2403.10586
| null | null |
http://arxiv.org/pdf/2403.10586v1
|
2024-03-15T17:03:45Z
|
2024-03-15T17:03:45Z
|
From Algorithms to Outcomes: Reviewing AI's Role in Non-Muscle-Invasive
Bladder Cancer Recurrence Prediction
|
Bladder cancer, the leading urinary tract cancer, is responsible for 15 deaths daily in the UK. This cancer predominantly manifests as non-muscle-invasive bladder cancer (NMIBC), characterised by tumours not yet penetrating the muscle layer of the bladder wall. NMIBC is plagued by a very high recurrence rate of 70-80% and hence entails the costliest treatments. Current tools for predicting recurrence use scoring systems that overestimate risk and have poor accuracy. Inaccurate and delayed prediction of recurrence significantly elevates the likelihood of mortality. Accurate prediction of recurrence is hence vital for cost-effective management and treatment planning. Machine learning (ML) techniques have emerged as a promising approach for predicting NMIBC recurrence by leveraging molecular and clinical data. This review provides a comprehensive analysis of ML approaches for predicting NMIBC recurrence. Our systematic evaluation demonstrates the potential of diverse ML algorithms and markers, including radiomic, clinical, histopathological, genomic, and biochemical data, in enhancing recurrence prediction and personalised patient management. We summarise various prediction tasks, data modalities, and ML models used, highlighting their performance, limitations, and future directions, including the incorporation of cost-effectiveness. Challenges related to generalisability and interpretability of artificial intelligence models are discussed, emphasising the need for collaborative efforts and robust datasets.
|
[
"['Saram Abbas' 'Rishad Shafik' 'Naeem Soomro' 'Rakesh Heer'\n 'Kabita Adhikari']"
] |
null | null |
2403.10603
| null | null |
http://arxiv.org/pdf/2403.10603v1
|
2024-03-15T18:00:11Z
|
2024-03-15T18:00:11Z
|
SurvRNC: Learning Ordered Representations for Survival Prediction using
Rank-N-Contrast
|
Predicting the likelihood of survival is of paramount importance for individuals diagnosed with cancer as it provides invaluable information regarding prognosis at an early stage. This knowledge enables the formulation of effective treatment plans that lead to improved patient outcomes. In the past few years, deep learning models have provided a feasible solution for assessing medical images, electronic health records, and genomic data to estimate cancer risk scores. However, these models often fall short of their potential because they struggle to learn regression-aware feature representations. In this study, we propose the Survival Rank-N Contrast (SurvRNC) method, which introduces a loss function as a regularizer to obtain an ordered representation based on the survival times. This function can handle censored data and can be incorporated into any survival model to ensure that the learned representation is ordinal. The model was extensively evaluated on the HEad & NeCK TumOR (HECKTOR) segmentation and outcome-prediction task dataset. We demonstrate that using the SurvRNC method for training can achieve higher performance on different deep survival models. Additionally, it outperforms state-of-the-art methods by 3.6% on the concordance index. The code is publicly available at https://github.com/numanai/SurvRNC
|
[
"['Numan Saeed' 'Muhammad Ridzuan' 'Fadillah Adamsyah Maani'\n 'Hussain Alasmawi' 'Karthik Nandakumar' 'Mohammad Yaqub']"
] |
null | null |
2403.10610
| null | null |
http://arxiv.org/pdf/2403.10610v1
|
2024-03-15T18:13:48Z
|
2024-03-15T18:13:48Z
|
Sequential Monte Carlo for Inclusive KL Minimization in Amortized
Variational Inference
|
For training an encoder network to perform amortized variational inference, the Kullback-Leibler (KL) divergence from the exact posterior to its approximation, known as the inclusive or forward KL, is an increasingly popular choice of variational objective due to the mass-covering property of its minimizer. However, minimizing this objective is challenging. A popular existing approach, Reweighted Wake-Sleep (RWS), suffers from heavily biased gradients and a circular pathology that results in highly concentrated variational distributions. As an alternative, we propose SMC-Wake, a procedure for fitting an amortized variational approximation that uses likelihood-tempered sequential Monte Carlo samplers to estimate the gradient of the inclusive KL divergence. We propose three gradient estimators, all of which are asymptotically unbiased in the number of iterations and two of which are strongly consistent. Our method interleaves stochastic gradient updates, SMC samplers, and iterative improvement to an estimate of the normalizing constant to reduce bias from self-normalization. In experiments with both simulated and real datasets, SMC-Wake fits variational distributions that approximate the posterior more accurately than existing methods.
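For orientation, the objective and its gradient discussed above can be written in the standard textbook form below; this is the generic expression of the inclusive (forward) KL for amortized inference, not necessarily the paper's exact notation.

$$
\mathcal{L}(\phi) = \mathbb{E}_{p(x)}\big[\mathrm{KL}\big(p(z \mid x)\,\|\,q_\phi(z \mid x)\big)\big],
\qquad
\nabla_\phi \mathcal{L}(\phi) = -\,\mathbb{E}_{p(x)\,p(z \mid x)}\big[\nabla_\phi \log q_\phi(z \mid x)\big],
$$

so an unbiased gradient estimate only requires samples drawn (approximately) from the posterior $p(z \mid x)$, which is what the likelihood-tempered SMC samplers provide.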
|
[
"['Declan McNamara' 'Jackson Loper' 'Jeffrey Regier']"
] |
null | null |
2403.10615
| null | null |
http://arxiv.org/pdf/2403.10615v2
|
2024-03-25T09:42:13Z
|
2024-03-15T18:26:33Z
|
LightIt: Illumination Modeling and Control for Diffusion Models
|
We introduce LightIt, a method for explicit illumination control for image generation. Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation such as setting the overall mood or cinematic appearance. To overcome these limitations, we propose to condition the generation on shading and normal maps. We model the lighting with single bounce shading, which includes cast shadows. We first train a shading estimation module to generate a dataset of real-world images and shading pairs. Then, we train a control network using the estimated shading and normals as input. Our method demonstrates high-quality image generation and lighting control in numerous scenes. Additionally, we use our generated dataset to train an identity-preserving relighting model, conditioned on an image and a target shading. Our method is the first that enables the generation of images with controllable, consistent lighting and performs on par with specialized relighting state-of-the-art methods.
|
[
"['Peter Kocsis' 'Julien Philip' 'Kalyan Sunkavalli' 'Matthias Nießner'\n 'Yannick Hold-Geoffroy']"
] |
null | null |
2403.10616
| null | null |
http://arxiv.org/pdf/2403.10616v1
|
2024-03-15T18:26:51Z
|
2024-03-15T18:26:51Z
|
DiPaCo: Distributed Path Composition
|
Progress in machine learning (ML) has been fueled by scaling neural network models. This scaling has been enabled by ever more heroic feats of engineering, necessary for accommodating ML approaches that require high-bandwidth communication between devices working in parallel. In this work, we propose a co-designed modular architecture and training approach for ML models, dubbed DIstributed PAth COmposition (DiPaCo). During training, DiPaCo distributes computation by paths through a set of shared modules. Together with a Local-SGD-inspired optimization (DiLoCo) that keeps modules in sync with drastically reduced communication, our approach facilitates training across poorly connected and heterogeneous workers, with a design that ensures robustness to worker failures and preemptions. At inference time, only a single path needs to be executed for each input, without the need for any model compression. We consider this approach a first prototype towards a new paradigm of large-scale learning, one that is less synchronous and more modular. Our experiments on the widely used C4 benchmark show that, for the same number of training steps but less wall-clock time, DiPaCo exceeds the performance of a 1 billion-parameter dense transformer language model by choosing one of 256 possible paths, each with a size of 150 million parameters.
|
[
"['Arthur Douillard' 'Qixuan Feng' 'Andrei A. Rusu' 'Adhiguna Kuncoro'\n 'Yani Donchev' 'Rachita Chhaparia' 'Ionel Gog' \"Marc'Aurelio Ranzato\"\n 'Jiajun Shen' 'Arthur Szlam']"
] |
null | null |
2403.10618
| null | null |
http://arxiv.org/pdf/2403.10618v1
|
2024-03-15T18:30:06Z
|
2024-03-15T18:30:06Z
|
Limits of Approximating the Median Treatment Effect
|
Average Treatment Effect (ATE) estimation is a well-studied problem in causal inference. However, it does not necessarily capture the heterogeneity in the data, and several approaches have been proposed to tackle the issue, including estimating the Quantile Treatment Effects. In the finite population setting containing $n$ individuals, with treatment and control values denoted by the potential outcome vectors $\mathbf{a}, \mathbf{b}$, much of the prior work focused on estimating $\mathrm{median}(\mathbf{a}) - \mathrm{median}(\mathbf{b})$, where $\mathrm{median}(\mathbf{x})$ denotes the median value in the sorted ordering of all the values in vector $\mathbf{x}$. It is known that estimating the difference of medians is easier than the desired estimand of $\mathrm{median}(\mathbf{a-b})$, called the Median Treatment Effect (MTE). The fundamental problem of causal inference -- for every individual $i$, we can only observe one of the potential outcome values, i.e., either the value $a_i$ or $b_i$, but not both -- makes estimating MTE particularly challenging. In this work, we argue that MTE is not estimable and detail a novel notion of approximation that relies on the sorted order of the values in $\mathbf{a-b}$. Next, we identify a quantity called variability that exactly captures the complexity of MTE estimation. By drawing connections to instance-optimality studied in theoretical computer science, we show that every algorithm for estimating the MTE obtains an approximation error that is no better than the error of an algorithm that computes variability. Finally, we provide a simple linear-time algorithm for computing the variability exactly. Unlike much prior work, a particular highlight of our work is that we make no assumptions about how the potential outcome vectors are generated or how they are correlated, except that the potential outcome values are $k$-ary, i.e., take one of $k$ discrete values.
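A tiny numeric example (not from the paper) makes the distinction concrete: the difference of medians and the median of differences can disagree even on three units.

```python
# Three individuals with hypothetical potential outcomes under treatment (a) and control (b).
import numpy as np

a = np.array([0, 1, 10])            # outcomes under treatment
b = np.array([1, 0, 2])             # outcomes under control

print(np.median(a) - np.median(b))  # difference of medians: 1.0 - 1.0 = 0.0
print(np.median(a - b))             # median treatment effect on [-1, 1, 8]: 1.0
```

In practice only one of $a_i$ or $b_i$ is observed per individual, which is why the second quantity is the hard one to estimate.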
|
[
"['Raghavendra Addanki' 'Siddharth Bhandari']"
] |
null | null |
2403.10638
| null | null |
http://arxiv.org/pdf/2403.10638v1
|
2024-03-15T19:12:28Z
|
2024-03-15T19:12:28Z
|
A resource-constrained stochastic scheduling algorithm for homeless
street outreach and gleaning edible food
|
We developed a common algorithmic solution addressing the problem of resource-constrained outreach encountered by social change organizations with different missions and operations: Breaking Ground -- an organization that helps individuals experiencing homelessness in New York transition to permanent housing, and Leket -- the national food bank of Israel that rescues food from farms and elsewhere to feed the hungry. Specifically, we developed an estimation and optimization approach for partially-observed episodic restless bandits under $k$-step transitions. The results show that our Thompson sampling with Markov chain recovery (via Stein variational gradient descent) algorithm significantly outperforms baselines for the problems of both organizations. We carried out this work in a prospective manner with the express goal of devising a flexible-enough but also useful-enough solution that can help overcome a lack of sustainable impact in data science for social good.
|
[
"['Conor M. Artman' 'Aditya Mate' 'Ezinne Nwankwo' 'Aliza Heching'\n 'Tsuyoshi Idé' 'Jiří Navrátil' 'Karthikeyan Shanmugam' 'Wei Sun'\n 'Kush R. Varshney' 'Lauri Goldkind' 'Gidi Kroch' 'Jaclyn Sawyer'\n 'Ian Watson']"
] |
null | null |
2403.10642
| null | null |
http://arxiv.org/pdf/2403.10642v2
|
2024-06-12T05:27:43Z
|
2024-03-15T19:21:27Z
|
Using Uncertainty Quantification to Characterize and Improve
Out-of-Domain Learning for PDEs
|
Existing work in scientific machine learning (SciML) has shown that data-driven learning of solution operators can provide a fast approximate alternative to classical numerical partial differential equation (PDE) solvers. Of these, Neural Operators (NOs) have emerged as particularly promising. We observe that several uncertainty quantification (UQ) methods for NOs fail for test inputs that are even moderately out-of-domain (OOD), even when the model approximates the solution well for in-domain tasks. To address this limitation, we show that ensembling several NOs can identify high-error regions and provide good uncertainty estimates that are well-correlated with prediction errors. Based on this, we propose a cost-effective alternative, DiverseNO, that mimics the properties of the ensemble by encouraging diverse predictions from its multiple heads in the last feed-forward layer. We then introduce Operator-ProbConserv, a method that uses these well-calibrated UQ estimates within the ProbConserv framework to update the model. Our empirical results show that Operator-ProbConserv enhances OOD model performance for a variety of challenging PDE problems and satisfies physical constraints such as conservation laws.
|
[
"['S. Chandra Mouli' 'Danielle C. Maddix' 'Shima Alizadeh' 'Gaurav Gupta'\n 'Andrew Stuart' 'Michael W. Mahoney' 'Yuyang Wang']"
] |
null | null |
2403.10646
| null | null |
http://arxiv.org/pdf/2403.10646v1
|
2024-03-15T19:27:48Z
|
2024-03-15T19:27:48Z
|
A Survey of Source Code Representations for Machine Learning-Based
Cybersecurity Tasks
|
Machine learning techniques for cybersecurity-related software engineering tasks are becoming increasingly popular. The representation of source code is a key component of these techniques that can impact how well the model learns the features of the source code. With an increasing number of these techniques being developed, it is valuable to see the current state of the field to better understand what exists and what is still missing. This paper presents a study of these existing ML-based approaches and demonstrates what types of representations were used for different cybersecurity tasks and programming languages. Additionally, we study what types of models are used with different representations. We have found that graph-based representations are the most popular category of representation, and Tokenizers and Abstract Syntax Trees (ASTs) are the two most popular representations overall. We also found that the most popular cybersecurity task is vulnerability detection, and the language that is covered by the most techniques is C. Finally, we found that sequence-based models are the most popular category of models, and Support Vector Machines (SVMs) are the most popular model overall.
|
[
"['Beatrice Casey' 'Joanna C. S. Santos' 'George Perry']"
] |
null | null |
2403.10650
| null | null |
http://arxiv.org/pdf/2403.10650v1
|
2024-03-15T19:35:10Z
|
2024-03-15T19:35:10Z
|
PALM: Pushing Adaptive Learning Rate Mechanisms for Continual Test-Time
Adaptation
|
Real-world vision models in dynamic environments face rapid shifts in domain distributions, leading to decreased recognition performance. Continual test-time adaptation (CTTA) directly adjusts a pre-trained source discriminative model to these changing domains using test data. A highly effective CTTA method involves applying layer-wise adaptive learning rates, and selectively adapting pre-trained layers. However, it suffers from the poor estimation of domain shift and the inaccuracies arising from the pseudo-labels. In this work, we aim to overcome these limitations by identifying layers through the quantification of model prediction uncertainty without relying on pseudo-labels. We utilize the magnitude of gradients as a metric, calculated by backpropagating the KL divergence between the softmax output and a uniform distribution, to select layers for further adaptation. Subsequently, for the parameters exclusively belonging to these selected layers, with the remaining ones frozen, we evaluate their sensitivity in order to approximate the domain shift, followed by adjusting their learning rates accordingly. Overall, this approach leads to a more robust and stable optimization than prior approaches. We conduct extensive image classification experiments on CIFAR-10C, CIFAR-100C, and ImageNet-C and demonstrate the efficacy of our method against standard benchmarks and prior methods.
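The layer-selection step described above can be sketched as follows: backpropagate the KL divergence between the softmax output and a uniform distribution, then rank parameters by gradient magnitude. The function below is an illustrative reading of that description, not the authors' code.

```python
# Hedged sketch: score layers by the gradient magnitude of KL(softmax(logits) || uniform).
# No labels or pseudo-labels are needed; names and usage are illustrative.
import torch
import torch.nn.functional as F

def kl_gradient_scores(model, x):
    logits = model(x)
    p = F.softmax(logits, dim=-1)
    log_u = torch.full_like(p, 1.0 / p.shape[-1]).log()
    kl = (p * (p.clamp_min(1e-12).log() - log_u)).sum(dim=-1).mean()
    model.zero_grad()
    kl.backward()
    return {
        name: param.grad.norm().item()
        for name, param in model.named_parameters()
        if param.grad is not None
    }  # larger score -> stronger candidate layer for further adaptation
```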
|
[
"['Sarthak Kumar Maharana' 'Baoming Zhang' 'Yunhui Guo']"
] |
null | null |
2403.10652
| null | null |
http://arxiv.org/pdf/2403.10652v1
|
2024-03-15T19:36:56Z
|
2024-03-15T19:36:56Z
|
Improving Fairness in Credit Lending Models using Subgroup Threshold
Optimization
|
In an effort to improve the accuracy of credit lending decisions, many financial institutions are now using predictions from machine learning models. While such predictions enjoy many advantages, recent research has shown that the predictions have the potential to be biased and unfair towards certain subgroups of the population. To combat this, several techniques have been introduced to help remove the bias and improve the overall fairness of the predictions. We introduce a new fairness technique, called \textit{Subgroup Threshold Optimizer} (\textit{STO}), that does not require any alterations to the input training data, nor does it require any changes to the underlying machine learning algorithm, and thus can be used with any existing machine learning pipeline. STO works by optimizing the classification thresholds for individual subgroups in order to minimize the overall discrimination score between them. Our experiments on a real-world credit lending dataset show that STO can reduce gender discrimination by over 90%.
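A minimal sketch of the threshold-optimization idea follows, assuming the discrimination score is the spread of positive-prediction rates across subgroups (the paper's exact score may differ); it grid-searches one threshold per subgroup.

```python
# Illustrative subgroup threshold search; the discrimination score used here
# (max - min positive-prediction rate across subgroups) is an assumption.
import numpy as np
from itertools import product

def fit_subgroup_thresholds(scores, groups, grid=np.linspace(0.1, 0.9, 9)):
    subgroups = np.unique(groups)
    best_thresholds, best_gap = None, np.inf
    for combo in product(grid, repeat=len(subgroups)):
        rates = [np.mean(scores[groups == g] >= t) for g, t in zip(subgroups, combo)]
        gap = max(rates) - min(rates)
        if gap < best_gap:
            best_thresholds, best_gap = dict(zip(subgroups, combo)), gap
    return best_thresholds

scores = np.random.rand(200)                       # model scores
groups = np.random.choice(["A", "B"], size=200)    # protected subgroup labels
print(fit_subgroup_thresholds(scores, groups))
```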
|
[
"['Cecilia Ying' 'Stephen Thomas']"
] |
null | null |
2403.10658
| null | null |
http://arxiv.org/pdf/2403.10658v1
|
2024-03-15T19:54:10Z
|
2024-03-15T19:54:10Z
|
InterLUDE: Interactions between Labeled and Unlabeled Data to Enhance
Semi-Supervised Learning
|
Semi-supervised learning (SSL) seeks to enhance task performance by training on both labeled and unlabeled data. Mainstream SSL image classification methods mostly optimize a loss that additively combines a supervised classification objective with a regularization term derived solely from unlabeled data. This formulation neglects the potential for interaction between labeled and unlabeled images. In this paper, we introduce InterLUDE, a new approach to enhance SSL made of two parts that each benefit from labeled-unlabeled interaction. The first part, embedding fusion, interpolates between labeled and unlabeled embeddings to improve representation learning. The second part is a new loss, grounded in the principle of consistency regularization, that aims to minimize discrepancies in the model's predictions between labeled versus unlabeled inputs. Experiments on standard closed-set SSL benchmarks and a medical SSL task with an uncurated unlabeled set show clear benefits to our approach. On the STL-10 dataset with only 40 labels, InterLUDE achieves 3.2% error rate, while the best previous method reports 14.9%.
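The embedding-fusion part can be pictured as a simple interpolation between labeled and unlabeled embeddings before the classifier head; the Beta-sampled mixing coefficient below is an illustrative choice, not necessarily the scheme used in the paper.

```python
# Hedged sketch of labeled-unlabeled embedding fusion; the mixing rule is an assumption.
import torch

def fuse_embeddings(z_labeled, z_unlabeled, alpha=0.9):
    lam = torch.distributions.Beta(alpha, alpha).sample()   # random mixing coefficient
    n = min(len(z_labeled), len(z_unlabeled))
    return lam * z_labeled[:n] + (1.0 - lam) * z_unlabeled[:n]  # fused embeddings

print(fuse_embeddings(torch.randn(8, 128), torch.randn(32, 128)).shape)  # torch.Size([8, 128])
```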
|
[
"['Zhe Huang' 'Xiaowei Yu' 'Dajiang Zhu' 'Michael C. Hughes']"
] |
null | null |
2403.10663
| null | null |
http://arxiv.org/pdf/2403.10663v1
|
2024-03-15T20:12:41Z
|
2024-03-15T20:12:41Z
|
Not Just Change the Labels, Learn the Features: Watermarking Deep Neural
Networks with Multi-View Data
|
With the increasing prevalence of Machine Learning as a Service (MLaaS) platforms, there is a growing focus on deep neural network (DNN) watermarking techniques. These methods are used to facilitate the verification of ownership for a target DNN model to protect intellectual property. One of the most widely employed watermarking techniques involves embedding a trigger set into the source model. Unfortunately, existing methodologies based on trigger sets are still susceptible to functionality-stealing attacks, potentially enabling adversaries to steal the functionality of the source model without a reliable means of verifying ownership. In this paper, we first examine trigger set-based watermarking methods from a feature learning perspective. Specifically, we demonstrate that by selecting data exhibiting multiple features, also referred to as $\textit{multi-view data}$, it becomes feasible to effectively defend against functionality-stealing attacks. Based on this perspective, we introduce a novel watermarking technique based on Multi-view dATa, called MAT, for efficiently embedding watermarks within DNNs. This approach involves constructing a trigger set with multi-view data and incorporating a simple feature-based regularization method for training the source model. We validate our method across various benchmarks and demonstrate its efficacy in defending against model extraction attacks, surpassing relevant baselines by a significant margin.
|
[
"['Yuxuan Li' 'Sarthak Kumar Maharana' 'Yunhui Guo']"
] |
null | null |
2403.10671
| null | null |
http://arxiv.org/pdf/2403.10671v1
|
2024-03-15T20:47:39Z
|
2024-03-15T20:47:39Z
|
Hessian-Free Laplace in Bayesian Deep Learning
|
The Laplace approximation (LA) of the Bayesian posterior is a Gaussian distribution centered at the maximum a posteriori estimate. Its appeal in Bayesian deep learning stems from the ability to quantify uncertainty post-hoc (i.e., after standard network parameter optimization), the ease of sampling from the approximate posterior, and the analytic form of model evidence. However, an important computational bottleneck of LA is the necessary step of calculating and inverting the Hessian matrix of the log posterior. The Hessian may be approximated in a variety of ways, with quality varying with a number of factors including the network, dataset, and inference task. In this paper, we propose an alternative framework that sidesteps Hessian calculation and inversion. The Hessian-free Laplace (HFL) approximation uses curvature of both the log posterior and network prediction to estimate its variance. Only two point estimates are needed: the standard maximum a posteriori parameter and the optimal parameter under a loss regularized by the network prediction. We show that, under standard assumptions of LA in Bayesian deep learning, HFL targets the same variance as LA, and can be efficiently amortized in a pre-trained network. Experiments demonstrate comparable performance to that of exact and approximate Hessians, with excellent coverage for in-between uncertainty.
|
[
"['James McInerney' 'Nathan Kallus']"
] |
null | null |
2403.10672
| null | null |
http://arxiv.org/pdf/2403.10672v1
|
2024-03-15T20:48:41Z
|
2024-03-15T20:48:41Z
|
Riemannian Flow Matching Policy for Robot Motion Learning
|
We introduce Riemannian Flow Matching Policies (RFMP), a novel model for learning and synthesizing robot visuomotor policies. RFMP leverages the efficient training and inference capabilities of flow matching methods. By design, RFMP inherits the strengths of flow matching: the ability to encode high-dimensional multimodal distributions, commonly encountered in robotic tasks, and a very simple and fast inference process. We demonstrate the applicability of RFMP to both state-based and vision-conditioned robot motion policies. Notably, as the robot state resides on a Riemannian manifold, RFMP inherently incorporates geometric awareness, which is crucial for realistic robotic tasks. To evaluate RFMP, we conduct two proof-of-concept experiments, comparing its performance against Diffusion Policies. Although both approaches successfully learn the considered tasks, our results show that RFMP provides smoother action trajectories with significantly lower inference times.
|
[
"['Max Braun' 'Noémie Jaquier' 'Leonel Rozo' 'Tamim Asfour']"
] |
null | null |
2403.10682
| null | null |
http://arxiv.org/pdf/2403.10682v2
|
2024-03-19T17:37:39Z
|
2024-03-15T21:03:34Z
|
Evaluation of GlassNet for physics-informed machine learning of glass
stability and glass-forming ability
|
Glasses form the basis of many modern applications and also hold great potential for future medical and environmental applications. However, their structural complexity and large composition space make design and optimization challenging for certain applications. Of particular importance for glass processing is an estimate of a given composition's glass-forming ability (GFA). However, there remain many open questions regarding the physical mechanisms of glass formation, especially in oxide glasses. It is apparent that a proxy for GFA would be highly useful in glass processing and design, but identifying such a surrogate property has proven difficult. Here, we explore the application of an open-source pre-trained neural network (NN) model, GlassNet, which can predict the characteristic temperatures necessary to compute glass stability (GS), and assess the feasibility of using these physics-informed ML (PIML)-predicted GS parameters to estimate GFA. In doing so, we track the uncertainties at each step of the computation -- from the original ML prediction errors, to the compounding of errors during GS estimation, and finally to the final estimation of GFA. While GlassNet exhibits reasonable accuracy on all individual properties, we observe a large compounding of error in the combination of these individual predictions for the prediction of GS, finding that random forest models offer similar accuracy to GlassNet. We also break down the ML performance on different glass families and find that the error in GS prediction is correlated with the error in crystallization peak temperature prediction. Lastly, we utilize this finding to assess the relationship between top-performing GS parameters and GFA for two ternary glass systems: sodium borosilicate and sodium iron phosphate glasses. We conclude that to obtain true ML predictive capability of GFA, significantly more data needs to be collected.
|
[
"['Sarah I. Allec' 'Xiaonan Lu' 'Daniel R. Cassar' 'Xuan T. Nguyen'\n 'Vinay I. Hegde' 'Thiruvillamalai Mahadevan' 'Miroslava Peterson'\n 'Jincheng Du' 'Brian J. Riley' 'John D. Vienna' 'James E. Saal']"
] |
null | null |
2403.10686
| null | null |
http://arxiv.org/pdf/2403.10686v1
|
2024-03-15T21:14:44Z
|
2024-03-15T21:14:44Z
|
AutoHLS: Learning to Accelerate Design Space Exploration for HLS Designs
|
High-level synthesis (HLS) is a design flow that leverages modern language features and flexibility, such as complex data structures, inheritance, templates, etc., to prototype hardware designs rapidly. However, exploring various design space parameters can take much time and effort for hardware engineers to meet specific design specifications. This paper proposes a novel framework called AutoHLS, which integrates a deep neural network (DNN) with Bayesian optimization (BO) to accelerate HLS hardware design optimization. Our tool focuses on HLS pragma exploration and operation transformation. It utilizes integrated DNNs to predict synthesizability within a given FPGA resource budget. We also investigate the potential of emerging quantum neural networks (QNNs) instead of classical DNNs for the AutoHLS pipeline. Our experimental results demonstrate up to a 70-fold speedup in exploration time.
|
[
"['Md Rubel Ahmed' 'Toshiaki Koike-Akino' 'Kieran Parsons' 'Ye Wang']"
] |
null | null |
2403.10689
| null | null |
http://arxiv.org/pdf/2403.10689v1
|
2024-03-15T21:18:14Z
|
2024-03-15T21:18:14Z
|
Latent Object Characteristics Recognition with Visual to Haptic-Audio
Cross-modal Transfer Learning
|
Recognising the characteristics of objects while a robot handles them is crucial for adjusting motions that ensure stable and efficient interactions with containers. As a step towards realising stable and efficient robot motions for handling/transferring containers, this work aims to recognise latent, unobservable object characteristics. While vision is commonly used for object recognition by robots, it is ineffective for detecting hidden objects. However, recognising objects indirectly using other sensors is a challenging task. To address this challenge, we propose a cross-modal transfer learning approach from vision to haptic-audio. We initially train the model with vision, directly observing the target object. Subsequently, we transfer the latent space learned from vision to a second module, trained only with haptic-audio and motor data. This transfer learning framework facilitates the representation of object characteristics using indirect sensor data, thereby improving recognition accuracy. For evaluating the recognition accuracy of our proposed learning framework, we selected shape, position, and orientation as the object characteristics. Finally, we demonstrate online recognition of both trained and untrained objects using the humanoid robot Nextage Open.
|
[
"['Namiko Saito' 'Joao Moura' 'Hiroki Uchida' 'Sethu Vijayakumar']"
] |
null | null |
2403.10691
| null | null |
http://arxiv.org/pdf/2403.10691v1
|
2024-03-15T21:21:11Z
|
2024-03-15T21:21:11Z
|
MYTE: Morphology-Driven Byte Encoding for Better and Fairer Multilingual
Language Modeling
|
A major consideration in multilingual language modeling is how to best represent languages with diverse vocabularies and scripts. Although contemporary text encoding methods cover most of the world's writing systems, they exhibit bias towards the high-resource languages of the Global West. As a result, texts of underrepresented languages tend to be segmented into long sequences of linguistically meaningless units. To address the disparities, we introduce a new paradigm that encodes the same information with segments of consistent size across diverse languages. Our encoding convention (MYTE) is based on morphemes, as their inventories are more balanced across languages than characters, which are used in previous methods. We show that MYTE produces shorter encodings for all 99 analyzed languages, with the most notable improvements for non-European languages and non-Latin scripts. This, in turn, improves multilingual LM performance and diminishes the perplexity gap throughout diverse languages.
|
[
"['Tomasz Limisiewicz' 'Terra Blevins' 'Hila Gonen' 'Orevaoghene Ahia'\n 'Luke Zettlemoyer']"
] |
null | null |
2403.10696
| null | null |
http://arxiv.org/pdf/2403.10696v1
|
2024-03-15T21:29:33Z
|
2024-03-15T21:29:33Z
|
On the low-shot transferability of [V]-Mamba
|
The strength of modern large-scale neural networks lies in their ability to efficiently adapt to new tasks with few examples. Although extensive research has investigated the transferability of Vision Transformers (ViTs) to various downstream tasks under diverse constraints, this study shifts focus to explore the transfer learning potential of [V]-Mamba. We compare its performance with ViTs across different few-shot data budgets and efficient transfer methods. Our analysis yields three key insights into [V]-Mamba's few-shot transfer performance: (a) [V]-Mamba demonstrates superior or equivalent few-shot learning capabilities compared to ViTs when utilizing linear probing (LP) for transfer, (b) Conversely, [V]-Mamba exhibits weaker or similar few-shot learning performance compared to ViTs when employing visual prompting (VP) as the transfer method, and (c) We observe a weak positive correlation between the performance gap in transfer via LP and VP and the scale of the [V]-Mamba model. This preliminary analysis lays the foundation for more comprehensive studies aimed at furthering our understanding of the capabilities of [V]-Mamba variants and their distinctions from ViTs.
|
[
"['Diganta Misra' 'Jay Gala' 'Antonio Orvieto']"
] |
null | null |
2403.10704
| null | null |
http://arxiv.org/pdf/2403.10704v1
|
2024-03-15T21:43:46Z
|
2024-03-15T21:43:46Z
|
PERL: Parameter Efficient Reinforcement Learning from Human Feedback
|
Reinforcement Learning from Human Feedback (RLHF) has proven to be a strong method to align Pretrained Large Language Models (LLMs) with human preferences. However, training models with RLHF is computationally expensive and a complex process overall. In this work, we study RLHF where the underlying models are trained using the parameter-efficient method of Low-Rank Adaptation (LoRA) introduced by Hu et al. [2021]. We investigate the setup of "Parameter Efficient Reinforcement Learning" (PERL), in which we perform reward model training and reinforcement learning using LoRA. We compare PERL to conventional fine-tuning (full-tuning) across various configurations on 7 reward modeling and reinforcement learning benchmarks, including 2 novel datasets. We find that PERL performs on par with the conventional RLHF setting, while training faster and with less memory. This enables the high performance of RLHF, while reducing the computational burden that limits its adoption as an alignment technique for Large Language Models. We also release 2 novel thumbs up/down preference datasets, "Taskmaster Coffee" and "Taskmaster Ticketing", to promote research around RLHF.
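For readers unfamiliar with LoRA, the layer below shows the core idea that PERL builds on: a frozen base weight augmented with a trainable low-rank update. This is a generic LoRA sketch, not the PERL codebase.

```python
# Minimal LoRA linear layer (Hu et al., 2021): only A and B are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(512, 512)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only the LoRA params
```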
|
[
"['Hakim Sidahmed' 'Samrat Phatale' 'Alex Hutcheson' 'Zhuonan Lin'\n 'Zhang Chen' 'Zac Yu' 'Jarvis Jin' 'Roman Komarytsia'\n 'Christiane Ahlheim' 'Yonghao Zhu' 'Simral Chaudhary' 'Bowen Li'\n 'Saravanan Ganesh' 'Bill Byrne' 'Jessica Hoffmann' 'Hassan Mansoor'\n 'Wei Li' 'Abhinav Rastogi' 'Lucas Dixon']"
] |
null | null |
2403.10707
| null | null |
http://arxiv.org/pdf/2403.10707v2
|
2024-07-15T12:14:13Z
|
2024-03-15T21:54:00Z
|
Discovering Latent Themes in Social Media Messaging: A
Machine-in-the-Loop Approach Integrating LLMs
|
Grasping the themes of social media content is key to understanding the narratives that influence public opinion and behavior. The thematic analysis goes beyond traditional topic-level analysis, which often captures only the broadest patterns, providing deeper insights into specific and actionable themes such as "public sentiment towards vaccination", "political discourse surrounding climate policies," etc. In this paper, we introduce a novel approach to uncovering latent themes in social media messaging. Recognizing the limitations of the traditional topic-level analysis, which tends to capture only overarching patterns, this study emphasizes the need for a finer-grained, theme-focused exploration. Traditional theme discovery methods typically involve manual processes and a human-in-the-loop approach. While valuable, these methods face challenges in scalability, consistency, and resource intensity in terms of time and cost. To address these challenges, we propose a machine-in-the-loop approach that leverages the advanced capabilities of Large Language Models (LLMs). To demonstrate our approach, we apply our framework to contentious topics, such as climate debate and vaccine debate. We use two publicly available datasets: (1) the climate campaigns dataset of 21k Facebook ads and (2) the COVID-19 vaccine campaigns dataset of 9k Facebook ads. Our quantitative and qualitative analysis shows that our methodology yields more accurate and interpretable results compared to the baselines. Our results not only demonstrate the effectiveness of our approach in uncovering latent themes but also illuminate how these themes are tailored for demographic targeting in social media contexts. Additionally, our work sheds light on the dynamic nature of social media, revealing the shifts in the thematic focus of messaging in response to real-world events.
|
[
"['Tunazzina Islam' 'Dan Goldwasser']"
] |
null | null |
2403.10717
| null | null |
http://arxiv.org/pdf/2403.10717v1
|
2024-03-15T22:35:07Z
|
2024-03-15T22:35:07Z
|
Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized
Scaled Prediction Consistency
|
Modern machine learning (ML) systems demand substantial training data, often resorting to external sources. Nevertheless, this practice renders them vulnerable to backdoor poisoning attacks. Prior backdoor defense strategies have primarily focused on the identification of backdoored models or poisoned data characteristics, typically operating under the assumption of access to clean data. In this work, we delve into a relatively underexplored challenge: the automatic identification of backdoor data within a poisoned dataset, all under realistic conditions, i.e., without the need for additional clean data or manually defining a threshold for backdoor detection. We draw inspiration from the scaled prediction consistency (SPC) technique, which exploits the prediction invariance of poisoned data to an input scaling factor. Based on this, we pose the backdoor data identification problem as a hierarchical data splitting optimization problem, leveraging a novel SPC-based loss function as the primary optimization objective. Our innovation unfolds in several key aspects. First, we revisit the vanilla SPC method, unveiling its limitations in addressing the proposed backdoor identification problem. Subsequently, we develop a bi-level optimization-based approach to precisely identify backdoor data by minimizing the advanced SPC loss. Finally, we demonstrate the efficacy of our proposal against a spectrum of backdoor attacks, encompassing basic label-corrupted attacks as well as more sophisticated clean-label attacks, evaluated across various benchmark datasets. Experimental results show that our approach often surpasses the performance of current baselines in identifying backdoor data points, resulting in about 4%-36% improvement in average AUROC. Codes are available at https://github.com/OPTML-Group/BackdoorMSPC.
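As background, the vanilla scaled prediction consistency score that the paper builds on can be sketched as follows: a sample whose predicted label is invariant under input scaling is suspicious. The scale set and input clamping below are illustrative assumptions.

```python
# Hedged sketch of the vanilla SPC score (higher -> more likely backdoored).
import torch

@torch.no_grad()
def spc_score(model, x, scales=(2.0, 3.0, 4.0, 5.0)):
    base_pred = model(x).argmax(dim=-1)
    agreements = [
        (model((s * x).clamp(0.0, 1.0)).argmax(dim=-1) == base_pred).float()
        for s in scales                          # assumes inputs normalized to [0, 1]
    ]
    return torch.stack(agreements).mean(dim=0)   # per-sample consistency in [0, 1]
```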
|
[
"['Soumyadeep Pal' 'Yuguang Yao' 'Ren Wang' 'Bingquan Shen' 'Sijia Liu']"
] |
null | null |
2403.10730
| null | null |
http://arxiv.org/pdf/2403.10730v1
|
2024-03-15T23:29:32Z
|
2024-03-15T23:29:32Z
|
Counterfactual Analysis of Neural Networks Used to Create Fertilizer
Management Zones
|
In Precision Agriculture, the utilization of management zones (MZs) that take into account within-field variability facilitates effective fertilizer management. This approach enables the optimization of nitrogen (N) rates to maximize crop yield production and enhance agronomic use efficiency. However, existing works often neglect the consideration of responsivity to fertilizer as a factor influencing MZ determination. In response to this gap, we present a MZ clustering method based on fertilizer responsivity. We build upon the statement that the responsivity of a given site to the fertilizer rate is described by the shape of its corresponding N fertilizer-yield response (N-response) curve. Thus, we generate N-response curves for all sites within the field using a convolutional neural network (CNN). The shape of the approximated N-response curves is then characterized using functional principal component analysis. Subsequently, a counterfactual explanation (CFE) method is applied to discern the impact of various variables on MZ membership. The genetic algorithm-based CFE solves a multi-objective optimization problem and aims to identify the minimum combination of features needed to alter a site's cluster assignment. Results from two yield prediction datasets indicate that the features with the greatest influence on MZ membership are associated with terrain characteristics that either facilitate or impede fertilizer runoff, such as terrain slope or topographic aspect.
|
[
"['Giorgio Morales' 'John Sheppard']"
] |
null | null |
2403.10731
| null | null |
http://arxiv.org/pdf/2403.10731v2
|
2024-04-30T10:29:22Z
|
2024-03-15T23:31:41Z
|
Giving a Hand to Diffusion Models: a Two-Stage Approach to Improving
Conditional Human Image Generation
|
Recent years have seen significant progress in human image generation, particularly with the advancements in diffusion models. However, existing diffusion methods encounter challenges when producing consistent hand anatomy and the generated images often lack precise control over the hand pose. To address this limitation, we introduce a novel approach to pose-conditioned human image generation, dividing the process into two stages: hand generation and subsequent body outpainting around the hands. We propose training the hand generator in a multi-task setting to produce both hand images and their corresponding segmentation masks, and employ the trained model in the first stage of generation. An adapted ControlNet model is then used in the second stage to outpaint the body around the generated hands, producing the final result. A novel blending technique is introduced to preserve the hand details during the second stage that combines the results of both stages in a coherent way. This involves sequential expansion of the outpainted region while fusing the latent representations, to ensure a seamless and cohesive synthesis of the final image. Experimental evaluations demonstrate the superiority of our proposed method over state-of-the-art techniques, in both pose accuracy and image quality, as validated on the HaGRID dataset. Our approach not only enhances the quality of the generated hands but also offers improved control over hand pose, advancing the capabilities of pose-conditioned human image generation. The source code of the proposed approach is available at https://github.com/apelykh/hand-to-diffusion.
|
[
"['Anton Pelykh' 'Ozge Mercanoglu Sincan' 'Richard Bowden']"
] |
null | null |
2403.10732
| null | null |
http://arxiv.org/pdf/2403.10732v1
|
2024-03-15T23:36:55Z
|
2024-03-15T23:36:55Z
|
Variance-Dependent Regret Bounds for Non-stationary Linear Bandits
|
We investigate the non-stationary stochastic linear bandit problem where the reward distribution evolves each round. Existing algorithms characterize the non-stationarity by the total variation budget $B_K$, which is the summation of the change of the consecutive feature vectors of the linear bandits over $K$ rounds. However, such a quantity only measures the non-stationarity with respect to the expectation of the reward distribution, which makes existing algorithms sub-optimal under the general non-stationary distribution setting. In this work, we propose algorithms that utilize the variance of the reward distribution as well as $B_K$, and show that they can achieve tighter regret upper bounds. Specifically, we introduce two novel algorithms: Restarted Weighted$\text{OFUL}^+$ and Restarted $\text{SAVE}^+$. These algorithms address cases where the variance information of the rewards is known and unknown, respectively. Notably, when the total variance $V_K$ is much smaller than $K$, our algorithms outperform previous state-of-the-art results on non-stationary stochastic linear bandits under different settings. Experimental evaluations further validate the superior performance of our proposed algorithms over existing works.
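For reference, one common formalization of the two quantities mentioned above is given below; the paper's precise definitions may differ in norm or indexing.

$$
B_K = \sum_{k=2}^{K} \left\lVert \theta_k - \theta_{k-1} \right\rVert_2,
\qquad
V_K = \sum_{k=1}^{K} \sigma_k^2,
$$

where $\theta_k$ denotes the unknown reward parameter at round $k$ and $\sigma_k^2$ the variance of the reward noise at round $k$.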
|
[
"['Zhiyong Wang' 'Jize Xie' 'Yi Chen' 'John C. S. Lui' 'Dongruo Zhou']"
] |
null | null |
2403.10738
| null | null |
http://arxiv.org/pdf/2403.10738v1
|
2024-03-15T23:50:58Z
|
2024-03-15T23:50:58Z
|
Horizon-Free Regret for Linear Markov Decision Processes
|
A recent line of work showed that regret bounds in reinforcement learning (RL) can be (nearly) independent of the planning horizon, a.k.a. horizon-free bounds. However, these regret bounds only apply to settings where a polynomial dependency on the size of the transition model is allowed, such as the tabular Markov Decision Process (MDP) and the linear mixture MDP. We give the first horizon-free bound for the popular linear MDP setting, where the size of the transition model can be exponentially large or even uncountable. In contrast to prior works which explicitly estimate the transition model and compute the inhomogeneous value functions at different time steps, we directly estimate the value functions and confidence sets. We obtain the horizon-free bound by: (1) maintaining multiple weighted least squares estimators for the value functions; and (2) a structural lemma which shows the maximal total variation of the inhomogeneous value functions is bounded by a polynomial factor of the feature dimension.
|
[
"['Zihan Zhang' 'Jason D. Lee' 'Yuxin Chen' 'Simon S. Du']"
] |
null | null |
2403.10748
| null | null |
http://arxiv.org/pdf/2403.10748v1
|
2024-03-16T00:45:06Z
|
2024-03-16T00:45:06Z
|
A Comprehensive Review of Latent Space Dynamics Identification
Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling
|
Numerical solvers of partial differential equations (PDEs) have been widely employed for simulating physical systems. However, the computational cost remains a major bottleneck in various scientific and engineering applications, which has motivated the development of reduced-order models (ROMs). Recently, machine-learning-based ROMs have gained significant popularity and are promising for addressing some limitations of traditional ROM methods, especially for advection dominated systems. In this chapter, we focus on a particular framework known as Latent Space Dynamics Identification (LaSDI), which transforms the high-fidelity data, governed by a PDE, to simpler and low-dimensional latent-space data, governed by ordinary differential equations (ODEs). These ODEs can be learned and subsequently interpolated to make ROM predictions. Each building block of LaSDI can be easily modulated depending on the application, which makes the LaSDI framework highly flexible. In particular, we present strategies to enforce the laws of thermodynamics into LaSDI models (tLaSDI), enhance robustness in the presence of noise through the weak form (WLaSDI), select high-fidelity training data efficiently through active learning (gLaSDI, GPLaSDI), and quantify the ROM prediction uncertainty through Gaussian processes (GPLaSDI). We demonstrate the performance of different LaSDI approaches on Burgers equation, a non-linear heat conduction problem, and a plasma physics problem, showing that LaSDI algorithms can achieve relative errors of less than a few percent and up to thousands of times speed-ups.
|
[
"['Christophe Bonneville' 'Xiaolong He' 'April Tran' 'Jun Sur Park'\n 'William Fries' 'Daniel A. Messenger' 'Siu Wun Cheung' 'Yeonjong Shin'\n 'David M. Bortz' 'Debojyoti Ghosh' 'Jiun-Shyan Chen' 'Jonathan Belof'\n 'Youngsoo Choi']"
] |
null | null |
2403.10761
| null | null |
http://arxiv.org/pdf/2403.10761v1
|
2024-03-16T01:51:42Z
|
2024-03-16T01:51:42Z
|
Scheduling Drone and Mobile Charger via Hybrid-Action Deep Reinforcement
Learning
|
Recently, there has been growing interest in industry and academia regarding the use of wireless chargers to prolong the operational longevity of unmanned aerial vehicles (commonly known as drones). In this paper, we consider a charger-assisted drone application: a drone is deployed to observe a set of points of interest, while a charger can move to recharge the drone's battery. We focus on the route and charging schedule of the drone and the mobile charger, to obtain high observation utility in the shortest possible time, while ensuring the drone remains operational during task execution. Essentially, this proposed drone-charger scheduling problem is a multi-stage decision-making process, in which the drone and the mobile charger act as two agents who cooperate to finish a task. The discrete-continuous hybrid action space of the two agents poses a significant challenge in our problem. To address this issue, we present a hybrid-action deep reinforcement learning framework, called HaDMC, which uses a standard policy learning algorithm to generate latent continuous actions. Motivated by representation learning, we specifically design and train an action decoder. It involves two pipelines to convert the latent continuous actions into original discrete and continuous actions, by which the drone and the charger can directly interact with the environment. We embed a mutual learning scheme in model training, emphasizing collaborative rather than individual actions. We conduct extensive numerical experiments to evaluate HaDMC and compare it with state-of-the-art deep reinforcement learning approaches. The experimental results show the effectiveness and efficiency of our solution.
|
[
"['Jizhe Dou' 'Haotian Zhang' 'Guodong Sun']"
] |
null | null |
2403.10763
| null | null |
http://arxiv.org/pdf/2403.10763v1
|
2024-03-16T02:06:14Z
|
2024-03-16T02:06:14Z
|
A Primal-Dual Algorithm for Faster Distributionally Robust Optimization
|
We consider the penalized distributionally robust optimization (DRO) problem with a closed, convex uncertainty set, a setting that encompasses the $f$-DRO, Wasserstein-DRO, and spectral/$L$-risk formulations used in practice. We present Drago, a stochastic primal-dual algorithm that achieves a state-of-the-art linear convergence rate on strongly convex-strongly concave DRO problems. The method combines both randomized and cyclic components with mini-batching, which effectively handles the unique asymmetric nature of the primal and dual problems in DRO. We support our theoretical results with numerical benchmarks in classification and regression.
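For context, a penalized DRO objective of the kind described above is commonly written as follows; the specific divergence and uncertainty set are placeholders here, not necessarily the exact formulation analyzed in the paper.

$$
\min_{w} \; \max_{q \in \mathcal{U}} \; \sum_{i=1}^{n} q_i \,\ell_i(w) \;-\; \nu\, D\!\left(q \,\middle\|\, \tfrac{1}{n}\mathbf{1}\right),
$$

where $\ell_i(w)$ is the loss on example $i$, $\mathcal{U}$ is a closed, convex uncertainty set over reweightings $q$, and $\nu > 0$ penalizes deviation from the uniform distribution; Drago runs a stochastic primal-dual scheme on the resulting convex-concave saddle problem.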
|
[
"['Ronak Mehta' 'Jelena Diakonikolas' 'Zaid Harchaoui']"
] |
null | null |
2403.10766
| null | null |
http://arxiv.org/pdf/2403.10766v1
|
2024-03-16T02:07:45Z
|
2024-03-16T02:07:45Z
|
ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference
|
Inferring unbiased treatment effects has received widespread attention in the machine learning community. In recent years, our community has proposed numerous solutions in standard settings, high-dimensional treatment settings, and even longitudinal settings. While very diverse, these solutions have mostly relied on neural networks for inference and simultaneous correction of assignment bias. New approaches typically build on top of previous approaches by proposing new (or refined) architectures and learning algorithms. However, the end result -- a neural-network-based inference machine -- remains unchallenged. In this paper, we introduce a different type of solution in the longitudinal setting: a closed-form ordinary differential equation (ODE). While we still rely on continuous optimization to learn an ODE, the resulting inference machine is no longer a neural network. Doing so yields several advantages such as interpretability, the ability to handle irregular sampling, and a different set of identification assumptions. Above all, we consider the introduction of a completely new type of solution to be our most important contribution as it may spark entirely new innovations in treatment effects in general. We facilitate this by formulating our contribution as a framework that can transform any ODE discovery method into a treatment effects method.
|
[
"['Krzysztof Kacprzyk' 'Samuel Holt' 'Jeroen Berrevoets' 'Zhaozhi Qian'\n 'Mihaela van der Schaar']"
] |
null | null |
2403.10771
| null | null |
http://arxiv.org/pdf/2403.10771v1
|
2024-03-16T02:19:21Z
|
2024-03-16T02:19:21Z
|
A Probabilistic Approach for Alignment with Human Comparisons
|
A growing trend involves integrating human knowledge into learning frameworks, leveraging subtle human feedback to refine AI models. Despite these advances, no comprehensive theoretical framework describing the specific conditions under which human comparisons improve the traditional supervised fine-tuning process has been developed. To bridge this gap, this paper studies the effective use of human comparisons to address limitations arising from noisy data and high-dimensional models. We propose a two-stage "Supervised Fine Tuning+Human Comparison" (SFT+HC) framework connecting machine learning with human feedback through a probabilistic bisection approach. The two-stage framework first learns low-dimensional representations from noisy-labeled data via an SFT procedure, and then uses human comparisons to improve the model alignment. To examine the efficacy of the alignment phase, we introduce a novel concept termed the "label-noise-to-comparison-accuracy" (LNCA) ratio. This paper theoretically identifies the conditions under which the "SFT+HC" framework outperforms pure SFT approach, leveraging this ratio to highlight the advantage of incorporating human evaluators in reducing sample complexity. We validate that the proposed conditions for the LNCA ratio are met in a case study conducted via an Amazon Mechanical Turk experiment.
|
[
"['Junyu Cao' 'Mohsen Bayati']"
] |
null | null |
2403.10776
| null | null |
http://arxiv.org/pdf/2403.10776v1
|
2024-03-16T02:29:42Z
|
2024-03-16T02:29:42Z
|
From Melting Pots to Misrepresentations: Exploring Harms in Generative
AI
|
With the widespread adoption of advanced generative models such as Gemini and GPT, there has been a notable increase in the incorporation of such models into sociotechnical systems, categorized under AI-as-a-Service (AIaaS). Despite their versatility across diverse sectors, concerns persist regarding discriminatory tendencies within these models, particularly favoring selected `majority' demographics across various sociodemographic dimensions. Despite widespread calls for diversification of media representations, marginalized racial and ethnic groups continue to face persistent distortion, stereotyping, and neglect within the AIaaS context. In this work, we provide a critical summary of the state of research in the context of social harms to lead the conversation to focus on their implications. We also present open-ended research questions, guided by our discussion, to help define future research pathways.
|
[
"['Sanjana Gautam' 'Pranav Narayanan Venkit' 'Sourojit Ghosh']"
] |
null | null |
2403.10786
| null | null |
http://arxiv.org/pdf/2403.10786v1
|
2024-03-16T03:33:52Z
|
2024-03-16T03:33:52Z
|
ContourDiff: Unpaired Image Translation with Contour-Guided Diffusion
Models
|
Accurately translating medical images across different modalities (e.g., CT to MRI) has numerous downstream clinical and machine learning applications. While several methods have been proposed to achieve this, they often prioritize perceptual quality with respect to output domain features over preserving anatomical fidelity. However, maintaining anatomy during translation is essential for many tasks, e.g., when leveraging masks from the input domain to develop a segmentation model with images translated to the output domain. To address these challenges, we propose ContourDiff, a novel framework that leverages domain-invariant anatomical contour representations of images. These representations are simple to extract from images, yet form precise spatial constraints on their anatomical content. We introduce a diffusion model that converts contour representations of images from arbitrary input domains into images in the output domain of interest. By applying the contour as a constraint at every diffusion sampling step, we ensure the preservation of anatomical content. We evaluate our method by training a segmentation model on images translated from CT to MRI with their original CT masks and testing its performance on real MRIs. Our method outperforms other unpaired image translation methods by a significant margin, furthermore without the need to access any input domain information during training.
|
[
"['Yuwen Chen' 'Nicholas Konz' 'Hanxue Gu' 'Haoyu Dong' 'Yaqian Chen'\n 'Lin Li' 'Jisoo Lee' 'Maciej A. Mazurowski']"
] |
null | null |
2403.10787
| null | null |
http://arxiv.org/pdf/2403.10787v1
|
2024-03-16T03:37:19Z
|
2024-03-16T03:37:19Z
|
Time Series Representation Learning with Supervised Contrastive Temporal
Transformer
|
Finding effective representations for time series data is a useful but challenging task. Several works utilize self-supervised or unsupervised learning methods to address this. However, there still remains the open question of how to leverage available label information for better representations. To answer this question, we exploit pre-existing techniques in the time series and representation learning domains and develop a simple yet novel fusion model called \textbf{S}upervised \textbf{CO}ntrastive \textbf{T}emporal \textbf{T}ransformer (SCOTT). We first investigate suitable augmentation methods for various types of time series data to assist with learning change-invariant representations. Secondly, we combine Transformer and Temporal Convolutional Networks in a simple way to efficiently learn both global and local features. Finally, we simplify Supervised Contrastive Loss for representation learning of labelled time series data. We preliminarily evaluate SCOTT on a downstream task, Time Series Classification, using 45 datasets from the UCR archive. The results show that with the representations learnt by SCOTT, even a weak classifier can perform similarly to or better than existing state-of-the-art models (best performance on 23/45 datasets and highest rank against 9 baseline models). Afterwards, we investigate SCOTT's ability to address a real-world task, online Change Point Detection (CPD), on two datasets: a human activity dataset and a surgical patient dataset. We show that the model performs with high reliability and efficiency on the online CPD problem ($\sim$98% and $\sim$97% area under the precision-recall curve, respectively). Furthermore, we demonstrate the model's potential in tackling early detection and show that it performs best among the candidates considered.
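As a reference point for the loss component, a standard supervised contrastive loss over labelled embeddings looks like the sketch below; SCOTT's simplification of this loss may differ in detail.

```python
# Generic supervised contrastive loss over a batch of embeddings (not SCOTT's exact form).
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.1):
    z = F.normalize(z, dim=-1)
    sim = z @ z.T / tau                                    # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    per_sample = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1).clamp_min(1)
    return per_sample.mean()

z = torch.randn(16, 64, requires_grad=True)        # batch of embeddings
labels = torch.randint(0, 4, (16,))                # class labels
print(supervised_contrastive_loss(z, labels))
```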
|
[
"['Yuansan Liu' 'Sudanthi Wijewickrema' 'Christofer Bester'\n \"Stephen O'Leary\" 'James Bailey']"
] |
null | null |
2403.10790
| null | null |
http://arxiv.org/pdf/2403.10790v1
|
2024-03-16T03:42:29Z
|
2024-03-16T03:42:29Z
|
QuantumLeak: Stealing Quantum Neural Networks from Cloud-based NISQ
Machines
|
Variational quantum circuits (VQCs) have become a powerful tool for implementing Quantum Neural Networks (QNNs), addressing a wide range of complex problems. Well-trained VQCs serve as valuable intellectual assets hosted on cloud-based Noisy Intermediate Scale Quantum (NISQ) computers, making them susceptible to malicious VQC stealing attacks. However, traditional model extraction techniques designed for classical machine learning models encounter challenges when applied to NISQ computers due to significant noise in current devices. In this paper, we introduce QuantumLeak, an effective and accurate QNN model extraction technique from cloud-based NISQ machines. Compared to existing classical model stealing techniques, QuantumLeak improves local VQC accuracy by 4.99%$\sim$7.35% across diverse datasets and VQC architectures.
|
[
"['Zhenxiao Fu' 'Min Yang' 'Cheng Chu' 'Yilun Xu' 'Gang Huang' 'Fan Chen']"
] |
null | null |
2403.10794
| null | null |
http://arxiv.org/pdf/2403.10794v1
|
2024-03-16T03:53:55Z
|
2024-03-16T03:53:55Z
|
Diffusion-Reinforcement Learning Hierarchical Motion Planning in
Adversarial Multi-agent Games
|
Reinforcement Learning- (RL-)based motion planning has recently shown the potential to outperform traditional approaches from autonomous navigation to robot manipulation. In this work, we focus on a motion planning task for an evasive target in a partially observable multi-agent adversarial pursuit-evasion game (PEG). These pursuit-evasion problems are relevant to various applications, such as search and rescue operations and surveillance robots, where robots must effectively plan their actions to gather intelligence or accomplish mission tasks while avoiding detection or capture themselves. We propose a hierarchical architecture that integrates a high-level diffusion model to plan global paths responsive to environment data, while a low-level RL algorithm reasons about evasive versus global path-following behavior. Our approach outperforms baselines by 51.2% by leveraging the diffusion model to guide the RL algorithm toward more efficient exploration, and it improves explainability and predictability.
|
[
"['Zixuan Wu' 'Sean Ye' 'Manisha Natarajan' 'Matthew C. Gombolay']"
] |
null | null |
2403.10795
| null | null |
http://arxiv.org/pdf/2403.10795v1
|
2024-03-16T03:54:38Z
|
2024-03-16T03:54:38Z
|
From Words to Routes: Applying Large Language Models to Vehicle Routing
|
LLMs have shown impressive progress in robotics (e.g., manipulation and navigation) with natural language task descriptions. The success of LLMs in these tasks leads us to wonder: What is the ability of LLMs to solve vehicle routing problems (VRPs) with natural language task descriptions? In this work, we study this question in three steps. First, we construct a dataset with 21 types of single- or multi-vehicle routing problems. Second, we evaluate the performance of LLMs across four basic prompt paradigms of text-to-code generation, each involving different types of text input. We find that the basic prompt paradigm, which generates code directly from natural language task descriptions, performs the best for GPT-4, achieving 56% feasibility, 40% optimality, and 53% efficiency. Third, based on the observation that LLMs may not be able to provide correct solutions at the initial attempt, we propose a framework that enables LLMs to refine solutions through self-reflection, including self-debugging and self-verification. With GPT-4, our proposed framework achieves a 16% increase in feasibility, a 7% increase in optimality, and a 15% increase in efficiency. Moreover, we examine the sensitivity of GPT-4 to task descriptions, specifically focusing on how its performance changes when certain details are omitted from the task descriptions, yet the core meaning is preserved. Our findings reveal that such omissions lead to a notable decrease in performance: 4% in feasibility, 4% in optimality, and 5% in efficiency. Website: https://sites.google.com/view/words-to-routes/
|
[
"['Zhehui Huang' 'Guangyao Shi' 'Gaurav S. Sukhatme']"
] |
null | null |
2403.10799
| null | null |
http://arxiv.org/pdf/2403.10799v3
|
2024-05-15T02:20:54Z
|
2024-03-16T04:12:50Z
|
Efficient Pruning of Large Language Model with Adaptive Estimation
Fusion
|
Large language models (LLMs) have become crucial for many generative downstream tasks, leading to an inevitable trend and significant challenge to deploy them efficiently on resource-constrained devices. Structured pruning is a widely used method to address this challenge. However, when dealing with the complex structure of the multiple decoder layers, general methods often employ common estimation approaches for pruning. These approaches lead to a decline in accuracy for specific downstream tasks. In this paper, we introduce a simple yet efficient method that adaptively models the importance of each substructure. Meanwhile, it can adaptively fuse coarse-grained and fine-grained estimations based on the results from complex and multi-layer structures. All aspects of our design seamlessly integrate into the end-to-end pruning framework. Our experimental results, compared with state-of-the-art methods on mainstream datasets, demonstrate average accuracy improvements of 1.1%, 1.02%, 2.0%, and 1.2% for LLaMa-7B, Vicuna-7B, Baichuan-7B, and Bloom-7b1, respectively.
|
[
"['Jun Liu' 'Chao Wu' 'Changdi Yang' 'Hao Tang' 'Zhenglun Kong' 'Geng Yuan'\n 'Wei Niu' 'Dong Huang' 'Yanzhi Wang']"
] |
null | null |
2403.10800
| null | null |
http://arxiv.org/pdf/2403.10800v2
|
2024-03-29T20:34:26Z
|
2024-03-16T04:19:48Z
|
Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data
in Text-Image Encoders
|
When evaluating the performance of a pre-trained model transferred to a downstream task, it is imperative to assess not only the in-distribution (ID) accuracy of the downstream model but also its capacity to generalize and identify out-of-distribution (OOD) samples. In this paper, we unveil the hidden costs associated with intrusive fine-tuning techniques. Specifically, we demonstrate that commonly used fine-tuning methods not only distort the representations necessary for generalizing to covariate-shifted OOD samples (OOD generalization) but also distort the representations necessary for detecting semantically-shifted OOD samples (OOD detection). To address these challenges, we introduce a new model reprogramming approach for fine-tuning, which we name Reprogrammer. Reprogrammer aims to improve the holistic performance of the downstream model across ID, OOD generalization, and OOD detection tasks. Our empirical evidence reveals that Reprogrammer is less intrusive and yields superior downstream models. Furthermore, we demonstrate that by appending an additional representation residual connection to Reprogrammer, we can further preserve pre-training representations, resulting in an even safer and more robust downstream model capable of excelling in many ID classification, OOD generalization, and OOD detection settings.
|
[
"['Andrew Geng' 'Pin-Yu Chen']"
] |
null | null |
2403.10802
| null | null |
http://arxiv.org/pdf/2403.10802v1
|
2024-03-16T04:29:21Z
|
2024-03-16T04:29:21Z
|
Anomaly Detection Based on Isolation Mechanisms: A Survey
|
Anomaly detection is a longstanding and active research area that has many applications in domains such as finance, security, and manufacturing. However, the efficiency and performance of anomaly detection algorithms are challenged by the large-scale, high-dimensional, and heterogeneous data that are prevalent in the era of big data. Isolation-based unsupervised anomaly detection is a novel and effective approach for identifying anomalies in data. It relies on the idea that anomalies are few and different from normal instances, and thus can be easily isolated by random partitioning. Isolation-based methods have several advantages over existing methods, such as low computational complexity, low memory usage, high scalability, robustness to noise and irrelevant features, and no need for prior knowledge or heavy parameter tuning. In this survey, we review the state-of-the-art isolation-based anomaly detection methods, including their data partitioning strategies, anomaly score functions, and algorithmic details. We also discuss some extensions and applications of isolation-based methods in different scenarios, such as detecting anomalies in streaming data, time series, trajectory, and image datasets. Finally, we identify some open challenges and future directions for isolation-based anomaly detection research.
|
[
"['Yang Cao' 'Haolong Xiang' 'Hang Zhang' 'Ye Zhu' 'Kai Ming Ting']"
] |
null | null |
2403.10803
| null | null |
http://arxiv.org/pdf/2403.10803v1
|
2024-03-16T04:35:04Z
|
2024-03-16T04:35:04Z
|
Enhancing Out-of-Distribution Detection with Multitesting-based
Layer-wise Feature Fusion
|
Deploying machine learning in open environments presents the challenge of encountering diverse test inputs that differ significantly from the training data. These out-of-distribution samples may exhibit shifts in local or global features compared to the training distribution. The machine learning (ML) community has responded with a number of methods aimed at distinguishing anomalous inputs from original training data. However, the majority of previous studies have primarily focused on the output layer or penultimate layer of pre-trained deep neural networks. In this paper, we propose a novel framework, Multitesting-based Layer-wise Out-of-Distribution (OOD) Detection (MLOD), to identify distributional shifts in test samples at different levels of features through a rigorous multiple testing procedure. Our approach distinguishes itself from existing methods as it does not require modifying the structure or fine-tuning of the pre-trained classifier. Through extensive experiments, we demonstrate that our proposed framework can seamlessly integrate with any existing distance-based inspection method while efficiently utilizing feature extractors of varying depths. Our scheme effectively enhances the performance of out-of-distribution detection when compared to baseline methods. In particular, MLOD-Fisher achieves superior performance in general. When trained using KNN on CIFAR10, MLOD-Fisher significantly lowers the false positive rate (FPR) from 24.09% to 7.47% on average compared to merely utilizing the features of the last layer.
|
[
"['Jiawei Li' 'Sitong Li' 'Shanshan Wang' 'Yicheng Zeng' 'Falong Tan'\n 'Chuanlong Xie']"
] |
null | null |
2403.10807
| null | null |
http://arxiv.org/pdf/2403.10807v1
|
2024-03-16T04:43:46Z
|
2024-03-16T04:43:46Z
|
FlyKD: Graph Knowledge Distillation on the Fly with Curriculum Learning
|
Knowledge Distillation (KD) aims to transfer a more capable teacher model's knowledge to a lighter student model in order to improve the efficiency of the model, making it faster and more deployable. However, the student model's optimization process over the noisy pseudo labels (generated by the teacher model) is tricky and the number of pseudo labels one can generate is limited due to Out of Memory (OOM) errors. In this paper, we propose FlyKD (Knowledge Distillation on the Fly), which enables the generation of a virtually unlimited number of pseudo labels, coupled with Curriculum Learning that greatly alleviates the optimization process over the noisy pseudo labels. Empirically, we observe that FlyKD outperforms vanilla KD and the renowned Local Structure Preserving Graph Convolutional Network (LSPGCN). Lastly, with the success of Curriculum Learning, we shed light on a new research direction of improving optimization over noisy pseudo labels.
|
[
"['Eugene Ku']"
] |
null | null |
2403.10819
| null | null |
http://arxiv.org/pdf/2403.10819v1
|
2024-03-16T06:06:44Z
|
2024-03-16T06:06:44Z
|
Incentivized Exploration of Non-Stationary Stochastic Bandits
|
We study incentivized exploration for the multi-armed bandit (MAB) problem with non-stationary reward distributions, where players receive compensation for exploring arms other than the greedy choice and may provide biased feedback on the reward. We consider two different non-stationary environments: abruptly-changing and continuously-changing, and propose respective incentivized exploration algorithms. We show that the proposed algorithms achieve sublinear regret and compensation over time, thus effectively incentivizing exploration despite the nonstationarity and the biased or drifted feedback.
|
[
"['Sourav Chakraborty' 'Lijun Chen']"
] |
null | null |
2403.10824
| null | null |
http://arxiv.org/pdf/2403.10824v1
|
2024-03-16T06:25:53Z
|
2024-03-16T06:25:53Z
|
LookALike: Human Mimicry based collaborative decision making
|
Artificial General Intelligence falls short when communicating role-specific nuances to other systems. This is more pronounced when building autonomous LLM agents capable of, and designed for, communicating with each other for real-world problem solving. Humans can communicate context and domain-specific nuances along with knowledge, and that has led to refinement of skills. In this work we propose and evaluate a novel method for knowledge distillation among LLM agents that enables real-time human role play while preserving unique contexts, without relying on any stored data or pretraining. We also evaluate how our system performs better on simulated real-world tasks compared to the state of the art.
|
[
"['Rabimba Karanjai' 'Weidong Shi']"
] |
null | null |
2403.10834
| null | null |
http://arxiv.org/pdf/2403.10834v1
|
2024-03-16T07:05:47Z
|
2024-03-16T07:05:47Z
|
SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data
Augmentation
|
In the face of the deep learning model's vulnerability to domain shift, source-free domain adaptation (SFDA) methods have been proposed to adapt models to new, unseen target domains without requiring access to source domain data. Although the potential benefits of applying data augmentation to SFDA are attractive, several challenges arise such as the dependence on prior knowledge of class-preserving transformations and the increase in memory and computational requirements. In this paper, we propose Source-free Domain Adaptation Through the Lens of Data Augmentation (SF(DA)$^2$), a novel approach that leverages the benefits of data augmentation without suffering from these challenges. We construct an augmentation graph in the feature space of the pretrained model using the neighbor relationships between target features and propose spectral neighborhood clustering to identify partitions in the prediction space. Furthermore, we propose implicit feature augmentation and feature disentanglement as regularization loss functions that effectively utilize class semantic information within the feature space. These regularizers simulate the inclusion of an unlimited number of augmented target features into the augmentation graph while minimizing computational and memory demands. Our method shows superior adaptation performance in SFDA scenarios, including 2D image and 3D point cloud datasets and a highly imbalanced dataset.
|
[
"['Uiwon Hwang' 'Jonghyun Lee' 'Juhyeon Shin' 'Sungroh Yoon']"
] |
null | null |
2403.10842
| null | null |
http://arxiv.org/pdf/2403.10842v3
|
2024-06-21T07:04:49Z
|
2024-03-16T07:40:23Z
|
Twin Transformer using Gated Dynamic Learnable Attention mechanism for
Fault Detection and Diagnosis in the Tennessee Eastman Process
|
Fault detection and diagnosis (FDD) is a crucial task for ensuring the safety and efficiency of industrial processes. We propose a novel FDD methodology for the Tennessee Eastman Process (TEP), a widely used benchmark for chemical process control. The model employs two separate Transformer branches, enabling independent processing of input data and potential extraction of diverse information. A novel attention mechanism, Gated Dynamic Learnable Attention (GDLAttention), is introduced which integrates a gating mechanism and dynamic learning capabilities. The gating mechanism modulates the attention weights, allowing the model to focus on the most relevant parts of the input. The dynamic learning approach adapts the attention strategy during training, potentially leading to improved performance. The attention mechanism uses a bilinear similarity function, providing greater flexibility in capturing complex relationships between query and key vectors. In order to assess the effectiveness of our approach, we tested it against 21 and 18 distinct fault scenarios in TEP, and compared its performance with several established FDD techniques. The outcomes indicate that the method outperforms others in terms of accuracy, false alarm rate, and misclassification rate. This underscores the robustness and efficacy of the approach for FDD in intricate industrial processes.
|
[
"['Mohammad Ali Labbaf-Khaniki' 'Mohammad Manthouri']"
] |
null | null |
2403.10853
| null | null |
http://arxiv.org/pdf/2403.10853v2
|
2024-04-30T15:20:54Z
|
2024-03-16T08:28:42Z
|
Just Say the Name: Online Continual Learning with Category Names Only
via Data Generation
|
In real-world scenarios, extensive manual annotation for continual learning is impractical due to prohibitive costs. Although prior arts, influenced by large-scale webly supervised training, suggest leveraging web-scraped data in continual learning, this poses challenges such as data imbalance, usage restrictions, and privacy concerns. Addressing the risks of continual webly supervised training, we present an online continual learning framework - Generative Name only Continual Learning (G-NoCL). The proposed G-NoCL uses a set of generators G along with the learner. When encountering new concepts (i.e., classes), G-NoCL employs the novel sample complexity-guided data ensembling technique DIverSity and COmplexity enhancing ensemBlER (DISCOBER) to optimally sample training data from generated data. Through extensive experimentation, we demonstrate superior performance of DISCOBER in G-NoCL online CL benchmarks, covering both In-Distribution (ID) and Out-of-Distribution (OOD) generalization evaluations, compared to naive generator-ensembling, web-supervised, and manually annotated data.
|
[
"['Minhyuk Seo' 'Diganta Misra' 'Seongwon Cho' 'Minjae Lee' 'Jonghyun Choi']"
] |
null | null |
2403.10855
| null | null |
http://arxiv.org/pdf/2403.10855v2
|
2024-03-25T16:07:24Z
|
2024-03-16T08:30:55Z
|
Reinforcement Learning with Options and State Representation
|
The current thesis aims to explore the reinforcement learning field and build on existing methods to produce improved ones to tackle the problem of learning in high-dimensional and complex environments. It addresses such goals by decomposing learning tasks in a hierarchical fashion known as Hierarchical Reinforcement Learning. We start in the first chapter by getting familiar with the Markov Decision Process framework and presenting some of its recent techniques that the following chapters use. We then proceed to build our Hierarchical Policy learning as an answer to the limitations of a single primitive policy. The hierarchy is composed of a manager agent at the top and employee agents at the lower level. In the last chapter, which is the core of this thesis, we attempt to learn lower-level elements of the hierarchy independently of the manager level in what is known as the "Eigenoption". Based on the graph structure of the environment, Eigenoptions allow us to build agents that are aware of the geometric and dynamic properties of the environment. Their decision-making has a special property: it is invariant to symmetric transformations of the environment, which as a consequence greatly reduces the complexity of the learning task.
|
[
"['Ayoub Ghriss' 'Masashi Sugiyama' 'Alessandro Lazaric']"
] |
null | null |
2403.10859
| null | null |
http://arxiv.org/pdf/2403.10859v1
|
2024-03-16T08:51:02Z
|
2024-03-16T08:51:02Z
|
Neural-Kernel Conditional Mean Embeddings
|
Kernel conditional mean embeddings (CMEs) offer a powerful framework for representing conditional distribution, but they often face scalability and expressiveness challenges. In this work, we propose a new method that effectively combines the strengths of deep learning with CMEs in order to address these challenges. Specifically, our approach leverages the end-to-end neural network (NN) optimization framework using a kernel-based objective. This design circumvents the computationally expensive Gram matrix inversion required by current CME methods. To further enhance performance, we provide efficient strategies to optimize the remaining kernel hyperparameters. In conditional density estimation tasks, our NN-CME hybrid achieves competitive performance and often surpasses existing deep learning-based methods. Lastly, we showcase its remarkable versatility by seamlessly integrating it into reinforcement learning (RL) contexts. Building on Q-learning, our approach naturally leads to a new variant of distributional RL methods, which demonstrates consistent effectiveness across different environments.
|
[
"['Eiki Shimizu' 'Kenji Fukumizu' 'Dino Sejdinovic']"
] |
null | null |
2403.10861
| null | null |
http://arxiv.org/pdf/2403.10861v1
|
2024-03-16T08:58:03Z
|
2024-03-16T08:58:03Z
|
FedQNN: Federated Learning using Quantum Neural Networks
|
In this study, we explore the innovative domain of Quantum Federated Learning (QFL) as a framework for training Quantum Machine Learning (QML) models via distributed networks. Conventional machine learning models frequently grapple with issues about data privacy and the exposure of sensitive information. Our proposed Federated Quantum Neural Network (FedQNN) framework emerges as a cutting-edge solution, integrating the singular characteristics of QML with the principles of classical federated learning. This work thoroughly investigates QFL, underscoring its capability to secure data handling in a distributed environment and facilitate cooperative learning without direct data sharing. Our research corroborates the concept through experiments across varied datasets, including genomics and healthcare, thereby validating the versatility and efficacy of our FedQNN framework. The results consistently exceed 86% accuracy across three distinct datasets, proving its suitability for conducting various QML tasks. Our research not only identifies the limitations of classical paradigms but also presents a novel framework to propel the field of QML into a new era of secure and collaborative innovation.
|
[
"['Nouhaila Innan' 'Muhammad Al-Zafar Khan' 'Alberto Marchisio'\n 'Muhammad Shafique' 'Mohamed Bennai']"
] |
null | null |
2403.10863
| null | null |
http://arxiv.org/pdf/2403.10863v1
|
2024-03-16T09:06:38Z
|
2024-03-16T09:06:38Z
|
stMCDI: Masked Conditional Diffusion Model with Graph Neural Network for
Spatial Transcriptomics Data Imputation
|
Spatially resolved transcriptomics represents a significant advancement in single-cell analysis by offering both gene expression data and their corresponding physical locations. However, this high degree of spatial resolution entails a drawback, as the resulting spatial transcriptomic data at the cellular level is notably plagued by a high incidence of missing values. Furthermore, most existing imputation methods either overlook the spatial information between spots or compromise the overall gene expression data distribution. To address these challenges, our primary focus is on effectively utilizing the spatial location information within spatial transcriptomic data to impute missing values, while preserving the overall data distribution. We introduce \textbf{stMCDI}, a novel conditional diffusion model for spatial transcriptomics data imputation, which employs a denoising network trained using randomly masked data portions as guidance, with the unmasked data serving as conditions. Additionally, it utilizes a GNN encoder to integrate the spatial position information, thereby enhancing model performance. The results obtained from spatial transcriptomics datasets elucidate the performance of our methods relative to existing approaches.
|
[
"['Xiaoyu Li' 'Wenwen Min' 'Shunfang Wang' 'Changmiao Wang' 'Taosheng Xu']"
] |
null | null |
2403.10875
| null | null |
http://arxiv.org/pdf/2403.10875v1
|
2024-03-16T09:58:49Z
|
2024-03-16T09:58:49Z
|
Probabilistic World Modeling with Asymmetric Distance Measure
|
Representation learning is a fundamental task in machine learning, aiming at uncovering structures from data to facilitate subsequent tasks. However, what is a good representation for planning and reasoning in a stochastic world remains an open problem. In this work, we posit that learning a distance function is essential to allow planning and reasoning in the representation space. We show that a geometric abstraction of the probabilistic world dynamics can be embedded into the representation space through asymmetric contrastive learning. Unlike previous approaches that focus on learning mutual similarity or compatibility measures, we instead learn an asymmetric similarity function that reflects the state reachability and allows multi-way probabilistic inference. Moreover, by conditioning on a common reference state (e.g. the observer's current state), the learned representation space allows us to discover the geometrically salient states that only a handful of paths can lead through. These states can naturally serve as subgoals to break down long-horizon planning tasks. We evaluate our method in gridworld environments with various layouts and demonstrate its effectiveness in discovering the subgoals.
|
[
"['Meng Song']"
] |
null | null |
2403.10889
| null | null |
http://arxiv.org/pdf/2403.10889v1
|
2024-03-16T10:49:27Z
|
2024-03-16T10:49:27Z
|
List Sample Compression and Uniform Convergence
|
List learning is a variant of supervised classification where the learner outputs multiple plausible labels for each instance rather than just one. We investigate classical principles related to generalization within the context of list learning. Our primary goal is to determine whether classical principles in the PAC setting retain their applicability in the domain of list PAC learning. We focus on uniform convergence (which is the basis of Empirical Risk Minimization) and on sample compression (which is a powerful manifestation of Occam's Razor). In classical PAC learning, both uniform convergence and sample compression satisfy a form of `completeness': whenever a class is learnable, it can also be learned by a learning rule that adheres to these principles. We ask whether the same completeness holds true in the list learning setting. We show that uniform convergence remains equivalent to learnability in the list PAC learning setting. In contrast, our findings reveal surprising results regarding sample compression: we prove that when the label space is $Y={0,1,2}$, then there are 2-list-learnable classes that cannot be compressed. This refutes the list version of the sample compression conjecture by Littlestone and Warmuth (1986). We prove an even stronger impossibility result, showing that there are $2$-list-learnable classes that cannot be compressed even when the reconstructed function can work with lists of arbitrarily large size. We prove a similar result for (1-list) PAC learnable classes when the label space is unbounded. This generalizes a recent result by arXiv:2308.06424.
|
[
"['Steve Hanneke' 'Shay Moran' 'Tom Waknine']"
] |
null | null |
2403.10903
| null | null |
http://arxiv.org/pdf/2403.10903v4
|
2024-05-12T17:20:11Z
|
2024-03-16T11:38:31Z
|
DTOR: Decision Tree Outlier Regressor to explain anomalies
|
Explaining the occurrence of outliers and the mechanism behind it can be extremely important in a variety of domains. Malfunctions, frauds, and threats, in addition to being correctly identified, oftentimes need a valid explanation in order to effectively take actionable countermeasures. The ever more widespread use of sophisticated Machine Learning approaches to identify anomalies makes such explanations more challenging. We present the Decision Tree Outlier Regressor (DTOR), a technique for producing rule-based explanations for individual data points by estimating anomaly scores generated by an anomaly detection model. This is accomplished by first applying a Decision Tree Regressor, which computes the estimation score, and then extracting the relative path associated with the data point score. Our results demonstrate the robustness of DTOR even in datasets with a large number of features. Additionally, in contrast to other rule-based approaches, the generated rules are consistently satisfied by the points to be explained. Furthermore, our evaluation metrics indicate comparable performance to Anchors in outlier explanation tasks, with reduced execution time.
|
[
"['Riccardo Crupi' 'Daniele Regoli' 'Alessandro Damiano Sabatino'\n 'Immacolata Marano' 'Massimiliano Brinis' 'Luca Albertazzi'\n 'Andrea Cirillo' 'Andrea Claudio Cosentini']"
] |
null | null |
2403.10910
| null | null |
http://arxiv.org/pdf/2403.10910v1
|
2024-03-16T12:10:01Z
|
2024-03-16T12:10:01Z
|
Graph Regularized NMF with L20-norm for Unsupervised Feature Learning
|
Nonnegative Matrix Factorization (NMF) is a widely applied technique in the fields of machine learning and data mining. Graph Regularized Non-negative Matrix Factorization (GNMF) is an extension of NMF that incorporates graph regularization constraints. GNMF has demonstrated exceptional performance in clustering and dimensionality reduction, effectively discovering inherent low-dimensional structures embedded within high-dimensional spaces. However, the sensitivity of GNMF to noise limits its stability and robustness in practical applications. In order to enhance feature sparsity and mitigate the impact of noise while mining row sparsity patterns in the data for effective feature selection, we introduce the $\ell_{2,0}$-norm constraint as the sparsity constraint for GNMF. We propose an unsupervised feature learning framework based on GNMF$_{\ell_{2,0}}$ and devise an algorithm based on PALM and its accelerated version to address this problem. Additionally, we establish the convergence of the proposed algorithms and validate the efficacy and superiority of our approach through experiments conducted on both simulated and real image data.
|
[
"['Zhen Wang' 'Wenwen Min']"
] |
null | null |
2403.10912
| null | null |
http://arxiv.org/pdf/2403.10912v1
|
2024-03-16T12:25:30Z
|
2024-03-16T12:25:30Z
|
Automatic location detection based on deep learning
|
The proliferation of digital images and the advancements in deep learning have paved the way for innovative solutions in various domains, especially in the field of image classification. Our project presents an in-depth study and implementation of an image classification system specifically tailored to identify and classify images of Indian cities. Drawing from an extensive dataset, our model classifies images into five major Indian cities: Ahmedabad, Delhi, Kerala, Kolkata, and Mumbai to recognize the distinct features and characteristics of each city/state. To achieve high precision and recall rates, we adopted two approaches. The first, a vanilla Convolutional Neural Network (CNN) and then we explored the power of transfer learning by leveraging the VGG16 model. The vanilla CNN achieved commendable accuracy and the VGG16 model achieved a test accuracy of 63.6%. Evaluations highlighted the strengths and potential areas of improvement, positioning our model as not only competitive but also scalable for broader applications. With an emphasis on open-source ethos, our work aims to contribute to the community, encouraging further development and diverse applications. Our findings demonstrate the potential applications in tourism, urban planning, and even real-time location identification systems, among others.
|
[
"['Anjali Karangiya' 'Anirudh Sharma' 'Divax Shah' 'Kartavya Badgujar'\n 'Dr. Chintan Thacker' 'Dainik Dave']"
] |
null | null |
2403.10923
| null | null |
http://arxiv.org/pdf/2403.10923v1
|
2024-03-16T13:35:15Z
|
2024-03-16T13:35:15Z
|
Interpretable Machine Learning for TabPFN
|
The recently developed Prior-Data Fitted Networks (PFNs) have shown very promising results for applications in low-data regimes. The TabPFN model, a special case of PFNs for tabular data, is able to achieve state-of-the-art performance on a variety of classification tasks while producing posterior predictive distributions in mere seconds by in-context learning without the need for learning parameters or hyperparameter tuning. This makes TabPFN a very attractive option for a wide range of domain applications. However, a major drawback of the method is its lack of interpretability. Therefore, we propose several adaptations of popular interpretability methods that we specifically design for TabPFN. By taking advantage of the unique properties of the model, our adaptations allow for more efficient computations than existing implementations. In particular, we show how in-context learning facilitates the estimation of Shapley values by avoiding approximate retraining and enables the use of Leave-One-Covariate-Out (LOCO) even when working with large-scale Transformers. In addition, we demonstrate how data valuation methods can be used to address scalability challenges of TabPFN. Our proposed methods are implemented in a package tabpfn_iml and made available at https://github.com/david-rundel/tabpfn_iml.
|
[
"['David Rundel' 'Julius Kobialka' 'Constantin von Crailsheim'\n 'Matthias Feurer' 'Thomas Nagler' 'David Rügamer']"
] |
null | null |
2403.10927
| null | null |
http://arxiv.org/pdf/2403.10927v1
|
2024-03-16T13:50:31Z
|
2024-03-16T13:50:31Z
|
Distributed Multi-Objective Dynamic Offloading Scheduling for Air-Ground
Cooperative MEC
|
Utilizing unmanned aerial vehicles (UAVs) with an edge server to assist terrestrial mobile edge computing (MEC) has attracted tremendous attention. Nevertheless, state-of-the-art schemes based on deterministic optimizations or single-objective reinforcement learning (RL) cannot reduce the backlog of task bits and simultaneously improve energy efficiency in highly dynamic network environments, where the design problem amounts to a sequential decision-making problem. In order to address the aforementioned problems, as well as the curses of dimensionality introduced by the growing number of terrestrial users, this paper proposes a distributed multi-objective (MO) dynamic trajectory planning and offloading scheduling scheme, integrated with MORL and the kernel method. The design of n-step return is also applied to average fluctuations in the backlog. Numerical results reveal that the n-step return can benefit the proposed kernel-based approach, achieving significant improvement in the long-term average backlog performance, compared to the conventional 1-step return design. Due to such design and the kernel-based neural network, to which decision-making features can be continuously added, the kernel-based approach can outperform the approach based on a fully-connected deep neural network, yielding improvement in energy consumption and the backlog performance, as well as a significant reduction in decision-making and online learning time.
|
[
"['Yang Huang' 'Miaomiao Dong' 'Yijie Mao' 'Wenqiang Liu' 'Zhen Gao']"
] |
null | null |
2403.10929
| null | null |
http://arxiv.org/pdf/2403.10929v1
|
2024-03-16T14:00:04Z
|
2024-03-16T14:00:04Z
|
Function-space Parameterization of Neural Networks for Sequential
Learning
|
Sequential learning paradigms pose challenges for gradient-based deep learning due to difficulties incorporating new data and retaining prior knowledge. While Gaussian processes elegantly tackle these problems, they struggle with scalability and handling rich inputs, such as images. To address these issues, we introduce a technique that converts neural networks from weight space to function space, through a dual parameterization. Our parameterization offers: (i) a way to scale function-space methods to large data sets via sparsification, (ii) retention of prior knowledge when access to past data is limited, and (iii) a mechanism to incorporate new data without retraining. Our experiments demonstrate that we can retain knowledge in continual learning and incorporate new data efficiently. We further show its strengths in uncertainty quantification and guiding exploration in model-based RL. Further information and code are available on the project website.
|
[
"['Aidan Scannell' 'Riccardo Mereu' 'Paul Chang' 'Ella Tamir'\n 'Joni Pajarinen' 'Arno Solin']"
] |
null | null |
2403.10937
| null | null |
http://arxiv.org/pdf/2403.10937v1
|
2024-03-16T14:34:31Z
|
2024-03-16T14:34:31Z
|
Initial Decoding with Minimally Augmented Language Model for Improved
Lattice Rescoring in Low Resource ASR
|
This paper addresses the problem of improving speech recognition accuracy with lattice rescoring in low-resource languages where the baseline language model is insufficient for generating inclusive lattices. We minimally augment the baseline language model with word unigram counts that are present in a larger text corpus of the target language but absent in the baseline. The lattices generated after decoding with such an augmented baseline language model are more comprehensive. We obtain 21.8% (Telugu) and 41.8% (Kannada) relative word error reduction with our proposed method. This reduction in word error rate is comparable to the 21.5% (Telugu) and 45.9% (Kannada) relative word error reduction obtained by decoding with a full Wikipedia-text-augmented language model, while our approach consumes only 1/8th the memory. We demonstrate that our method is comparable with various text selection-based language model augmentation methods and is also consistent for data sets of different sizes. Our approach is applicable for training speech recognition systems under low-resource conditions where speech data and compute resources are insufficient, while there is a large text corpus available in the target language. Our research involves addressing the issue of out-of-vocabulary words of the baseline in general and does not focus on resolving the absence of named entities. Our proposed method is simple and yet computationally less expensive.
|
[
"['Savitha Murthy' 'Dinkar Sitaram']"
] |
null | null |
2403.10940
| null | null |
http://arxiv.org/pdf/2403.10940v1
|
2024-03-16T14:52:26Z
|
2024-03-16T14:52:26Z
|
ViSaRL: Visual Reinforcement Learning Guided by Human Saliency
|
Training robots to perform complex control tasks from high-dimensional pixel input using reinforcement learning (RL) is sample-inefficient, because image observations are comprised primarily of task-irrelevant information. By contrast, humans are able to visually attend to task-relevant objects and areas. Based on this insight, we introduce Visual Saliency-Guided Reinforcement Learning (ViSaRL). Using ViSaRL to learn visual representations significantly improves the success rate, sample efficiency, and generalization of an RL agent on diverse tasks including DeepMind Control benchmark, robot manipulation in simulation and on a real robot. We present approaches for incorporating saliency into both CNN and Transformer-based encoders. We show that visual representations learned using ViSaRL are robust to various sources of visual perturbations including perceptual noise and scene variations. ViSaRL nearly doubles success rate on the real-robot tasks compared to the baseline which does not use saliency.
|
[
"['Anthony Liang' 'Jesse Thomason' 'Erdem Bıyık']"
] |