Schema (field name: type):

categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence

categories: null
doi: null
id: 2406.08070
year: null
venue: null
link: http://arxiv.org/pdf/2406.08070v1
updated: 2024-06-12T10:40:10Z
published: 2024-06-12T10:40:10Z
title: CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models
abstract: Classifier-free guidance (CFG) is a fundamental tool in modern diffusion models for text-guided generation. Although effective, CFG has notable drawbacks. For instance, DDIM with CFG lacks invertibility, complicating image editing; furthermore, high guidance scales, essential for high-quality outputs, frequently result in issues like mode collapse. Contrary to the widespread belief that these are inherent limitations of diffusion models, this paper reveals that the problems actually stem from the off-manifold phenomenon associated with CFG, rather than the diffusion models themselves. More specifically, inspired by the recent advancements of diffusion model-based inverse problem solvers (DIS), we reformulate text-guidance as an inverse problem with a text-conditioned score matching loss, and develop CFG++, a novel approach that tackles the off-manifold challenges inherent in traditional CFG. CFG++ features a surprisingly simple fix to CFG, yet it offers significant improvements, including better sample quality for text-to-image generation, invertibility, smaller guidance scales, reduced mode collapse, etc. Furthermore, CFG++ enables seamless interpolation between unconditional and conditional sampling at lower guidance scales, consistently outperforming traditional CFG at all scales. Experimental results confirm that our method significantly enhances performance in text-to-image generation, DDIM inversion, editing, and solving inverse problems, suggesting a wide-ranging impact and potential applications in various fields that utilize text guidance. Project Page: https://cfgpp-diffusion.github.io/.
authors: Hyungjin Chung, Jeongsol Kim, Geon Yeong Park, Hyelin Nam, Jong Chul Ye

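The standard CFG combination that this abstract critiques can be sketched in a few lines. This is a generic illustration of classifier-free guidance (the function name and shapes are ours), not the CFG++ update itself, which the abstract does not spell out:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one. w = 0 is purely
    unconditional, w = 1 purely conditional; the large w typically
    needed for quality pushes predictions far beyond both branches,
    consistent with the off-manifold behavior the paper describes."""
    return eps_uncond + w * (eps_cond - eps_uncond)
```
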
categories: null
doi: null
id: 2406.08074
year: null
venue: null
link: http://arxiv.org/pdf/2406.08074v1
updated: 2024-06-12T10:48:53Z
published: 2024-06-12T10:48:53Z
title: A Concept-Based Explainability Framework for Large Multimodal Models
abstract: Large multimodal models (LMMs) combine unimodal encoders and large language models (LLMs) to perform multimodal tasks. Despite recent advancements towards the interpretability of these models, understanding the internal representations of LMMs remains largely a mystery. In this paper, we present a novel framework for the interpretation of LMMs. We propose a dictionary-learning-based approach applied to token representations. The elements of the learned dictionary correspond to our proposed concepts. We show that these concepts are semantically well grounded in both vision and text; we therefore refer to them as "multi-modal concepts". We qualitatively and quantitatively evaluate the learned concepts and show that the extracted multimodal concepts are useful for interpreting the representations of test samples. Finally, we evaluate the disentanglement between different concepts and the quality of their visual and textual grounding. We will publicly release our code.
authors: Jayneel Parekh, Pegah Khayatan, Mustafa Shukor, Alasdair Newson, Matthieu Cord

categories: null
doi: null
id: 2406.08075
year: null
venue: null
link: http://arxiv.org/pdf/2406.08075v1
updated: 2024-06-12T10:51:00Z
published: 2024-06-12T10:51:00Z
title: Balancing Molecular Information and Empirical Data in the Prediction of Physico-Chemical Properties
abstract: Predicting the physico-chemical properties of pure substances and mixtures is a central task in thermodynamics. Established prediction methods range from fully physics-based ab-initio calculations, which are only feasible for very simple systems, over descriptor-based methods that use some information on the molecules to be modeled together with fitted model parameters (e.g., quantitative-structure-property relationship methods or classical group contribution methods), to representation-learning methods, which may, in extreme cases, completely ignore molecular descriptors and extrapolate only from existing data on the property to be modeled (e.g., matrix completion methods). In this work, we propose a general method for combining molecular descriptors with representation learning using the so-called expectation maximization algorithm from the probabilistic machine learning literature, which uses uncertainty estimates to trade off between the two approaches. The proposed hybrid model exploits chemical structure information using graph neural networks, but it automatically detects cases where structure-based predictions are unreliable, in which case it corrects them by representation-learning based predictions that can better specialize to unusual cases. The effectiveness of the proposed method is demonstrated using the prediction of activity coefficients in binary mixtures as an example. The results are compelling, as the method significantly improves predictive accuracy over the current state of the art, showcasing its potential to advance the prediction of physico-chemical properties in general.
authors: Johannes Zenn, Dominik Gond, Fabian Jirasek, Robert Bamler

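The core trade-off this abstract describes, weighting a structure-based prediction against a data-driven one by their uncertainties, can be illustrated with a simple precision-weighted average. This is a generic sketch under Gaussian assumptions, not the paper's expectation-maximization procedure, and the function name is ours:

```python
def precision_weighted(mu_a, var_a, mu_b, var_b):
    # Weight each predictor by its inverse variance (precision):
    # the more uncertain source contributes less to the combined
    # estimate, automatically down-weighting unreliable predictions.
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    w_a = prec_a / (prec_a + prec_b)
    return w_a * mu_a + (1.0 - w_a) * mu_b
```

With equal variances this reduces to a plain average; as one source's variance grows, the result smoothly approaches the other source's prediction.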
categories: null
doi: null
id: 2406.08093
year: null
venue: null
link: http://arxiv.org/pdf/2406.08093v1
updated: 2024-06-12T11:17:11Z
published: 2024-06-12T11:17:11Z
title: Learnable & Interpretable Model Combination in Dynamic Systems Modeling
abstract: One of the core concepts in science, and something that happens intuitively in everyday dynamic systems modeling, is the combination of models or methods. Especially in dynamical systems modeling, two or more structures are often combined to obtain a more powerful or efficient architecture for a specific application (area). Furthermore, even physical simulations are combined with machine learning architectures to increase prediction accuracy or optimize computational performance. In this work, we briefly discuss which types of models are usually combined and propose a model interface that is capable of expressing a wide variety of mixed algebraic, discrete, and differential-equation-based models. Further, we examine different established, as well as new, ways of combining these models from a system-theoretical point of view and highlight two challenges (algebraic loops and local event affect functions in discontinuous models) that require a special approach. Finally, we propose a new wildcard topology that is capable of describing the generic connection between two combined models in an easy-to-interpret fashion that can be learned as part of a gradient-based optimization procedure. The contributions of this paper are demonstrated in a proof of concept: different connection topologies between two models are learned, interpreted, and compared using the proposed methodology and software implementation.
authors: Tobias Thummerer, Lars Mikelsons

categories: null
doi: null
id: 2406.08097
year: null
venue: null
link: http://arxiv.org/pdf/2406.08097v1
updated: 2024-06-12T11:22:27Z
published: 2024-06-12T11:22:27Z
title: Inductive Global and Local Manifold Approximation and Projection
abstract: Nonlinear dimensional reduction with the manifold assumption, often called manifold learning, has proven its usefulness in a wide range of high-dimensional data analyses. The significant impact of t-SNE and UMAP has catalyzed intense research interest, seeking further innovations toward visualizing not only the local but also the global structure information of the data. Moreover, there have been consistent efforts toward generalizable dimensional reduction that handles unseen data. In this paper, we first propose GLoMAP, a novel manifold learning method for dimensional reduction and high-dimensional data visualization. GLoMAP preserves locally and globally meaningful distance estimates and displays a progression from global to local formation during the course of optimization. Furthermore, we extend GLoMAP to its inductive version, iGLoMAP, which utilizes a deep neural network to map data to its lower-dimensional representation. This allows iGLoMAP to provide lower-dimensional embeddings for unseen points without needing to re-train the algorithm. iGLoMAP is also well-suited for mini-batch learning, enabling large-scale, accelerated gradient calculations. We have successfully applied both GLoMAP and iGLoMAP to simulated and real-data settings, with competitive experiments against state-of-the-art methods.
authors: Jungeum Kim, Xiao Wang

categories: null
doi: null
id: 2406.08099
year: null
venue: null
link: http://arxiv.org/pdf/2406.08099v1
updated: 2024-06-12T11:26:29Z
published: 2024-06-12T11:26:29Z
title: Confidence Interval Estimation of Predictive Performance in the Context of AutoML
abstract: Any supervised machine learning analysis is required to provide an estimate of the out-of-sample predictive performance. However, it is imperative to also provide a quantification of the uncertainty of this performance in the form of a confidence or credible interval (CI) and not just a point estimate. In an AutoML setting, estimating the CI is challenging due to the "winner's curse", i.e., the bias of estimation due to cross-validating several machine learning pipelines and selecting the winning one. In this work, we perform a comparative evaluation of 9 state-of-the-art methods and variants for CI estimation in an AutoML setting on a corpus of real and simulated datasets. The methods are compared in terms of inclusion percentage (does a 95% CI include the true performance at least 95% of the time), CI tightness (tighter CIs are preferable as being more informative), and execution time. The evaluation is the first to cover most, if not all, such methods and extends previous work to imbalanced and small-sample tasks. In addition, we present a variant, called BBC-F, of an existing method (the Bootstrap Bias Correction, or BBC) that maintains the statistical properties of the BBC but is more computationally efficient. The results show that BBC-F and BBC dominate the other methods on all metrics measured.
authors: Konstantinos Paraschakis, Andrea Castellani, Giorgos Borboudakis, Ioannis Tsamardinos

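The bootstrap idea behind correcting the winner's curse (select the winner on a resample, evaluate it on the held-out out-of-bag samples) can be sketched as follows. This is our rough illustrative approximation of the bootstrap-bias-correction principle, not the authors' BBC or BBC-F implementation:

```python
import numpy as np

def winners_curse_ci(scores, n_boot=200, alpha=0.05, seed=0):
    """scores: (n_samples, n_pipelines) array of per-sample
    correctness (1/0) for each candidate pipeline. On each bootstrap
    resample, pick the winning pipeline on the resample, then score
    it only on out-of-bag samples, so selection and evaluation never
    share data. Returns a percentile confidence interval."""
    rng = np.random.default_rng(seed)
    n = scores.shape[0]
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)    # out-of-bag indices
        if oob.size == 0:
            continue
        winner = scores[idx].mean(axis=0).argmax()
        stats.append(scores[oob, winner].mean())
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```
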
categories: null
doi: null
id: 2406.08106
year: null
venue: null
link: http://arxiv.org/pdf/2406.08106v1
updated: 2024-06-12T11:38:13Z
published: 2024-06-12T11:38:13Z
title: Counterfactual-based Root Cause Analysis for Dynamical Systems
abstract: Identifying the underlying reason for a failing dynamic process or otherwise anomalous observation is a fundamental challenge, yet one with numerous industrial applications. To identify the failure-causing sub-system using causal inference, one can ask: "Would the observed failure also occur if we had replaced the behaviour of a sub-system at a certain point in time with its normal behaviour?" To this end, a formal description of the behaviour of the full system is needed, in which such counterfactual questions can be answered. However, existing causal methods for root cause identification are typically limited to static settings and focus on additive external influences causing failures rather than structural influences. In this paper, we address these problems by modelling the dynamic causal system using a Residual Neural Network and deriving corresponding counterfactual distributions over trajectories. We show quantitatively that more root causes are identified when an intervention is performed on both the structural equation and the external influence, compared to an intervention on the external influence only. By employing an efficient approximation to the corresponding Shapley value, we also obtain a ranking of the different subsystems, at different points in time, responsible for an observed failure, which is applicable in settings with a large number of variables. We illustrate the effectiveness of the proposed method on a benchmark dynamic system as well as on a real-world river dataset.
authors: Juliane Weilbach, Sebastian Gerwinn, Karim Barsim, Martin Fränzle

categories: null
doi: null
id: 2406.08128
year: null
venue: null
link: http://arxiv.org/pdf/2406.08128v3
updated: 2024-06-14T02:37:24Z
published: 2024-06-12T12:12:38Z
title: Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences
abstract: To mitigate the computational complexity of the self-attention mechanism on long sequences, linear attention utilizes computational tricks to achieve linear complexity, while state space models (SSMs) popularize a favorable practice of using a non-data-dependent memory pattern, i.e., emphasizing the near and neglecting the distant, to process sequences. Recent studies have shown the promise of combining the two as one. However, the efficiency of linear attention remains only at the theoretical level in a causal setting, and SSMs require various designed constraints to operate effectively on specific data. Therefore, in order to unveil the true power of the hybrid design, the following two issues need to be addressed: (1) hardware-efficient implementation for linear attention and (2) stabilization of SSMs. To achieve this, we leverage the ideas of tiling and hierarchy to propose CHELA (short-long Convolutions with Hardware-Efficient Linear Attention), which replaces SSMs with short-long convolutions and implements linear attention in a divide-and-conquer manner. This approach enjoys global abstraction and data-dependent selection from stable SSMs and linear attention while maintaining real linear complexity. Our comprehensive experiments on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method.
authors: Zicheng Liu, Siyuan Li, Li Wang, Zedong Wang, Yunfan Liu, Stan Z. Li

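The "computational trick" behind linear attention is a reordering of matrix products: with a nonnegative feature map phi, attention factorizes as phi(Q)(phi(K)^T V), costing O(L d^2) instead of O(L^2 d). A minimal non-causal sketch (the feature map here is an arbitrary nonnegative choice of ours, not CHELA's design):

```python
import numpy as np

def linear_attention(Q, K, V, feature=lambda x: np.maximum(x, 0) + 1e-6):
    # Softmax attention computes (softmax(Q K^T)) V in O(L^2 d).
    # With a kernel feature map phi, the product reassociates as
    # phi(Q) @ (phi(K)^T @ V), which is linear in sequence length L.
    Qf, Kf = feature(Q), feature(K)
    num = Qf @ (Kf.T @ V)                         # (L, d)
    den = Qf @ Kf.sum(axis=0, keepdims=True).T    # (L, 1) normalizer
    return num / den
```

Because the feature map is nonnegative, each output row is a convex combination of the rows of V, mirroring softmax attention's averaging behavior.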
categories: null
doi: null
id: 2406.08148
year: null
venue: null
link: http://arxiv.org/pdf/2406.08148v1
updated: 2024-06-12T12:37:53Z
published: 2024-06-12T12:37:53Z
title: Probing Implicit Bias in Semi-gradient Q-learning: Visualizing the Effective Loss Landscapes via the Fokker--Planck Equation
abstract: Semi-gradient Q-learning is applied in many fields, but due to the absence of an explicit loss function, studying its dynamics and implicit bias in the parameter space is challenging. This paper introduces the Fokker--Planck equation and employs partial data obtained through sampling to construct and visualize the effective loss landscape within a two-dimensional parameter space. This visualization reveals how the global minima in the loss landscape can transform into saddle points in the effective loss landscape, as well as the implicit bias of the semi-gradient method. Additionally, we demonstrate that saddle points, originating from the global minima in the loss landscape, still exist in the effective loss landscape in high-dimensional parameter spaces and neural network settings. This paper develops a novel approach for probing implicit bias in semi-gradient Q-learning.
authors: Shuyu Yin, Fei Wen, Peilin Liu, Tao Luo

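The "absence of an explicit loss function" comes from not differentiating through the bootstrap target. A minimal linear TD(0) sketch of such a semi-gradient update (our illustration of the general mechanism, not the paper's setup):

```python
import numpy as np

def semi_gradient_td_update(w, phi_s, phi_s_next, r, gamma, lr):
    # Semi-gradient TD(0) for a linear value estimate w . phi(s):
    # the bootstrap target r + gamma * w . phi(s') is treated as a
    # constant (no gradient flows through it), so in general this
    # update is not the gradient of any explicit loss.
    td_error = r + gamma * w @ phi_s_next - w @ phi_s
    return w + lr * td_error * phi_s
```
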
categories: null
doi: null
id: 2406.08155
year: null
venue: null
link: http://arxiv.org/pdf/2406.08155v1
updated: 2024-06-12T12:44:48Z
published: 2024-06-12T12:44:48Z
title: Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark
abstract: Large Language Models (LLMs) have become foundational in the realm of natural language processing, demonstrating performance improvements as model sizes increase. The Mixture-of-Experts (MoE) approach offers a promising way to scale LLMs more efficiently by using fewer computational FLOPs through sparse activation. However, it suffers from significant memory overheads, necessitating model compression techniques. Post-training quantization, a popular method for model compression, proves less effective when directly applied to MoE models due to MoE's overlooked inherent sparsity. This paper explores several MoE structure-aware quantization heuristics, ranging from coarse to fine granularity, from MoE block to individual linear weight. Our investigations reveal critical principles: different MoE structures (i.e., blocks, experts, linear layers) require varying numbers of weight bits for effective and efficient quantization. Conclusions are supported by extensive benchmarking across two representative MoE models and six tasks. We further introduce novel enhancements to more accurately identify the most critical weights in MoE quantization that necessitate higher bit allocations, including the linear weight outlier scorer and MoE block scorer. Additionally, subsequent experiments validate our findings in the context of both weight and activation quantization.
authors: Pingzhi Li, Xiaolong Jin, Yu Cheng, Tianlong Chen

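Post-training weight quantization of the kind benchmarked here can be illustrated with symmetric per-channel int8 rounding. This is a generic sketch of the baseline technique, not one of the paper's MoE-aware heuristics or scorers:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-output-channel quantization: each row gets its
    # own scale, so a single outlier channel does not inflate the
    # rounding error of every other channel.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.maximum(scale, 1e-12)  # guard all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale
```

The per-element reconstruction error is bounded by half a quantization step, i.e. scale / 2 for each channel.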
categories: null
doi: null
id: 2406.08188
year: null
venue: null
link: http://arxiv.org/abs/2406.08188v1
updated: 2024-06-12T13:19:42Z
published: 2024-06-12T13:19:42Z
title: Attention-Based Learning for Fluid State Interpolation and Editing in a Time-Continuous Framework
abstract: In this work, we introduce FluidsFormer: a transformer-based approach for fluid interpolation within a continuous-time framework. By combining the capabilities of PITT and a residual neural network (RNN), we analytically predict the physical properties of the fluid state. This enables us to interpolate substep frames between simulated keyframes, enhancing the temporal smoothness and sharpness of animations. We demonstrate promising results for smoke interpolation and conduct initial experiments on liquids.
authors: Bruno Roy

categories: null
doi: null
id: 2406.08193
year: null
venue: null
link: http://arxiv.org/pdf/2406.08193v1
updated: 2024-06-12T13:22:26Z
published: 2024-06-12T13:22:26Z
title: Minimal Communication-Cost Statistical Learning
abstract: A client device which has access to $n$ training data samples needs to obtain a statistical hypothesis or model $W$ and then to send it to a remote server. The client and the server devices share some common randomness sequence as well as a prior on the hypothesis space. In this problem a suitable hypothesis or model $W$ should meet two distinct design criteria simultaneously: (i) small (population) risk during the inference phase and (ii) small 'complexity' for it to be conveyed to the server with minimum communication cost. In this paper, we propose a joint training and source coding scheme with provable in-expectation guarantees, where the expectation is over the encoder's output message. Specifically, we show that by imposing a constraint on a suitable Kullback-Leibler divergence between the conditional distribution induced by a compressed learning model $\widehat{W}$ given $W$ and the prior, one guarantees simultaneously small average empirical risk (aka training loss), small average generalization error and small average communication cost. We also consider a one-shot scenario in which the guarantees on the empirical risk and generalization error are obtained for every encoder's output message.
authors: Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski

categories: null
doi: null
id: 2406.08205
year: null
venue: null
link: http://arxiv.org/pdf/2406.08205v1
updated: 2024-06-12T13:38:48Z
published: 2024-06-12T13:38:48Z
title: What do we know about Hugging Face? A systematic literature review and quantitative validation of qualitative claims
abstract: Background: Collaborative Software Package Registries (SPRs) are an integral part of the software supply chain. Much engineering work synthesizes SPR packages into applications. Prior research has examined SPRs for traditional software, such as NPM (JavaScript) and PyPI (Python). Pre-Trained Model (PTM) Registries are an emerging class of SPR of increasing importance, because they support the deep learning supply chain. Aims: Recent empirical research has examined PTM registries along dimensions such as vulnerabilities, reuse processes, and evolution. However, no existing research synthesizes these studies to provide a systematic understanding of the current knowledge, and some of the existing research includes qualitative claims lacking quantitative analysis. Our research fills these gaps by providing a knowledge synthesis and quantitative analyses. Methods: We first conduct a systematic literature review (SLR). We then observe that some of the claims are qualitative, identify quantifiable metrics associated with those claims, and measure them in order to substantiate the claims. Results: From our SLR, we identify 12 claims about PTM reuse on the HuggingFace platform, 4 of which lack quantitative validation. We successfully test 3 of these claims through quantitative analysis, and directly compare one with traditional software. Our findings corroborate qualitative claims with quantitative measurements. Our findings are: (1) PTMs have a much higher turnover rate than traditional software, indicating a dynamic and rapidly evolving reuse environment within the PTM ecosystem; and (2) there is a strong correlation between documentation quality and PTM popularity. Conclusions: We confirm qualitative research claims with concrete metrics, supporting prior qualitative and case study research. Our measures reveal further dynamics of PTM reuse, motivating future research infrastructure and new measures.
authors: Jason Jones, Wenxin Jiang, Nicholas Synovic, George K. Thiruvathukal, James C. Davis

categories: null
doi: null
id: 2406.08206
year: null
venue: null
link: http://arxiv.org/pdf/2406.08206v1
updated: 2024-06-12T13:39:32Z
published: 2024-06-12T13:39:32Z
title: Sources of Gain: Decomposing Performance in Conditional Average Dose Response Estimation
abstract: Estimating conditional average dose responses (CADR) is an important but challenging problem. Estimators must correctly model the potentially complex relationships between covariates, interventions, doses, and outcomes. In recent years, the machine learning community has shown great interest in developing tailored CADR estimators that target specific challenges. Their performance is typically evaluated against other methods on (semi-) synthetic benchmark datasets. Our paper analyses this practice and shows that using popular benchmark datasets without further analysis is insufficient to judge model performance. Established benchmarks entail multiple challenges, whose impacts must be disentangled. Therefore, we propose a novel decomposition scheme that allows the evaluation of the impact of five distinct components contributing to CADR estimator performance. We apply this scheme to eight popular CADR estimators on four widely-used benchmark datasets, running nearly 1,500 individual experiments. Our results reveal that most established benchmarks are challenging for reasons different from their creators' claims. Notably, confounding, the key challenge tackled by most estimators, is not an issue in any of the considered datasets. We discuss the major implications of our findings and present directions for future research.
authors: Christopher Bockel-Rickermann, Toon Vanderschueren, Tim Verdonck, Wouter Verbeke

categories: null
doi: null
id: 2406.08207
year: null
venue: null
link: http://arxiv.org/pdf/2406.08207v1
updated: 2024-06-12T13:39:44Z
published: 2024-06-12T13:39:44Z
title: Transformer-based Model for ASR N-Best Rescoring and Rewriting
abstract: Voice assistants increasingly use on-device Automatic Speech Recognition (ASR) to ensure speed and privacy. However, due to resource constraints on the device, queries pertaining to complex information domains often require further processing by a search engine. For such applications, we propose a novel Transformer-based model capable of rescoring and rewriting by exploring the full context of the N-best hypotheses in parallel. We also propose a new discriminative sequence training objective that can work well for both rescoring and rewriting tasks. We show that our Rescore+Rewrite model outperforms the Rescore-only baseline, and achieves up to an average 8.6% relative Word Error Rate (WER) reduction over the ASR system by itself.
authors: Iwen E. Kang, Christophe Van Gysel, Man-Hung Siu

categories: null
doi: null
id: 2406.08209
year: null
venue: null
link: http://arxiv.org/pdf/2406.08209v1
updated: 2024-06-12T13:40:47Z
published: 2024-06-12T13:40:47Z
title: Forward-Euler time-discretization for Wasserstein gradient flows can be wrong
abstract: In this note, we examine the forward-Euler discretization for simulating Wasserstein gradient flows. We provide two counter-examples showcasing the failure of this discretization even for a simple case where the energy functional is defined as the KL divergence against some nicely structured probability densities. A simple explanation of this failure is also discussed.
authors: Yewei Xu, Qin Li

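The scheme under examination can be written down explicitly. The following is a standard formulation reconstructed from common definitions, not copied from the note: for the energy $\mathcal{F}(\rho) = \mathrm{KL}(\rho \,\|\, \pi)$, the Wasserstein gradient flow is the Fokker-Planck equation, and the forward-Euler discretization with step size $\tau$ pushes the current density along the velocity field frozen at the current iterate:

$$
\partial_t \rho_t = \nabla \cdot \Big( \rho_t \, \nabla \log \frac{\rho_t}{\pi} \Big),
\qquad
\rho_{k+1} = \Big( \mathrm{id} - \tau \, \nabla \log \frac{\rho_k}{\pi} \Big)_{\#} \rho_k ,
$$

where $\#$ denotes the pushforward of a measure. Unlike the implicit (JKO) scheme, this explicit map carries no built-in guarantee of decreasing $\mathcal{F}$, which is consistent with the failures the note's counter-examples showcase.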
categories: null
doi: null
id: 2406.08210
year: null
venue: null
link: http://arxiv.org/pdf/2406.08210v1
updated: 2024-06-12T13:41:07Z
published: 2024-06-12T13:41:07Z
title: Expressivity and Generalization: Fragment-Biases for Molecular GNNs
abstract: Although recent advances in higher-order Graph Neural Networks (GNNs) improve the theoretical expressiveness and molecular property predictive performance, they often fall short of the empirical performance of models that explicitly use fragment information as inductive bias. However, for these approaches, there exists no theoretical expressivity study. In this work, we propose the Fragment-WL test, an extension to the well-known Weisfeiler & Leman (WL) test, which enables the theoretical analysis of these fragment-biased GNNs. Building on the insights gained from the Fragment-WL test, we develop a new GNN architecture and a fragmentation with infinite vocabulary that significantly boosts expressiveness. We show the effectiveness of our model on synthetic and real-world data where we outperform all GNNs on Peptides and have 12% lower error than all GNNs on ZINC and 34% lower error than other fragment-biased models. Furthermore, we show that our model exhibits superior generalization capabilities compared to the latest transformer-based architectures, positioning it as a robust solution for a range of molecular modeling tasks.
authors: Tom Wollschläger, Niklas Kemper, Leon Hetzel, Johanna Sommer, Stephan Günnemann

categories: null
doi: null
id: 2406.08217
year: null
venue: null
link: http://arxiv.org/pdf/2406.08217v1
updated: 2024-06-12T13:45:47Z
published: 2024-06-12T13:45:47Z
title: Runtime Freezing: Dynamic Class Loss for Multi-Organ 3D Segmentation
abstract: Segmentation has become a crucial pre-processing step to many refined downstream tasks, and particularly so in the medical domain. Even with recent improvements in segmentation models, many segmentation tasks remain difficult. When multiple organs are segmented simultaneously, difficulties are due not only to the limited availability of labelled data, but also to class imbalance. In this work we propose dynamic class-based loss strategies to mitigate the effects of highly imbalanced training data. We show how our approach improves segmentation performance on a challenging Multi-Class 3D Abdominal Organ dataset.
authors: James Willoughby, Irina Voiculescu

categories: null
doi: null
id: 2406.08226
year: null
venue: null
link: http://arxiv.org/pdf/2406.08226v1
updated: 2024-06-12T13:55:12Z
published: 2024-06-12T13:55:12Z
title: DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
abstract: This work explores knowledge distillation (KD) for visually-rich document (VRD) applications such as document layout analysis (DLA) and document image classification (DIC). While VRD research is dependent on increasingly sophisticated and cumbersome models, the field has neglected to study efficiency via model compression. Here, we design a KD experimentation methodology for leaner, more performant models on document understanding (DU) tasks that are integral within larger task pipelines. We carefully selected KD strategies (response-based, feature-based) for distilling knowledge to and from backbones with different architectures (ResNet, ViT, DiT) and capacities (base, small, tiny). We study what affects the teacher-student knowledge gap and find that some methods (tuned vanilla KD, MSE, SimKD with an apt projector) can consistently outperform supervised student training. Furthermore, we design downstream task setups to evaluate covariate shift and the robustness of distilled DLA models on zero-shot layout-aware document visual question answering (DocVQA). DLA-KD experiments result in a large mAP knowledge gap, which unpredictably translates to downstream robustness, accentuating the need to further explore how to efficiently obtain more semantic document layout awareness.
authors: Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas

categories: null
doi: null
id: 2406.08229
year: null
venue: null
link: http://arxiv.org/abs/2406.08229v2
updated: 2024-07-11T14:33:23Z
published: 2024-06-12T13:59:31Z
title: GPT4Rec: Graph Prompt Tuning for Streaming Recommendation
abstract: In the realm of personalized recommender systems, the challenge of adapting to evolving user preferences and the continuous influx of new users and items is paramount. Conventional models, typically reliant on a static training-test approach, struggle to keep pace with these dynamic demands. Streaming recommendation, particularly through continual graph learning, has emerged as a novel solution. However, existing methods in this area either rely on historical data replay, which is increasingly impractical due to stringent data privacy regulations; or are unable to effectively address the over-stability issue; or depend on model-isolation and expansion strategies. To tackle these difficulties, we present GPT4Rec, a Graph Prompt Tuning method for streaming Recommendation. Given the evolving user-item interaction graph, GPT4Rec first disentangles the graph patterns into multiple views. After isolating specific interaction patterns and relationships in different views, GPT4Rec utilizes lightweight graph prompts to efficiently guide the model across varying interaction patterns within the user-item graph. Firstly, node-level prompts are employed to instruct the model to adapt to changes in the attributes or properties of individual nodes within the graph. Secondly, structure-level prompts guide the model in adapting to broader patterns of connectivity and relationships within the graph. Finally, view-level prompts are innovatively designed to facilitate the aggregation of information from multiple disentangled views. These prompt designs allow GPT4Rec to synthesize a comprehensive understanding of the graph, ensuring that all vital aspects of the user-item interactions are considered and effectively integrated. Experiments on four diverse real-world datasets demonstrate the effectiveness and efficiency of our proposal.
authors: Peiyan Zhang, Yuchen Yan, Xi Zhang, Liying Kang, Chaozhuo Li, Feiran Huang, Senzhang Wang, Sunghun Kim

categories: null
doi: null
id: 2406.08234
year: null
venue: null
link: http://arxiv.org/pdf/2406.08234v1
updated: 2024-06-12T14:01:12Z
published: 2024-06-12T14:01:12Z
title: MaIL: Improving Imitation Learning with Mamba
abstract: This work introduces Mamba Imitation Learning (MaIL), a novel imitation learning (IL) architecture that offers a computationally efficient alternative to state-of-the-art (SoTA) Transformer policies. Transformer-based policies have achieved remarkable results due to their ability to handle human-recorded data with inherently non-Markovian behavior. However, their high performance comes with the drawback of large models that complicate effective training. While state space models (SSMs) have been known for their efficiency, they were not able to match the performance of Transformers. Mamba significantly improves the performance of SSMs and rivals Transformers, positioning it as an appealing alternative for IL policies. MaIL leverages Mamba as a backbone and introduces a formalism that allows using Mamba in an encoder-decoder structure. This formalism makes it a versatile architecture that can be used as a standalone policy or as part of a more advanced architecture, such as a diffuser in the diffusion process. Extensive evaluations on the LIBERO IL benchmark and three real robot experiments show that MaIL: i) outperforms Transformers in all LIBERO tasks, ii) achieves good performance even with small datasets, iii) is able to effectively process multi-modal sensory inputs, iv) is more robust to input noise compared to Transformers.
authors: Xiaogang Jia, Qian Wang, Atalay Donat, Bowen Xing, Ge Li, Hongyi Zhou, Onur Celik, Denis Blessing, Rudolf Lioutikov, Gerhard Neumann

categories: null
doi: null
id: 2406.08238
year: null
venue: null
link: http://arxiv.org/pdf/2406.08238v1
updated: 2024-06-12T14:04:26Z
published: 2024-06-12T14:04:26Z
title: Residual Learning and Context Encoding for Adaptive Offline-to-Online Reinforcement Learning
abstract: Offline reinforcement learning (RL) allows learning sequential behavior from fixed datasets. Since offline datasets do not cover all possible situations, many methods collect additional data during online fine-tuning to improve performance. In general, these methods assume that the transition dynamics remain the same during both the offline and online phases of training. However, in many real-world applications, such as outdoor construction and navigation over rough terrain, it is common for the transition dynamics to vary between the offline and online phases. Moreover, the dynamics may vary during the online fine-tuning. To address this problem of changing dynamics from offline to online RL, we propose a residual learning approach that infers dynamics changes to correct the outputs of the offline solution. During the online fine-tuning phase, we train a context encoder to learn a representation that is consistent inside the current online learning environment while being able to predict dynamic transitions. Experiments in D4RL MuJoCo environments, modified to support dynamics changes upon environment resets, show that our approach can adapt to these dynamic changes and generalize to unseen perturbations in a sample-efficient way, whilst comparison methods cannot.
authors: Mohammadreza Nakhaei, Aidan Scannell, Joni Pajarinen

null
null
2406.08246
null
null
http://arxiv.org/pdf/2406.08246v1
2024-06-12T14:15:15Z
2024-06-12T14:15:15Z
Leveraging Large Language Models for Web Scraping
Large Language Models (LLMs) demonstrate remarkable capabilities in replicating human tasks and boosting productivity. However, their direct application for data extraction presents limitations due to a prioritisation of fluency over factual accuracy and a restricted ability to manipulate specific information. To overcome these limitations, this research leverages the knowledge representation power of pre-trained LLMs and the targeted information access enabled by Retrieval-Augmented Generation (RAG) models to investigate a general-purpose, accurate data scraping recipe for RAG models designed for language generation. To capture knowledge in a more modular and interpretable way, we use pre-trained language models with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus. We utilised the RAG model architecture and did an in-depth analysis of its capabilities under three tasks: (i) semantic classification of HTML elements, (ii) chunking HTML text for effective understanding, and (iii) comparing results from different LLMs and ranking algorithms. While previous work has developed dedicated architectures and training procedures for HTML understanding and extraction, we show that LLMs pre-trained on standard natural language, with the addition of effective chunking, searching, and ranking algorithms, can prove to be an efficient data scraping tool for extracting complex data from unstructured text. Future research directions include addressing the challenges of provenance tracking and dynamic knowledge updates within the proposed RAG-based data extraction framework. By overcoming these limitations, this approach holds the potential to revolutionise data extraction from vast repositories of textual information.
[ "['Aman Ahluwalia' 'Suhrud Wani']" ]
null
null
2406.08249
null
null
http://arxiv.org/pdf/2406.08249v1
2024-06-12T14:18:07Z
2024-06-12T14:18:07Z
Dataset Enhancement with Instance-Level Augmentations
We present a method for expanding a dataset by incorporating knowledge from the wide distribution of pre-trained latent diffusion models. Data augmentations typically incorporate inductive biases about the image formation process into the training (e.g. translation, scaling, colour changes, etc.). Here, we go beyond simple pixel transformations and introduce the concept of instance-level data augmentation by repainting parts of the image at the level of object instances. The method combines a conditional diffusion model with depth and edge map control conditioning to seamlessly repaint individual objects inside the scene, making it applicable to any segmentation or detection dataset. Used as a data augmentation method, it improves the performance and generalization of the state-of-the-art salient object detection, semantic segmentation and object detection models. By redrawing all privacy-sensitive instances (people, license plates, etc.), the method is also applicable for data anonymization. We also release fully synthetic and anonymized expansions for popular datasets: COCO, Pascal VOC and DUTS.
[ "['Orest Kupyn' 'Christian Rupprecht']" ]
null
null
2406.08267
null
null
http://arxiv.org/pdf/2406.08267v1
2024-06-12T14:35:13Z
2024-06-12T14:35:13Z
A deep cut into Split Federated Self-supervised Learning
Collaborative self-supervised learning has recently become feasible in highly distributed environments by dividing the network layers between client devices and a central server. However, state-of-the-art methods, such as MocoSFL, are optimized for network division at the initial layers, which decreases the protection of the client data and increases communication overhead. In this paper, we demonstrate that splitting depth is crucial for maintaining privacy and communication efficiency in distributed training. We also show that MocoSFL suffers from catastrophic quality deterioration at minimal communication overhead. As a remedy, we introduce Momentum-Aligned contrastive Split Federated Learning (MonAcoSFL), which aligns the online and momentum client models during the training procedure. Consequently, we achieve state-of-the-art accuracy while significantly reducing the communication overhead, making MonAcoSFL more practical in real-world scenarios.
[ "['Marcin Przewięźlikowski' 'Marcin Osial' 'Bartosz Zieliński'\n 'Marek Śmieja']" ]
null
null
2406.08269
null
null
http://arxiv.org/pdf/2406.08269v2
2024-06-15T04:00:54Z
2024-06-12T14:35:19Z
Analyzing constrained LLM through PDFA-learning
We define a congruence that copes with null next-symbol probabilities that arise when the output of a language model is constrained by some means during text generation. We develop an algorithm for efficiently learning the quotient with respect to this congruence and evaluate it on case studies for analyzing statistical properties of LLMs.
[ "['Matías Carrasco' 'Franz Mayr' 'Sergio Yovine' 'Johny Kidd'\n 'Martín Iturbide' 'Juan Pedro da Silva' 'Alejo Garat']" ]
null
null
2406.08272
null
null
http://arxiv.org/pdf/2406.08272v1
2024-06-12T14:37:29Z
2024-06-12T14:37:29Z
The Importance of Positional Encoding Initialization in Transformers for Relational Reasoning
Relational reasoning refers to the ability to infer and understand the relations between multiple entities. In humans, this ability underpins many higher cognitive functions, such as problem solving and decision-making, and has been reliably linked to fluid intelligence. Despite machine learning models making impressive advances across various domains, such as natural language processing and vision, the extent to which such models can perform relational reasoning tasks remains unclear. Here we study the importance of positional encoding (PE) for relational reasoning in the Transformer, and find that a learnable PE outperforms all other commonly-used PEs (e.g., absolute, relative, rotary, etc.). Moreover, we find that when using a PE with a learnable parameter, the choice of initialization greatly influences the learned representations and its downstream generalization performance. Specifically, we find that a learned PE initialized from a small-norm distribution can 1) uncover ground-truth position information, 2) generalize in the presence of noisy inputs, and 3) produce behavioral patterns that are consistent with human performance. Our results shed light on the importance of learning high-performing and robust PEs during relational reasoning tasks, which will prove useful for tasks in which ground truth positions are not provided or not known.
[ "['Takuya Ito' 'Luca Cocchi' 'Tim Klinger' 'Parikshit Ram'\n 'Murray Campbell' 'Luke Hearne']" ]
null
null
2406.08281
null
null
http://arxiv.org/pdf/2406.08281v1
2024-06-12T14:47:27Z
2024-06-12T14:47:27Z
Conformal Load Prediction with Transductive Graph Autoencoders
Predicting edge weights on graphs has various applications, from transportation systems to social networks. This paper describes a Graph Neural Network (GNN) approach for edge weight prediction with guaranteed coverage. We leverage conformal prediction to calibrate the GNN outputs and produce valid prediction intervals. We handle data heteroscedasticity through error reweighting and Conformalized Quantile Regression (CQR). We compare the performance of our method against baseline techniques on real-world transportation datasets. Our approach has better coverage and efficiency than all baselines and showcases robustness and adaptability.
[ "['Rui Luo' 'Nicolo Colombo']" ]
null
null
2406.08287
null
null
http://arxiv.org/pdf/2406.08287v2
2024-06-14T06:25:36Z
2024-06-12T14:53:23Z
Pre-Training Identification of Graph Winning Tickets in Adaptive Spatial-Temporal Graph Neural Networks
In this paper, we present a novel method to significantly enhance the computational efficiency of Adaptive Spatial-Temporal Graph Neural Networks (ASTGNNs) by introducing the concept of the Graph Winning Ticket (GWT), derived from the Lottery Ticket Hypothesis (LTH). By adopting a pre-determined star topology as a GWT prior to training, we balance edge reduction with efficient information propagation, reducing computational demands while maintaining high model performance. Both the time and memory computational complexity of generating adaptive spatial-temporal graphs are significantly reduced from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$. Our approach streamlines the ASTGNN deployment by eliminating the need for exhaustive training, pruning, and retraining cycles, and demonstrates empirically across various datasets that it is possible to achieve comparable performance to full models with substantially lower computational costs. Specifically, our approach enables training ASTGNNs on the largest scale spatial-temporal dataset using a single A6000 equipped with 48 GB of memory, overcoming the out-of-memory issue encountered during original training and even achieving state-of-the-art performance. Furthermore, we delve into the effectiveness of the GWT from the perspective of spectral graph theory, providing substantial theoretical support. This advancement not only proves the existence of efficient sub-networks within ASTGNNs but also broadens the applicability of the LTH in resource-constrained settings, marking a significant step forward in the field of graph neural networks. Code is available at https://anonymous.4open.science/r/paper-1430.
[ "['Wenying Duan' 'Tianxiang Fang' 'Hong Rao' 'Xiaoxi He']" ]
null
null
2406.08288
null
null
http://arxiv.org/pdf/2406.08288v2
2024-06-16T13:07:49Z
2024-06-12T14:53:30Z
Decoupling the Class Label and the Target Concept in Machine Unlearning
Machine unlearning, an emerging research topic driven by data regulations, aims to adjust a trained model to approximate a retrained one that excludes a portion of training data. Previous studies showed that class-wise unlearning is successful in forgetting the knowledge of a target class, through gradient ascent on the forgetting data or fine-tuning with the remaining data. However, while these methods are useful, they are insufficient as the class label and the target concept are often considered to coincide. In this work, we decouple them by considering the label domain mismatch and investigate three problems beyond the conventional all-matched forgetting, namely, target mismatch, model mismatch, and data mismatch forgetting. We systematically analyze the new challenges in restrictively forgetting the target concept and also reveal crucial forgetting dynamics at the representation level to realize these tasks. Based on that, we propose a general framework, namely, TARget-aware Forgetting (TARF). It enables the additional tasks to actively forget the target concept while maintaining the rest, by simultaneously conducting annealed gradient ascent on the forgetting data and selected gradient descent on the hard-to-affect remaining data. Empirically, various experiments under the newly introduced settings are conducted to demonstrate the effectiveness of our TARF.
[ "['Jianing Zhu' 'Bo Han' 'Jiangchao Yao' 'Jianliang Xu' 'Gang Niu'\n 'Masashi Sugiyama']" ]
null
null
2406.08294
null
null
http://arxiv.org/pdf/2406.08294v1
2024-06-12T14:57:37Z
2024-06-12T14:57:37Z
Vessel Re-identification and Activity Detection in Thermal Domain for Maritime Surveillance
Maritime surveillance is vital to mitigate illegal activities such as drug smuggling, illegal fishing, and human trafficking. Vision-based maritime surveillance is challenging mainly due to visibility issues at night, which results in failures in re-identifying vessels and detecting suspicious activities. In this paper, we introduce a thermal, vision-based approach for maritime surveillance with object tracking, vessel re-identification, and suspicious activity detection capabilities. For vessel re-identification, we propose a novel viewpoint-independent algorithm which compares features of the sides of the vessel separately (separate side-spaces) leveraging shape information in the absence of color features. We propose techniques to adapt tracking and activity detection algorithms for the thermal domain and train them using a thermal dataset we created. This dataset will be the first publicly available benchmark dataset for thermal maritime surveillance. Our system is capable of re-identifying vessels with an 81.8% Top1 score and identifying suspicious activities with a 72.4% frame mAP score; a new benchmark for each task in the thermal domain.
[ "['Yasod Ginige' 'Ransika Gunasekara' 'Darsha Hewavitharana'\n 'Manjula Ariyarathne' 'Ranga Rodrigo' 'Peshala Jayasekara']" ]
null
null
2406.08307
null
null
http://arxiv.org/pdf/2406.08307v1
2024-06-12T15:08:15Z
2024-06-12T15:08:15Z
Measuring model variability using robust non-parametric testing
Training a deep neural network often involves stochastic optimization, meaning each run will produce a different model. The seed used to initialize random elements of the optimization procedure heavily influences the quality of a trained model, which may be obscured by many commonly reported summary statistics, like accuracy. However, the random seed is often not included in hyper-parameter optimization, perhaps because the relationship between seed and model quality is hard to describe. This work attempts to describe the relationship between deep net models trained with different random seeds and the behavior of the expected model. We adopt robust hypothesis testing to propose a novel summary statistic for network similarity, referred to as the $\alpha$-trimming level. We use the $\alpha$-trimming level to show that the empirical cumulative distribution function of an ensemble model created from a collection of trained models with different random seeds approximates the average of these functions as the number of models in the collection grows large. This insight provides guidance for how many random seeds should be sampled to ensure that an ensemble of these trained models is a reliable representative. We also show that the $\alpha$-trimming level is more expressive than different performance metrics like validation accuracy, churn, or expected calibration error when taken alone and may help with random seed selection in a more principled fashion. We demonstrate the value of the proposed statistic in real experiments and illustrate the advantage of fine-tuning over random seed with an experiment in transfer learning.
[ "['Sinjini Banerjee' 'Tim Marrinan' 'Reilly Cannon' 'Tony Chiang'\n 'Anand D. Sarwate']" ]
null
null
2406.08310
null
null
http://arxiv.org/pdf/2406.08310v2
2024-06-14T15:36:00Z
2024-06-12T15:10:44Z
GraphFM: A Comprehensive Benchmark for Graph Foundation Model
Foundation Models (FMs) serve as a general class for the development of artificial intelligence systems, offering broad potential for generalization across a spectrum of downstream tasks. Despite extensive research into self-supervised learning as the cornerstone of FMs, several outstanding issues persist in Graph Foundation Models that rely on graph self-supervised learning, namely: 1) Homogenization. The extent of generalization capability on downstream tasks remains unclear. 2) Scalability. It is unknown how effectively these models can scale to large datasets. 3) Efficiency. The training time and memory usage of these models require evaluation. 4) Training Stop Criteria. Determining the optimal stopping strategy for pre-training across multiple tasks to maximize performance on downstream tasks. To address these questions, we have constructed a rigorous benchmark that thoroughly analyzes and studies the generalization and scalability of self-supervised Graph Neural Network (GNN) models. Regarding generalization, we have implemented and compared the performance of various self-supervised GNN models, trained to generate node representations, across tasks such as node classification, link prediction, and node clustering. For scalability, we have compared the performance of various models after training using full-batch and mini-batch strategies. Additionally, we have assessed the training efficiency of these models by conducting experiments to test their GPU memory usage and throughput. Through these experiments, we aim to provide insights to motivate future research. The code for this benchmark is publicly available at https://github.com/NYUSHCS/GraphFM.
[ "['Yuhao Xu' 'Xinqi Liu' 'Keyu Duan' 'Yi Fang' 'Yu-Neng Chuang'\n 'Daochen Zha' 'Qiaoyu Tan']" ]
null
null
2406.08311
null
null
http://arxiv.org/pdf/2406.08311v2
2024-07-05T06:44:33Z
2024-06-12T15:12:49Z
Causality for Tabular Data Synthesis: A High-Order Structure Causal Benchmark Framework
Tabular synthesis models remain ineffective at capturing complex dependencies, and the quality of synthetic data is still insufficient for comprehensive downstream tasks, such as prediction under distribution shifts, automated decision-making, and cross-table understanding. A major challenge is the lack of prior knowledge about underlying structures and high-order relationships in tabular data. We argue that a systematic evaluation on high-order structural information for tabular data synthesis is the first step towards solving the problem. In this paper, we introduce high-order structural causal information as natural prior knowledge and provide a benchmark framework for the evaluation of tabular synthesis models. The framework allows us to generate benchmark datasets with a flexible range of data generation processes and to train tabular synthesis models using these datasets for further evaluation. We propose multiple benchmark tasks, high-order metrics, and causal inference tasks as downstream tasks for evaluating the quality of synthetic data generated by the trained models. Our experiments demonstrate how to leverage the benchmark framework to evaluate the capability of models to capture high-order structural causal information. Furthermore, our benchmarking results provide an initial assessment of state-of-the-art tabular synthesis models. They clearly reveal significant gaps between ideal and actual performance and show how baseline methods differ. Our benchmark framework is available at https://github.com/TURuibo/CauTabBench.
[ "['Ruibo Tu' 'Zineb Senane' 'Lele Cao' 'Cheng Zhang' 'Hedvig Kjellström'\n 'Gustav Eje Henter']" ]
null
null
2406.08315
null
null
http://arxiv.org/pdf/2406.08315v1
2024-06-12T15:16:26Z
2024-06-12T15:16:26Z
Improving Policy Optimization via $\varepsilon$-Retrain
We present $\varepsilon$-retrain, an exploration strategy designed to encourage a behavioral preference while optimizing policies with monotonic improvement guarantees. To this end, we introduce an iterative procedure for collecting retrain areas -- parts of the state space where an agent did not follow the behavioral preference. Our method then switches between the typical uniform restart state distribution and the retrain areas using a decaying factor $\varepsilon$, allowing agents to retrain on situations where they violated the preference. Experiments over hundreds of seeds across locomotion, navigation, and power network tasks show that our method yields agents that exhibit significant performance and sample efficiency improvements. Moreover, we employ formal verification of neural networks to provably quantify the degree to which agents adhere to behavioral preferences.
[ "['Luca Marzari' 'Changliu Liu' 'Priya L. Donti' 'Enrico Marchesini']" ]
null
null
2406.08316
null
null
http://arxiv.org/pdf/2406.08316v2
2024-06-13T12:59:06Z
2024-06-12T15:16:40Z
Is Programming by Example solved by LLMs?
Programming-by-Examples (PBE) aims to generate an algorithm from input-output examples. Such systems are practically and theoretically important: from an end-user perspective, they are deployed to millions of people, and from an AI perspective, PBE corresponds to a very general form of few-shot inductive inference. Given the success of Large Language Models (LLMs) in code-generation tasks, we investigate here the extent to which LLMs can be said to have `solved' PBE. We experiment on classic domains such as lists and strings, and an uncommon graphics programming domain not well represented in typical pretraining data. We find that pretrained models are not effective at PBE, but that they can be fine-tuned for much higher performance, provided the test problems are in-distribution. We analyze empirically what causes these models to succeed and fail, and take steps toward understanding how to achieve better out-of-distribution generalization. Collectively these results suggest that LLMs make strong progress toward solving the typical suite of PBE tasks, potentially increasing the flexibility and applicability of PBE systems, while also identifying ways in which LLMs still fall short.
[ "['Wen-Ding Li' 'Kevin Ellis']" ]
null
null
2406.08318
null
null
http://arxiv.org/pdf/2406.08318v1
2024-06-12T15:19:25Z
2024-06-12T15:19:25Z
Invariant multiscale neural networks for data-scarce scientific applications
Success of machine learning (ML) in the modern world is largely determined by the abundance of data. However, in many industrial and scientific problems, the amount of data is limited. Application of ML methods to data-scarce scientific problems can be made more effective via several routes; one of them is equivariant neural networks possessing knowledge of symmetries. Here we suggest that the combination of symmetry-aware invariant architectures and stacks of dilated convolutions is a very effective and easy-to-implement recipe allowing sizable improvements in accuracy over standard approaches. We apply it to representative physical problems from different realms: prediction of bandgaps of photonic crystals, and network approximations of magnetic ground states. The suggested invariant multiscale architectures increase the expressibility of networks, which allows them to perform better in all considered cases.
[ "['I. Schurov' 'D. Alforov' 'M. Katsnelson' 'A. Bagrov' 'A. Itin']" ]
null
null
2406.08321
null
null
http://arxiv.org/pdf/2406.08321v1
2024-06-12T15:21:51Z
2024-06-12T15:21:51Z
Deep learning from strongly mixing observations: Sparse-penalized regularization and minimax optimality
The explicit regularization and optimality of deep neural network estimators from independent data have made considerable progress recently. The study of such properties on dependent data is still a challenge. In this paper, we carry out deep learning from strongly mixing observations, and deal with the squared and a broad class of loss functions. We consider sparse-penalized regularization for the deep neural network predictor. For a general framework that includes regression estimation, classification, time series prediction, $\cdots$, an oracle inequality for the expected excess risk is established and a bound on the class of H\"older smooth functions is provided. For nonparametric regression from strongly mixing data and sub-exponential errors, we provide an oracle inequality for the $L_2$ error and investigate an upper bound of this error on a class of H\"older composition functions. For the specific case of nonparametric autoregression with Gaussian and Laplace errors, a lower bound of the $L_2$ error on this H\"older composition class is established. Up to a logarithmic factor, this bound matches its upper bound; so, the deep neural network estimator attains the minimax optimal rate.
[ "['William Kengne' 'Modou Wade']" ]
null
null
2406.08322
null
null
http://arxiv.org/pdf/2406.08322v1
2024-06-12T15:22:56Z
2024-06-12T15:22:56Z
MMIL: A novel algorithm for disease associated cell type discovery
Single-cell datasets often lack individual cell labels, making it challenging to identify cells associated with disease. To address this, we introduce Mixture Modeling for Multiple Instance Learning (MMIL), an expectation maximization method that enables the training and calibration of cell-level classifiers using patient-level labels. Our approach can be used to train e.g. lasso logistic regression models, gradient boosted trees, and neural networks. When applied to clinically-annotated, primary patient samples in Acute Myeloid Leukemia (AML) and Acute Lymphoblastic Leukemia (ALL), our method accurately identifies cancer cells, generalizes across tissues and treatment timepoints, and selects biologically relevant features. In addition, MMIL is capable of incorporating cell labels into model training when they are known, providing a powerful framework for leveraging both labeled and unlabeled data simultaneously. Mixture Modeling for MIL offers a novel approach for cell classification, with significant potential to advance disease understanding and management, especially in scenarios with unknown gold-standard labels and high dimensionality.
[ "['Erin Craig' 'Timothy Keyes' 'Jolanda Sarno' 'Maxim Zaslavsky'\n 'Garry Nolan' 'Kara Davis' 'Trevor Hastie' 'Robert Tibshirani']" ]
null
null
2406.08330
null
null
http://arxiv.org/pdf/2406.08330v1
2024-06-12T15:34:28Z
2024-06-12T15:34:28Z
It's all about PR -- Smart Benchmarking AI Accelerators using Performance Representatives
Statistical models are widely used to estimate the performance of commercial off-the-shelf (COTS) AI hardware accelerators. However, training of statistical performance models often requires vast amounts of data, leading to a significant time investment, and can be difficult in case of limited hardware availability. To alleviate this problem, we propose a novel performance modeling methodology that significantly reduces the number of training samples while maintaining good accuracy. Our approach leverages knowledge of the target hardware architecture and initial parameter sweeps to identify a set of Performance Representatives (PR) for deep neural network (DNN) layers. These PRs are then used for benchmarking, building a statistical performance model, and making estimations. This targeted approach drastically reduces the number of training samples needed, as opposed to random sampling, to achieve better estimation accuracy. We achieve a Mean Absolute Percentage Error (MAPE) of as low as 0.02% for single-layer estimations and 0.68% for whole DNN estimations with less than 10000 training samples. The results demonstrate the superiority of our method for single-layer estimations compared to models trained with randomly sampled datasets of the same size.
[ "['Alexander Louis-Ferdinand Jung' 'Jannik Steinmetz' 'Jonathan Gietz'\n 'Konstantin Lübeck' 'Oliver Bringmann']" ]
null
null
2406.08331
null
null
http://arxiv.org/pdf/2406.08331v1
2024-06-12T15:34:42Z
2024-06-12T15:34:42Z
Genetic Column Generation for Computing Lower Bounds for Adversarial Classification
Recent theoretical results on adversarial multi-class classification showed a similarity to the multi-marginal formulation of Wasserstein-barycenter in optimal transport. Unfortunately, both problems suffer from the curse of dimension, making it hard to exploit the nice linear program structure of the problems for numerical calculations. We investigate how ideas from Genetic Column Generation for multi-marginal optimal transport can be used to overcome the curse of dimension in computing the minimal adversarial risk in multi-class classification.
[ "['Maximilian Penka']" ]
null
null
2406.08334
null
null
http://arxiv.org/pdf/2406.08334v1
2024-06-12T15:40:06Z
2024-06-12T15:40:06Z
ProTrain: Efficient LLM Training via Memory-Aware Techniques
It is extremely memory-hungry to train Large Language Models (LLM). To solve this problem, existing work exploits the combination of CPU and GPU for the training process, such as ZeRO-Offload. Such a technique largely democratizes billion-scale model training, making it possible to train with a few consumer graphics cards. However, based on our observation, existing frameworks often provide coarse-grained memory management and require experienced experts in configuration tuning, leading to suboptimal hardware utilization and performance. This paper proposes ProTrain, a novel training system that intelligently balances memory usage and performance by coordinating memory, computation, and IO. ProTrain achieves adaptive memory management through Chunk-Based Model State Management and Block-Wise Activation Management, guided by a Memory-Aware Runtime Profiler without user intervention. ProTrain does not change the training algorithm and thus does not compromise accuracy. Experiments show that ProTrain improves training throughput by 1.43$\times$ to 2.71$\times$ compared to the SOTA training systems.
[ "['Hanmei Yang' 'Jin Zhou' 'Yao Fu' 'Xiaoqun Wang' 'Ramine Roane'\n 'Hui Guan' 'Tongping Liu']" ]
null
null
2406.08335
null
null
http://arxiv.org/pdf/2406.08335v1
2024-06-12T15:41:06Z
2024-06-12T15:41:06Z
A Survey of Pipeline Tools for Data Engineering
Currently, a variety of pipeline tools are available for use in data engineering. Data scientists can use these tools to resolve data wrangling issues associated with data and accomplish some data engineering tasks from data ingestion through data preparation to utilization as input for machine learning (ML). Some of these tools have essential built-in components or can be combined with other tools to perform desired data engineering operations. While some tools are wholly or partly commercial, several open-source tools are available to perform expert-level data engineering tasks. This survey examines the broad categories and examples of pipeline tools based on their design and data engineering intentions. These categories are Extract Transform Load/Extract Load Transform (ETL/ELT), pipelines for Data Integration, Ingestion, and Transformation, Data Pipeline Orchestration and Workflow Management, and Machine Learning Pipelines. The survey also provides a broad outline of the utilization with examples within these broad groups and finally, a discussion is presented with case studies indicating the usage of pipeline tools for data engineering. The studies present some first-user application experiences with sample data, some complexities of the applied pipeline, and a summary note of approaches to using these tools to prepare data for machine learning.
[ "['Anthony Mbata' 'Yaji Sripada' 'Mingjun Zhong']" ]
null
null
2406.08354
null
null
http://arxiv.org/pdf/2406.08354v1
2024-06-12T16:00:16Z
2024-06-12T16:00:16Z
DocSynthv2: A Practical Autoregressive Modeling for Document Generation
While the generation of document layouts has been extensively explored, comprehensive document generation encompassing both layout and content presents a more complex challenge. This paper delves into this advanced domain, proposing a novel approach called DocSynthv2 through the development of a simple yet effective autoregressive structured model. Our model, distinct in its integration of both layout and textual cues, marks a step beyond existing layout-generation approaches. By focusing on the relationship between the structural elements and the textual content within documents, we aim to generate cohesive and contextually relevant documents without any reliance on visual components. Through experimental studies on our curated benchmark for the new task, we demonstrate the ability of our model combining layout and textual information in enhancing the generation quality and relevance of documents, opening new pathways for research in document creation and automated design. Our findings emphasize the effectiveness of autoregressive models in handling complex document generation tasks.
[ "['Sanket Biswas' 'Rajiv Jain' 'Vlad I. Morariu' 'Jiuxiang Gu'\n 'Puneet Mathur' 'Curtis Wigington' 'Tong Sun' 'Josep Lladós']" ]
null
null
2406.08373
null
null
http://arxiv.org/pdf/2406.08373v1
2024-06-12T16:21:11Z
2024-06-12T16:21:11Z
Deep Learning Based Joint Multi-User MISO Power Allocation and Beamforming Design
The evolution of fifth generation (5G) wireless communication networks has led to an increased need for wireless resource management solutions that provide higher data rates, wide coverage, low latency, and power efficiency. Yet, many existing traditional approaches remain impractical due to computational limitations, unrealistic presumptions of static network conditions, and algorithm initialization dependencies. This creates an important gap between theoretical analysis and real-time processing of algorithms. To bridge this gap, deep learning based techniques offer promising solutions with their representational capabilities for universal function approximation. We propose a novel unsupervised deep learning based joint power allocation and beamforming design for multi-user multiple-input single-output (MU-MISO) systems. The objective is to enhance the spectral efficiency by maximizing the sum-rate with the proposed joint design framework, NNBF-P, while also offering a computationally efficient solution in contrast to conventional approaches. We conduct experiments in diverse settings to compare the performance of NNBF-P with zero-forcing beamforming (ZFBF), minimum mean square error (MMSE) beamforming, and NNBF, which is our deep learning based beamforming design without the joint power allocation scheme. Experimental results demonstrate the superiority of NNBF-P over ZFBF and MMSE, while NNBF can underperform MMSE and ZFBF in some experimental settings. The results also demonstrate the effectiveness of the joint design framework relative to NNBF.
[ "['Cemil Vahapoglu' \"Timothy J. O'Shea\" 'Tamoghna Roy' 'Sennur Ulukus']" ]
null
null
2406.08391
null
null
http://arxiv.org/pdf/2406.08391v1
2024-06-12T16:41:31Z
2024-06-12T16:41:31Z
Large Language Models Must Be Taught to Know What They Don't Know
When using large language models (LLMs) in high-stakes applications, we need to know when we can trust their predictions. Some works argue that prompting high-performance LLMs is sufficient to produce calibrated uncertainties, while others introduce sampling methods that can be prohibitively expensive. In this work, we first argue that prompting on its own is insufficient to achieve good calibration and then show that fine-tuning on a small dataset of correct and incorrect answers can create an uncertainty estimate with good generalization and small computational overhead. We show that a thousand graded examples are sufficient to outperform baseline methods and that training through the features of a model is necessary for good performance and tractable for large open-source models when using LoRA. We also investigate the mechanisms that enable reliable LLM uncertainty estimation, finding that many models can be used as general-purpose uncertainty estimators, applicable not just to their own uncertainties but also the uncertainty of other models. Lastly, we show that uncertainty estimates inform human use of LLMs in human-AI collaborative settings through a user study.
[ "['Sanyam Kapoor' 'Nate Gruver' 'Manley Roberts' 'Katherine Collins'\n 'Arka Pal' 'Umang Bhatt' 'Adrian Weller' 'Samuel Dooley' 'Micah Goldblum'\n 'Andrew Gordon Wilson']" ]
null
null
2406.08395
null
null
http://arxiv.org/pdf/2406.08395v1
2024-06-12T16:45:09Z
2024-06-12T16:45:09Z
Time-Constrained Robust MDPs
Robust reinforcement learning is essential for deploying reinforcement learning algorithms in real-world scenarios where environmental uncertainty predominates. Traditional robust reinforcement learning often depends on rectangularity assumptions, where adverse probability measures of outcome states are assumed to be independent across different states and actions. This assumption, rarely fulfilled in practice, leads to overly conservative policies. To address this problem, we introduce a new time-constrained robust MDP (TC-RMDP) formulation that considers multifactorial, correlated, and time-dependent disturbances, thus more accurately reflecting real-world dynamics. This formulation goes beyond the conventional rectangularity paradigm, offering new perspectives and expanding the analytical framework for robust RL. We propose three distinct algorithms, each using varying levels of environmental information, and evaluate them extensively on continuous control benchmarks. Our results demonstrate that these algorithms yield an efficient tradeoff between performance and robustness, outperforming traditional deep robust RL methods in time-constrained environments while preserving robustness in classical benchmarks. This study revisits the prevailing assumptions in robust RL and opens new avenues for developing more practical and realistic RL applications.
[ "['Adil Zouitine' 'David Bertoin' 'Pierre Clavier' 'Matthieu Geist'\n 'Emmanuel Rachelson']" ]
null
null
2406.08398
null
null
http://arxiv.org/pdf/2406.08398v1
2024-06-12T16:46:12Z
2024-06-12T16:46:12Z
cPAPERS: A Dataset of Situated and Multimodal Interactive Conversations in Scientific Papers
An emerging area of research in situated and multimodal interactive conversations (SIMMC) includes interactions in scientific papers. Since scientific papers are primarily composed of text, equations, figures, and tables, SIMMC methods must be developed specifically for each component to support the depth of inquiry and interactions required by research scientists. This work introduces Conversational Papers (cPAPERS), a dataset of conversational question-answer pairs from reviews of academic papers grounded in these paper components and their associated references from scientific documents available on arXiv. We present a data collection strategy to collect these question-answer pairs from OpenReview and associate them with contextual information from LaTeX source files. Additionally, we present a series of baseline approaches utilizing Large Language Models (LLMs) in both zero-shot and fine-tuned configurations to address the cPAPERS dataset.
[ "['Anirudh Sundar' 'Jin Xu' 'William Gay' 'Christopher Richardson'\n 'Larry Heck']" ]
null
null
2406.08399
null
null
http://arxiv.org/pdf/2406.08399v1
2024-06-12T16:47:54Z
2024-06-12T16:47:54Z
Differentiable Cost-Parameterized Monge Map Estimators
Within the field of optimal transport (OT), the choice of ground cost is crucial to ensuring that the optimality of a transport map corresponds to usefulness in real-world applications. It is therefore desirable to use known information to tailor cost functions and hence learn OT maps which are adapted to the problem at hand. By considering a class of neural ground costs whose Monge maps have a known form, we construct a differentiable Monge map estimator which can be optimized to be consistent with known information about an OT map. In doing so, we simultaneously learn both an OT map estimator and a corresponding adapted cost function. Through suitable choices of loss function, our method provides a general approach for incorporating prior information about the Monge map itself when learning adapted OT maps and cost functions.
[ "['Samuel Howard' 'George Deligiannidis' 'Patrick Rebeschini'\n 'James Thornton']" ]
null
null
2406.08401
null
null
http://arxiv.org/pdf/2406.08401v1
2024-06-12T16:50:12Z
2024-06-12T16:50:12Z
Nyström Kernel Stein Discrepancy
Kernel methods underpin many of the most successful approaches in data science and statistics, and they allow representing probability measures as elements of a reproducing kernel Hilbert space without loss of information. Recently, the kernel Stein discrepancy (KSD), which combines Stein's method with kernel techniques, gained considerable attention. Through the Stein operator, KSD allows the construction of powerful goodness-of-fit tests where it is sufficient to know the target distribution up to a multiplicative constant. However, the typical U- and V-statistic-based KSD estimators suffer from a quadratic runtime complexity, which hinders their application in large-scale settings. In this work, we propose a Nyström-based KSD acceleration -- with runtime $\mathcal{O}\!\left(mn+m^3\right)$ for $n$ samples and $m \ll n$ Nyström points -- show its $\sqrt{n}$-consistency under the null with a classical sub-Gaussian assumption, and demonstrate its applicability for goodness-of-fit testing on a suite of benchmarks.
[ "['Florian Kalinke' 'Zoltan Szabo' 'Bharath K. Sriperumbudur']" ]
null
null
2406.08402
null
null
http://arxiv.org/pdf/2406.08402v1
2024-06-12T16:51:54Z
2024-06-12T16:51:54Z
Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models
Large audio-language models (LALMs) enhance traditional large language models by integrating audio perception capabilities, allowing them to tackle audio-related tasks. Previous research has primarily focused on assessing the performance of LALMs across various tasks, while overlooking their reliability, particularly concerning issues like object hallucination. In our study, we introduce methods to assess the extent of object hallucination of publicly available LALMs. Our findings reveal that LALMs are comparable to specialized audio captioning models in their understanding of audio content, but struggle to answer discriminative questions, specifically those requiring the identification of the presence of particular object sounds within an audio clip. This limitation highlights a critical weakness in current LALMs: their inadequate understanding of discriminative queries. Moreover, we explore the potential of prompt engineering to enhance LALMs' performance on discriminative questions.
[ "['Chun-Yi Kuan' 'Wei-Ping Huang' 'Hung-yi Lee']" ]
null
null
2406.08404
null
null
http://arxiv.org/pdf/2406.08404v1
2024-06-12T16:52:54Z
2024-06-12T16:52:54Z
Scaling Value Iteration Networks to 5000 Layers for Extreme Long-Term Planning
The Value Iteration Network (VIN) is an end-to-end differentiable architecture that performs value iteration on a latent MDP for planning in reinforcement learning (RL). However, VINs struggle to scale to long-term and large-scale planning tasks, such as navigating a $100\times 100$ maze -- a task which typically requires thousands of planning steps to solve. We observe that this deficiency is due to two issues: the representation capacity of the latent MDP and the planning module's depth. We address these by augmenting the latent MDP with a dynamic transition kernel, dramatically improving its representational capacity, and, to mitigate the vanishing gradient problem, introducing an "adaptive highway loss" that constructs skip connections to improve gradient flow. We evaluate our method on both 2D maze navigation environments and the ViZDoom 3D navigation benchmark. We find that our new method, named Dynamic Transition VIN (DT-VIN), easily scales to 5000 layers and casually solves challenging versions of the above tasks. Altogether, we believe that DT-VIN represents a concrete step forward in performing long-term large-scale planning in RL environments.
[ "['Yuhui Wang' 'Qingyuan Wu' 'Weida Li' 'Dylan R. Ashley'\n 'Francesco Faccio' 'Chao Huang' 'Jürgen Schmidhuber']" ]
null
null
2406.08406
null
null
http://arxiv.org/pdf/2406.08406v1
2024-06-12T16:53:51Z
2024-06-12T16:53:51Z
RRLS : Robust Reinforcement Learning Suite
Robust reinforcement learning is the problem of learning control policies that provide optimal worst-case performance against a span of adversarial environments. It is a crucial ingredient for deploying algorithms in real-world scenarios with prevalent environmental uncertainties and has been a long-standing object of attention in the community, without a standardized set of benchmarks. This contribution endeavors to fill this gap. We introduce the Robust Reinforcement Learning Suite (RRLS), a benchmark suite based on Mujoco environments. RRLS provides six continuous control tasks with two types of uncertainty sets for training and evaluation. Our benchmark aims to standardize robust reinforcement learning tasks, facilitating reproducible and comparable experiments, in particular those from recent state-of-the-art contributions, for which we demonstrate the use of RRLS. It is also designed to be easily expandable to new environments. The source code is available at \href{https://github.com/SuReLI/RRLS}{https://github.com/SuReLI/RRLS}.
[ "['Adil Zouitine' 'David Bertoin' 'Pierre Clavier' 'Matthieu Geist'\n 'Emmanuel Rachelson']" ]
null
null
2406.08413
null
null
http://arxiv.org/pdf/2406.08413v1
2024-06-12T16:57:58Z
2024-06-12T16:57:58Z
Memory Is All You Need: An Overview of Compute-in-Memory Architectures for Accelerating Large Language Model Inference
Large language models (LLMs) have recently transformed natural language processing, enabling machines to generate human-like text and engage in meaningful conversations. This development necessitates speed, efficiency, and accessibility in LLM inference as the computational and memory requirements of these systems grow exponentially. Meanwhile, advancements in computing and memory capabilities are lagging behind, exacerbated by the discontinuation of Moore's law. With LLMs exceeding the capacity of single GPUs, they require complex, expert-level configurations for parallel processing. Memory accesses become significantly more expensive than computation, posing a challenge for efficient scaling, known as the memory wall. Here, compute-in-memory (CIM) technologies offer a promising solution for accelerating AI inference by directly performing analog computations in memory, potentially reducing latency and power consumption. By closely integrating memory and compute elements, CIM eliminates the von Neumann bottleneck, reducing data movement and improving energy efficiency. This survey paper provides an overview and analysis of transformer-based models, reviewing various CIM architectures and exploring how they can address the imminent challenges of modern AI computing systems. We discuss transformer-related operators and their hardware acceleration schemes and highlight challenges, trends, and insights in corresponding CIM designs.
[ "['Christopher Wolters' 'Xiaoxuan Yang' 'Ulf Schlichtmann'\n 'Toyotaro Suzumura']" ]
null
null
2406.08414
null
null
http://arxiv.org/pdf/2406.08414v1
2024-06-12T16:58:41Z
2024-06-12T16:58:41Z
Discovering Preference Optimization Algorithms with and for Large Language Models
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs. Typically, preference optimization is approached as an offline supervised learning task using manually-crafted convex loss functions. While these methods are based on theoretical insights, they are inherently constrained by human creativity, so the large search space of possible loss functions remains under explored. We address this by performing LLM-driven objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention. Specifically, we iteratively prompt an LLM to propose and implement new preference optimization loss functions based on previously-evaluated performance metrics. This process leads to the discovery of previously-unknown and performant preference optimization algorithms. The best performing of these we call Discovered Preference Optimization (DiscoPOP), a novel algorithm that adaptively blends logistic and exponential losses. Experiments demonstrate the state-of-the-art performance of DiscoPOP and its successful transfer to held-out tasks.
[ "['Chris Lu' 'Samuel Holt' 'Claudio Fanconi' 'Alex J. Chan'\n 'Jakob Foerster' 'Mihaela van der Schaar' 'Robert Tjarko Lange']" ]
null
null
2406.08423
null
null
http://arxiv.org/pdf/2406.08423v1
2024-06-12T17:06:07Z
2024-06-12T17:06:07Z
State Soup: In-Context Skill Learning, Retrieval and Mixing
A new breed of gated-linear recurrent neural networks has reached state-of-the-art performance on a range of sequence modeling problems. Such models naturally handle long sequences efficiently, as the cost of processing a new input is independent of sequence length. Here, we explore another advantage of these stateful sequence models, inspired by the success of model merging through parameter interpolation. Building on parallels between fine-tuning and in-context learning, we investigate whether we can treat internal states as task vectors that can be stored, retrieved, and then linearly combined, exploiting the linearity of recurrence. We study this form of fast model merging on Mamba-2.8b, a pretrained recurrent model, and present preliminary evidence that simple linear state interpolation methods suffice to improve next-token perplexity as well as downstream in-context learning task performance.
[ "['Maciej Pióro' 'Maciej Wołczyk' 'Razvan Pascanu' 'Johannes von Oswald'\n 'João Sacramento']" ]
null
null
2406.08428
null
null
http://arxiv.org/pdf/2406.08428v1
2024-06-12T17:14:44Z
2024-06-12T17:14:44Z
Improving Noise Robustness through Abstractions and its Impact on Machine Learning
Noise is a fundamental problem in learning theory with huge effects on the application of Machine Learning (ML) methods, due to the tendency of real-world data to be noisy. Additionally, the introduction of malicious noise can make ML methods fail critically, as is the case with adversarial attacks. Thus, finding and developing alternatives to improve robustness to noise is a fundamental problem in ML. In this paper, we propose a method to deal with noise: mitigating its effect through the use of data abstractions. The goal is to reduce the effect of noise on the model's performance through the loss of information produced by the abstraction. However, this information loss comes with a cost: it can result in an accuracy reduction due to the missing information. First, we explored multiple methodologies to create abstractions, using the training dataset, for the specific case of numerical data and binary classification tasks. We also tested how these abstractions can affect robustness to noise with several experiments that explore the robustness of an Artificial Neural Network to noise when trained using raw data \emph{vs} when trained using abstracted data. The results clearly show that using abstractions is a viable approach for developing noise robust ML methods.
[ "['Alfredo Ibias' 'Karol Capala' 'Varun Ravi Varma' 'Anna Drozdz'\n 'Jose Sousa']" ]
null
null
2406.08431
null
null
http://arxiv.org/pdf/2406.08431v1
2024-06-12T17:16:16Z
2024-06-12T17:16:16Z
Diffusion Soup: Model Merging for Text-to-Image Diffusion Models
We present Diffusion Soup, a compartmentalization method for Text-to-Image Generation that averages the weights of diffusion models trained on sharded data. By construction, our approach enables training-free continual learning and unlearning with no additional memory or inference costs, since models corresponding to data shards can be added or removed by re-averaging. We show that Diffusion Soup samples from a point in weight space that approximates the geometric mean of the distributions of constituent datasets, which offers anti-memorization guarantees and enables zero-shot style mixing. Empirically, Diffusion Soup outperforms a paragon model trained on the union of all data shards and achieves a 30% improvement in Image Reward (.34 $\to$ .44) on domain sharded data, and a 59% improvement in IR (.37 $\to$ .59) on aesthetic data. In both cases, souping also prevails in TIFA score (respectively, 85.5 $\to$ 86.5 and 85.6 $\to$ 86.8). We demonstrate robust unlearning -- removing any individual domain shard only lowers performance by 1% in IR (.45 $\to$ .44) -- and validate our theoretical insights on anti-memorization using real data. Finally, we showcase Diffusion Soup's ability to blend the distinct styles of models finetuned on different shards, resulting in the zero-shot generation of hybrid styles.
[ "['Benjamin Biggs' 'Arjun Seshadri' 'Yang Zou' 'Achin Jain'\n 'Aditya Golatkar' 'Yusheng Xie' 'Alessandro Achille' 'Ashwin Swaminathan'\n 'Stefano Soatto']" ]
null
null
2406.08440
null
null
http://arxiv.org/pdf/2406.08440v1
2024-06-12T17:26:54Z
2024-06-12T17:26:54Z
Adaptive Swarm Mesh Refinement using Deep Reinforcement Learning with Local Rewards
Simulating physical systems is essential in engineering, but analytical solutions are limited to straightforward problems. Consequently, numerical methods like the Finite Element Method (FEM) are widely used. However, the FEM becomes computationally expensive as problem complexity and accuracy demands increase. Adaptive Mesh Refinement (AMR) improves the FEM by dynamically allocating mesh elements on the domain, balancing computational speed and accuracy. Classical AMR depends on heuristics or expensive error estimators, limiting its use in complex simulations. While learning-based AMR methods are promising, they currently only scale to simple problems. In this work, we formulate AMR as a system of collaborating, homogeneous agents that iteratively split into multiple new agents. This agent-wise perspective enables a spatial reward formulation focused on reducing the maximum mesh element error. Our approach, Adaptive Swarm Mesh Refinement (ASMR), offers efficient, stable optimization and generates highly adaptive meshes at user-defined resolution during inference. Extensive experiments, including volumetric meshes and Neumann boundary conditions, demonstrate that ASMR exceeds heuristic approaches and learned baselines, matching the performance of expensive error-based oracle AMR strategies. ASMR additionally generalizes to different domains during inference, and produces meshes that simulate up to 2 orders of magnitude faster than uniform refinements in more demanding settings.
[ "['Niklas Freymuth' 'Philipp Dahlinger' 'Tobias Würth' 'Simon Reisch'\n 'Luise Kärger' 'Gerhard Neumann']" ]
null
null
2406.08443
null
null
http://arxiv.org/pdf/2406.08443v1
2024-06-12T17:31:36Z
2024-06-12T17:31:36Z
Transformation-Dependent Adversarial Attacks
We introduce transformation-dependent adversarial attacks, a new class of threats where a single additive perturbation can trigger diverse, controllable mis-predictions by systematically transforming the input (e.g., scaling, blurring, compression). Unlike traditional attacks with static effects, our perturbations embed metamorphic properties to enable different adversarial attacks as a function of the transformation parameters. We demonstrate the transformation-dependent vulnerability across models (e.g., convolutional networks and vision transformers) and vision tasks (e.g., image classification and object detection). Our proposed geometric and photometric transformations enable a range of targeted errors from one crafted input (e.g., higher than 90% attack success rate for classifiers). We analyze effects of model architecture and type/variety of transformations on attack effectiveness. This work forces a paradigm shift by redefining adversarial inputs as dynamic, controllable threats. We highlight the need for robust defenses against such multifaceted, chameleon-like perturbations that current techniques are ill-prepared for.
[ "['Yaoteng Tan' 'Zikui Cai' 'M. Salman Asif']" ]
null
null
2406.08445
null
null
http://arxiv.org/pdf/2406.08445v1
2024-06-12T17:37:09Z
2024-06-12T17:37:09Z
SVSNet+: Enhancing Speaker Voice Similarity Assessment Models with Representations from Speech Foundation Models
Representations from pre-trained speech foundation models (SFMs) have shown impressive performance in many downstream tasks. However, the potential benefits of incorporating pre-trained SFM representations into speaker voice similarity assessment have not been thoroughly investigated. In this paper, we propose SVSNet+, a model that integrates pre-trained SFM representations to improve performance in assessing speaker voice similarity. Experimental results on the Voice Conversion Challenge 2018 and 2020 datasets show that SVSNet+ incorporating WavLM representations shows significant improvements compared to baseline models. In addition, while fine-tuning WavLM with a small dataset of the downstream task does not improve performance, using the same dataset to learn a weighted-sum representation of WavLM can substantially improve performance. Furthermore, when WavLM is replaced by other SFMs, SVSNet+ still outperforms the baseline models and exhibits strong generalization ability.
[ "['Chun Yin' 'Tai-Shih Chi' 'Yu Tsao' 'Hsin-Min Wang']" ]
null
null
2406.08447
null
null
http://arxiv.org/pdf/2406.08447v1
2024-06-12T17:38:20Z
2024-06-12T17:38:20Z
The Impact of Initialization on LoRA Finetuning Dynamics
In this paper, we study the role of initialization in Low Rank Adaptation (LoRA) as originally introduced in Hu et al. (2021). Essentially, to start from the pretrained model as initialization for finetuning, one can either initialize B to zero and A to random (the default initialization in the PEFT package), or vice versa. In both cases, the product BA is equal to zero at initialization, which makes finetuning start from the pretrained model. These two initialization schemes are seemingly similar. They should in principle yield the same performance and share the same optimal learning rate. We demonstrate that this is an incorrect intuition and that the first scheme (initializing B to zero and A to random) on average yields better performance compared to the other scheme. Our theoretical analysis shows that the reason behind this might be that the first initialization allows the use of larger learning rates (without causing output instability) compared to the second initialization, resulting in more efficient learning in the first scheme. We validate our results with extensive experiments on LLMs.
[ "['Soufiane Hayou' 'Nikhil Ghosh' 'Bin Yu']" ]
null
null
2406.08465
null
null
http://arxiv.org/pdf/2406.08465v1
2024-06-12T17:53:28Z
2024-06-12T17:53:28Z
Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data
Many machine learning tasks, such as principal component analysis and low-rank matrix completion, give rise to manifold optimization problems. Although there is a large body of work studying the design and analysis of algorithms for manifold optimization in the centralized setting, there are currently very few works addressing the federated setting. In this paper, we consider nonconvex federated learning over a compact smooth submanifold in the setting of heterogeneous client data. We propose an algorithm that leverages stochastic Riemannian gradients and a manifold projection operator to improve computational efficiency, uses local updates to improve communication efficiency, and avoids client drift. Theoretically, we show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis that jointly exploits the manifold structure and properties of the loss functions. Numerical experiments demonstrate that our algorithm has significantly smaller computational and communication overhead than existing methods.
[ "['Jiaojiao Zhang' 'Jiang Hu' 'Anthony Man-Cho So' 'Mikael Johansson']" ]
null
null
2406.08466
null
null
http://arxiv.org/pdf/2406.08466v1
2024-06-12T17:53:29Z
2024-06-12T17:53:29Z
Scaling Laws in Linear Regression: Compute, Parameters, and Data
Empirically, large-scale deep learning models often satisfy a neural scaling law: the test error of the trained model improves polynomially as the model size and data size grow. However, conventional wisdom suggests the test error consists of approximation, bias, and variance errors, where the variance error increases with model size. This disagrees with the general form of neural scaling laws, which predict that increasing model size monotonically improves performance. We study the theory of scaling laws in an infinite dimensional linear regression setup. Specifically, we consider a model with $M$ parameters as a linear function of sketched covariates. The model is trained by one-pass stochastic gradient descent (SGD) using $N$ data. Assuming the optimal parameter satisfies a Gaussian prior and the data covariance matrix has a power-law spectrum of degree $a>1$, we show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$. The variance error, which increases with $M$, is dominated by the other errors due to the implicit regularization of SGD, thus disappearing from the bound. Our theory is consistent with the empirical neural scaling laws and verified by numerical simulation.
[ "['Licong Lin' 'Jingfeng Wu' 'Sham M. Kakade' 'Peter L. Bartlett'\n 'Jason D. Lee']" ]
null
null
2406.08467
null
null
http://arxiv.org/pdf/2406.08467v1
2024-06-12T17:53:31Z
2024-06-12T17:53:31Z
DafnyBench: A Benchmark for Formal Software Verification
We introduce DafnyBench, the largest benchmark of its kind for training and evaluating machine learning systems for formal software verification. We test the ability of LLMs such as GPT-4 and Claude 3 to auto-generate enough hints for the Dafny formal verification engine to successfully verify over 750 programs with about 53,000 lines of code. The best model and prompting scheme achieved 68% success rate, and we quantify how this rate improves when retrying with error message feedback and how it deteriorates with the amount of required code and hints. We hope that DafnyBench will enable rapid improvements from this baseline as LLMs and verification techniques grow in quality.
[ "['Chloe Loughridge' 'Qinyi Sun' 'Seth Ahrenbach' 'Federico Cassano'\n 'Chuyue Sun' 'Ying Sheng' 'Anish Mudide' 'Md Rakib Hossain Misu'\n 'Nada Amin' 'Max Tegmark']" ]
null
null
2406.08469
null
null
http://arxiv.org/pdf/2406.08469v1
2024-06-12T17:54:54Z
2024-06-12T17:54:54Z
PAL: Pluralistic Alignment Framework for Learning from Heterogeneous Preferences
Large foundation models pretrained on raw web-scale data are not readily deployable without an additional step of extensive alignment to human preferences. Such alignment is typically done by collecting large amounts of pairwise comparisons from humans ("Do you prefer output A or B?") and learning a reward model or a policy with the Bradley-Terry-Luce (BTL) model as a proxy for a human's underlying implicit preferences. These methods generally suffer from assuming a universal preference shared by all humans, which lacks the flexibility of adapting to the plurality of opinions and preferences. In this work, we propose PAL, a framework to model human preference complementary to existing pretraining strategies, which incorporates plurality from the ground up. We propose using the ideal point model as a lens to view alignment using preference comparisons. Together with our novel reformulation and using mixture modeling, our framework captures the plurality of population preferences while simultaneously learning a common preference latent space across different preferences, which can few-shot generalize to new, unseen users. Our approach enables us to use the penultimate-layer representation of large foundation models and simple MLP layers to learn reward functions that are on par with the existing large state-of-the-art reward models, thereby significantly enhancing the efficiency of reward modeling. We show that PAL achieves competitive reward model accuracy compared to strong baselines on 1) language models with the Summary dataset; 2) image generative models with the Pick-a-Pic dataset; 3) a new semisynthetic heterogeneous dataset generated using Anthropic Personas. Finally, our experiments also highlight the shortcoming of current preference datasets that are created using rigid rubrics which wash away heterogeneity, and call for more nuanced data collection approaches.
[ "['Daiwei Chen' 'Yi Chen' 'Aniket Rege' 'Ramya Korlakai Vinayak']" ]
null
null
2406.08472
null
null
http://arxiv.org/pdf/2406.08472v1
2024-06-12T17:56:31Z
2024-06-12T17:56:31Z
RILe: Reinforced Imitation Learning
Reinforcement Learning has achieved significant success in generating complex behavior but often requires extensive reward function engineering. Adversarial variants of Imitation Learning and Inverse Reinforcement Learning offer an alternative by learning policies from expert demonstrations via a discriminator. Employing discriminators increases their data- and computational efficiency over the standard approaches; however, it results in sensitivity to imperfections in expert data. We propose RILe, a teacher-student system that achieves both robustness to imperfect data and efficiency. In RILe, the student learns an action policy while the teacher dynamically adjusts a reward function based on the student's performance and its alignment with expert demonstrations. By tailoring the reward function to both the performance of the student and expert similarity, our system reduces dependence on the discriminator and, hence, increases robustness against data imperfections. Experiments show that RILe outperforms existing methods by 2x in settings with limited or noisy expert data.
[ "['Mert Albaba' 'Sammy Christen' 'Christoph Gebhardt' 'Thomas Langarek'\n 'Michael J. Black' 'Otmar Hilliges']" ]
null
null
2406.08473
null
null
http://arxiv.org/pdf/2406.08473v1
2024-06-12T17:56:46Z
2024-06-12T17:56:46Z
Strategies for Pretraining Neural Operators
Pretraining for partial differential equation (PDE) modeling has recently shown promise in scaling neural operators across datasets to improve generalizability and performance. Despite these advances, our understanding of how pretraining affects neural operators is still limited; studies generally propose tailored architectures and datasets that make it challenging to compare or examine different pretraining frameworks. To address this, we compare various pretraining methods without optimizing architecture choices to characterize pretraining dynamics on different models and datasets as well as to understand its scaling and generalization behavior. We find that pretraining is highly dependent on model and dataset choices, but in general transfer learning or physics-based pretraining strategies work best. In addition, pretraining performance can be further improved by using data augmentations. Lastly, pretraining is additionally beneficial when fine-tuning in scarce data regimes or when generalizing to downstream data similar to the pretraining distribution. Through providing insights into pretraining neural operators for physics prediction, we hope to motivate future work in developing and evaluating pretraining methods for PDEs.
[ "['Anthony Zhou' 'Cooper Lorsung' 'AmirPouya Hemmasian'\n 'Amir Barati Farimani']" ]
null
null
2406.08474
null
null
http://arxiv.org/pdf/2406.08474v2
2024-06-13T17:38:12Z
2024-06-12T17:57:06Z
Real2Code: Reconstruct Articulated Objects via Code Generation
We present Real2Code, a novel approach to reconstructing articulated objects via code generation. Given visual observations of an object, we first reconstruct its part geometry using an image segmentation model and a shape completion model. We then represent the object parts with oriented bounding boxes, which are input to a fine-tuned large language model (LLM) to predict joint articulation as code. By leveraging pre-trained vision and language models, our approach scales elegantly with the number of articulated parts, and generalizes from synthetic training data to real world objects in unstructured environments. Experimental results demonstrate that Real2Code significantly outperforms previous state-of-the-art in reconstruction accuracy, and is the first approach to extrapolate beyond objects' structural complexity in the training set, and reconstructs objects with up to 10 articulated parts. When incorporated with a stereo reconstruction model, Real2Code also generalizes to real world objects from a handful of multi-view RGB images, without the need for depth or camera information.
[ "['Zhao Mandi' 'Yijia Weng' 'Dominik Bauer' 'Shuran Song']" ]
null
null
2406.08488
null
null
http://arxiv.org/pdf/2406.08488v1
2024-06-12T17:59:52Z
2024-06-12T17:59:52Z
ICE-G: Image Conditional Editing of 3D Gaussian Splats
Recently many techniques have emerged to create high quality 3D assets and scenes. When it comes to editing of these objects, however, existing approaches are either slow, compromise on quality, or do not provide enough customization. We introduce a novel approach to quickly edit a 3D model from a single reference view. Our technique first segments the edit image, and then matches semantically corresponding regions across chosen segmented dataset views using DINO features. A color or texture change from a particular region of the edit image can then be applied to other views automatically in a semantically sensible manner. These edited views act as an updated dataset to further train and re-style the 3D scene. The end-result is therefore an edited 3D model. Our framework enables a wide variety of editing tasks such as manual local edits, correspondence based style transfer from any example image, and a combination of different styles from multiple example images. We use Gaussian Splats as our primary 3D representation due to their speed and ease of local editing, but our technique works for other methods such as NeRFs as well. We show through multiple examples that our method produces higher quality results while offering fine-grained control of editing. Project page: ice-gaussian.github.io
[ "['Vishnu Jaganathan' 'Hannah Hanyun Huang' 'Muhammad Zubair Irshad'\n 'Varun Jampani' 'Amit Raj' 'Zsolt Kira']" ]
null
null
2406.08506
null
null
http://arxiv.org/pdf/2406.08506v1
2024-06-01T13:11:11Z
2024-06-01T13:11:11Z
RGFN: Synthesizable Molecular Generation Using GFlowNets
Generative models hold great promise for small molecule discovery, significantly increasing the size of search space compared to traditional in silico screening libraries. However, most existing machine learning methods for small molecule generation suffer from poor synthesizability of candidate compounds, making experimental validation difficult. In this paper we propose Reaction-GFlowNet (RGFN), an extension of the GFlowNet framework that operates directly in the space of chemical reactions, thereby allowing out-of-the-box synthesizability while maintaining comparable quality of generated candidates. We demonstrate that with the proposed set of reactions and building blocks, it is possible to obtain a search space of molecules orders of magnitude larger than existing screening libraries coupled with low cost of synthesis. We also show that the approach scales to very large fragment libraries, further increasing the number of potential molecules. We demonstrate the effectiveness of the proposed approach across a range of oracle models, including pretrained proxy models and GPU-accelerated docking.
[ "['Michał Koziarski' 'Andrei Rekesh' 'Dmytro Shevchuk'\n 'Almer van der Sloot' 'Piotr Gaiński' 'Yoshua Bengio' 'Cheng-Hao Liu'\n 'Mike Tyers' 'Robert A. Batey']" ]
null
null
2406.08511
null
null
http://arxiv.org/pdf/2406.08511v1
2024-06-07T06:34:13Z
2024-06-07T06:34:13Z
Diffusion Models in $\textit{De Novo}$ Drug Design
Diffusion models have emerged as powerful tools for molecular generation, particularly in the context of 3D molecular structures. Inspired by non-equilibrium statistical physics, these models can generate 3D molecular structures with specific properties or requirements crucial to drug discovery. Diffusion models were particularly successful at learning 3D molecular geometries' complex probability distributions and their corresponding chemical and physical properties through forward and reverse diffusion processes. This review focuses on the technical implementation of diffusion models tailored for 3D molecular generation. It compares the performance, evaluation methods, and implementation details of various diffusion models used for molecular generation tasks. We cover strategies for atom and bond representation, architectures of reverse diffusion denoising networks, and challenges associated with generating stable 3D molecular structures. This review also explores the applications of diffusion models in $\textit{de novo}$ drug design and related areas of computational chemistry, such as structure-based drug design, including target-specific molecular generation, molecular docking, and molecular dynamics of protein-ligand complexes. We also cover conditional generation on physical properties, conformation generation, and fragment-based drug design. By summarizing the state-of-the-art diffusion models for 3D molecular generation, this review sheds light on their role in advancing drug discovery as well as their current limitations.
[ "['Amira Alakhdar' 'Barnabas Poczos' 'Newell Washburn']" ]
null
null
2406.08516
null
null
http://arxiv.org/pdf/2406.08516v1
2024-06-11T12:41:24Z
2024-06-11T12:41:24Z
Enhanced Anomaly Detection in Automotive Systems Using SAAD: Statistical Aggregated Anomaly Detection
This paper presents a novel anomaly detection methodology termed Statistical Aggregated Anomaly Detection (SAAD). The SAAD approach integrates advanced statistical techniques with machine learning, and its efficacy is demonstrated through validation on real sensor data from a Hardware-in-the-Loop (HIL) environment within the automotive domain. The key innovation of SAAD lies in its ability to significantly enhance the accuracy and robustness of anomaly detection when combined with Fully Connected Networks (FCNs) augmented by dropout layers. Comprehensive experimental evaluations indicate that the standalone statistical method achieves an accuracy of 72.1%, whereas the deep learning model alone attains an accuracy of 71.5%. In contrast, the aggregated method achieves a superior accuracy of 88.3% and an F1 score of 0.921, thereby outperforming the individual models. These results underscore the effectiveness of SAAD, demonstrating its potential for broad application in various domains, including automotive systems.
[ "['Dacian Goina' 'Eduard Hogea' 'George Maties']" ]
null
null
2406.08521
null
null
http://arxiv.org/pdf/2406.08521v1
2024-06-11T22:19:14Z
2024-06-11T22:19:14Z
Embedding-based Multimodal Learning on Pan-Squamous Cell Carcinomas for Improved Survival Outcomes
Cancer clinics capture disease data at various scales, from genetic to organ level. Current bioinformatic methods struggle to handle the heterogeneous nature of this data, especially with missing modalities. We propose PARADIGM, a Graph Neural Network (GNN) framework that learns from multimodal, heterogeneous datasets to improve clinical outcome prediction. PARADIGM generates embeddings from multi-resolution data using foundation models, aggregates them into patient-level representations, fuses them into a unified graph, and enhances performance for tasks like survival analysis. We train GNNs on pan-Squamous Cell Carcinomas and validate our approach on Moffitt Cancer Center lung SCC data. Multimodal GNN outperforms other models in patient survival prediction. Converging individual data modalities across varying scales provides a more insightful disease view. Our solution aims to understand the patient's circumstances comprehensively, offering insights on heterogeneous data integration and the benefits of converging maximum data views.
[ "['Asim Waqas' 'Aakash Tripathi' 'Paul Stewart' 'Mia Naeini'\n 'Ghulam Rasool']" ]
null
null
2406.08522
null
null
http://arxiv.org/pdf/2406.08522v1
2024-06-12T02:34:24Z
2024-06-12T02:34:24Z
Predicting Cascading Failures with a Hyperparametric Diffusion Model
In this paper, we study cascading failures in power grids through the lens of information diffusion models. Similar to the spread of rumors or influence in an online social network, it has been observed that failures (outages) in a power grid can spread contagiously, driven by viral spread mechanisms. We employ a stochastic diffusion model that is Markovian (memoryless) and local (the activation of one node, i.e., transmission line, can only be caused by its neighbors). Our model integrates viral diffusion principles with physics-based concepts, by correlating the diffusion weights (contagion probabilities between transmission lines) with the hyperparametric Information Cascades (IC) model. We show that this diffusion model can be learned from traces of cascading failures, enabling accurate modeling and prediction of failure propagation. This approach facilitates actionable information through well-understood and efficient graph analysis methods and graph diffusion simulations. Furthermore, by leveraging the hyperparametric model, we can predict diffusion and mitigate the risks of cascading failures even in unseen grid configurations, whereas existing methods falter due to a lack of training data. Extensive experiments based on a benchmark power grid and simulations therein show that our approach effectively captures the failure diffusion phenomena and guides decisions to strengthen the grid, reducing the risk of large-scale cascading failures. Additionally, we characterize our model's sample complexity, improving upon the existing bound.
[ "['Bin Xiang' 'Bogdan Cautis' 'Xiaokui Xiao' 'Olga Mula' 'Dusit Niyato'\n 'Laks V. S. Lakshmanan']" ]
null
null
2406.08524
null
null
http://arxiv.org/pdf/2406.08524v1
2024-06-12T07:16:00Z
2024-06-12T07:16:00Z
Federated Incomplete Multi-View Clustering with Heterogeneous Graph Neural Networks
Federated multi-view clustering offers the potential to develop a global clustering model using data distributed across multiple devices. However, current methods face challenges due to the absence of label information and the paramount importance of data privacy. A significant issue is the feature heterogeneity across multi-view data, which complicates the effective mining of complementary clustering information. Additionally, the inherent incompleteness of multi-view data in a distributed setting can further complicate the clustering process. To address these challenges, we introduce a federated incomplete multi-view clustering framework with heterogeneous graph neural networks (FIM-GNNs). In the proposed FIM-GNNs, autoencoders built on heterogeneous graph neural network models are employed for feature extraction of multi-view data at each client site. At the server level, heterogeneous features from overlapping samples of each client are aggregated into a global feature representation. Global pseudo-labels are generated at the server to enhance the handling of incomplete view data, where these labels serve as a guide for integrating and refining the clustering process across different data views. Comprehensive experiments have been conducted on public benchmark datasets to verify the performance of the proposed FIM-GNNs in comparison with state-of-the-art algorithms.
[ "['Xueming Yan' 'Ziqi Wang' 'Yaochu Jin']" ]
null
null
2406.08525
null
null
http://arxiv.org/pdf/2406.08525v1
2024-06-12T07:33:38Z
2024-06-12T07:33:38Z
A Mathematical Certification for Positivity Conditions in Neural Networks with Applications to Partial Monotonicity and Ethical AI
Artificial Neural Networks (ANNs) have become a powerful tool for modeling complex relationships in large-scale datasets. However, their black-box nature poses ethical challenges. In certain situations, ensuring ethical predictions might require following specific partial monotonic constraints. However, certifying if an already-trained ANN is partially monotonic is challenging. Therefore, ANNs are often disregarded in some critical applications, such as credit scoring, where partial monotonicity is required. To address this challenge, this paper presents a novel algorithm (LipVor) that certifies if a black-box model, such as an ANN, is positive based on a finite number of evaluations. Therefore, as partial monotonicity can be stated as a positivity condition of the partial derivatives, the LipVor Algorithm can certify whether an already trained ANN is partially monotonic. To do so, for every positively evaluated point, the Lipschitzianity of the black-box model is used to construct a specific neighborhood where the function remains positive. Next, based on the Voronoi diagram of the evaluated points, a sufficient condition is stated to certify if the function is positive in the domain. Compared to prior methods, our approach is able to mathematically certify if an ANN is partially monotonic without needing constrained ANN architectures or piecewise linear activation functions. Therefore, LipVor could open up the possibility of using unconstrained ANNs in some critical fields. Moreover, some other properties of an ANN, such as convexity, can be posed as positivity conditions, and therefore, LipVor could also be applied.
[ "['Alejandro Polo-Molina' 'David Alfaya' 'Jose Portela']" ]
null
null
2406.08526
null
null
http://arxiv.org/pdf/2406.08526v1
2024-06-12T07:47:22Z
2024-06-12T07:47:22Z
IMFL-AIGC: Incentive Mechanism Design for Federated Learning Empowered by Artificial Intelligence Generated Content
Federated learning (FL) has emerged as a promising paradigm that enables clients to collaboratively train a shared global model without uploading their local data. To alleviate the heterogeneous data quality among clients, artificial intelligence-generated content (AIGC) can be leveraged as a novel data synthesis technique for FL model performance enhancement. Due to various costs incurred by AIGC-empowered FL (e.g., costs of local model computation and data synthesis), however, clients are usually reluctant to participate in FL without adequate economic incentives, which leads to an unexplored critical issue for enabling AIGC-empowered FL. To fill this gap, we first devise a data quality assessment method for data samples generated by AIGC and rigorously analyze the convergence performance of an FL model trained using a blend of authentic and AI-generated data samples. We then propose a data quality-aware incentive mechanism to encourage clients' participation. In light of the information asymmetry incurred by clients' private multi-dimensional attributes, we investigate clients' behavior patterns and derive the server's optimal incentive strategies to minimize the server's cost in terms of both model accuracy loss and incentive payments for both complete and incomplete information scenarios. Numerical results demonstrate that our proposed mechanism exhibits the highest training accuracy and reduces the server's cost by up to 53.34% on real-world datasets, compared with existing benchmark mechanisms.
[ "['Guangjing Huang' 'Qiong Wu' 'Jingyi Li' 'Xu Chen']" ]
null
null
2406.08527
null
null
http://arxiv.org/pdf/2406.08527v1
2024-06-12T08:31:34Z
2024-06-12T08:31:34Z
Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning
Learning effective representations from raw data is crucial for the success of deep learning methods. However, in the tabular domain, practitioners often prefer augmenting raw column features over using learned representations, as conventional tree-based algorithms frequently outperform competing approaches. As a result, feature engineering methods that automatically generate candidate features have been widely used. While these approaches are often effective, there remains ambiguity in defining the space over which to search for candidate features. Moreover, they often rely solely on validation scores to select good features, neglecting valuable feedback from past experiments that could inform the planning of future experiments. To address the shortcomings, we propose a new tabular learning framework based on large language models (LLMs), coined Optimizing Column feature generator with decision Tree reasoning (OCTree). Our key idea is to leverage LLMs' reasoning capabilities to find good feature generation rules without manually specifying the search space and provide language-based reasoning information highlighting past experiments as feedback for iterative rule improvements. Here, we choose a decision tree as reasoning as it can be interpreted in natural language, effectively conveying knowledge of past experiments (i.e., the prediction models trained with the generated features) to the LLM. Our empirical results demonstrate that this simple framework consistently enhances the performance of various prediction models across diverse tabular benchmarks, outperforming competing automatic feature engineering methods.
[ "['Jaehyun Nam' 'Kyuyoung Kim' 'Seunghyuk Oh' 'Jihoon Tack' 'Jaehyung Kim'\n 'Jinwoo Shin']" ]
null
null
2406.08528
null
null
http://arxiv.org/pdf/2406.08528v2
2024-06-14T08:19:28Z
2024-06-12T08:51:08Z
Adaptive Teaching with Shared Classifier for Knowledge Distillation
Knowledge distillation (KD) is a technique used to transfer knowledge from an overparameterized teacher network to a less-parameterized student network, thereby minimizing the incurred performance loss. KD methods can be categorized into offline and online approaches. Offline KD leverages a powerful pretrained teacher network, while online KD allows the teacher network to be adjusted dynamically to enhance the learning effectiveness of the student network. Recently, it has been discovered that sharing the classifier of the teacher network can significantly boost the performance of the student network with only a minimal increase in the number of network parameters. Building on these insights, we propose adaptive teaching with a shared classifier (ATSC). In ATSC, the pretrained teacher network self-adjusts to better align with the learning needs of the student network based on its capabilities, and the student network benefits from the shared classifier, enhancing its performance. Additionally, we extend ATSC to environments with multiple teachers. We conduct extensive experiments, demonstrating the effectiveness of the proposed KD method. Our approach achieves state-of-the-art results on the CIFAR-100 and ImageNet datasets in both single-teacher and multi-teacher scenarios, with only a modest increase in the number of required model parameters. The source code is publicly available at https://github.com/random2314235/ATSC.
[ "['Jaeyeon Jang' 'Young-Ik Kim' 'Jisu Lim' 'Hyeonseong Lee']" ]
null
null
2406.08569
null
null
http://arxiv.org/pdf/2406.08569v1
2024-06-12T18:11:24Z
2024-06-12T18:11:24Z
Noise-Aware Differentially Private Regression via Meta-Learning
Many high-stakes applications require machine learning models that protect user privacy and provide well-calibrated, accurate predictions. While Differential Privacy (DP) is the gold standard for protecting user privacy, standard DP mechanisms typically significantly impair performance. One approach to mitigating this issue is pre-training models on simulated data before DP learning on the private data. In this work we go a step further, using simulated data to train a meta-learning model that combines the Convolutional Conditional Neural Process (ConvCNP) with an improved functional DP mechanism of Hall et al. [2013] yielding the DPConvCNP. DPConvCNP learns from simulated data how to map private data to a DP predictive model in one forward pass, and then provides accurate, well-calibrated predictions. We compare DPConvCNP with a DP Gaussian Process (GP) baseline with carefully tuned hyperparameters. The DPConvCNP outperforms the GP baseline, especially on non-Gaussian data, yet is much faster at test time and requires less tuning.
[ "['Ossi Räisä' 'Stratis Markou' 'Matthew Ashman' 'Wessel P. Bruinsma'\n 'Marlon Tobaben' 'Antti Honkela' 'Richard E. Turner']" ]
null
null
2406.08570
null
null
http://arxiv.org/pdf/2406.08570v1
2024-06-12T18:11:32Z
2024-06-12T18:11:32Z
HDNet: Physics-Inspired Neural Network for Flow Estimation based on Helmholtz Decomposition
Flow estimation problems are ubiquitous in scientific imaging. Often, the underlying flows are subject to physical constraints that can be exploited in the flow estimation; for example, incompressible (divergence-free) flows are expected for many fluid experiments, while irrotational (curl-free) flows arise in the analysis of optical distortions and wavefront sensing. In this work, we propose a Physics-Inspired Neural Network (PINN) named HDNet, which performs a Helmholtz decomposition of an arbitrary flow field, i.e., it decomposes the input flow into a divergence-only and a curl-only component. HDNet can be trained exclusively on synthetic data generated by reverse Helmholtz decomposition, which we call Helmholtz synthesis. As a PINN, HDNet is fully differentiable and can easily be integrated into arbitrary flow estimation problems.
[ "['Miao Qi' 'Ramzi Idoughi' 'Wolfgang Heidrich']" ]
null
null
2406.08575
null
null
http://arxiv.org/pdf/2406.08575v1
2024-06-12T18:26:42Z
2024-06-12T18:26:42Z
Using Quality Attribute Scenarios for ML Model Test Case Generation
Testing of machine learning (ML) models is a known challenge identified by researchers and practitioners alike. Unfortunately, current practice for ML model testing prioritizes testing for model performance, while often neglecting the requirements and constraints of the ML-enabled system that integrates the model. This limited view of testing leads to failures during integration, deployment, and operations, contributing to the difficulties of moving models from development to production. This paper presents an approach based on quality attribute (QA) scenarios to elicit and define system- and model-relevant test cases for ML models. The QA-based approach described in this paper has been integrated into MLTE, a process and tool to support ML model test and evaluation. Feedback from users of MLTE highlights its effectiveness in testing beyond model performance and identifying failures early in the development process.
[ "['Rachel Brower-Sinning' 'Grace A. Lewis' 'Sebastían Echeverría'\n 'Ipek Ozkaya']" ]
null
null
2406.08587
null
null
http://arxiv.org/pdf/2406.08587v1
2024-06-12T18:47:28Z
2024-06-12T18:47:28Z
CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery
Computer Science (CS) stands as a testament to the intricacies of human intelligence, profoundly advancing the development of artificial intelligence and modern society. However, the current community of large language models (LLMs) overly focuses on benchmarks for analyzing specific foundational skills (e.g. mathematics and code generation), neglecting an all-round evaluation of the computer science field. To bridge this gap, we introduce CS-Bench, the first bilingual (Chinese-English) benchmark dedicated to evaluating the performance of LLMs in computer science. CS-Bench comprises approximately 5K meticulously curated test samples, covering 26 subfields across 4 key areas of computer science, encompassing various task forms and divisions of knowledge and reasoning. Utilizing CS-Bench, we conduct a comprehensive evaluation of over 30 mainstream LLMs, revealing the relationship between CS performance and model scales. We also quantitatively analyze the reasons for failures in existing LLMs and highlight directions for improvements, including knowledge supplementation and CS-specific reasoning. Further cross-capability experiments show a high correlation between LLMs' capabilities in computer science and their abilities in mathematics and coding. Moreover, expert LLMs specialized in mathematics and coding also demonstrate strong performances in several CS subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM applications in the CS field and paving new avenues in assessing LLMs' diverse reasoning capabilities. The CS-Bench data and evaluation code are available at https://github.com/csbench/csbench.
[ "['Xiaoshuai Song' 'Muxi Diao' 'Guanting Dong' 'Zhengyang Wang' 'Yujia Fu'\n 'Runqi Qiao' 'Zhexu Wang' 'Dayuan Fu' 'Huangxuan Wu' 'Bin Liang'\n 'Weihao Zeng' 'Yejie Wang' 'Zhuoma GongQue' 'Jianing Yu' 'Qiuna Tan'\n 'Weiran Xu']" ]
null
null
2406.08591
null
null
http://arxiv.org/pdf/2406.08591v2
2024-06-14T19:07:16Z
2024-06-12T18:54:22Z
MEMO-QCD: Quantum Density Estimation through Memetic Optimisation for Quantum Circuit Design
This paper presents a strategy for efficient quantum circuit design for density estimation. The strategy is based on a quantum-inspired algorithm for density estimation and a circuit optimisation routine based on memetic algorithms. The model maps a training dataset to a quantum state represented by a density matrix through a quantum feature map. This training state encodes the probability distribution of the dataset in a quantum state, such that the density of a new sample can be estimated by projecting its corresponding quantum state onto the training state. We propose the application of a memetic algorithm to find the architecture and parameters of a variational quantum circuit that implements the quantum feature map, along with a variational learning strategy to prepare the training state. Demonstrations of the proposed strategy show an accurate approximation of the Gaussian kernel density estimation method through shallow quantum circuits illustrating the feasibility of the algorithm for near-term quantum hardware.
[ "['Juan E. Ardila-García' 'Vladimir Vargas-Calderón' 'Fabio A. González'\n 'Diego H. Useche' 'Herbert Vinck-Posada']" ]
null
null
2406.08593
null
null
http://arxiv.org/pdf/2406.08593v1
2024-06-12T18:59:01Z
2024-06-12T18:59:01Z
Intelligent Multi-View Test Time Augmentation
In this study, we introduce an intelligent Test Time Augmentation (TTA) algorithm designed to enhance the robustness and accuracy of image classification models against viewpoint variations. Unlike traditional TTA methods that indiscriminately apply augmentations, our approach intelligently selects optimal augmentations based on predictive uncertainty metrics. This selection is achieved via a two-stage process: the first stage identifies the optimal augmentation for each class by evaluating uncertainty levels, while the second stage implements an uncertainty threshold to determine when applying TTA would be advantageous. This methodological advancement ensures that augmentations contribute to classification more effectively than a uniform application across the dataset. Experimental validation across several datasets and neural network architectures validates our approach, yielding an average accuracy improvement of 1.73% over methods that use single-view images. This research underscores the potential of adaptive, uncertainty-aware TTA in improving the robustness of image classification in the presence of viewpoint variations, paving the way for further exploration into intelligent augmentation strategies.
[ "['Efe Ozturk' 'Mohit Prabhushankar' 'Ghassan AlRegib']" ]
null
null
2406.08603
null
null
http://arxiv.org/pdf/2406.08603v1
2024-06-12T19:14:58Z
2024-06-12T19:14:58Z
FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion
Due to the high potential for abuse of GenAI systems, the task of detecting synthetic images has recently become of great interest to the research community. Unfortunately, existing image-space detectors quickly become obsolete as new high-fidelity text-to-image models are developed at blinding speed. In this work, we propose a new synthetic image detector that uses features obtained by inverting an open-source pre-trained Stable Diffusion model. We show that these inversion features enable our detector to generalize well to unseen generators of high visual fidelity (e.g., DALL-E 3) even when the detector is trained only on lower fidelity fake images generated via Stable Diffusion. This detector achieves new state-of-the-art across multiple training and evaluation setups. Moreover, we introduce a new challenging evaluation protocol that uses reverse image search to mitigate stylistic and thematic biases in the detector evaluation. We show that the resulting evaluation scores align well with detectors' in-the-wild performance, and release these datasets as public benchmarks for future research.
[ "['George Cazenavette' 'Avneesh Sud' 'Thomas Leung' 'Ben Usman']" ]
null
null
2406.08604
null
null
http://arxiv.org/pdf/2406.08604v1
2024-06-12T19:17:17Z
2024-06-12T19:17:17Z
GRU-Net for breast histopathology image segmentation
Breast cancer is a major global health concern. Pathologists face challenges in analyzing complex features from pathological images, which is a time-consuming and labor-intensive task. Therefore, efficient computer-based diagnostic tools are needed for early detection and treatment planning. This paper presents a modified version of MultiResU-Net for histopathology image segmentation, which is selected as the backbone for its ability to analyze and segment complex features at multiple scales and ensure effective feature flow via skip connections. The modified version also utilizes the Gaussian distribution-based Attention Module (GdAM) to incorporate histopathology-relevant text information in a Gaussian distribution. The sampled features from the Gaussian text feature-guided distribution highlight specific spatial regions based on prior knowledge. Finally, using the Controlled Dense Residual Block (CDRB) on skip connections of MultiResU-Net, the information is transferred from the encoder layers to the decoder layers in a controlled manner using a scaling parameter derived from the extracted spatial features. We validate our approach on two diverse breast cancer histopathology image datasets: TNBC and MonuSeg, demonstrating superior segmentation performance compared to state-of-the-art methods. The code for our proposed model is available on https://github.com/AyushRoy2001/GRU-Net.
[ "['Ayush Roy' 'Payel Pramanik' 'Sohom Ghosal' 'Daria Valenkova'\n 'Dmitrii Kaplun' 'Ram Sarkar']" ]
null
null
2406.08610
null
null
http://arxiv.org/pdf/2406.08610v1
2024-06-12T19:41:01Z
2024-06-12T19:41:01Z
LayeredDoc: Domain Adaptive Document Restoration with a Layer Separation Approach
The rapid evolution of intelligent document processing systems demands robust solutions that adapt to diverse domains without extensive retraining. Traditional methods often falter with variable document types, leading to poor performance. To overcome these limitations, this paper introduces a text-graphic layer separation approach that enhances domain adaptability in document image restoration (DIR) systems. We propose LayeredDoc, which utilizes two layers of information: the first targets coarse-grained graphic components, while the second refines machine-printed textual content. This hierarchical DIR framework dynamically adjusts to the characteristics of the input document, facilitating effective domain adaptation. We evaluated our approach both qualitatively and quantitatively using a new real-world dataset, LayeredDocDB, developed for this study. Initially trained on a synthetically generated dataset, our model demonstrates strong generalization capabilities for the DIR task, offering a promising solution for handling variability in real-world data. Our code is accessible on GitHub.
[ "['Maria Pilligua' 'Nil Biescas' 'Javier Vazquez-Corral' 'Josep Lladós'\n 'Ernest Valveny' 'Sanket Biswas']" ]
null
null
2406.08619
null
null
http://arxiv.org/pdf/2406.08619v1
2024-06-12T20:04:44Z
2024-06-12T20:04:44Z
Self-Supervised Speech Representations are More Phonetic than Semantic
Self-supervised speech models (S3Ms) have become an effective backbone for speech applications. Various analyses suggest that S3Ms encode linguistic properties. In this work, we seek a more fine-grained analysis of the word-level linguistic properties encoded in S3Ms. Specifically, we curate a novel dataset of near homophone (phonetically similar) and synonym (semantically similar) word pairs and measure the similarities between S3M word representation pairs. Our study reveals that S3M representations consistently and significantly exhibit more phonetic than semantic similarity. Further, we question whether widely used intent classification datasets such as Fluent Speech Commands and Snips Smartlights are adequate for measuring semantic abilities. Our simple baseline, using only the word identity, surpasses S3M-based models. This corroborates our findings and suggests that high scores on these datasets do not necessarily guarantee the presence of semantic content.
[ "['Kwanghee Choi' 'Ankita Pasad' 'Tomohiko Nakamura' 'Satoru Fukayama'\n 'Karen Livescu' 'Shinji Watanabe']" ]
null
null
2406.08623
null
null
http://arxiv.org/pdf/2406.08623v1
2024-06-12T20:12:29Z
2024-06-12T20:12:29Z
Emotion Manipulation Through Music -- A Deep Learning Interactive Visual Approach
Music evokes emotion in many people. We introduce a novel way to manipulate the emotional content of a song using AI tools. Our goal is to achieve the desired emotion while leaving the original melody as intact as possible. For this, we create an interactive pipeline capable of shifting an input song into a diametrically opposed emotion and visualize this result through Russell's Circumplex model. Our approach is a proof-of-concept for Semantic Manipulation of Music, a novel field aimed at modifying the emotional content of existing music. We design a deep learning model able to assess the accuracy of our modifications to key, SoundFont instrumentation, and other musical features. The accuracy of our model is in line with current state-of-the-art techniques on the 4Q Emotion dataset. With further refinement, this research may contribute to on-demand custom music generation, the automated remixing of existing work, and music playlists tuned for emotional progression.
[ "['Adel N. Abdalla' 'Jared Osborne' 'Razvan Andonie']" ]
null
null
2406.08627
null
null
http://arxiv.org/pdf/2406.08627v1
2024-06-12T20:20:09Z
2024-06-12T20:20:09Z
Time-MMD: A New Multi-Domain Multimodal Dataset for Time Series Analysis
Time series data are ubiquitous across a wide range of real-world domains. While real-world time series analysis (TSA) requires human experts to integrate numerical series data with multimodal domain-specific knowledge, most existing TSA models rely solely on numerical data, overlooking the significance of information beyond numerical series. This oversight is due to the untapped potential of textual series data and the absence of a comprehensive, high-quality multimodal dataset. To overcome this obstacle, we introduce Time-MMD, the first multi-domain, multimodal time series dataset covering 9 primary data domains. Time-MMD ensures fine-grained modality alignment, eliminates data contamination, and provides high usability. Additionally, we develop MM-TSFlib, the first multimodal time-series forecasting (TSF) library, seamlessly pipelining multimodal TSF evaluations based on Time-MMD for in-depth analyses. Extensive experiments conducted on Time-MMD through MM-TSFlib demonstrate significant performance enhancements by extending unimodal TSF to multimodality, evidenced by over 15% mean squared error reduction in general, and up to 40% in domains with rich textual data. More importantly, our dataset and library open up broader applications, impacts, and research topics that advance TSA. The dataset and library are available at https://github.com/AdityaLab/Time-MMD and https://github.com/AdityaLab/MM-TSFlib.
[ "['Haoxin Liu' 'Shangqing Xu' 'Zhiyuan Zhao' 'Lingkai Kong'\n 'Harshavardhan Kamarthi' 'Aditya B. Sasanur' 'Megha Sharma' 'Jiaming Cui'\n 'Qingsong Wen' 'Chao Zhang' 'B. Aditya Prakash']" ]
null
null
2406.08632
null
null
http://arxiv.org/pdf/2406.08632v1
2024-06-12T20:29:14Z
2024-06-12T20:29:14Z
Coupled Ocean-Atmosphere Dynamics in a Machine Learning Earth System Model
Seasonal climate forecasts are socioeconomically important for managing the impacts of extreme weather events and for planning in sectors like agriculture and energy. Climate predictability on seasonal timescales is tied to boundary effects of the ocean on the atmosphere and coupled interactions in the ocean-atmosphere system. We present the Ocean-linked-atmosphere (Ola) model, a high-resolution (0.25°) Artificial Intelligence/Machine Learning (AI/ML) coupled earth-system model which separately models the ocean and atmosphere dynamics using an autoregressive Spherical Fourier Neural Operator architecture, with a view towards enabling fast, accurate, large ensemble forecasts on the seasonal timescale. We find that Ola exhibits learned characteristics of ocean-atmosphere coupled dynamics including tropical oceanic waves with appropriate phase speeds, and an internally generated El Niño/Southern Oscillation (ENSO) having realistic amplitude, geographic structure, and vertical structure within the ocean mixed layer. We present initial evidence of skill in forecasting the ENSO which compares favorably to the SPEAR model of the Geophysical Fluid Dynamics Laboratory.
[ "['Chenggong Wang' 'Michael S. Pritchard' 'Noah Brenowitz' 'Yair Cohen'\n 'Boris Bonev' 'Thorsten Kurth' 'Dale Durran' 'Jaideep Pathak']" ]
null
null
2406.08634
null
null
http://arxiv.org/pdf/2406.08634v1
2024-06-12T20:35:16Z
2024-06-12T20:35:16Z
Unveiling Incomplete Modality Brain Tumor Segmentation: Leveraging Masked Predicted Auto-Encoder and Divergence Learning
Brain tumor segmentation remains a significant challenge, particularly in the context of multi-modal magnetic resonance imaging (MRI) where missing modality images are common in clinical settings, leading to reduced segmentation accuracy. To address this issue, we propose a novel strategy, which is called masked predicted pre-training, enabling robust feature learning from incomplete modality data. Additionally, in the fine-tuning phase, we utilize a knowledge distillation technique to align features between complete and missing modality data, simultaneously enhancing model robustness. Notably, we leverage the Hölder pseudo-divergence instead of the KLD for the distillation loss, offering improved mathematical interpretability and properties. Extensive experiments on the BRATS2018 and BRATS2020 datasets demonstrate significant performance enhancements compared to existing state-of-the-art methods.
[ "['Zhongao Sun' 'Jiameng Li' 'Yuhan Wang' 'Jiarong Cheng' 'Qing Zhou'\n 'Chun Li']" ]
null
null
2406.08636
null
null
http://arxiv.org/pdf/2406.08636v1
2024-06-12T20:47:17Z
2024-06-12T20:47:17Z
Towards Integrating Personal Knowledge into Test-Time Predictions
Machine learning (ML) models can make decisions based on large amounts of data, but they can be missing personal knowledge available to human users about whom predictions are made. For example, a model trained to predict psychiatric outcomes may know nothing about a patient's social support system, and social support may look different for different patients. In this work, we introduce the problem of human feature integration, which provides a way to incorporate important personal knowledge from users without domain expertise into ML predictions. We characterize this problem through illustrative user stories and comparisons to existing approaches; we formally describe this problem in a way that paves the ground for future technical solutions; and we provide a proof-of-concept study of a simple version of a solution to this problem in a semi-realistic setting.
[ "['Isaac Lage' 'Sonali Parbhoo' 'Finale Doshi-Velez']" ]
null
null
2406.08638
null
null
http://arxiv.org/pdf/2406.08638v1
2024-06-12T20:53:07Z
2024-06-12T20:53:07Z
Conditional Similarity Triplets Enable Covariate-Informed Representations of Single-Cell Data
Single-cell technologies enable comprehensive profiling of diverse immune cell-types through the measurement of multiple genes or proteins per cell. In order to translate data from immune profiling assays into powerful diagnostics, machine learning approaches are used to compute per-sample immunological summaries, or featurizations that can be used as inputs to models for outcomes of interest. Current supervised learning approaches for computing per-sample representations are optimized based only on the outcome variable to be predicted and do not take into account clinically-relevant covariates that are likely to also be measured. Here we expand the optimization problem to also take into account such additional patient covariates to directly inform the learned per-sample representations. To do this, we introduce CytoCoSet, a set-based encoding method, which formulates a loss function with an additional triplet term penalizing samples with similar covariates from having disparate embedding results in per-sample representations. Overall, incorporating clinical covariates leads to improved prediction of clinical phenotypes.
[ "['Chi-Jane Chen' 'Haidong Yi' 'Natalie Stanley']" ]
null
null
2406.08649
null
null
http://arxiv.org/pdf/2406.08649v1
2024-06-12T21:18:14Z
2024-06-12T21:18:14Z
MOTI$\mathcal{VE}$: A Drug-Target Interaction Graph For Inductive Link Prediction
Drug-target interaction (DTI) prediction is crucial for identifying new therapeutics and detecting mechanisms of action. While structure-based methods accurately model physical interactions between a drug and its protein target, cell-based assays such as Cell Painting can better capture complex DTI interactions. This paper introduces MOTI$\mathcal{VE}$, a Morphological cOmpound Target Interaction Graph dataset that comprises Cell Painting features for $11,000$ genes and $3,600$ compounds along with their relationships extracted from seven publicly available databases. We provide random, cold-source (new drugs), and cold-target (new genes) data splits to enable rigorous evaluation under realistic use cases. Our benchmark results show that graph neural networks that use Cell Painting features consistently outperform those that learn from graph structure alone, feature-based models, and topological heuristics. MOTI$\mathcal{VE}$ accelerates both graph ML research and drug discovery by promoting the development of more reliable DTI prediction models. MOTI$\mathcal{VE}$ resources are available at https://github.com/carpenter-singh-lab/motive.
[ "['John Arevalo' 'Ellen Su' 'Anne E Carpenter' 'Shantanu Singh']" ]
null
null
2406.08654
null
null
http://arxiv.org/pdf/2406.08654v2
2024-06-26T18:40:57Z
2024-06-12T21:33:22Z
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization
The typical training of neural networks using large stepsize gradient descent (GD) under the logistic loss often involves two distinct phases, where the empirical risk oscillates in the first phase but decreases monotonically in the second phase. We investigate this phenomenon in two-layer networks that satisfy a near-homogeneity condition. We show that the second phase begins once the empirical risk falls below a certain threshold, dependent on the stepsize. Additionally, we show that the normalized margin grows nearly monotonically in the second phase, demonstrating an implicit bias of GD in training non-homogeneous predictors. If the dataset is linearly separable and the derivative of the activation function is bounded away from zero, we show that the average empirical risk decreases, implying that the first phase must stop in finite steps. Finally, we demonstrate that by choosing a suitably large stepsize, GD that undergoes this phase transition is more efficient than GD that monotonically decreases the risk. Our analysis applies to networks of any width, beyond the well-known neural tangent kernel and mean-field regimes.
[ "['Yuhang Cai' 'Jingfeng Wu' 'Song Mei' 'Michael Lindsey'\n 'Peter L. Bartlett']" ]
null
null
2406.08658
null
null
http://arxiv.org/pdf/2406.08658v1
2024-06-12T21:43:12Z
2024-06-12T21:43:12Z
Pruning is Optimal for Learning Sparse Features in High-Dimensions
While it is commonly observed in practice that pruning networks to a certain level of sparsity can improve the quality of the features, a theoretical explanation of this phenomenon remains elusive. In this work, we investigate this by demonstrating that a broad class of statistical models can be optimally learned using pruned neural networks trained with gradient descent, in high-dimensions. We consider learning both single-index and multi-index models of the form $y = \sigma^*(\boldsymbol{V}^{\top} \boldsymbol{x}) + \epsilon$, where $\sigma^*$ is a degree-$p$ polynomial, and $\boldsymbol{V} \in \mathbb{R}^{d \times r}$ with $r \ll d$, is the matrix containing relevant model directions. We assume that $\boldsymbol{V}$ satisfies a certain $\ell_q$-sparsity condition for matrices and show that pruning neural networks proportional to the sparsity level of $\boldsymbol{V}$ improves their sample complexity compared to unpruned networks. Furthermore, we establish Correlational Statistical Query (CSQ) lower bounds in this setting, which take the sparsity level of $\boldsymbol{V}$ into account. We show that if the sparsity level of $\boldsymbol{V}$ exceeds a certain threshold, training pruned networks with a gradient descent algorithm achieves the sample complexity suggested by the CSQ lower bound. In the same scenario, however, our results imply that basis-independent methods such as models trained via standard gradient descent initialized with rotationally invariant random weights can provably achieve only suboptimal sample complexity.
[ "['Nuri Mert Vural' 'Murat A. Erdogdu']" ]
null
null
2406.08666
null
null
http://arxiv.org/pdf/2406.08666v1
2024-06-12T22:12:03Z
2024-06-12T22:12:03Z
Interventional Causal Discovery in a Mixture of DAGs
Causal interactions among a group of variables are often modeled by a single causal graph. In some domains, however, these interactions are best described by multiple co-existing causal graphs, e.g., in dynamical systems or genomics. This paper addresses the hitherto unknown role of interventions in learning causal interactions among variables governed by a mixture of causal systems, each modeled by one directed acyclic graph (DAG). Causal discovery from mixtures is fundamentally more challenging than single-DAG causal discovery. Two major difficulties stem from (i) inherent uncertainty about the skeletons of the component DAGs that constitute the mixture and (ii) possibly cyclic relationships across these component DAGs. This paper addresses these challenges and aims to identify edges that exist in at least one component DAG of the mixture, referred to as true edges. First, it establishes matching necessary and sufficient conditions on the size of interventions required to identify the true edges. Next, guided by the necessity results, an adaptive algorithm is designed that learns all true edges using $\mathcal{O}(n^2)$ interventions, where $n$ is the number of nodes. Remarkably, the size of the interventions is optimal if the underlying mixture model does not contain cycles across its components. More generally, the gap between the intervention size used by the algorithm and the optimal size is quantified. It is shown to be bounded by the cyclic complexity number of the mixture model, defined as the size of the minimal intervention that can break the cycles in the mixture, which is upper bounded by the number of cycles among the ancestors of a node.
[ "['Burak Varıcı' 'Dmitriy Katz-Rogozhnikov' 'Dennis Wei'\n 'Prasanna Sattigeri' 'Ali Tajer']" ]